Compare commits
6 commits: 0b027ddb4c ... 7cc6ed0501

| Author | SHA1 | Date |
|---|---|---|
| | 7cc6ed0501 | |
| | 205bc438cb | |
| | f92e465cd6 | |
| | 2a771d7722 | |
| | c5316c3f2f | |
| | 1702b07d6c | |
@@ -11,6 +11,7 @@ ignores:
 - .claude/**
 - .roo/**
 - .codex/**
+- .agentvibes/**
 - .kiro/**
 - sample-project/**
 - test-project-install/**
@@ -2,23 +2,87 @@
 title: "Documentation Style Guide"
 ---
 
-This project adheres to the [Google Developer Documentation Style Guide](https://developers.google.com/style) and uses [Diataxis](https://diataxis.fr/) to structure content. Only project-specific conventions follow.
+Guidelines for consistent documentation across the BMad Method project.
 
-## Project-Specific Rules
+## Universal Formatting Rules
 
-| Rule | Specification |
-|------|---------------|
-| No horizontal rules (`---`) | Fragments reading flow |
-| No `####` headers | Use bold text or admonitions instead |
+These rules apply to ALL document types. Violations fail review.
+
+| Rule | Rationale |
+|------|-----------|
+| No horizontal rules (`---`) | Fragment reading flow |
+| No `####` headers | Visual noise; use bold text or admonitions |
 | No "Related" or "Next:" sections | Sidebar handles navigation |
-| No deeply nested lists | Break into sections instead |
-| No code blocks for non-code | Use admonitions for dialogue examples |
+| No deeply nested lists | Hard to parse; break into sections |
+| No code blocks for non-code | Confusing semantics |
 | No bold paragraphs for callouts | Use admonitions instead |
-| 1-2 admonitions per section max | Tutorials allow 3-4 per major section |
-| Table cells / list items | 1-2 sentences max |
-| Header budget | 8-12 `##` per doc; 2-3 `###` per section |
+| 1-2 admonitions per section max | Overuse creates noise |
+| Table cells and long list items (5+): 1-2 sentences max | Walls of text; break into sections or link to details |
 
-## Admonitions (Starlight Syntax)
+## Visual Hierarchy
+
+### Patterns to Use
+
+| Pattern | When to Use |
+|---------|-------------|
+| Whitespace + section headers | Content separation |
+| Bold text within paragraphs | Inline emphasis |
+| Admonitions | Callouts requiring attention |
+| Tables | Structured comparisons (3+ items) |
+| Flat lists | Scannable options |
+
+### Header Budget
+
+- `##` sections: 8-12 per document
+- `###` subsections: 2-3 per `##` section max
+- `####`: Never use
+
+The structure templates in this guide show content flow, not 1:1 header mapping. Admonitions and inline elements appear within sections, not as separate headers.
+
+### Header Naming
+
+| Context | Style | Example |
+|---------|-------|---------|
+| Steps | Action verbs | "Install BMad", "Create Your Plan" |
+| Reference | Nouns | "Common Questions", "Quick Reference" |
+
+## Example: Before and After
+
+**Before (violations):**
+
+```md
+---
+
+## Getting Started
+
+### Step 1: Initialize
+
+#### What happens during init?
+
+**Important:** You need to describe your project.
+
+1. Your project goals
+   - What you want to build
+   - Why you're building it
+2. The complexity
+   - Small, medium, or large
+
+---
+```
+
+**After (correct):**
+
+```md
+## Step 1: Initialize Your Project
+
+Load the **Analyst agent** in your IDE, wait for the menu, then run `workflow-init`.
+
+:::note[What Happens]
+You'll describe your project goals and complexity. The workflow then recommends a planning track.
+:::
+```
+
+## Admonitions
+
+Starlight admonition syntax:
+
 ```md
 :::tip[Title]
@@ -47,10 +111,33 @@ Critical warnings only — data loss, security issues
 | `:::caution[Important]` | Critical caveats |
 | `:::note[Example]` | Command/response examples |
 
-## Standard Table Formats
+### Rules
+
+- Always include a title
+- Keep content to 1-3 sentences (longer rarely needed)
+- Never nest admonitions
+
+## Tables
+
+Use tables for:
+
+- Phase descriptions
+- Agent roles
+- Command references
+- Option comparisons
+- Multi-attribute sequences
+
+### Constraints
+
+| Constraint | Value |
+|------------|-------|
+| Columns | 2-4 max |
+| Cell content | Short |
+| Text alignment | Left |
+| Number alignment | Right |
+
+### Standard Formats
 
 **Phases:**
 
 ```md
 | Phase | Name | What Happens |
 |-------|------|--------------|
@@ -59,7 +146,6 @@ Critical warnings only — data loss, security issues
 ```
 
 **Commands:**
 
 ```md
 | Command | Agent | Purpose |
 |---------|-------|---------|
@@ -67,6 +153,53 @@ Critical warnings only — data loss, security issues
 | `*prd` | PM | Create Product Requirements Document |
 ```
 
+## Code Blocks
+
+**Correct** — language-tagged command:
+
+````md
+```bash
+npx bmad-method install
+```
+````
+
+**Incorrect** — untagged dialogue:
+
+````md
+```
+You: Do something
+Agent: [Response here]
+```
+````
+
+For dialogue examples, use admonitions:
+
+```md
+:::note[Example]
+Run `workflow-status` and the agent will tell you the next recommended workflow.
+:::
+```
+
+## Lists
+
+**Flat lists (preferred):**
+
+```md
+- **Option A** — Description
+- **Option B** — Description
+- **Option C** — Description
+```
+
+**Numbered steps:**
+
+```md
+1. Load the **PM agent** in a new chat
+2. Run the PRD workflow: `*prd`
+3. Output: `PRD.md`
+```
+
+## Assets
+
+| Element | Requirements |
+|---------|--------------|
+| **Links** | Descriptive text (not "click here"), site-relative paths |
+| **Images** | Alt text required, italic caption below, SVG preferred, store in `./images/` |
+
 ## Folder Structure Blocks
 
 Show in "What You've Accomplished" sections:
@@ -82,9 +215,23 @@ your-project/
 ```
 ````
 
+## Document Types
+
+Select document type based on reader goal:
+
+| Reader Goal | Document Type |
+|-------------|---------------|
+| Learn a complete workflow | Tutorial |
+| Complete a specific task | How-To |
+| Understand a concept | Explanation |
+| Look up information | Reference |
+| Find term definitions | Glossary |
+
 ## Tutorial Structure
 
-```text
+Tutorials teach complete workflows to new users. Length: 200-400 lines.
+
+```
 1. Title + Hook (1-2 sentences describing outcome)
 2. Version/Module Notice (info or warning admonition) (optional)
 3. What You'll Learn (bullet list of outcomes)
@@ -117,7 +264,9 @@ your-project/
 ## How-To Structure
 
-```text
+How-to guides complete specific tasks for users who know basics. Length: 50-150 lines (shorter than tutorials, assumes prior knowledge).
+
+```
 1. Title + Hook (one sentence: "Use the `X` workflow to...")
 2. When to Use This (bullet list of scenarios)
 3. When to Skip This (optional)
@@ -129,6 +278,21 @@ your-project/
 9. Next Steps (optional)
 ```
 
+### How-To Visual Elements
+
+| Admonition | Use |
+|------------|-----|
+| `:::note[Prerequisites]` | Required dependencies, agents, prior steps |
+| `:::tip[Pro Tip]` | Optional shortcuts |
+| `:::caution[Common Mistake]` | Pitfalls to avoid |
+| `:::note[Example]` | Brief inline usage |
+
+Guidelines:
+
+- 1-2 admonitions max
+- Prerequisites as admonition
+- Multiple tips: use flat list instead of admonition
+- Very simple how-tos: skip admonitions entirely
+
 ### How-To Checklist
 
 - [ ] Hook starts with "Use the `X` workflow to..."
@@ -139,31 +303,33 @@ your-project/
 ## Explanation Structure
 
+Explanation documents answer "What is X?" and "Why does X matter?"
+
 ### Types
 
-| Type | Example |
-|------|---------|
-| **Index/Landing** | `core-concepts/index.md` |
-| **Concept** | `what-are-agents.md` |
-| **Feature** | `quick-flow.md` |
-| **Philosophy** | `why-solutioning-matters.md` |
-| **FAQ** | `brownfield-faq.md` |
+| Type | Purpose | Example |
+|------|---------|---------|
+| **Index/Landing** | Topic area overview with navigation | `core-concepts/index.md` |
+| **Concept** | Define core concept | `what-are-agents.md` |
+| **Feature** | Deep dive into capability | `quick-flow.md` |
+| **Philosophy** | Design decisions and rationale | `why-solutioning-matters.md` |
+| **FAQ** | Answer common questions | `brownfield-faq.md` |
 
-### General Template
+### General Structure
 
-```text
+```
 1. Title + Hook (1-2 sentences)
 2. Overview/Definition (what it is, why it matters)
 3. Key Concepts (### subsections)
 4. Comparison Table (optional)
 5. When to Use / When Not to Use (optional)
-6. Diagram (optional - mermaid, 1 per doc max)
+6. Diagram (optional - mermaid)
 7. Next Steps (optional)
 ```
 
 ### Index/Landing Pages
 
-```text
+```
 1. Title + Hook (one sentence)
 2. Content Table (links with descriptions)
 3. Getting Started (numbered list)
@@ -172,7 +338,7 @@ your-project/
 ### Concept Explainers
 
-```text
+```
 1. Title + Hook (what it is)
 2. Types/Categories (### subsections) (optional)
 3. Key Differences Table
@@ -183,7 +349,7 @@ your-project/
 ### Feature Explainers
 
-```text
+```
 1. Title + Hook (what it does)
 2. Quick Facts (optional - "Perfect for:", "Time to:")
 3. When to Use / When Not to Use
|
@ -195,7 +361,7 @@ your-project/
|
||||||
|
|
||||||
### Philosophy/Rationale Documents
|
### Philosophy/Rationale Documents
|
||||||
|
|
||||||
```text
|
```
|
||||||
1. Title + Hook (the principle)
|
1. Title + Hook (the principle)
|
||||||
2. The Problem
|
2. The Problem
|
||||||
3. The Solution
|
3. The Solution
|
||||||
|
|
@@ -204,6 +370,15 @@ your-project/
 6. When This Applies
 ```
 
+### Visual Elements
+
+| Element | Use For |
+|---------|---------|
+| Comparison tables | Contrasting types, options, approaches |
+| Mermaid diagrams | Process flows, decision trees (1 per doc max) |
+| "Best for:" lists | Quick decision guidance |
+| Code examples | Brief concept illustration |
+
 ### Explanation Checklist
 
 - [ ] Hook states what document explains
@@ -211,24 +386,26 @@ your-project/
 - [ ] Comparison tables for 3+ options
 - [ ] Diagrams have clear labels
 - [ ] Links to how-to guides for procedural questions
-- [ ] 2-3 admonitions max per document
+- [ ] 2-3 admonitions max per document (1-2 per section)
 
 ## Reference Structure
 
+Reference documents answer "What are the options?" and "What does X do?" for users who know what they need.
+
 ### Types
 
-| Type | Example |
-|------|---------|
-| **Index/Landing** | `workflows/index.md` |
-| **Catalog** | `agents/index.md` |
-| **Deep-Dive** | `document-project.md` |
-| **Configuration** | `core-tasks.md` |
-| **Glossary** | `glossary/index.md` |
-| **Comprehensive** | `bmgd-workflows.md` |
+| Type | Purpose | Example |
+|------|---------|---------|
+| **Index/Landing** | Navigation to reference content | `workflows/index.md` |
+| **Catalog** | Quick-reference item list | `agents/index.md` |
+| **Deep-Dive** | Detailed single-item reference | `document-project.md` |
+| **Configuration** | Settings and config docs | `core-tasks.md` |
+| **Glossary** | Term definitions | `glossary/index.md` |
+| **Comprehensive** | Extensive multi-item reference | `bmgd-workflows.md` |
 
 ### Reference Index Pages
 
-```text
+```
 1. Title + Hook (one sentence)
 2. Content Sections (## for each category)
    - Bullet list with links and descriptions
@@ -236,7 +413,7 @@ your-project/
 ### Catalog Reference
 
-```text
+```
 1. Title + Hook
 2. Items (## for each item)
    - Brief description (one sentence)
@@ -244,9 +421,13 @@ your-project/
 3. Universal/Shared (## section) (optional)
 ```
 
+Guidelines:
+
+- Use `##` for items, not `###`
+- Keep descriptions to 1 sentence
+
 ### Item Deep-Dive Reference
 
-```text
+```
 1. Title + Hook (one sentence purpose)
 2. Quick Facts (optional note admonition)
    - Module, Command, Input, Output as list
@@ -259,7 +440,7 @@ your-project/
 ### Configuration Reference
 
-```text
+```
 1. Title + Hook
 2. Table of Contents (jump links if 4+ items)
 3. Items (## for each config/task)
@@ -271,7 +452,7 @@ your-project/
 ### Comprehensive Reference Guide
 
-```text
+```
 1. Title + Hook
 2. Overview (## section)
    - Diagram or table showing organization
@@ -281,6 +462,11 @@ your-project/
 4. Next Steps (optional)
 ```
 
+Guidelines:
+
+- Standardize fields across all items
+- Tables for comparing multiple items
+- 1 diagram max per document
+
 ### Reference Checklist
 
 - [ ] Hook states what document references
@@ -292,8 +478,11 @@ your-project/
 ## Glossary Structure
 
-Starlight generates right-side "On this page" navigation from headers:
+Glossaries provide compact, scannable term definitions.
+
+### Layout Strategy
+
+Starlight generates right-side "On this page" navigation from headers:
+
 - Categories as `##` headers — appear in right nav
 - Terms in tables — compact rows, not individual headers
 - No inline TOC — right sidebar handles navigation
@@ -316,21 +505,43 @@ Starlight generates right-side "On this page" navigation from headers:
 | Start with what it IS or DOES | Start with "This is..." or "A [term] is..." |
 | Keep to 1-2 sentences | Write multi-paragraph explanations |
 | Bold term name in cell | Use plain text for terms |
+| Link to docs for deep dives | Explain full concepts inline |
 
 ### Context Markers
 
 Add italic context at definition start for limited-scope terms:
 
+```md
+| **Tech-Spec** | *Quick Flow only.* Comprehensive technical plan for small changes. |
+| **PRD** | *BMad Method/Enterprise.* Product-level planning document with vision and goals. |
+```
+
+Standard markers:
+
 - `*Quick Flow only.*`
 - `*BMad Method/Enterprise.*`
 - `*Phase N.*`
 - `*BMGD.*`
 - `*Brownfield.*`
+
+### Cross-References
+
+Reference category anchor (terms are not headers):
+
+```md
+| **Tech-Spec** | *Quick Flow only.* Technical plan for small changes. See [PRD](#planning-documents). |
+```
+
+### Organization
+
+- Alphabetize terms within each category table
+- Alphabetize categories or order by logical progression
+- No catch-all sections
 
 ### Glossary Checklist
 
 - [ ] Terms in tables, not individual headers
 - [ ] Terms alphabetized within categories
+- [ ] No inline TOC
 - [ ] Definitions 1-2 sentences
 - [ ] Context markers italicized
 - [ ] Term names bolded in cells
@@ -338,6 +549,8 @@ Add italic context at definition start for limited-scope terms:
 ## FAQ Sections
 
+Structure:
+
 ```md
 ## Questions
@@ -355,13 +568,29 @@ Yes. The SM agent has a `correct-course` workflow for handling scope changes.
 **Have a question not answered here?** [Open an issue](...) or ask in [Discord](...).
 ```
 
-## Validation Commands
+Rules:
+
+- TOC with jump links under `## Questions`
+- `###` headers for questions (no `Q:` prefix)
+- Direct answers (no `**A:**` prefix)
+- End with CTA for unanswered questions
+
+## Validation Steps
 
-Before submitting documentation changes:
+Before submitting documentation changes, run from repo root:
 
-```bash
-npm run docs:fix-links               # Preview link format fixes
-npm run docs:fix-links -- --write    # Apply fixes
-npm run docs:validate-links          # Check links exist
-npm run docs:build                   # Verify no build errors
+1. **Fix link format** — Convert relative links to site-relative paths:
+
+   ```bash
+   npm run docs:fix-links            # Preview
+   npm run docs:fix-links -- --write # Apply
+   ```
+
+2. **Validate links** — Check links point to existing files:
+
+   ```bash
+   npm run docs:validate-links            # Preview
+   npm run docs:validate-links -- --write # Auto-fix
+   ```
+
+3. **Build the site** — Verify no build errors:
+
+   ```bash
+   npm run docs:build
 ```
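The universal rules in the style-guide diff above (no `####` headers, no horizontal rules, an 8-12 `##` header budget) are mechanical enough to check automatically. The repository's real checks are the `npm run docs:*` scripts; purely as an illustration, a hypothetical standalone checker (the `lint_markdown` helper below is an assumption, not part of the repo) might look like:

```python
import re

# Hypothetical helper, not a script from the BMad repo: flags the
# style-guide violations that the Universal Formatting Rules forbid.
H4_HEADER = re.compile(r"^####\s")         # `####` headers are never allowed
HORIZONTAL_RULE = re.compile(r"^---\s*$")  # `---` fragments reading flow

def lint_markdown(text: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) pairs for each violation found."""
    violations = []
    in_fence = False
    h2_count = 0
    for lineno, line in enumerate(text.splitlines(), start=1):
        if line.lstrip().startswith("```"):
            in_fence = not in_fence  # rules do not apply inside code blocks
            continue
        if in_fence:
            continue
        if H4_HEADER.match(line):
            violations.append((lineno, "h4-header"))
        if HORIZONTAL_RULE.match(line):  # naive: also matches YAML frontmatter
            violations.append((lineno, "horizontal-rule"))
        if line.startswith("## "):
            h2_count += 1
    if h2_count and not 8 <= h2_count <= 12:  # header budget: 8-12 `##` per doc
        violations.append((0, "header-budget"))
    return violations

doc = "## Intro\n\n#### Too deep\n\n---\n\n```\n#### fine inside code\n```\n"
print(lint_markdown(doc))
```

A real implementation would need to skip frontmatter and track the `###`-per-section budget as well; this sketch only shows the shape of the check.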
@@ -20,13 +20,10 @@ This flexibility enables:
 ## Categories
 
-- [Categories](#categories)
 - [Custom Stand-Alone Modules](#custom-stand-alone-modules)
 - [Custom Add-On Modules](#custom-add-on-modules)
 - [Custom Global Modules](#custom-global-modules)
 - [Custom Agents](#custom-agents)
-- [BMad Tiny Agents](#bmad-tiny-agents)
-- [Simple and Expert Agents](#simple-and-expert-agents)
 - [Custom Workflows](#custom-workflows)
 
 ## Custom Stand-Alone Modules
@@ -62,6 +59,7 @@ Similar to Custom Stand-Alone Modules, but designed to add functionality that ap
 Examples include:
 
+- The current TTS (Text-to-Speech) functionality for Claude, which will soon be converted to a global module
 - The core module, which is always installed and provides all agents with party mode and advanced elicitation capabilities
 - Installation and update tools that work with any BMad method configuration
@@ -23,16 +23,11 @@ BMad does not mandate TEA. There are five valid ways to use it (or skip it). Pic
 1. **No TEA**
    - Skip all TEA workflows. Use your existing team testing approach.
 
-2. **TEA Solo (Standalone)**
+2. **TEA-only (Standalone)**
    - Use TEA on a non-BMad project. Bring your own requirements, acceptance criteria, and environments.
    - Typical sequence: `*test-design` (system or epic) -> `*atdd` and/or `*automate` -> optional `*test-review` -> `*trace` for coverage and gate decisions.
    - Run `*framework` or `*ci` only if you want TEA to scaffold the harness or pipeline; they work best after you decide the stack/architecture.
 
-**TEA Lite (Beginner Approach):**
-
-- Simplest way to use TEA - just use `*automate` to test existing features.
-- Perfect for learning TEA fundamentals in 30 minutes.
-- See [TEA Lite Quickstart Tutorial](/docs/tutorials/getting-started/tea-lite-quickstart.md).
-
 3. **Integrated: Greenfield - BMad Method (Simple/Standard Work)**
    - Phase 3: system-level `*test-design`, then `*framework` and `*ci`.
    - Phase 4: per-epic `*test-design`, optional `*atdd`, then `*automate` and optional `*test-review`.
@@ -60,8 +55,8 @@ If you are unsure, default to the integrated path for your track and adjust late
 | `*framework` | Playwright/Cypress scaffold, `.env.example`, `.nvmrc`, sample specs | Use when no production-ready harness exists | - |
 | `*ci` | CI workflow, selective test scripts, secrets checklist | Platform-aware (GitHub Actions default) | - |
 | `*test-design` | Combined risk assessment, mitigation plan, and coverage strategy | Risk scoring + optional exploratory mode | **+ Exploratory**: Interactive UI discovery with browser automation (uncover actual functionality) |
-| `*atdd` | Failing acceptance tests + implementation checklist | TDD red phase + optional recording mode | **+ Recording**: UI selectors verified with live browser; API tests benefit from trace analysis |
-| `*automate` | Prioritized specs, fixtures, README/script updates, DoD summary | Optional healing/recording, avoid duplicate coverage | **+ Healing**: Visual debugging + trace analysis for test fixes; **+ Recording**: Verified selectors (UI) + network inspection (API) |
+| `*atdd` | Failing acceptance tests + implementation checklist | TDD red phase + optional recording mode | **+ Recording**: AI generation verified with live browser (accurate selectors from real DOM) |
+| `*automate` | Prioritized specs, fixtures, README/script updates, DoD summary | Optional healing/recording, avoid duplicate coverage | **+ Healing**: Pattern fixes enhanced with visual debugging + **+ Recording**: AI verified with live browser |
 | `*test-review` | Test quality review report with 0-100 score, violations, fixes | Reviews tests against knowledge base patterns | - |
 | `*nfr-assess` | NFR assessment report with actions | Focus on security/performance/reliability | - |
 | `*trace` | Phase 1: Coverage matrix, recommendations. Phase 2: Gate decision (PASS/CONCERNS/FAIL/WAIVED) | Two-phase workflow: traceability + gate decision | - |
@@ -284,31 +279,6 @@ These cheat sheets map TEA workflows to the **BMad Method and Enterprise tracks*
 **Related how-to guides:**
 
 - [How to Run Test Design](/docs/how-to/workflows/run-test-design.md)
 - [How to Set Up a Test Framework](/docs/how-to/workflows/setup-test-framework.md)
-- [How to Run ATDD](/docs/how-to/workflows/run-atdd.md)
-- [How to Run Automate](/docs/how-to/workflows/run-automate.md)
-- [How to Run Test Review](/docs/how-to/workflows/run-test-review.md)
-- [How to Set Up CI Pipeline](/docs/how-to/workflows/setup-ci.md)
-- [How to Run NFR Assessment](/docs/how-to/workflows/run-nfr-assess.md)
-- [How to Run Trace](/docs/how-to/workflows/run-trace.md)
-
-## Deep Dive Concepts
-
-Want to understand TEA principles and patterns in depth?
-
-**Core Principles:**
-
-- [Risk-Based Testing](/docs/explanation/tea/risk-based-testing.md) - Probability × impact scoring, P0-P3 priorities
-- [Test Quality Standards](/docs/explanation/tea/test-quality-standards.md) - Definition of Done, determinism, isolation
-- [Knowledge Base System](/docs/explanation/tea/knowledge-base-system.md) - Context engineering with tea-index.csv
-
-**Technical Patterns:**
-
-- [Fixture Architecture](/docs/explanation/tea/fixture-architecture.md) - Pure function → fixture → composition
-- [Network-First Patterns](/docs/explanation/tea/network-first-patterns.md) - Eliminating flakiness with intercept-before-navigate
-
-**Engagement & Strategy:**
-
-- [Engagement Models](/docs/explanation/tea/engagement-models.md) - TEA Lite, TEA Solo, TEA Integrated (5 models explained)
-
-**Philosophy:**
-
-- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - **Start here to understand WHY TEA exists** - The problem with AI-generated tests and TEA's three-part solution
 
 ## Optional Integrations
@ -1,710 +0,0 @@
---
title: "TEA Engagement Models Explained"
description: Understanding the five ways to use TEA - from standalone to full BMad Method integration
---

# TEA Engagement Models Explained

TEA is optional and flexible. There are five valid ways to engage with TEA - choose intentionally based on your project needs and methodology.

## Overview

**TEA is not mandatory.** Pick the engagement model that fits your context:

1. **No TEA** - Skip all TEA workflows, use existing testing approach
2. **TEA Solo** - Use TEA standalone without BMad Method
3. **TEA Lite** - Beginner approach using just `*automate`
4. **TEA Integrated (Greenfield)** - Full BMad Method integration from scratch
5. **TEA Integrated (Brownfield)** - Full BMad Method integration with existing code
## The Problem

### One-Size-Fits-All Doesn't Work

**Traditional testing tools force one approach:**

- Must use the entire framework
- All-or-nothing adoption
- No flexibility for different project types
- Teams abandon the tool if it doesn't fit

**TEA recognizes:**

- Different projects have different needs
- Different teams have different maturity levels
- Different contexts require different approaches
- Flexibility increases adoption
## The Five Engagement Models

### Model 1: No TEA

**What:** Skip all TEA workflows, use your existing testing approach.

**When to Use:**

- Team has established testing practices
- Quality is already high
- Testing tools already in place
- TEA doesn't add value

**What You Miss:**

- Risk-based test planning
- Systematic quality review
- Gate decisions with evidence
- Knowledge base patterns

**What You Keep:**

- Full control
- Existing tools
- Team expertise
- No learning curve

**Example:**

```
Your team:
- 10-year veteran QA team
- Established testing practices
- High-quality test suite
- No problems to solve

Decision: Skip TEA, keep what works
```

**Verdict:** Valid choice if your existing approach works.

---
### Model 2: TEA Solo

**What:** Use TEA workflows standalone, without full BMad Method integration.

**When to Use:**

- Non-BMad projects
- Want TEA's quality operating model only
- Don't need the full planning workflow
- Bring your own requirements

**Typical Sequence:**

```
1. *test-design (system or epic)
2. *atdd or *automate
3. *test-review (optional)
4. *trace (coverage + gate decision)
```

**You Bring:**

- Requirements (user stories, acceptance criteria)
- Development environment
- Project context

**TEA Provides:**

- Risk-based test planning (`*test-design`)
- Test generation (`*atdd`, `*automate`)
- Quality review (`*test-review`)
- Coverage traceability (`*trace`)

**Optional:**

- Framework setup (`*framework`) if needed
- CI configuration (`*ci`) if needed

**Example:**

```
Your project:
- Using Scrum (not BMad Method)
- Jira for story management
- Need better test strategy

Workflow:
1. Export stories from Jira
2. Run *test-design on epic
3. Run *atdd for each story
4. Implement features
5. Run *trace for coverage
```

**Verdict:** Best for teams that want TEA's benefits without committing to the BMad Method.

---
### Model 3: TEA Lite

**What:** Beginner approach using just `*automate` to test existing features.

**When to Use:**

- Learning TEA fundamentals
- Want quick results
- Testing an existing application
- No time for the full methodology

**Workflow:**

```
1. *framework (set up test infrastructure)
2. *test-design (optional, risk assessment)
3. *automate (generate tests for existing features)
4. Run tests (they pass immediately)
```

**Example:**

```
Beginner developer:
- Never used TEA before
- Wants to add tests to an existing app
- 30 minutes available

Steps:
1. Run *framework
2. Run *automate on TodoMVC demo
3. Tests generated and passing
4. Learn TEA basics
```

**What You Get:**

- Working test framework
- Passing tests for existing features
- Learning experience
- Foundation to expand

**What You Miss:**

- TDD workflow (ATDD)
- Risk-based planning (test-design depth)
- Quality gates (trace Phase 2)
- Full TEA capabilities

**Verdict:** Perfect entry point for beginners.

---
### Model 4: TEA Integrated (Greenfield)

**What:** Full BMad Method integration with TEA workflows across all phases.

**When to Use:**

- New projects starting from scratch
- Using the BMad Method or Enterprise track
- Want the complete quality operating model
- Testing is critical to success

**Lifecycle:**

**Phase 2: Planning**

- PM creates PRD with NFRs
- (Optional) TEA runs `*nfr-assess` (Enterprise only)

**Phase 3: Solutioning**

- Architect creates architecture
- TEA runs `*test-design` (system-level) → testability review
- TEA runs `*framework` → test infrastructure
- TEA runs `*ci` → CI/CD pipeline
- Architect runs `*implementation-readiness` (fed by test design)

**Phase 4: Implementation (Per Epic)**

- SM runs `*sprint-planning`
- TEA runs `*test-design` (epic-level) → risk assessment for THIS epic
- SM creates stories
- (Optional) TEA runs `*atdd` → failing tests before dev
- DEV implements story
- TEA runs `*automate` → expand coverage
- (Optional) TEA runs `*test-review` → quality audit
- TEA runs `*trace` Phase 1 → refresh coverage

**Release Gate:**

- (Optional) TEA runs `*test-review` → final audit
- (Optional) TEA runs `*nfr-assess` → validate NFRs
- TEA runs `*trace` Phase 2 → gate decision (PASS/CONCERNS/FAIL/WAIVED)

**What You Get:**

- Complete quality operating model
- Systematic test planning
- Risk-based prioritization
- Evidence-based gate decisions
- Consistent patterns across epics

**Example:**

```
New SaaS product:
- 50 stories across 8 epics
- Security critical
- Need quality gates

Workflow:
- Phase 2: Define NFRs in PRD
- Phase 3: Architecture → test design → framework → CI
- Phase 4: Per epic: test design → ATDD → dev → automate → review → trace
- Gate: NFR assess → trace Phase 2 → decision
```

**Verdict:** The most comprehensive TEA usage; best for structured teams.

---
### Model 5: TEA Integrated (Brownfield)

**What:** Full BMad Method integration with TEA for existing codebases.

**When to Use:**

- Existing codebase with legacy tests
- Want to improve test quality incrementally
- Adding features to an existing application
- Need to establish a coverage baseline

**Differences from Greenfield:**

**Phase 0: Documentation (if needed)**

```
- Run *document-project
- Create baseline documentation
```

**Phase 2: Planning**

```
- TEA runs *trace Phase 1 → establish coverage baseline
- PM creates PRD (with existing system context)
```

**Phase 3: Solutioning**

```
- Architect creates architecture (with brownfield constraints)
- TEA runs *test-design (system-level) → testability review
- TEA runs *framework (only if modernizing test infra)
- TEA runs *ci (update existing CI or create new)
```

**Phase 4: Implementation**

```
- TEA runs *test-design (epic-level) → focus on REGRESSION HOTSPOTS
- Per story: ATDD → dev → automate
- TEA runs *test-review → improve legacy test quality
- TEA runs *trace Phase 1 → track coverage improvement
```

**Brownfield-Specific:**

- Baseline coverage BEFORE planning
- Focus on regression hotspots (bug-prone areas)
- Incremental quality improvement
- Compare coverage to baseline (trending up?)

**Example:**

```
Legacy e-commerce platform:
- 200 existing tests (30% passing, 70% flaky)
- Adding new checkout flow
- Want to improve quality

Workflow:
1. Phase 2: *trace baseline → 30% coverage
2. Phase 3: *test-design → identify regression risks
3. Phase 4: Fix top 20 flaky tests + add tests for new checkout
4. Gate: *trace → 60% coverage (2x improvement)
```

**Verdict:** Best for incrementally improving legacy systems.

---
## Decision Guide: Which Model?

### Quick Decision Tree

```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'fontSize':'14px'}}}%%
flowchart TD
    Start([Choose TEA Model]) --> BMad{Using<br/>BMad Method?}

    BMad -->|No| NonBMad{Project Type?}
    NonBMad -->|Learning| Lite[TEA Lite<br/>Just *automate<br/>30 min tutorial]
    NonBMad -->|Serious Project| Solo[TEA Solo<br/>Standalone workflows<br/>Full capabilities]

    BMad -->|Yes| WantTEA{Want TEA?}
    WantTEA -->|No| None[No TEA<br/>Use existing approach<br/>Valid choice]
    WantTEA -->|Yes| ProjectType{New or<br/>Existing?}

    ProjectType -->|New Project| Green[TEA Integrated<br/>Greenfield<br/>Full lifecycle]
    ProjectType -->|Existing Code| Brown[TEA Integrated<br/>Brownfield<br/>Baseline + improve]

    Green --> Compliance{Compliance<br/>Needs?}
    Compliance -->|Yes| Enterprise[Enterprise Track<br/>NFR + audit trails]
    Compliance -->|No| Method[BMad Method Track<br/>Standard quality]

    style Lite fill:#bbdefb,stroke:#1565c0,stroke-width:2px
    style Solo fill:#c5cae9,stroke:#283593,stroke-width:2px
    style None fill:#e0e0e0,stroke:#616161,stroke-width:1px
    style Green fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
    style Brown fill:#fff9c4,stroke:#f57f17,stroke-width:2px
    style Enterprise fill:#f3e5f5,stroke:#6a1b9a,stroke-width:2px
    style Method fill:#e1f5fe,stroke:#01579b,stroke-width:2px
```

**Decision Path Examples:**

- Learning TEA → TEA Lite (blue)
- Non-BMad project → TEA Solo (purple)
- BMad + new project + compliance → Enterprise (purple)
- BMad + existing code → Brownfield (yellow)
- Don't want TEA → No TEA (gray)
### By Project Type

| Project Type | Recommended Model | Why |
|--------------|------------------|-----|
| **New SaaS product** | TEA Integrated (Greenfield) | Full quality operating model from day one |
| **Existing app + new feature** | TEA Integrated (Brownfield) | Improve incrementally while adding features |
| **Bug fix** | TEA Lite or No TEA | Quick flow, minimal overhead |
| **Learning project** | TEA Lite | Learn basics with immediate results |
| **Non-BMad enterprise** | TEA Solo | Quality model without full methodology |
| **High-quality existing tests** | No TEA | Keep what works |

### By Team Maturity

| Team Maturity | Recommended Model | Why |
|---------------|------------------|-----|
| **Beginners** | TEA Lite → TEA Solo | Learn basics, then expand |
| **Intermediate** | TEA Solo or Integrated | Depends on methodology |
| **Advanced** | TEA Integrated or No TEA | Full model or existing expertise |

### By Compliance Needs

| Compliance | Recommended Model | Why |
|------------|------------------|-----|
| **None** | Any model | Choose based on project needs |
| **Light** (internal audit) | TEA Solo or Integrated | Gate decisions helpful |
| **Heavy** (SOC 2, HIPAA) | TEA Integrated (Enterprise) | NFR assessment mandatory |
## Switching Between Models

### Can Change Models Mid-Project

**Scenario:** Start with TEA Lite, expand to TEA Solo

```
Week 1: TEA Lite
- Run *framework
- Run *automate
- Learn basics

Week 2: Expand to TEA Solo
- Add *test-design
- Use *atdd for new features
- Add *test-review

Week 3: Continue expanding
- Add *trace for coverage
- Set up *ci
- Full TEA Solo workflow
```

**Benefit:** Start small, expand as you get comfortable.

### Can Mix Models

**Scenario:** TEA Integrated for main features, No TEA for bug fixes

```
Main features (epics):
- Use full TEA workflow
- Risk assessment, ATDD, quality gates

Bug fixes:
- Skip TEA
- Quick Flow + manual testing
- Move fast

Result: TEA where it adds value, skip where it doesn't
```

**Benefit:** Flexible, pragmatic, not dogmatic.
## Comparison Table

| Aspect | No TEA | TEA Lite | TEA Solo | Integrated (Green) | Integrated (Brown) |
|--------|--------|----------|----------|-------------------|-------------------|
| **BMad Required** | No | No | No | Yes | Yes |
| **Learning Curve** | None | Low | Medium | High | High |
| **Setup Time** | 0 | 30 min | 2 hours | 1 day | 2 days |
| **Workflows Used** | 0 | 2-3 | 4-6 | 8 | 8 |
| **Test Planning** | Manual | Optional | Yes | Systematic | + Regression focus |
| **Quality Gates** | No | No | Optional | Yes | Yes + baseline |
| **NFR Assessment** | No | No | No | Optional | Recommended |
| **Coverage Tracking** | Manual | No | Optional | Yes | Yes + trending |
| **Best For** | Experts | Beginners | Standalone | New projects | Legacy code |
## Real-World Examples

### Example 1: Startup (TEA Lite → TEA Integrated)

**Month 1:** TEA Lite

```
Team: 3 developers, no QA
Testing: Manual only
Decision: Start with TEA Lite

Result:
- Run *framework (Playwright setup)
- Run *automate (20 tests generated)
- Learning TEA basics
```

**Month 3:** TEA Solo

```
Team: Growing to 5 developers
Testing: Automated tests exist
Decision: Expand to TEA Solo

Result:
- Add *test-design (risk assessment)
- Add *atdd (TDD workflow)
- Add *test-review (quality audits)
```

**Month 6:** TEA Integrated

```
Team: 8 developers, 1 QA
Testing: Critical to business
Decision: Full BMad Method + TEA Integrated

Result:
- Full lifecycle integration
- Quality gates before releases
- NFR assessment for enterprise customers
```

### Example 2: Enterprise (TEA Integrated - Brownfield)

**Project:** Legacy banking application

**Challenge:**

- 500 existing tests (50% flaky)
- Adding new features
- SOC 2 compliance required

**Model:** TEA Integrated (Brownfield)

**Phase 2:**

```
- *trace baseline → 45% coverage (lots of gaps)
- Document current state
```

**Phase 3:**

```
- *test-design (system) → identify regression hotspots
- *framework → modernize test infrastructure
- *ci → add selective testing
```

**Phase 4:**

```
Per epic:
- *test-design → focus on regression + new features
- Fix top 10 flaky tests
- *atdd for new features
- *automate for coverage expansion
- *test-review → track quality improvement
- *trace → compare to baseline
```

**Result after 6 months:**

- Coverage: 45% → 85%
- Quality score: 52 → 82
- Flakiness: 50% → 2%
- SOC 2 compliant (traceability + NFR evidence)
### Example 3: Consultancy (TEA Solo)

**Context:** Testing consultancy working with multiple clients

**Challenge:**

- Different clients use different methodologies
- Need a consistent testing approach
- Not always using the BMad Method

**Model:** TEA Solo (bring to any client project)

**Workflow:**

```
Client project 1 (Scrum):
- Import Jira stories
- Run *test-design
- Generate tests with *atdd/*automate
- Deliver quality report with *test-review

Client project 2 (Kanban):
- Import requirements from Notion
- Same TEA workflow
- Consistent quality across clients

Client project 3 (Ad-hoc):
- Document requirements manually
- Same TEA workflow
- Same patterns, different context
```

**Benefit:** A consistent testing approach regardless of client methodology.
## Choosing Your Model

### Start Here Questions

**Question 1:** Are you using the BMad Method?

- **No** → TEA Solo, TEA Lite, or No TEA
- **Yes** → TEA Integrated or No TEA

**Question 2:** Is this a new project?

- **Yes** → TEA Integrated (Greenfield) or TEA Lite
- **No** → TEA Integrated (Brownfield) or TEA Solo

**Question 3:** What's your testing maturity?

- **Beginner** → TEA Lite
- **Intermediate** → TEA Solo or Integrated
- **Advanced** → TEA Integrated or No TEA (already expert)

**Question 4:** Do you need compliance/quality gates?

- **Yes** → TEA Integrated (Enterprise)
- **No** → Any model

**Question 5:** How much time can you invest?

- **30 minutes** → TEA Lite
- **A few hours** → TEA Solo
- **Multiple days** → TEA Integrated

### Recommendation Matrix

| Your Context | Recommended Model | Alternative |
|--------------|------------------|-------------|
| BMad Method + new project | TEA Integrated (Greenfield) | TEA Lite (learning) |
| BMad Method + existing code | TEA Integrated (Brownfield) | TEA Solo |
| Non-BMad + need quality | TEA Solo | TEA Lite |
| Just learning testing | TEA Lite | No TEA (learn basics first) |
| Enterprise + compliance | TEA Integrated (Enterprise) | TEA Solo |
| Established QA team | No TEA | TEA Solo (supplement) |
## Transitioning Between Models

### TEA Lite → TEA Solo

**When:** You've outgrown the beginner approach and need more workflows.

**Steps:**

1. Continue using `*framework` and `*automate`
2. Add `*test-design` for planning
3. Add `*atdd` for the TDD workflow
4. Add `*test-review` for quality audits
5. Add `*trace` for coverage tracking

**Timeline:** 2-4 weeks of gradual expansion

### TEA Solo → TEA Integrated

**When:** You adopt the BMad Method and want full integration.

**Steps:**

1. Install the BMad Method (see the installation guide)
2. Run planning workflows (PRD, architecture)
3. Integrate TEA into Phase 3 (system-level test design)
4. Follow the integrated lifecycle (per-epic workflows)
5. Add release gates (trace Phase 2)

**Timeline:** 1-2 sprints of transition

### TEA Integrated → TEA Solo

**When:** You're moving away from the BMad Method but keeping TEA.

**Steps:**

1. Export BMad artifacts (PRD, architecture, stories)
2. Continue using TEA workflows standalone
3. Skip BMad-specific integration
4. Bring your own requirements to TEA

**Timeline:** Immediate (just skip the BMad workflows)
## Common Patterns

### Pattern 1: TEA Lite for Learning, Then Choose

```
Phase 1 (Week 1-2): TEA Lite
- Learn with *automate on a demo app
- Understand TEA fundamentals
- Low commitment

Phase 2 (Week 3-4): Evaluate
- Try *test-design (planning)
- Try *atdd (TDD)
- See if the value justifies the investment

Phase 3 (Month 2+): Decide
- Valuable → Expand to TEA Solo or Integrated
- Not valuable → Stay with TEA Lite or No TEA
```

### Pattern 2: TEA Solo for Quality, Skip Full Method

```
Team decision:
- Don't want the full BMad Method (too heavyweight)
- Want systematic testing (TEA benefits)

Approach: TEA Solo only
- Use existing project management (Jira, Linear)
- Use TEA for testing only
- Get quality without methodology commitment
```

### Pattern 3: Integrated for Critical, Lite for Non-Critical

```
Critical features (payment, auth):
- Full TEA Integrated workflow
- Risk assessment, ATDD, quality gates
- High confidence required

Non-critical features (UI tweaks):
- TEA Lite or No TEA
- Quick tests, minimal overhead
- Move fast
```
## Technical Implementation

Each model uses different TEA workflows. See:

- [TEA Overview](/docs/explanation/features/tea-overview.md) - Model details
- [TEA Command Reference](/docs/reference/tea/commands.md) - Workflow reference
- [TEA Configuration](/docs/reference/tea/configuration.md) - Setup options

## Related Concepts

**Core TEA Concepts:**

- [Risk-Based Testing](/docs/explanation/tea/risk-based-testing.md) - Risk assessment in different models
- [Test Quality Standards](/docs/explanation/tea/test-quality-standards.md) - Quality across all models
- [Knowledge Base System](/docs/explanation/tea/knowledge-base-system.md) - Consistent patterns across models

**Technical Patterns:**

- [Fixture Architecture](/docs/explanation/tea/fixture-architecture.md) - Infrastructure in different models
- [Network-First Patterns](/docs/explanation/tea/network-first-patterns.md) - Reliability in all models

**Overview:**

- [TEA Overview](/docs/explanation/features/tea-overview.md) - 5 engagement models with cheat sheets
- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - Design philosophy

## Practical Guides

**Getting Started:**

- [TEA Lite Quickstart Tutorial](/docs/tutorials/getting-started/tea-lite-quickstart.md) - Model 3: TEA Lite

**Use-Case Guides:**

- [Using TEA with Existing Tests](/docs/how-to/brownfield/use-tea-with-existing-tests.md) - Model 5: Brownfield
- [Running TEA for Enterprise](/docs/how-to/enterprise/use-tea-for-enterprise.md) - Enterprise integration

**All Workflow Guides:**

- [How to Run Test Design](/docs/how-to/workflows/run-test-design.md) - Used in TEA Solo and Integrated
- [How to Run ATDD](/docs/how-to/workflows/run-atdd.md)
- [How to Run Automate](/docs/how-to/workflows/run-automate.md)
- [How to Run Test Review](/docs/how-to/workflows/run-test-review.md)
- [How to Run Trace](/docs/how-to/workflows/run-trace.md)

## Reference

- [TEA Command Reference](/docs/reference/tea/commands.md) - All workflows explained
- [TEA Configuration](/docs/reference/tea/configuration.md) - Config per model
- [Glossary](/docs/reference/glossary/index.md#test-architect-tea-concepts) - TEA Lite, TEA Solo, TEA Integrated terms

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)
@ -1,457 +0,0 @@
---
title: "Fixture Architecture Explained"
description: Understanding TEA's pure function → fixture → composition pattern for reusable test utilities
---

# Fixture Architecture Explained

Fixture architecture is TEA's pattern for building reusable, testable, and composable test utilities. The core principle: build pure functions first, wrap them in framework fixtures second.

## Overview

**The Pattern:**

1. Write the utility as a pure function (unit-testable)
2. Wrap it in a framework fixture (Playwright, Cypress)
3. Compose fixtures with mergeTests (combine capabilities)
4. Package for reuse across projects

**Why this order?**

- Pure functions are easier to test
- Fixtures depend on the framework (less portable)
- Composition happens at the fixture level
- Reusability is maximized
### Fixture Architecture Flow

```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'fontSize':'14px'}}}%%
flowchart TD
    Start([Testing Need]) --> Pure[Step 1: Pure Function<br/>helpers/api-request.ts]
    Pure -->|Unit testable<br/>Framework agnostic| Fixture[Step 2: Fixture Wrapper<br/>fixtures/api-request.ts]
    Fixture -->|Injects framework<br/>dependencies| Compose[Step 3: Composition<br/>fixtures/index.ts]
    Compose -->|mergeTests| Use[Step 4: Use in Tests<br/>tests/**.spec.ts]

    Pure -.->|Can test in isolation| UnitTest[Unit Tests<br/>No framework needed]
    Fixture -.->|Reusable pattern| Other[Other Projects<br/>Package export]
    Compose -.->|Combine utilities| Multi[Multiple Fixtures<br/>One test]

    style Pure fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
    style Fixture fill:#fff3e0,stroke:#e65100,stroke-width:2px
    style Compose fill:#f3e5f5,stroke:#6a1b9a,stroke-width:2px
    style Use fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
    style UnitTest fill:#c8e6c9,stroke:#2e7d32,stroke-width:1px
    style Other fill:#c8e6c9,stroke:#2e7d32,stroke-width:1px
    style Multi fill:#c8e6c9,stroke:#2e7d32,stroke-width:1px
```

**Benefits at Each Step:**

1. **Pure Function:** Testable, portable, reusable
2. **Fixture:** Framework integration, clean API
3. **Composition:** Combine capabilities, flexible
4. **Usage:** Simple imports, type-safe
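The composition idea behind steps 2 and 3 can be previewed with a framework-free sketch. The code below models, in plain TypeScript with nothing installed, how an `extend`-style helper registers a named fixture and how a merge step combines two independently built fixture sets; in real projects this role is played by Playwright's `test.extend()` and `mergeTests()`, and every name and shape here is an illustrative stand-in, not the actual Playwright API.

```typescript
// Framework-free model of fixture composition. Playwright's test.extend()
// and mergeTests() provide the real mechanism; everything below is an
// illustrative stand-in for how capabilities combine.

type FixtureSet = Record<string, unknown>;

// Step 2 analogue: add a named capability to a base fixture set.
function extend<B extends FixtureSet, E extends FixtureSet>(
  base: B,
  extra: E,
): B & E {
  return { ...base, ...extra };
}

// Step 3 analogue: combine independently built fixture sets into one.
function mergeFixtures<A extends FixtureSet, B extends FixtureSet>(
  a: A,
  b: B,
): A & B {
  return { ...a, ...b };
}

// Two small fixture sets built separately...
const apiFixtures = extend({}, {
  apiRequest: (url: string): string => `GET ${url} -> 200`,
});
const authFixtures = extend({}, {
  login: (user: string): string => `session for ${user}`,
});

// ...and composed so one test can use both capabilities.
const test = mergeFixtures(apiFixtures, authFixtures);

console.log(test.apiRequest("/api/users")); // GET /api/users -> 200
console.log(test.login("alice"));           // session for alice
```

With Playwright itself, the equivalent composition is `mergeTests(apiTest, authTest)` imported from `@playwright/test`.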
## The Problem

### Framework-First Approach (Common Anti-Pattern)

```typescript
// ❌ Bad: Built as a fixture from the start
export const test = base.extend({
  apiRequest: async ({ request }, use) => {
    await use(async (options) => {
      const response = await request.fetch(options.url, {
        method: options.method,
        data: options.data
      });

      if (!response.ok()) {
        throw new Error(`API request failed: ${response.status()}`);
      }

      return response.json();
    });
  }
});
```

**Problems:**

- Cannot unit test (requires a Playwright context)
- Tied to the framework (not reusable in other tools)
- Hard to compose with other fixtures
- Difficult to mock when testing the utility itself
### Copy-Paste Utilities

```typescript
// test-1.spec.ts
test('test 1', async ({ request }) => {
  const response = await request.post('/api/users', { data: {...} });
  const body = await response.json();
  if (!response.ok()) throw new Error('Failed');
  // ... repeated in every test
});

// test-2.spec.ts
test('test 2', async ({ request }) => {
  const response = await request.post('/api/users', { data: {...} });
  const body = await response.json();
  if (!response.ok()) throw new Error('Failed');
  // ... same code repeated
});
```

**Problems:**

- Code duplication (violates DRY)
- Inconsistent error handling
- Hard to update (change 50 tests)
- No shared behavior
## The Solution: Three-Step Pattern

### Step 1: Pure Function

```typescript
// helpers/api-request.ts

export interface ApiRequestParams {
  request: {
    fetch(
      url: string,
      options: { method: string; data?: unknown; headers?: Record<string, string> }
    ): Promise<{ ok(): boolean; status(): number; json(): Promise<unknown> }>;
  };
  method: string;
  url: string;
  data?: unknown;
  headers?: Record<string, string>;
}

export interface ApiResponse {
  status: number;
  body: unknown;
}

/**
 * Make API request with automatic error handling
 * Pure function - no framework dependencies
 */
export async function apiRequest({
  request, // Passed in (dependency injection)
  method,
  url,
  data,
  headers = {}
}: ApiRequestParams): Promise<ApiResponse> {
  const response = await request.fetch(url, {
    method,
    data,
    headers
  });

  if (!response.ok()) {
    throw new Error(`API request failed: ${response.status()}`);
  }

  return {
    status: response.status(),
    body: await response.json()
  };
}

// ✅ Can unit test this function! e.g. in api-request.spec.ts with Vitest:
import { describe, it, expect, vi } from 'vitest';

describe('apiRequest', () => {
  it('should throw on non-OK response', async () => {
    const mockRequest = {
      fetch: vi.fn().mockResolvedValue({ ok: () => false, status: () => 500 })
    };

    await expect(apiRequest({
      request: mockRequest,
      method: 'GET',
      url: '/api/test'
    })).rejects.toThrow('API request failed: 500');
  });
});
```

**Benefits:**

- Unit testable (mock dependencies)
- Framework-agnostic (works with any HTTP client)
- Easy to reason about (pure function)
- Portable (can use in Node scripts, CLI tools)

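The portability claim can be demonstrated outside any test framework. The sketch below is illustrative: it re-declares the pure helper against a minimal client interface (a stand-in for Playwright's request context, not the real type) and calls it from a plain Node script with a stub client.

```typescript
// Hypothetical sketch: because the pure helper only depends on an injected
// client with a Playwright-like fetch(), it also runs in a plain Node script.
// MinimalClient and the stub below are illustrative, not the real types.
interface MinimalResponse {
  ok(): boolean;
  status(): number;
  json(): Promise<unknown>;
}

interface MinimalClient {
  fetch(
    url: string,
    opts: { method: string; data?: unknown; headers?: Record<string, string> }
  ): Promise<MinimalResponse>;
}

async function apiRequest(params: {
  request: MinimalClient;
  method: string;
  url: string;
  data?: unknown;
  headers?: Record<string, string>;
}): Promise<{ status: number; body: unknown }> {
  const { request, method, url, data, headers = {} } = params;
  const response = await request.fetch(url, { method, data, headers });
  if (!response.ok()) {
    throw new Error(`API request failed: ${response.status()}`);
  }
  return { status: response.status(), body: await response.json() };
}

// Stub client standing in for Playwright's APIRequestContext:
const stub: MinimalClient = {
  fetch: async () => ({
    ok: () => true,
    status: () => 200,
    json: async () => ({ id: 1 })
  })
};

apiRequest({ request: stub, method: 'GET', url: '/api/users/1' }).then(
  ({ status, body }) => console.log(status, JSON.stringify(body))
);
```

Swapping the stub for a thin wrapper around `fetch` or `axios` lets the same helper back CLI tooling or seed scripts unchanged.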
### Step 2: Fixture Wrapper

```typescript
// fixtures/api-request.ts
import { test as base } from '@playwright/test';
import { apiRequest as apiRequestFn } from '../helpers/api-request';

// The wrapped function no longer takes `request` - the fixture injects it
type ApiRequestFixture = (
  params: Omit<Parameters<typeof apiRequestFn>[0], 'request'>
) => ReturnType<typeof apiRequestFn>;

/**
 * Playwright fixture wrapping the pure function
 */
export const test = base.extend<{ apiRequest: ApiRequestFixture }>({
  apiRequest: async ({ request }, use) => {
    // Inject framework dependency (request)
    await use((params) => apiRequestFn({ request, ...params }));
  }
});

export { expect } from '@playwright/test';
```

**Benefits:**

- Fixture provides framework context (request)
- Pure function handles logic
- Clean separation of concerns
- Can swap frameworks (Cypress, etc.) by changing the wrapper only

### Step 3: Composition with mergeTests

```typescript
// fixtures/index.ts
import { mergeTests } from '@playwright/test';
import { test as apiRequestTest } from './api-request';
import { test as authSessionTest } from './auth-session';
import { test as logTest } from './log';

/**
 * Compose all fixtures into one test
 */
export const test = mergeTests(
  apiRequestTest,
  authSessionTest,
  logTest
);

export { expect } from '@playwright/test';
```

**Usage:**

```typescript
// tests/profile.spec.ts
import { test, expect } from '../support/fixtures';

test('should update profile', async ({ apiRequest, authToken, log }) => {
  log.info('Starting profile update test');

  // Use API request fixture (matches pure function signature)
  const { status, body } = await apiRequest({
    method: 'PATCH',
    url: '/api/profile',
    data: { name: 'New Name' },
    headers: { Authorization: `Bearer ${authToken}` }
  });

  expect(status).toBe(200);
  expect(body.name).toBe('New Name');

  log.info('Profile updated successfully');
});
```

**Note:** This example uses the vanilla pure function signature (`url`, `data`). Playwright Utils uses different parameter names (`path`, `body`). See [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md) for the utilities API.

**Note:** `authToken` requires auth-session fixture setup with provider configuration. See the [auth-session documentation](https://seontechnologies.github.io/playwright-utils/auth-session.html).

**Benefits:**

- Use multiple fixtures in one test
- No manual composition needed
- Type-safe (TypeScript knows all fixture types)
- Clean imports

## How It Works in TEA

### TEA Generates This Pattern

When you run `*framework` with `tea_use_playwright_utils: true`:

**TEA scaffolds:**

```
tests/
├── support/
│   ├── helpers/            # Pure functions
│   │   ├── api-request.ts
│   │   └── auth-session.ts
│   └── fixtures/           # Framework wrappers
│       ├── api-request.ts
│       ├── auth-session.ts
│       └── index.ts        # Composition
└── e2e/
    └── example.spec.ts     # Uses composed fixtures
```

### TEA Reviews Against This Pattern

When you run `*test-review`:

**TEA checks:**

- Are utilities pure functions? ✓
- Are fixtures minimal wrappers? ✓
- Is composition used? ✓
- Can utilities be unit tested? ✓

## Package Export Pattern

### Make Fixtures Reusable Across Projects

**Option 1: Build Your Own (Vanilla)**

```json
// package.json
{
  "name": "@company/test-utils",
  "exports": {
    "./api-request": "./fixtures/api-request.ts",
    "./auth-session": "./fixtures/auth-session.ts",
    "./log": "./fixtures/log.ts"
  }
}
```

**Usage:**

```typescript
import { test as apiTest } from '@company/test-utils/api-request';
import { test as authTest } from '@company/test-utils/auth-session';
import { mergeTests } from '@playwright/test';

export const test = mergeTests(apiTest, authTest);
```

**Option 2: Use Playwright Utils (Recommended)**

```bash
npm install -D @seontechnologies/playwright-utils
```

**Usage:**

```typescript
import { test as base, mergeTests } from '@playwright/test';
import { test as apiRequestFixture } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { createAuthFixtures } from '@seontechnologies/playwright-utils/auth-session';

const authFixtureTest = base.extend(createAuthFixtures());
export const test = mergeTests(apiRequestFixture, authFixtureTest);
// Production-ready utilities, battle-tested!
```

**Note:** Auth-session requires provider configuration. See the [auth-session setup guide](https://seontechnologies.github.io/playwright-utils/auth-session.html).

**Why Playwright Utils:**

- Already built, tested, and maintained
- Consistent patterns across projects
- 11 utilities available (API, auth, network, logging, files)
- Community support and documentation
- Regular updates and improvements

**When to Build Your Own:**

- Company-specific patterns
- Custom authentication systems
- Unique requirements not covered by the utilities

## Comparison: Good vs Bad Patterns

### Anti-Pattern: God Fixture

```typescript
// ❌ Bad: Everything in one fixture
export const test = base.extend({
  testUtils: async ({ page, request, context }, use) => {
    await use({
      // 50 different methods crammed into one fixture
      apiRequest: async (...) => { },
      login: async (...) => { },
      createUser: async (...) => { },
      deleteUser: async (...) => { },
      uploadFile: async (...) => { },
      // ... 45 more methods
    });
  }
});
```

**Problems:**

- Cannot test individual utilities
- Cannot compose (all-or-nothing)
- Cannot reuse specific utilities
- Hard to maintain (1000+ line file)

### Good Pattern: Single-Concern Fixtures

```typescript
// ✅ Good: One concern per fixture

// api-request.ts
export const test = base.extend({ apiRequest });

// auth-session.ts
export const test = base.extend({ authSession });

// log.ts
export const test = base.extend({ log });

// Compose as needed
import { mergeTests } from '@playwright/test';
export const test = mergeTests(apiRequestTest, authSessionTest, logTest);
```

**Benefits:**

- Each fixture is unit-testable
- Compose only what you need
- Reuse individual fixtures
- Easy to maintain (small files)

## Technical Implementation

For detailed fixture architecture patterns, see the knowledge base:

- [Knowledge Base Index - Architecture & Fixtures](/docs/reference/tea/knowledge-base.md)

## When to Use This Pattern

### Always Use For:

**Reusable utilities:**

- API request helpers
- Authentication handlers
- File operations
- Network mocking

**Test infrastructure:**

- Shared fixtures across teams
- Packaged utilities (playwright-utils)
- Company-wide test standards

### Consider Skipping For:

**One-off test setup:**

```typescript
// Simple one-time setup - inline is fine
test.beforeEach(async ({ page }) => {
  await page.goto('/');
  await page.click('#accept-cookies');
});
```

**Test-specific helpers:**

```typescript
// Used in one test file only - keep local
function createTestUser(name: string) {
  return { name, email: `${name}@test.com` };
}
```

## Related Concepts

**Core TEA Concepts:**

- [Test Quality Standards](/docs/explanation/tea/test-quality-standards.md) - Quality standards fixtures enforce
- [Knowledge Base System](/docs/explanation/tea/knowledge-base-system.md) - Fixture patterns in knowledge base

**Technical Patterns:**

- [Network-First Patterns](/docs/explanation/tea/network-first-patterns.md) - Network fixtures explained
- [Risk-Based Testing](/docs/explanation/tea/risk-based-testing.md) - Fixture complexity matches risk

**Overview:**

- [TEA Overview](/docs/explanation/features/tea-overview.md) - Fixture architecture in workflows
- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - Why fixtures matter

## Practical Guides

**Setup Guides:**

- [How to Set Up Test Framework](/docs/how-to/workflows/setup-test-framework.md) - TEA scaffolds fixtures
- [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md) - Production-ready fixtures

**Workflow Guides:**

- [How to Run ATDD](/docs/how-to/workflows/run-atdd.md) - Using fixtures in tests
- [How to Run Automate](/docs/how-to/workflows/run-automate.md) - Fixture composition examples

## Reference

- [TEA Command Reference](/docs/reference/tea/commands.md) - `*framework` command
- [Knowledge Base Index](/docs/reference/tea/knowledge-base.md) - Fixture architecture fragments
- [Glossary](/docs/reference/glossary/index.md#test-architect-tea-concepts) - Fixture architecture term

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)

---
title: "Knowledge Base System Explained"
description: Understanding how TEA uses tea-index.csv for context engineering and consistent test quality
---

# Knowledge Base System Explained

TEA's knowledge base system is context engineering in practice: it automatically loads domain-specific standards into the AI context so tests are consistently high quality regardless of prompt variation.

## Overview

**The Problem:** AI without context produces inconsistent results.

**Traditional approach:**

```
User: "Write tests for login"
AI: [Generates tests with random quality]
- Sometimes uses hard waits
- Sometimes uses good patterns
- Inconsistent across sessions
- Quality depends on prompt
```

**TEA with knowledge base:**

```
User: "Write tests for login"
TEA: [Loads test-quality.md, network-first.md, auth-session.md]
TEA: [Generates tests following established patterns]
- Always uses network-first patterns
- Always uses proper fixtures
- Consistent across all sessions
- Quality independent of prompt
```

**Result:** Systematic quality, not random chance.

## The Problem

### Prompt-Driven Testing = Inconsistency

**Session 1:**

```
User: "Write tests for profile editing"

AI: [No context loaded]
// Generates test with hard waits
await page.waitForTimeout(3000);
```

**Session 2:**

```
User: "Write comprehensive tests for profile editing with best practices"

AI: [Still no systematic context]
// Generates test with some improvements, but still issues
await page.waitForSelector('.success', { timeout: 10000 });
```

**Session 3:**

```
User: "Write tests using network-first patterns and proper fixtures"

AI: [Better prompt, but still reinventing patterns]
// Generates test with network-first, but inconsistent with other tests
```

**Problem:** Quality depends on prompt-engineering skill, with no consistency across sessions.

### Knowledge Drift

Without a knowledge base:

- Team A uses pattern X
- Team B uses pattern Y
- Both work, but inconsistently
- No single source of truth
- Patterns drift over time

## The Solution: tea-index.csv Manifest

### How It Works

**1. Manifest Defines Fragments**

`src/modules/bmm/testarch/tea-index.csv`:

```csv
id,name,description,tags,fragment_file
test-quality,Test Quality,Execution limits and isolation rules,quality;standards,knowledge/test-quality.md
network-first,Network-First Safeguards,Intercept-before-navigate workflow,network;stability,knowledge/network-first.md
fixture-architecture,Fixture Architecture,Composable fixture patterns,fixtures;architecture,knowledge/fixture-architecture.md
```

**2. Workflow Loads Relevant Fragments**

When the user runs `*atdd`:

```
TEA reads tea-index.csv
Identifies fragments needed for ATDD:
- test-quality.md (quality standards)
- network-first.md (avoid flakiness)
- component-tdd.md (TDD patterns)
- fixture-architecture.md (reusable fixtures)
- data-factories.md (test data)

Loads only these 5 fragments (not all 33)
Generates tests following these patterns
```

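The selection step above can be sketched in a few lines. This is a hypothetical illustration only, assuming the manifest columns shown in the example; the real TEA loader may match fragments on workflow context rather than raw tag lookup.

```typescript
// Hypothetical sketch of fragment selection from a tea-index.csv manifest.
// Column layout follows the example manifest above; the matching logic
// here (tag intersection) is an assumption for illustration.
const manifest = `id,name,description,tags,fragment_file
test-quality,Test Quality,Execution limits and isolation rules,quality;standards,knowledge/test-quality.md
network-first,Network-First Safeguards,Intercept-before-navigate workflow,network;stability,knowledge/network-first.md
fixture-architecture,Fixture Architecture,Composable fixture patterns,fixtures;architecture,knowledge/fixture-architecture.md`;

interface Fragment {
  id: string;
  tags: string[];
  file: string;
}

function parseManifest(csv: string): Fragment[] {
  const [, ...rows] = csv.trim().split('\n'); // skip the header row
  return rows.map((row) => {
    const [id, , , tags, file] = row.split(',');
    return { id, tags: tags.split(';'), file };
  });
}

function selectFragments(fragments: Fragment[], wanted: string[]): Fragment[] {
  // Keep only fragments sharing at least one tag with the workflow's needs
  return fragments.filter((f) => f.tags.some((tag) => wanted.includes(tag)));
}

const selected = selectFragments(parseManifest(manifest), ['quality', 'network']);
console.log(selected.map((f) => f.id)); // ids of the matching fragments
```

Only the matching fragment files would then be read into the AI context, which is what keeps the loaded context small and focused.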
**3. Consistent Output**

Every time `*atdd` runs:

- Same fragments loaded
- Same patterns applied
- Same quality standards
- Consistent test structure

**Result:** Tests look like they were written by the same expert, every time.

### Knowledge Base Loading Diagram

```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'fontSize':'14px'}}}%%
flowchart TD
    User([User: *atdd]) --> Workflow[TEA Workflow<br/>Triggered]
    Workflow --> Read[Read Manifest<br/>tea-index.csv]

    Read --> Identify{Identify Relevant<br/>Fragments for ATDD}

    Identify -->|Needed| L1[✓ test-quality.md]
    Identify -->|Needed| L2[✓ network-first.md]
    Identify -->|Needed| L3[✓ component-tdd.md]
    Identify -->|Needed| L4[✓ data-factories.md]
    Identify -->|Needed| L5[✓ fixture-architecture.md]

    Identify -.->|Skip| S1[✗ contract-testing.md]
    Identify -.->|Skip| S2[✗ burn-in.md]
    Identify -.->|Skip| S3[+ 26 other fragments]

    L1 --> Context[AI Context<br/>5 fragments loaded]
    L2 --> Context
    L3 --> Context
    L4 --> Context
    L5 --> Context

    Context --> Gen[Generate Tests<br/>Following patterns]
    Gen --> Out([Consistent Output<br/>Same quality every time])

    style User fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
    style Read fill:#fff3e0,stroke:#e65100,stroke-width:2px
    style L1 fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
    style L2 fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
    style L3 fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
    style L4 fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
    style L5 fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
    style S1 fill:#e0e0e0,stroke:#616161,stroke-width:1px
    style S2 fill:#e0e0e0,stroke:#616161,stroke-width:1px
    style S3 fill:#e0e0e0,stroke:#616161,stroke-width:1px
    style Context fill:#f3e5f5,stroke:#6a1b9a,stroke-width:3px
    style Out fill:#4caf50,stroke:#1b5e20,stroke-width:3px,color:#fff
```

## Fragment Structure

### Anatomy of a Fragment

Each fragment follows this structure:

````markdown
# Fragment Name

## Principle
[One sentence - what is this pattern?]

## Rationale
[Why use this instead of alternatives?]
Why this pattern exists
Problems it solves
Benefits it provides

## Pattern Examples

### Example 1: Basic Usage
```code
[Runnable code example]
```
[Explanation of example]

### Example 2: Advanced Pattern
```code
[More complex example]
```
[Explanation]

## Anti-Patterns

### Don't Do This
```code
[Bad code example]
```
[Why it's bad]
[What breaks]

## Related Patterns
- [Link to related fragment]
````

<!-- markdownlint-disable MD024 -->
### Example: test-quality.md Fragment

````markdown
# Test Quality

## Principle
Tests must be deterministic, isolated, explicit, focused, and fast.

## Rationale
Tests that fail randomly, depend on each other, or take too long lose team trust.
[... detailed explanation ...]

## Pattern Examples

### Example 1: Deterministic Test
```typescript
// ✅ Wait for actual response, not timeout
const promise = page.waitForResponse(matcher);
await page.click('button');
await promise;
```

### Example 2: Isolated Test
```typescript
// ✅ Self-cleaning test
test('test', async ({ page }) => {
  const userId = await createTestUser();
  // ... test logic ...
  await deleteTestUser(userId); // Cleanup
});
```

## Anti-Patterns

### Hard Waits
```typescript
// ❌ Non-deterministic
await page.waitForTimeout(3000);
```
[Why this causes flakiness]
````

**Total:** 24.5 KB, 12 code examples
<!-- markdownlint-enable MD024 -->

## How TEA Uses the Knowledge Base

### Workflow-Specific Loading

**Different workflows load different fragments:**

| Workflow | Fragments Loaded | Purpose |
|----------|------------------|---------|
| `*framework` | fixture-architecture, playwright-config, fixtures-composition | Infrastructure patterns |
| `*test-design` | test-quality, test-priorities-matrix, risk-governance | Planning standards |
| `*atdd` | test-quality, component-tdd, network-first, data-factories | TDD patterns |
| `*automate` | test-quality, test-levels-framework, selector-resilience | Comprehensive generation |
| `*test-review` | All quality/resilience/debugging fragments | Full audit patterns |
| `*ci` | ci-burn-in, burn-in, selective-testing | CI/CD optimization |

**Benefit:** Only load what's needed (focused context, no bloat).

### Dynamic Fragment Selection

TEA doesn't load all 33 fragments at once:

```
User runs: *atdd for authentication feature

TEA analyzes context:
- Feature type: Authentication
- Relevant fragments:
  - test-quality.md (always loaded)
  - auth-session.md (auth patterns)
  - network-first.md (avoid flakiness)
  - email-auth.md (if email-based auth)
  - data-factories.md (test users)

Skips:
- contract-testing.md (not relevant)
- feature-flags.md (not relevant)
- file-utils.md (not relevant)

Result: 5 relevant fragments loaded, 28 skipped
```

**Benefit:** Focused context = better results, lower token usage.

## Context Engineering in Practice

### Example: Consistent Test Generation

**Without Knowledge Base (Vanilla Playwright, Random Quality):**

```
Session 1: User runs *atdd
AI: [Guesses patterns from general knowledge]

Generated:
test('api test', async ({ request }) => {
  const response = await request.get('/api/users');
  await page.waitForTimeout(2000); // Hard wait
  const users = await response.json();
  // Random quality
});

Session 2: User runs *atdd (different day)
AI: [Different random patterns]

Generated:
test('api test', async ({ request }) => {
  const response = await request.get('/api/users');
  const users = await response.json();
  // Better but inconsistent
});

Result: Inconsistent quality, random patterns
```

**With Knowledge Base (TEA + Playwright Utils):**

```
Session 1: User runs *atdd
TEA: [Loads test-quality.md, network-first.md, api-request.md from tea-index.csv]

Generated:
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';

test('should fetch users', async ({ apiRequest }) => {
  const { status, body } = await apiRequest({
    method: 'GET',
    path: '/api/users'
  }).validateSchema(UsersSchema); // Chained validation

  expect(status).toBe(200);
  expect(body).toBeInstanceOf(Array);
});

Session 2: User runs *atdd (different day)
TEA: [Loads same fragments from tea-index.csv]

Generated: Identical pattern, same quality

Result: Systematic quality, established patterns (ALWAYS uses the apiRequest utility when playwright-utils is enabled)
```

**Key Difference:**

- **Without KB:** Random patterns, inconsistent APIs
- **With KB:** Always uses the `apiRequest` utility, always validates schemas, always returns `{ status, body }`

### Example: Test Review Consistency

**Without Knowledge Base:**

```
*test-review session 1:
"This test looks okay" [50 issues missed]

*test-review session 2:
"This test has some issues" [Different issues flagged]

Result: Inconsistent feedback
```

**With Knowledge Base:**

```
*test-review session 1:
[Loads all quality fragments]
Flags: 12 hard waits, 5 conditionals (based on test-quality.md)

*test-review session 2:
[Loads same fragments]
Flags: Same issues with same explanations

Result: Consistent, reliable feedback
```

## Maintaining the Knowledge Base

### When to Add a Fragment

**Good reasons:**

- Pattern is used across multiple workflows
- Standard is non-obvious (needs documentation)
- Team asks "how should we handle X?" repeatedly
- New tool integration (e.g., a new testing library)

**Bad reasons:**

- One-off pattern (document in the test file instead)
- Obvious pattern (everyone knows this)
- Experimental (not proven yet)

### Fragment Quality Standards

**Good fragment:**

- Principle stated in one sentence
- Rationale explains the why clearly
- 3+ pattern examples with code
- Anti-patterns shown (what not to do)
- Self-contained (minimal dependencies)

**Example size:** 10-30 KB optimal

### Updating Existing Fragments

**When to update:**

- Pattern evolved (better approach discovered)
- Tool updated (new Playwright API)
- Team feedback (pattern unclear)
- Bug in example code

**How to update:**

1. Edit the fragment markdown file
2. Update the examples
3. Test with affected workflows
4. Ensure no breaking changes

**No need to update tea-index.csv** unless the description or tags change.

## Benefits of Knowledge Base System

### 1. Consistency

**Before:** Test quality varies by who wrote it
**After:** All tests follow the same patterns (TEA-generated or reviewed)

### 2. Onboarding

**Before:** New team member reads 20 documents, asks 50 questions
**After:** New team member runs `*atdd`, sees patterns in generated code, learns by example

### 3. Quality Gates

**Before:** "Is this test good?" → subjective opinion
**After:** `*test-review` → objective score against the knowledge base

### 4. Pattern Evolution

**Before:** Update tests manually across 100 files
**After:** Update the fragment once; all new tests use the new pattern

### 5. Cross-Project Reuse

**Before:** Reinvent patterns for each project
**After:** Same fragments across all BMad projects (consistency at scale)

## Comparison: With vs Without Knowledge Base

### Scenario: Testing Async Background Job

**Without Knowledge Base:**

Developer 1:
```typescript
// Uses hard wait
await page.click('button');
await page.waitForTimeout(10000); // Hope job finishes
```

Developer 2:
```typescript
// Uses polling
await page.click('button');
for (let i = 0; i < 10; i++) {
  const status = await page.locator('.status').textContent();
  if (status === 'complete') break;
  await page.waitForTimeout(1000);
}
```

Developer 3:
```typescript
// Uses waitForSelector
await page.click('button');
await page.waitForSelector('.success', { timeout: 30000 });
```

**Result:** 3 different patterns, all suboptimal.

**With Knowledge Base (recurse.md fragment):**

All developers:
```typescript
import { expect } from '@playwright/test';
import { test } from '@seontechnologies/playwright-utils/fixtures';

test('job completion', async ({ apiRequest, recurse }) => {
  // Start async job
  const { body: job } = await apiRequest({
    method: 'POST',
    path: '/api/jobs'
  });

  // Poll until complete (correct API: command, predicate, options)
  const result = await recurse(
    () => apiRequest({ method: 'GET', path: `/api/jobs/${job.id}` }),
    (response) => response.body.status === 'completed', // response.body from apiRequest
    {
      timeout: 30000,
      interval: 2000,
      log: 'Waiting for job to complete'
    }
  );

  expect(result.body.status).toBe('completed');
});
```

**Result:** Consistent pattern using the correct playwright-utils API (command, predicate, options).

## Technical Implementation

For details on the knowledge base index, see:
- [Knowledge Base Index](/docs/reference/tea/knowledge-base.md)
- [TEA Configuration](/docs/reference/tea/configuration.md)

## Related Concepts

**Core TEA Concepts:**
- [Test Quality Standards](/docs/explanation/tea/test-quality-standards.md) - Standards in knowledge base
- [Risk-Based Testing](/docs/explanation/tea/risk-based-testing.md) - Risk patterns in knowledge base
- [Engagement Models](/docs/explanation/tea/engagement-models.md) - Knowledge base across all models

**Technical Patterns:**
- [Fixture Architecture](/docs/explanation/tea/fixture-architecture.md) - Fixture patterns in knowledge base
- [Network-First Patterns](/docs/explanation/tea/network-first-patterns.md) - Network patterns in knowledge base

**Overview:**
- [TEA Overview](/docs/explanation/features/tea-overview.md) - Knowledge base in workflows
- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - **Foundation: Context engineering philosophy** (why the knowledge base solves AI test problems)

## Practical Guides

**All Workflow Guides Use the Knowledge Base:**
- [How to Run Test Design](/docs/how-to/workflows/run-test-design.md)
- [How to Run ATDD](/docs/how-to/workflows/run-atdd.md)
- [How to Run Automate](/docs/how-to/workflows/run-automate.md)
- [How to Run Test Review](/docs/how-to/workflows/run-test-review.md)

**Integration:**
- [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md) - PW-Utils in knowledge base

## Reference

- [Knowledge Base Index](/docs/reference/tea/knowledge-base.md) - Complete fragment index
- [TEA Command Reference](/docs/reference/tea/commands.md) - Which workflows load which fragments
- [TEA Configuration](/docs/reference/tea/configuration.md) - How config affects fragment loading
- [Glossary](/docs/reference/glossary/index.md#test-architect-tea-concepts) - Context engineering, knowledge fragment terms

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)

---
title: "Network-First Patterns Explained"
description: Understanding how TEA eliminates test flakiness by waiting for actual network responses
---

# Network-First Patterns Explained

Network-first patterns are TEA's solution to test flakiness. Instead of guessing how long to wait with fixed timeouts, wait for the actual network event that causes the UI change.

## Overview

**The Core Principle:**
UI changes because APIs respond. Wait for the API response, not an arbitrary timeout.

**Traditional approach:**
```typescript
await page.click('button');
await page.waitForTimeout(3000); // Hope 3 seconds is enough
await expect(page.locator('.success')).toBeVisible();
```

**Network-first approach:**
```typescript
const responsePromise = page.waitForResponse(
  resp => resp.url().includes('/api/submit') && resp.ok()
);
await page.click('button');
await responsePromise; // Wait for the actual response
await expect(page.locator('.success')).toBeVisible();
```

**Result:** Deterministic tests that wait exactly as long as needed.

## The Problem

### Hard Waits Create Flakiness

```typescript
// ❌ The flaky test pattern
test('should submit form', async ({ page }) => {
  await page.fill('#name', 'Test User');
  await page.click('button[type="submit"]');

  await page.waitForTimeout(2000); // Wait 2 seconds

  await expect(page.locator('.success')).toBeVisible();
});
```

**Why this fails:**
- **Fast network:** Wastes 1.5 seconds waiting
- **Slow network:** Not enough time, test fails
- **CI environment:** Slower than local, fails randomly
- **Under load:** API takes 3 seconds, test fails

**Result:** "Works on my machine" syndrome, flaky CI.

### The Timeout Escalation Trap

```typescript
// Developer sees flaky test
await page.waitForTimeout(2000); // Failed in CI

// Increases timeout
await page.waitForTimeout(5000); // Still fails sometimes

// Increases again
await page.waitForTimeout(10000); // Now it passes... slowly

// Problem: now EVERY run of this test waits 10 seconds
// A suite that took 5 minutes now takes 30 minutes
```

**Result:** Slow, still-flaky tests.

### Race Conditions

```typescript
// ❌ Navigate-then-wait race condition
test('should load dashboard data', async ({ page }) => {
  await page.goto('/dashboard'); // Navigation starts

  // Race condition! API might not have responded yet
  await expect(page.locator('.data-table')).toBeVisible();
});
```

**What happens:**
1. `goto()` starts navigation
2. Page loads HTML
3. JavaScript requests `/api/dashboard`
4. Test checks for `.data-table` BEFORE the API responds
5. Test fails intermittently

**Result:** "Sometimes it works, sometimes it doesn't."

## The Solution: Intercept-Before-Navigate

### Wait for Response Before Asserting

```typescript
// ✅ Good: network-first pattern
test('should load dashboard data', async ({ page }) => {
  // Set up the promise BEFORE navigation
  const dashboardPromise = page.waitForResponse(
    resp => resp.url().includes('/api/dashboard') && resp.ok()
  );

  // Navigate
  await page.goto('/dashboard');

  // Wait for the API response
  const response = await dashboardPromise;
  const data = await response.json();

  // Now assert the UI
  await expect(page.locator('.data-table')).toBeVisible();
  await expect(page.locator('.data-table tr')).toHaveCount(data.items.length);
});
```

**Why this works:**
- Wait is set up BEFORE navigation (no race)
- Waits for the actual API response (deterministic)
- No fixed timeout (fast when the API is fast)
- Validates the API response (catches backend errors)

**With Playwright Utils (Even Cleaner):**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';

test('should load dashboard data', async ({ page, interceptNetworkCall }) => {
  // Set up interception BEFORE navigation
  const dashboardCall = interceptNetworkCall({
    method: 'GET',
    url: '**/api/dashboard'
  });

  // Navigate
  await page.goto('/dashboard');

  // Wait for the API response (automatic JSON parsing)
  const { status, responseJson: data } = await dashboardCall;

  // Validate the API response
  expect(status).toBe(200);
  expect(data.items).toBeDefined();

  // Assert the UI matches the API data
  await expect(page.locator('.data-table')).toBeVisible();
  await expect(page.locator('.data-table tr')).toHaveCount(data.items.length);
});
```

**Playwright Utils Benefits:**
- Automatic JSON parsing (no `await response.json()`)
- Returns a `{ status, responseJson, requestJson }` structure
- Cleaner API (no need to check `resp.ok()`)
- Same intercept-before-navigate pattern

### Intercept-Before-Navigate Pattern

**Key insight:** Set up the wait BEFORE triggering the action.

```typescript
// ✅ Pattern: Intercept → Action → Await

// 1. Intercept (set up the wait)
const promise = page.waitForResponse(matcher);

// 2. Action (trigger the request)
await page.click('button');

// 3. Await (wait for the actual response)
await promise;
```

**Why this order:**
- `waitForResponse()` starts listening immediately
- Then trigger the action that makes the request
- Then wait for the promise to resolve
- No race condition possible

**Intercept-Before-Navigate Flow**

```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'fontSize':'14px'}}}%%
sequenceDiagram
    participant Test
    participant Playwright
    participant Browser
    participant API

    rect rgb(200, 230, 201)
        Note over Test,Playwright: ✅ CORRECT: Intercept First
        Test->>Playwright: 1. waitForResponse(matcher)
        Note over Playwright: Starts listening for response
        Test->>Browser: 2. click('button')
        Browser->>API: 3. POST /api/submit
        API-->>Browser: 4. 200 OK {success: true}
        Browser-->>Playwright: 5. Response captured
        Test->>Playwright: 6. await promise
        Playwright-->>Test: 7. Returns response
        Note over Test: No race condition!
    end

    rect rgb(255, 205, 210)
        Note over Test,API: ❌ WRONG: Action First
        Test->>Browser: 1. click('button')
        Browser->>API: 2. POST /api/submit
        API-->>Browser: 3. 200 OK (already happened!)
        Test->>Playwright: 4. waitForResponse(matcher)
        Note over Test,Playwright: Too late - response already occurred
        Note over Test: Race condition! Test hangs or fails
    end
```

**Correct Order (Green):**
1. Set up the listener (`waitForResponse`)
2. Trigger the action (`click`)
3. Wait for the response (`await promise`)

**Wrong Order (Red):**
1. Trigger the action first
2. Set up the listener too late
3. The response already happened - missed!

## How It Works in TEA

### TEA Generates Network-First Tests

**Vanilla Playwright:**
```typescript
// When you run *atdd or *automate, TEA generates:

test('should create user', async ({ page }) => {
  // TEA automatically includes a network wait
  const createUserPromise = page.waitForResponse(
    resp => resp.url().includes('/api/users') &&
            resp.request().method() === 'POST' &&
            resp.ok()
  );

  await page.fill('#name', 'Test User');
  await page.click('button[type="submit"]');

  const response = await createUserPromise;
  const user = await response.json();

  // Validate both API and UI
  expect(user.id).toBeDefined();
  await expect(page.locator('.success')).toContainText(user.name);
});
```

**With Playwright Utils (if `tea_use_playwright_utils: true`):**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';

test('should create user', async ({ page, interceptNetworkCall }) => {
  // TEA uses interceptNetworkCall for cleaner interception
  const createUserCall = interceptNetworkCall({
    method: 'POST',
    url: '**/api/users'
  });

  await page.getByLabel('Name').fill('Test User');
  await page.getByRole('button', { name: 'Submit' }).click();

  // Wait for the response (automatic JSON parsing)
  const { status, responseJson: user } = await createUserCall;

  // Validate both API and UI
  expect(status).toBe(201);
  expect(user.id).toBeDefined();
  await expect(page.locator('.success')).toContainText(user.name);
});
```

**Playwright Utils Benefits:**
- Automatic JSON parsing (`responseJson` ready to use)
- No manual `await response.json()`
- Returns a `{ status, responseJson }` structure
- Cleaner, more readable code

### TEA Reviews for Hard Waits

When you run `*test-review`:

````markdown
## Critical Issue: Hard Wait Detected

**File:** tests/e2e/submit.spec.ts:45
**Issue:** Using `page.waitForTimeout(3000)`
**Severity:** Critical (causes flakiness)

**Current Code:**
```typescript
await page.click('button');
await page.waitForTimeout(3000); // ❌
```

**Fix:**
```typescript
const responsePromise = page.waitForResponse(
  resp => resp.url().includes('/api/submit') && resp.ok()
);
await page.click('button');
await responsePromise; // ✅
```

**Why:** Hard waits are non-deterministic. Use network-first patterns.
````

## Pattern Variations

### Basic Response Wait

**Vanilla Playwright:**
```typescript
// Wait for any successful response
const promise = page.waitForResponse(resp => resp.ok());
await page.click('button');
await promise;
```

**With Playwright Utils:**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';

test('basic wait', async ({ page, interceptNetworkCall }) => {
  const responseCall = interceptNetworkCall({ url: '**' }); // Match any
  await page.click('button');
  const { status } = await responseCall;
  expect(status).toBe(200);
});
```

### Specific URL Match

**Vanilla Playwright:**
```typescript
// Wait for a specific endpoint
const promise = page.waitForResponse(
  resp => resp.url().includes('/api/users/123')
);
await page.goto('/user/123');
await promise;
```

**With Playwright Utils:**
```typescript
test('specific URL', async ({ page, interceptNetworkCall }) => {
  const userCall = interceptNetworkCall({ url: '**/api/users/123' });
  await page.goto('/user/123');
  const { status, responseJson } = await userCall;
  expect(status).toBe(200);
});
```

### Method + Status Match

**Vanilla Playwright:**
```typescript
// Wait for a POST that returns 201
const promise = page.waitForResponse(
  resp =>
    resp.url().includes('/api/users') &&
    resp.request().method() === 'POST' &&
    resp.status() === 201
);
await page.click('button[type="submit"]');
await promise;
```

**With Playwright Utils:**
```typescript
test('method and status', async ({ page, interceptNetworkCall }) => {
  const createCall = interceptNetworkCall({
    method: 'POST',
    url: '**/api/users'
  });
  await page.click('button[type="submit"]');
  const { status, responseJson } = await createCall;
  expect(status).toBe(201); // Explicit status check
});
```

### Multiple Responses

**Vanilla Playwright:**
```typescript
// Wait for multiple API calls
const [usersResp, postsResp] = await Promise.all([
  page.waitForResponse(resp => resp.url().includes('/api/users')),
  page.waitForResponse(resp => resp.url().includes('/api/posts')),
  page.goto('/dashboard') // Triggers both requests
]);

const users = await usersResp.json();
const posts = await postsResp.json();
```

**With Playwright Utils:**
```typescript
test('multiple responses', async ({ page, interceptNetworkCall }) => {
  const usersCall = interceptNetworkCall({ url: '**/api/users' });
  const postsCall = interceptNetworkCall({ url: '**/api/posts' });

  await page.goto('/dashboard'); // Triggers both

  const [{ responseJson: users }, { responseJson: posts }] = await Promise.all([
    usersCall,
    postsCall
  ]);

  expect(users).toBeInstanceOf(Array);
  expect(posts).toBeInstanceOf(Array);
});
```

### Validate Response Data

**Vanilla Playwright:**
```typescript
// Verify the API response before asserting the UI
const promise = page.waitForResponse(
  resp => resp.url().includes('/api/checkout') && resp.ok()
);

await page.click('button:has-text("Complete Order")');

const response = await promise;
const order = await response.json();

// Response validation
expect(order.status).toBe('confirmed');
expect(order.total).toBeGreaterThan(0);

// UI validation
await expect(page.locator('.order-confirmation')).toContainText(order.id);
```

**With Playwright Utils:**
```typescript
test('validate response data', async ({ page, interceptNetworkCall }) => {
  const checkoutCall = interceptNetworkCall({
    method: 'POST',
    url: '**/api/checkout'
  });

  await page.click('button:has-text("Complete Order")');

  const { status, responseJson: order } = await checkoutCall;

  // Response validation (automatic JSON parsing)
  expect(status).toBe(200);
  expect(order.status).toBe('confirmed');
  expect(order.total).toBeGreaterThan(0);

  // UI validation
  await expect(page.locator('.order-confirmation')).toContainText(order.id);
});
```

## Advanced Patterns

### HAR Recording for Offline Testing

**Vanilla Playwright (Manual HAR Handling):**

```typescript
// First run: record mode (saves the HAR file)
test('offline testing - RECORD', async ({ page, context }) => {
  // Record mode: save network traffic to a HAR file
  await context.routeFromHAR('./hars/dashboard.har', {
    url: '**/api/**',
    update: true // Update the HAR file
  });

  await page.goto('/dashboard');
  // All network traffic saved to dashboard.har
});

// Subsequent runs: playback mode (uses the saved HAR)
test('offline testing - PLAYBACK', async ({ page, context }) => {
  // Playback mode: use the saved network traffic
  await context.routeFromHAR('./hars/dashboard.har', {
    url: '**/api/**',
    update: false // Use the existing HAR, no network calls
  });

  await page.goto('/dashboard');
  // Uses recorded responses, no backend needed
});
```

**With Playwright Utils (Automatic HAR Management):**
```typescript
import { test } from '@seontechnologies/playwright-utils/network-recorder/fixtures';

// Record mode: set the environment variable
process.env.PW_NET_MODE = 'record';

test('should work offline', async ({ page, context, networkRecorder }) => {
  await networkRecorder.setup(context); // Handles HAR automatically

  await page.goto('/dashboard');
  await page.click('#add-item');
  // All network traffic recorded, CRUD operations detected
});
```

**Switch to playback:**
```bash
# Playback mode (offline)
PW_NET_MODE=playback npx playwright test
# Uses the HAR file, no backend needed!
```

**Playwright Utils Benefits:**
- Automatic HAR file management (naming, paths)
- CRUD operation detection (stateful mocking)
- Environment variable control (easy switching)
- Works for complex interactions (create, update, delete)
- No manual route configuration

### Network Request Interception

**Vanilla Playwright:**
```typescript
test('should handle API error', async ({ page }) => {
  // Manual route setup
  await page.route('**/api/users', (route) => {
    route.fulfill({
      status: 500,
      body: JSON.stringify({ error: 'Internal server error' })
    });
  });

  await page.goto('/users');

  const response = await page.waitForResponse('**/api/users');
  const error = await response.json();

  expect(error.error).toContain('Internal server');
  await expect(page.locator('.error-message')).toContainText('Server error');
});
```

**With Playwright Utils:**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';

test('should handle API error', async ({ page, interceptNetworkCall }) => {
  // Stub the API to return an error (set up BEFORE navigation)
  const usersCall = interceptNetworkCall({
    method: 'GET',
    url: '**/api/users',
    fulfillResponse: {
      status: 500,
      body: { error: 'Internal server error' }
    }
  });

  await page.goto('/users');

  // Wait for the mocked response and access the parsed data
  const { status, responseJson } = await usersCall;

  expect(status).toBe(500);
  expect(responseJson.error).toContain('Internal server');
  await expect(page.locator('.error-message')).toContainText('Server error');
});
```

**Playwright Utils Benefits:**
- Automatic JSON parsing (`responseJson` ready to use)
- Returns a promise resolving to `{ status, responseJson, requestJson }`
- No need to pass `page` (auto-injected by the fixture)
- Glob pattern matching (simpler than regex)
- A single declarative call (setup + wait in one)

## Comparison: Traditional vs Network-First

### Loading Dashboard Data

**Traditional (Flaky):**
```typescript
test('dashboard loads data', async ({ page }) => {
  await page.goto('/dashboard');
  await page.waitForTimeout(2000); // ❌ Magic number
  await expect(page.locator('table tr')).toHaveCount(5);
});
```

**Failure modes:**
- API takes 2.5s → test fails
- API returns 3 items, not 5 → hard to debug (which issue is it?)
- CI slower than local → fails in CI only

**Network-First (Deterministic):**
```typescript
test('dashboard loads data', async ({ page }) => {
  const apiPromise = page.waitForResponse(
    resp => resp.url().includes('/api/dashboard') && resp.ok()
  );

  await page.goto('/dashboard');

  const response = await apiPromise;
  const { items } = await response.json();

  // Validate the API response
  expect(items).toHaveLength(5);

  // Validate the UI matches the API
  await expect(page.locator('table tr')).toHaveCount(items.length);
});
```

**Benefits:**
- Waits exactly as long as needed (100 ms or 5 s, it doesn't matter)
- Validates the API response (catches backend errors)
- Validates the UI matches the API (catches frontend bugs)
- Works in any environment (local, CI, staging)

**With Playwright Utils (Even Better):**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';

test('dashboard loads data', async ({ page, interceptNetworkCall }) => {
  const dashboardCall = interceptNetworkCall({
    method: 'GET',
    url: '**/api/dashboard'
  });

  await page.goto('/dashboard');

  const { status, responseJson: { items } } = await dashboardCall;

  // Validate the API response (automatic JSON parsing)
  expect(status).toBe(200);
  expect(items).toHaveLength(5);

  // Validate the UI matches the API
  await expect(page.locator('table tr')).toHaveCount(items.length);
});
```

**Additional Benefits:**
- No manual `await response.json()` (automatic parsing)
- Cleaner destructuring of nested data
- Consistent API across all network calls

### Form Submission

**Traditional (Flaky):**
```typescript
test('form submission', async ({ page }) => {
  await page.fill('#email', 'test@example.com');
  await page.click('button[type="submit"]');
  await page.waitForTimeout(3000); // ❌ Hope it's enough
  await expect(page.locator('.success')).toBeVisible();
});
```

**Network-First (Deterministic):**
```typescript
test('form submission', async ({ page }) => {
  const submitPromise = page.waitForResponse(
    resp => resp.url().includes('/api/submit') &&
            resp.request().method() === 'POST' &&
            resp.ok()
  );

  await page.fill('#email', 'test@example.com');
  await page.click('button[type="submit"]');

  const response = await submitPromise;
  const result = await response.json();

  expect(result.success).toBe(true);
  await expect(page.locator('.success')).toBeVisible();
});
```

**With Playwright Utils:**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';

test('form submission', async ({ page, interceptNetworkCall }) => {
  const submitCall = interceptNetworkCall({
    method: 'POST',
    url: '**/api/submit'
  });

  await page.getByLabel('Email').fill('test@example.com');
  await page.getByRole('button', { name: 'Submit' }).click();

  const { status, responseJson: result } = await submitCall;

  // Automatic JSON parsing, no manual await
  expect(status).toBe(200);
  expect(result.success).toBe(true);
  await expect(page.locator('.success')).toBeVisible();
});
```

**Progression:**
- Traditional: hard waits (flaky)
- Network-First (Vanilla): `waitForResponse` (deterministic)
- Network-First (PW-Utils): `interceptNetworkCall` (deterministic + cleaner API)

## Common Misconceptions

### "I Already Use waitForSelector"

```typescript
// This is still a hard wait in disguise
await page.click('button');
await page.waitForSelector('.success', { timeout: 5000 });
```

**Problem:** Waiting for the DOM, not for the API call that caused the DOM change.

**Better:**
```typescript
await page.waitForResponse(matcher); // Wait for the root cause
await page.waitForSelector('.success'); // Then validate the UI
```

### "My Tests Are Fast, Why Add Complexity?"

**Short-term:** Tests are fast locally.

**Long-term problems:**
- Different environments (CI is slower)
- Under load (the API is slower)
- Network variability (random)
- Scaling the test suite (100 → 1000 tests)

**Network-first prevents these issues before they appear.**

### "Too Much Boilerplate"

**Problem:** `waitForResponse` is verbose and repeated in every test.

**Solution:** Use the Playwright Utils `interceptNetworkCall` fixture, which is built in and reduces the boilerplate.

**Vanilla Playwright (Repetitive):**

```typescript
test('test 1', async ({ page }) => {
  const promise = page.waitForResponse(
    resp => resp.url().includes('/api/submit') && resp.ok()
  );
  await page.click('button');
  await promise;
});

test('test 2', async ({ page }) => {
  const promise = page.waitForResponse(
    resp => resp.url().includes('/api/load') && resp.ok()
  );
  await page.click('button');
  await promise;
});
// The same pattern is repeated in every test
```
**With Playwright Utils (Cleaner):**

```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';

test('test 1', async ({ page, interceptNetworkCall }) => {
  const submitCall = interceptNetworkCall({ url: '**/api/submit' });
  await page.click('button');
  const { status, responseJson } = await submitCall;
  expect(status).toBe(200);
});

test('test 2', async ({ page, interceptNetworkCall }) => {
  const loadCall = interceptNetworkCall({ url: '**/api/load' });
  await page.click('button');
  const { responseJson } = await loadCall;
  // Automatic JSON parsing, cleaner API
});
```
**Benefits:**

- Less boilerplate (the fixture handles the complexity)
- Automatic JSON parsing
- Glob pattern matching (`**/api/**`)
- Consistent API across all tests

See [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md#intercept-network-call) for setup.

## Technical Implementation

For detailed network-first patterns, see the knowledge base:

- [Knowledge Base Index - Network & Reliability](/docs/reference/tea/knowledge-base.md)
- [Complete Knowledge Base Index](/docs/reference/tea/knowledge-base.md)
## Related Concepts

**Core TEA Concepts:**

- [Test Quality Standards](/docs/explanation/tea/test-quality-standards.md) - Determinism requires network-first
- [Risk-Based Testing](/docs/explanation/tea/risk-based-testing.md) - High-risk features need reliable tests

**Technical Patterns:**

- [Fixture Architecture](/docs/explanation/tea/fixture-architecture.md) - Network utilities as fixtures
- [Knowledge Base System](/docs/explanation/tea/knowledge-base-system.md) - Network patterns in the knowledge base

**Overview:**

- [TEA Overview](/docs/explanation/features/tea-overview.md) - Network-first in workflows
- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - Why flakiness matters

## Practical Guides

**Workflow Guides:**

- [How to Run Test Review](/docs/how-to/workflows/run-test-review.md) - Review for hard waits
- [How to Run ATDD](/docs/how-to/workflows/run-atdd.md) - Generate network-first tests
- [How to Run Automate](/docs/how-to/workflows/run-automate.md) - Expand with network patterns

**Use-Case Guides:**

- [Using TEA with Existing Tests](/docs/how-to/brownfield/use-tea-with-existing-tests.md) - Fix flaky legacy tests

**Customization:**

- [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md) - Network utilities (recorder, interceptor, error monitor)

## Reference

- [TEA Command Reference](/docs/reference/tea/commands.md) - All workflows use network-first
- [Knowledge Base Index](/docs/reference/tea/knowledge-base.md) - Network-first fragment
- [Glossary](/docs/reference/glossary/index.md#test-architect-tea-concepts) - Network-first pattern term

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)

---
title: "Risk-Based Testing Explained"
description: Understanding how TEA uses probability × impact scoring to prioritize testing effort
---

# Risk-Based Testing Explained

Risk-based testing is TEA's core principle: testing depth scales with business impact. Instead of testing everything equally, focus effort where failures hurt most.
## Overview

Traditional testing approaches treat all features equally:

- Every feature gets the same test coverage
- Same level of scrutiny regardless of impact
- No systematic prioritization
- Testing becomes a checkbox exercise

**Risk-based testing asks:**

- What's the probability this will fail?
- What's the impact if it does fail?
- How much testing is appropriate for this risk level?

**Result:** Testing effort matches business criticality.
## The Problem

### Equal Testing for Unequal Risk

```markdown
Feature A: User login (critical path, millions of users)
Feature B: Export to PDF (nice-to-have, rarely used)

Traditional approach:
- Both get 10 tests
- Both get same review scrutiny
- Both take same development time

Problem: Wasting effort on low-impact features while under-testing critical paths.
```

### No Objective Prioritization

```markdown
PM: "We need more tests for checkout"
QA: "How many tests?"
PM: "I don't know... a lot?"
QA: "How do we know when we have enough?"
PM: "When it feels safe?"

Problem: Subjective decisions, no data, political debates.
```
## The Solution: Probability × Impact Scoring

### Risk Score = Probability × Impact

**Probability** (How likely to fail?)

- **1 (Low):** Stable, well-tested, simple logic
- **2 (Medium):** Moderate complexity, some unknowns
- **3 (High):** Complex, untested, many edge cases

**Impact** (How bad if it fails?)

- **1 (Low):** Minor inconvenience, few users affected
- **2 (Medium):** Degraded experience, workarounds exist
- **3 (High):** Critical path broken, business impact

**Score Range:** 1-9
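
The scoring rule reduces to one multiplication plus banding. A minimal sketch (the function and band names are illustrative, not part of TEA's tooling):

```typescript
type Level = 1 | 2 | 3;

// Risk score = probability × impact (possible values: 1, 2, 3, 4, 6, 9)
function riskScore(probability: Level, impact: Level): number {
  return probability * impact;
}

// Bands follow the legend used throughout this document
function riskBand(score: number): 'CRITICAL' | 'HIGH' | 'MEDIUM' | 'LOW' {
  if (score === 9) return 'CRITICAL'; // blocks release
  if (score >= 6) return 'HIGH';      // mitigation required
  if (score >= 4) return 'MEDIUM';    // mitigation recommended
  return 'LOW';                       // optional mitigation
}

console.log(riskScore(3, 3), riskBand(riskScore(3, 3))); // 9 CRITICAL
console.log(riskScore(2, 3), riskBand(riskScore(2, 3))); // 6 HIGH
```

Because probability and impact are each capped at 3, only six distinct scores are possible, which is why the bands below are so coarse.
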
**Risk Scoring Matrix**

```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'fontSize':'14px'}}}%%
graph TD
    subgraph Matrix[" "]
        direction TB
        subgraph Impact3["Impact: HIGH (3)"]
            P1I3["Score: 3<br/>Low Risk"]
            P2I3["Score: 6<br/>HIGH RISK<br/>Mitigation Required"]
            P3I3["Score: 9<br/>CRITICAL<br/>Blocks Release"]
        end
        subgraph Impact2["Impact: MEDIUM (2)"]
            P1I2["Score: 2<br/>Low Risk"]
            P2I2["Score: 4<br/>Medium Risk"]
            P3I2["Score: 6<br/>HIGH RISK<br/>Mitigation Required"]
        end
        subgraph Impact1["Impact: LOW (1)"]
            P1I1["Score: 1<br/>Low Risk"]
            P2I1["Score: 2<br/>Low Risk"]
            P3I1["Score: 3<br/>Low Risk"]
        end
    end

    Prob1["Probability: LOW (1)"] -.-> P1I1
    Prob1 -.-> P1I2
    Prob1 -.-> P1I3

    Prob2["Probability: MEDIUM (2)"] -.-> P2I1
    Prob2 -.-> P2I2
    Prob2 -.-> P2I3

    Prob3["Probability: HIGH (3)"] -.-> P3I1
    Prob3 -.-> P3I2
    Prob3 -.-> P3I3

    style P3I3 fill:#f44336,stroke:#b71c1c,stroke-width:3px,color:#fff
    style P2I3 fill:#ff9800,stroke:#e65100,stroke-width:2px,color:#000
    style P3I2 fill:#ff9800,stroke:#e65100,stroke-width:2px,color:#000
    style P2I2 fill:#fff9c4,stroke:#f57f17,stroke-width:1px,color:#000
    style P1I1 fill:#c8e6c9,stroke:#2e7d32,stroke-width:1px,color:#000
    style P2I1 fill:#c8e6c9,stroke:#2e7d32,stroke-width:1px,color:#000
    style P3I1 fill:#c8e6c9,stroke:#2e7d32,stroke-width:1px,color:#000
    style P1I2 fill:#c8e6c9,stroke:#2e7d32,stroke-width:1px,color:#000
    style P1I3 fill:#c8e6c9,stroke:#2e7d32,stroke-width:1px,color:#000
```

**Legend:**

- 🔴 Red (Score 9): CRITICAL - Blocks release
- 🟠 Orange (Score 6-8): HIGH RISK - Mitigation required
- 🟡 Yellow (Score 4-5): MEDIUM - Mitigation recommended
- 🟢 Green (Score 1-3): LOW - Optional mitigation
### Scoring Examples

**Score 9 (Critical):**

```
Feature: Payment processing
Probability: 3 (complex third-party integration)
Impact: 3 (broken payments = lost revenue)
Score: 3 × 3 = 9

Action: Extensive testing required
- E2E tests for all payment flows
- API tests for all payment scenarios
- Error handling for all failure modes
- Security testing for payment data
- Load testing for high traffic
- Monitoring and alerts
```

**Score 1 (Low):**

```
Feature: Change profile theme color
Probability: 1 (simple UI toggle)
Impact: 1 (cosmetic only)
Score: 1 × 1 = 1

Action: Minimal testing
- One E2E smoke test
- Skip edge cases
- No API tests needed
```

**Score 6 (Medium-High):**

```
Feature: User profile editing
Probability: 2 (moderate complexity)
Impact: 3 (users can't update info)
Score: 2 × 3 = 6

Action: Focused testing
- E2E test for happy path
- API tests for CRUD operations
- Validation testing
- Skip low-value edge cases
```
## How It Works in TEA

### 1. Risk Categories

TEA assesses risk across six categories:

**TECH** - Technical debt, architecture fragility

```
Example: Migrating from REST to GraphQL
Probability: 3 (major architectural change)
Impact: 3 (affects all API consumers)
Score: 9 - Extensive integration testing required
```

**SEC** - Security vulnerabilities

```
Example: Adding OAuth integration
Probability: 2 (third-party dependency)
Impact: 3 (auth breach = data exposure)
Score: 6 - Security testing mandatory
```

**PERF** - Performance degradation

```
Example: Adding real-time notifications
Probability: 2 (WebSocket complexity)
Impact: 2 (slower experience)
Score: 4 - Load testing recommended
```

**DATA** - Data integrity, corruption

```
Example: Database migration
Probability: 2 (schema changes)
Impact: 3 (data loss unacceptable)
Score: 6 - Data validation tests required
```

**BUS** - Business logic errors

```
Example: Discount calculation
Probability: 2 (complex business rules)
Impact: 3 (wrong prices = revenue loss)
Score: 6 - Business logic tests mandatory
```

**OPS** - Operational issues

```
Example: Logging system update
Probability: 1 (straightforward)
Impact: 2 (debugging harder without logs)
Score: 2 - Basic smoke test sufficient
```
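
The per-category assessments above can be captured as plain records and filtered by score. A small sketch (the type and helper names are assumptions for illustration, not TEA's actual data model):

```typescript
type RiskCategory = 'TECH' | 'SEC' | 'PERF' | 'DATA' | 'BUS' | 'OPS';

interface RiskItem {
  category: RiskCategory;
  description: string;
  probability: 1 | 2 | 3;
  impact: 1 | 2 | 3;
}

// Items scoring >= 6 require a documented mitigation plan
function needsMitigation(items: RiskItem[]): RiskItem[] {
  return items.filter(item => item.probability * item.impact >= 6);
}

const assessment: RiskItem[] = [
  { category: 'SEC', description: 'OAuth integration', probability: 2, impact: 3 },      // score 6
  { category: 'PERF', description: 'Real-time notifications', probability: 2, impact: 2 }, // score 4
  { category: 'OPS', description: 'Logging system update', probability: 1, impact: 2 },  // score 2
];

console.log(needsMitigation(assessment).map(item => item.category)); // [ 'SEC' ]
```

Keeping the raw probability and impact (rather than just the product) preserves the reasoning behind each score for later re-assessment.
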
### 2. Test Priorities (P0-P3)

Risk scores inform test priorities (but aren't the only factor):

**P0 - Critical Path**

- **Risk Scores:** Typically 6-9 (high risk)
- **Other Factors:** Revenue impact, security-critical, regulatory compliance, frequent usage
- **Coverage Target:** 100%
- **Test Levels:** E2E + API
- **Example:** Login, checkout, payment processing

**P1 - High Value**

- **Risk Scores:** Typically 4-6 (medium-high risk)
- **Other Factors:** Core user journeys, complex logic, integration points
- **Coverage Target:** 90%
- **Test Levels:** API + selective E2E
- **Example:** Profile editing, search, filters

**P2 - Medium Value**

- **Risk Scores:** Typically 2-4 (medium risk)
- **Other Factors:** Secondary features, admin functionality, reporting
- **Coverage Target:** 50%
- **Test Levels:** API happy path only
- **Example:** Export features, advanced settings

**P3 - Low Value**

- **Risk Scores:** Typically 1-2 (low risk)
- **Other Factors:** Rarely used, nice-to-have, cosmetic
- **Coverage Target:** 20% (smoke test)
- **Test Levels:** E2E smoke test only
- **Example:** Theme customization, experimental features

**Note:** Priorities consider risk scores plus business context (usage frequency, user impact, etc.). See [Test Priorities Matrix](/docs/reference/tea/knowledge-base.md#test-priorities-matrix) for complete criteria.
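
As a rough rule of thumb, score bands map to priority bands as sketched below. The boundary scores overlap in the lists above, so this illustrative helper simply picks the higher band, and it deliberately ignores the business-context factors:

```typescript
type Priority = 'P0' | 'P1' | 'P2' | 'P3';

// Typical priority from risk score alone; real prioritization also weighs
// usage frequency, user impact, and other business context
function typicalPriority(score: number): Priority {
  if (score >= 6) return 'P0'; // critical path
  if (score >= 4) return 'P1'; // high value
  if (score >= 2) return 'P2'; // medium value
  return 'P3';                 // low value
}

console.log(typicalPriority(9)); // P0
console.log(typicalPriority(4)); // P1
console.log(typicalPriority(1)); // P3
```
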
### 3. Mitigation Plans

**Scores ≥6 require documented mitigation:**

```markdown
## Risk Mitigation

**Risk:** Payment integration failure (Score: 9)

**Mitigation Plan:**
- Create comprehensive test suite (20+ tests)
- Add payment sandbox environment
- Implement retry logic with idempotency
- Add monitoring and alerts
- Document rollback procedure

**Owner:** Backend team lead
**Deadline:** Before production deployment
**Status:** In progress
```

**Gate Rules:**

- **Score = 9** (Critical): Mandatory FAIL - blocks release without mitigation
- **Score 6-8** (High): Requires mitigation plan, becomes CONCERNS if incomplete
- **Score 4-5** (Medium): Mitigation recommended but not required
- **Score 1-3** (Low): No mitigation needed
## Comparison: Traditional vs Risk-Based

### Traditional Approach

```typescript
// Test everything equally
describe('User profile', () => {
  test('should display name');
  test('should display email');
  test('should display phone');
  test('should display address');
  test('should display bio');
  test('should display avatar');
  test('should display join date');
  test('should display last login');
  test('should display theme preference');
  test('should display language preference');
  // 10 tests for profile display (all equal priority)
});
```

**Problems:**

- Same effort for critical (name) vs trivial (theme)
- No guidance on what matters
- Wastes time on low-value tests

### Risk-Based Approach

```typescript
// Test based on risk

describe('User profile - Critical (P0)', () => {
  test('should display name and email'); // Score: 9 (identity critical)
  test('should allow editing name and email');
  test('should validate email format');
  test('should prevent unauthorized edits');
  // 4 focused tests on high-risk areas
});

describe('User profile - High Value (P1)', () => {
  test('should upload avatar'); // Score: 6 (users care about this)
  test('should update bio');
  // 2 tests for high-value features
});

// P2: Theme preference - single smoke test
// P3: Last login display - skip (read-only, low value)
```

**Benefits:**

- 6 focused tests vs 10 unfocused tests
- Effort matches business impact
- Clear priorities guide development
- No wasted effort on trivial features
## When to Use Risk-Based Testing

### Always Use For:

**Enterprise projects:**

- High stakes (revenue, compliance, security)
- Many features competing for test effort
- Need objective prioritization

**Large codebases:**

- Can't test everything exhaustively
- Need to focus limited QA resources
- Want data-driven decisions

**Regulated industries:**

- Must justify testing decisions
- Auditors want risk assessments
- Compliance requires evidence

### Consider Skipping For:

**Tiny projects:**

- 5 features total
- Can test everything thoroughly
- Risk scoring is overhead

**Prototypes:**

- Throw-away code
- Speed over quality
- Learning experiments
## Real-World Example

### Scenario: E-Commerce Checkout Redesign

**Feature:** Redesigning the checkout flow from 5 steps to 3 steps

**Risk Assessment:**

| Component | Probability | Impact | Score | Priority | Testing |
|-----------|-------------|--------|-------|----------|---------|
| **Payment processing** | 3 | 3 | 9 | P0 | 15 E2E + 20 API tests |
| **Order validation** | 2 | 3 | 6 | P1 | 5 E2E + 10 API tests |
| **Shipping calculation** | 2 | 2 | 4 | P1 | 3 E2E + 8 API tests |
| **Promo code validation** | 2 | 2 | 4 | P1 | 2 E2E + 5 API tests |
| **Gift message** | 1 | 1 | 1 | P3 | 1 E2E smoke test |

**Test Budget:** 40 hours

**Allocation:**

- Payment (Score 9): 20 hours (50%)
- Order validation (Score 6): 8 hours (20%)
- Shipping (Score 4): 6 hours (15%)
- Promo codes (Score 4): 4 hours (10%)
- Gift message (Score 1): 2 hours (5%)

**Result:** 50% of effort on the highest-risk feature (payment), proportional allocation for the others.
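
One mechanical starting point for such a split is to weight hours by risk score. A sketch (illustrative helper; note that the allocation above additionally over-weights the critical payment item beyond its pure score share):

```typescript
// Split a test budget proportionally to risk scores (illustrative helper)
function allocateHours(budgetHours: number, scores: number[]): number[] {
  const total = scores.reduce((sum, s) => sum + s, 0);
  return scores.map(s => (budgetHours * s) / total);
}

// Checkout redesign: scores 9, 6, 4, 4, 1 with a 40-hour budget
const hours = allocateHours(40, [9, 6, 4, 4, 1]);
// Payment (score 9) gets 15 of the 40 hours under a pure proportional split
console.log(hours);
```

A pure proportional split is only a baseline; critical items are often bumped further by hand, as the table above does for payment.
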
### Without Risk-Based Testing:

**Equal allocation:** 8 hours per component = wasted effort on the gift message, under-testing of payment.

**Result:** Payment bugs slip through (critical), perfect testing of the gift message (trivial).
## Mitigation Strategies by Risk Level

### Score 9: Mandatory Mitigation (Blocks Release)

```markdown
**Gate Impact:** FAIL - Cannot deploy without mitigation

**Actions:**
- Comprehensive test suite (E2E, API, security)
- Multiple test environments (dev, staging, prod-mirror)
- Load testing and performance validation
- Security audit and penetration testing
- Monitoring and alerting
- Rollback plan documented
- On-call rotation assigned

**Cannot deploy until score is mitigated below 9.**
```

### Score 6-8: Required Mitigation (Gate: CONCERNS)

```markdown
**Gate Impact:** CONCERNS - Can deploy with documented mitigation plan

**Actions:**
- Targeted test suite (happy path + critical errors)
- Test environment setup
- Monitoring plan
- Document mitigation and owners

**Can deploy with approved mitigation plan.**
```

### Score 4-5: Recommended Mitigation

```markdown
**Gate Impact:** Advisory - Does not affect gate decision

**Actions:**
- Basic test coverage
- Standard monitoring
- Document known limitations

**Can deploy; mitigation recommended but not required.**
```

### Score 1-3: Optional Mitigation

```markdown
**Gate Impact:** None

**Actions:**
- Smoke test if desired
- Feature flag for easy disable (optional)

**Can deploy without mitigation.**
```
## Technical Implementation

For detailed risk governance patterns, see the knowledge base:

- [Knowledge Base Index - Risk & Gates](/docs/reference/tea/knowledge-base.md)
- [TEA Command Reference - *test-design](/docs/reference/tea/commands.md#test-design)

### Risk Scoring Matrix

TEA uses this framework in `*test-design`:

```
        Impact
        1    2    3
      ┌────┬────┬────┐
    1 │ 1  │ 2  │ 3  │  Low risk
P   2 │ 2  │ 4  │ 6  │  Medium risk
r   3 │ 3  │ 6  │ 9  │  High risk
o     └────┴────┴────┘
b      Low  Med  High
```

### Gate Decision Rules

| Score | Mitigation Required | Gate Impact |
|-------|---------------------|-------------|
| **9** | Mandatory, blocks release | FAIL if no mitigation |
| **6-8** | Required, documented plan | CONCERNS if incomplete |
| **4-5** | Recommended | Advisory only |
| **1-3** | Optional | No impact |
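
These rules can be expressed as a small decision function. A sketch (names are illustrative; a score of 6-8 with a plan is shown as PASS here, though reviewers may still record CONCERNS):

```typescript
type Gate = 'FAIL' | 'CONCERNS' | 'PASS' | 'ADVISORY' | 'NONE';

// Gate decision from risk score plus whether a mitigation plan exists
function gateDecision(score: number, hasMitigationPlan: boolean): Gate {
  if (score === 9) return hasMitigationPlan ? 'CONCERNS' : 'FAIL';
  if (score >= 6) return hasMitigationPlan ? 'PASS' : 'CONCERNS';
  if (score >= 4) return 'ADVISORY'; // mitigation recommended only
  return 'NONE';                     // no gate impact
}

console.log(gateDecision(9, false)); // FAIL
console.log(gateDecision(9, true));  // CONCERNS
console.log(gateDecision(6, false)); // CONCERNS
```
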
**Gate Decision Flow**

```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'fontSize':'14px'}}}%%
flowchart TD
    Start([Risk Assessment]) --> Score{Risk Score?}

    Score -->|Score = 9| Critical[CRITICAL RISK<br/>Score: 9]
    Score -->|Score 6-8| High[HIGH RISK<br/>Score: 6-8]
    Score -->|Score 4-5| Medium[MEDIUM RISK<br/>Score: 4-5]
    Score -->|Score 1-3| Low[LOW RISK<br/>Score: 1-3]

    Critical --> HasMit9{Mitigation<br/>Plan?}
    HasMit9 -->|Yes| Concerns9[CONCERNS ⚠️<br/>Can deploy with plan]
    HasMit9 -->|No| Fail[FAIL ❌<br/>Blocks release]

    High --> HasMit6{Mitigation<br/>Plan?}
    HasMit6 -->|Yes| Pass6[PASS ✅<br/>or CONCERNS ⚠️]
    HasMit6 -->|No| Concerns6[CONCERNS ⚠️<br/>Document plan needed]

    Medium --> Advisory[Advisory Only<br/>No gate impact]
    Low --> NoAction[No Action<br/>Proceed]

    style Critical fill:#f44336,stroke:#b71c1c,stroke-width:3px,color:#fff
    style Fail fill:#d32f2f,stroke:#b71c1c,stroke-width:3px,color:#fff
    style High fill:#ff9800,stroke:#e65100,stroke-width:2px,color:#000
    style Concerns9 fill:#ffc107,stroke:#f57f17,stroke-width:2px,color:#000
    style Concerns6 fill:#ffc107,stroke:#f57f17,stroke-width:2px,color:#000
    style Pass6 fill:#4caf50,stroke:#1b5e20,stroke-width:2px,color:#fff
    style Medium fill:#fff9c4,stroke:#f57f17,stroke-width:1px,color:#000
    style Low fill:#c8e6c9,stroke:#2e7d32,stroke-width:1px,color:#000
    style Advisory fill:#e8f5e9,stroke:#2e7d32,stroke-width:1px,color:#000
    style NoAction fill:#e8f5e9,stroke:#2e7d32,stroke-width:1px,color:#000
```
## Common Misconceptions

### "Risk-Based = Less Testing"

**Wrong:** Risk-based testing often means MORE testing where it matters.

**Example:**

- Traditional: 50 tests spread equally
- Risk-based: 70 tests focused on P0/P1 (more total, better allocated)

### "Low Priority = Skip Testing"

**Wrong:** P3 still gets smoke tests.

**Correct:**

- P3: Smoke test (feature works at all)
- P2: Happy path (feature works correctly)
- P1: Happy path + errors
- P0: Comprehensive (all scenarios)

### "Risk Scores Are Permanent"

**Wrong:** Risk changes over time.

**Correct:**

- Initial launch: Payment is Score 9 (untested integration)
- After 6 months: Payment is Score 6 (proven in production)
- Re-assess risk quarterly
## Related Concepts

**Core TEA Concepts:**

- [Test Quality Standards](/docs/explanation/tea/test-quality-standards.md) - Quality complements risk assessment
- [Engagement Models](/docs/explanation/tea/engagement-models.md) - When risk-based testing matters most
- [Knowledge Base System](/docs/explanation/tea/knowledge-base-system.md) - How risk patterns are loaded

**Technical Patterns:**

- [Fixture Architecture](/docs/explanation/tea/fixture-architecture.md) - Building risk-appropriate test infrastructure
- [Network-First Patterns](/docs/explanation/tea/network-first-patterns.md) - Quality patterns for high-risk features

**Overview:**

- [TEA Overview](/docs/explanation/features/tea-overview.md) - Risk assessment in the TEA lifecycle
- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - Design philosophy

## Practical Guides

**Workflow Guides:**

- [How to Run Test Design](/docs/how-to/workflows/run-test-design.md) - Apply risk scoring
- [How to Run Trace](/docs/how-to/workflows/run-trace.md) - Gate decisions based on risk
- [How to Run NFR Assessment](/docs/how-to/workflows/run-nfr-assess.md) - NFR risk assessment

**Use-Case Guides:**

- [Running TEA for Enterprise](/docs/how-to/enterprise/use-tea-for-enterprise.md) - Enterprise risk management

## Reference

- [TEA Command Reference](/docs/reference/tea/commands.md) - `*test-design`, `*nfr-assess`, `*trace`
- [Knowledge Base Index](/docs/reference/tea/knowledge-base.md) - Risk governance fragments
- [Glossary](/docs/reference/glossary/index.md#test-architect-tea-concepts) - Risk-based testing term

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)

---
title: "Test Quality Standards Explained"
description: Understanding TEA's Definition of Done for deterministic, isolated, and maintainable tests
---

# Test Quality Standards Explained

Test quality standards define what makes a test "good" in TEA. These aren't suggestions - they're the Definition of Done that prevents tests from rotting in review.
## Overview

**TEA's Quality Principles:**

- **Deterministic** - Same result every run
- **Isolated** - No dependencies on other tests
- **Explicit** - Assertions visible in the test body
- **Focused** - Single responsibility, appropriate size
- **Fast** - Execute in reasonable time

**Why these matter:** Tests that violate these principles create maintenance burden, slow down development, and lose team trust.
## The Problem

### Tests That Rot in Review

```typescript
// ❌ The anti-pattern: This test will rot
test('user can do stuff', async ({ page }) => {
  await page.goto('/');
  await page.waitForTimeout(5000); // Non-deterministic

  if (await page.locator('.banner').isVisible()) { // Conditional
    await page.click('.dismiss');
  }

  try { // Try-catch for flow control
    await page.click('#load-more');
  } catch (e) {
    // Silently continue
  }

  // ... 300 more lines of test logic
  // ... no clear assertions
});
```

**What's wrong:**

- **Hard wait** - Flaky, wastes time
- **Conditional** - Non-deterministic behavior
- **Try-catch** - Hides failures
- **Too large** - Hard to maintain
- **Vague name** - Unclear purpose
- **No explicit assertions** - What's being tested?

**Result:** PR review comments: "This test is flaky, please fix" → never merged → test deleted → coverage lost
### AI-Generated Tests Without Standards

AI-generated tests without quality guardrails:

```typescript
// AI generates 50 tests like this:
test('test1', async ({ page }) => {
  await page.goto('/');
  await page.waitForTimeout(3000);
  // ... flaky, vague, redundant
});

test('test2', async ({ page }) => {
  await page.goto('/');
  await page.waitForTimeout(3000);
  // ... duplicates test1
});

// ... 48 more similar tests
```

**Result:** 50 tests, 80% redundant, 90% flaky, 0% trusted by the team - low-quality output that creates maintenance burden.
## The Solution: TEA's Quality Standards

### 1. Determinism (No Flakiness)

**Rule:** Test produces same result every run.

**Requirements:**

- ❌ No hard waits (`waitForTimeout`)
- ❌ No conditionals for flow control (`if/else`)
- ❌ No try-catch for flow control
- ✅ Use network-first patterns (wait for responses)
- ✅ Use explicit waits (`waitForSelector`, `waitForResponse`)

**Bad Example:**

```typescript
test('flaky test', async ({ page }) => {
  await page.click('button');
  await page.waitForTimeout(2000); // ❌ Might be too short

  if (await page.locator('.modal').isVisible()) { // ❌ Non-deterministic
    await page.click('.dismiss');
  }

  try { // ❌ Silently handles errors
    await expect(page.locator('.success')).toBeVisible();
  } catch (e) {
    // Test passes even if assertion fails!
  }
});
```

**Good Example (Vanilla Playwright):**

```typescript
test('deterministic test', async ({ page }) => {
  const responsePromise = page.waitForResponse(
    resp => resp.url().includes('/api/submit') && resp.ok()
  );

  await page.click('button');
  await responsePromise; // ✅ Wait for the actual response

  // Modal should ALWAYS show (make it deterministic)
  await expect(page.locator('.modal')).toBeVisible();
  await page.click('.dismiss');

  // Explicit assertion (fails if not visible)
  await expect(page.locator('.success')).toBeVisible();
});
```
**With Playwright Utils (Even Cleaner):**
|
|
||||||
```typescript
|
|
||||||
import { test } from '@seontechnologies/playwright-utils/fixtures';
|
|
||||||
import { expect } from '@playwright/test';
|
|
||||||
|
|
||||||
test('deterministic test', async ({ page, interceptNetworkCall }) => {
|
|
||||||
const submitCall = interceptNetworkCall({
|
|
||||||
method: 'POST',
|
|
||||||
url: '**/api/submit'
|
|
||||||
});
|
|
||||||
|
|
||||||
await page.click('button');
|
|
||||||
|
|
||||||
// Wait for actual response (automatic JSON parsing)
|
|
||||||
const { status, responseJson } = await submitCall;
|
|
||||||
expect(status).toBe(200);
|
|
||||||
|
|
||||||
// Modal should ALWAYS show (make it deterministic)
|
|
||||||
await expect(page.locator('.modal')).toBeVisible();
|
|
||||||
await page.click('.dismiss');
|
|
||||||
|
|
||||||
// Explicit assertion (fails if not visible)
|
|
||||||
await expect(page.locator('.success')).toBeVisible();
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
**Why both work:**

- Waits for an actual event (network response)
- No conditionals (behavior is deterministic)
- Assertions fail loudly (no silent failures)
- Same result every run (deterministic)

**Playwright Utils additional benefits:**

- Automatic JSON parsing
- `{ status, responseJson }` structure (can validate response data)
- No manual `await response.json()`
### 2. Isolation (No Dependencies)

**Rule:** A test runs independently, with no shared state.

**Requirements:**

- ✅ Self-cleaning (cleanup after the test)
- ✅ No global state dependencies
- ✅ Can run in parallel
- ✅ Can run in any order
- ✅ Use unique test data
**Bad Example:**

```typescript
// ❌ Tests depend on execution order
let userId: string; // Shared global state

test('create user', async ({ apiRequest }) => {
  const { body } = await apiRequest({
    method: 'POST',
    path: '/api/users',
    body: { email: 'test@example.com' } // Hard-coded (conflicts)
  });
  userId = body.id; // Store in global
});

test('update user', async ({ apiRequest }) => {
  // Depends on previous test setting userId
  await apiRequest({
    method: 'PATCH',
    path: `/api/users/${userId}`,
    body: { name: 'Updated' }
  });
  // No cleanup - leaves user in database
});
```

**Problems:**

- Tests must run in order (can't parallelize)
- Second test fails if the first is skipped (`.only`)
- Hard-coded data causes conflicts
- No cleanup (database fills with test data)
**Good Example (Vanilla Playwright):**

```typescript
test('should update user profile', async ({ request }) => {
  // Create unique test data
  const testEmail = `test-${Date.now()}@example.com`;

  // Setup: Create user
  const createResp = await request.post('/api/users', {
    data: { email: testEmail, name: 'Original' }
  });
  const user = await createResp.json();

  // Test: Update user
  const updateResp = await request.patch(`/api/users/${user.id}`, {
    data: { name: 'Updated' }
  });
  const updated = await updateResp.json();

  expect(updated.name).toBe('Updated');

  // Cleanup: Delete user
  await request.delete(`/api/users/${user.id}`);
});
```
**Even Better (With Playwright Utils):**

```typescript
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { expect } from '@playwright/test';
import { faker } from '@faker-js/faker';

test('should update user profile', async ({ apiRequest }) => {
  // Dynamic unique test data
  const testEmail = faker.internet.email();

  // Setup: Create user
  const { status: createStatus, body: user } = await apiRequest({
    method: 'POST',
    path: '/api/users',
    body: { email: testEmail, name: faker.person.fullName() }
  });

  expect(createStatus).toBe(201);

  // Test: Update user
  const { status, body: updated } = await apiRequest({
    method: 'PATCH',
    path: `/api/users/${user.id}`,
    body: { name: 'Updated Name' }
  });

  expect(status).toBe(200);
  expect(updated.name).toBe('Updated Name');

  // Cleanup: Delete user
  await apiRequest({
    method: 'DELETE',
    path: `/api/users/${user.id}`
  });
});
```

**Playwright Utils Benefits:**

- `{ status, body }` destructuring (cleaner than `response.status()` + `await response.json()`)
- No manual `await response.json()`
- Automatic retry for 5xx errors
- Optional schema validation with `.validateSchema()`

**Why it works:**

- No global state
- Unique test data (no conflicts)
- Self-cleaning (deletes the user)
- Can run in parallel
- Can run in any order
### 3. Explicit Assertions (No Hidden Validation)

**Rule:** Assertions are visible in the test body, not abstracted into helpers.

**Requirements:**

- ✅ Assertions in test code (not helper functions)
- ✅ Specific assertions (not generic `toBeTruthy`)
- ✅ Meaningful expectations (test actual behavior)

**Bad Example:**

```typescript
// ❌ Assertions hidden in helper
async function verifyProfilePage(page: Page) {
  // Assertions buried in helper (not visible in test)
  await expect(page.locator('h1')).toBeVisible();
  await expect(page.locator('.email')).toContainText('@');
  await expect(page.locator('.name')).not.toBeEmpty();
}

test('profile page', async ({ page }) => {
  await page.goto('/profile');
  await verifyProfilePage(page); // What's being verified?
});
```

**Problems:**

- Can't see what's tested (need to read the helper)
- Hard to debug failures (which assertion failed?)
- Reduces test readability
- Hides important validation

**Good Example:**

```typescript
// ✅ Assertions explicit in test
test('should display profile with correct data', async ({ page }) => {
  await page.goto('/profile');

  // Explicit assertions - clear what's tested
  await expect(page.locator('h1')).toContainText('Test User');
  await expect(page.locator('.email')).toContainText('test@example.com');
  await expect(page.locator('.bio')).toContainText('Software Engineer');
  await expect(page.locator('img[alt="Avatar"]')).toBeVisible();
});
```

**Why it works:**

- See what's tested at a glance
- Debug failures easily (know which assertion failed)
- Test is self-documenting
- No hidden behavior

**Exception:** Use helpers for setup/cleanup, not assertions.
### 4. Focused Tests (Appropriate Size)

**Rule:** A test has a single responsibility and a reasonable size.

**Requirements:**

- ✅ Test size < 300 lines
- ✅ Single responsibility (test one thing well)
- ✅ Clear describe/test names
- ✅ Appropriate scope (not too granular, not too broad)

**Bad Example:**

```typescript
// ❌ 500-line test testing everything
test('complete user flow', async ({ page }) => {
  // Registration (50 lines)
  await page.goto('/register');
  await page.fill('#email', 'test@example.com');
  // ... 48 more lines

  // Profile setup (100 lines)
  await page.goto('/profile');
  // ... 98 more lines

  // Settings configuration (150 lines)
  await page.goto('/settings');
  // ... 148 more lines

  // Data export (200 lines)
  await page.goto('/export');
  // ... 198 more lines

  // Total: 500 lines, testing 4 different features
});
```

**Problems:**

- A failure at line 50 prevents testing lines 51-500
- Hard to understand (what's being tested?)
- Slow to execute (testing too much)
- Hard to debug (which feature failed?)
**Good Example:**

```typescript
// ✅ Focused tests - one responsibility each

test('should register new user', async ({ page }) => {
  await page.goto('/register');
  await page.fill('#email', 'test@example.com');
  await page.fill('#password', 'password123');
  await page.click('button[type="submit"]');

  await expect(page).toHaveURL('/welcome');
  await expect(page.locator('h1')).toContainText('Welcome');
});

test('should configure user profile', async ({ page, authSession }) => {
  await authSession.login({ email: 'test@example.com', password: 'pass' });
  await page.goto('/profile');

  await page.fill('#name', 'Test User');
  await page.fill('#bio', 'Software Engineer');
  await page.click('button:has-text("Save")');

  await expect(page.locator('.success')).toBeVisible();
});

// ... separate tests for settings, export (each < 50 lines)
```

**Why it works:**

- Each test has one responsibility
- Failure is easy to diagnose
- Can run tests independently
- Test names describe exactly what's tested
### 5. Fast Execution (Performance Budget)

**Rule:** An individual test executes in under 1.5 minutes.

**Requirements:**

- ✅ Test execution < 90 seconds
- ✅ Efficient selectors (`getByRole` > XPath)
- ✅ Minimal redundant actions
- ✅ Parallel execution enabled
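The parallelism and budget requirements are typically enforced in the Playwright config rather than inside individual tests. A minimal sketch using standard Playwright options (the numbers are illustrative, not project defaults):

```typescript
// playwright.config.ts - sketch; tune workers/timeout to your project
import { defineConfig } from '@playwright/test';

export default defineConfig({
  fullyParallel: true,                      // run tests within each file in parallel
  workers: process.env.CI ? 4 : undefined,  // cap workers on CI, use all cores locally
  timeout: 90_000,                          // enforce the 90-second per-test budget
});
```

A test that exceeds the budget then fails with a timeout instead of silently slowing the suite.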
**Bad Example:**

```typescript
// ❌ Slow test (3+ minutes)
test('slow test', async ({ page }) => {
  await page.goto('/');
  await page.waitForTimeout(10000); // 10s wasted

  // Navigate through 10 pages (2 minutes)
  for (let i = 1; i <= 10; i++) {
    await page.click(`a[href="/page-${i}"]`);
    await page.waitForTimeout(5000); // 5s per page = 50s wasted
  }

  // Complex XPath selector (slow)
  await page.locator('//div[@class="container"]/section[3]/div[2]/p').click();

  // More waiting
  await page.waitForTimeout(30000); // 30s wasted

  await expect(page.locator('.result')).toBeVisible();
});
```

**Total time:** 3+ minutes (90 seconds wasted on hard waits)
**Good Example (Vanilla Playwright):**

```typescript
// ✅ Fast test (< 10 seconds)
test('fast test', async ({ page }) => {
  // Set up response wait
  const apiPromise = page.waitForResponse(
    resp => resp.url().includes('/api/result') && resp.ok()
  );

  await page.goto('/');

  // Direct navigation (skip intermediate pages)
  await page.goto('/page-10');

  // Efficient selector
  await page.getByRole('button', { name: 'Submit' }).click();

  // Wait for actual response (fast when API is fast)
  await apiPromise;

  await expect(page.locator('.result')).toBeVisible();
});
```
**With Playwright Utils:**

```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';

test('fast test', async ({ page, interceptNetworkCall }) => {
  // Set up interception
  const resultCall = interceptNetworkCall({
    method: 'GET',
    url: '**/api/result'
  });

  await page.goto('/');

  // Direct navigation (skip intermediate pages)
  await page.goto('/page-10');

  // Efficient selector
  await page.getByRole('button', { name: 'Submit' }).click();

  // Wait for actual response (automatic JSON parsing)
  const { status, responseJson } = await resultCall;

  expect(status).toBe(200);
  await expect(page.locator('.result')).toBeVisible();

  // Can also validate response data if needed
  // expect(responseJson.data).toBeDefined();
});
```

**Total time:** < 10 seconds (no wasted waits)

**Both examples achieve:**

- No hard waits (wait for actual events)
- Direct navigation (skip unnecessary steps)
- Efficient selectors (`getByRole`)
- Fast execution

**Playwright Utils bonus:**

- Can validate API response data easily
- Automatic JSON parsing
- Cleaner API
## TEA's Quality Scoring

TEA reviews tests against these standards in `*test-review`:

### Scoring Categories (100 points total)

**Determinism (35 points):**

- No hard waits: 10 points
- No conditionals: 10 points
- No try-catch flow: 10 points
- Network-first patterns: 5 points

**Isolation (25 points):**

- Self-cleaning: 15 points
- No global state: 5 points
- Parallel-safe: 5 points

**Assertions (20 points):**

- Explicit in test body: 10 points
- Specific and meaningful: 10 points

**Structure (10 points):**

- Test size < 300 lines: 5 points
- Clear naming: 5 points

**Performance (10 points):**

- Execution time < 1.5 min: 10 points
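Because every criterion is a pass/fail check with fixed points, the rubric reduces to a straightforward sum. A hypothetical sketch of the calculation (the `TestAudit` shape and helper are illustrative, not part of TEA's actual tooling):

```typescript
// Hypothetical rubric calculator; field names are illustrative, not TEA's API.
interface TestAudit {
  hardWaits: boolean; conditionals: boolean; tryCatchFlow: boolean; networkFirst: boolean;
  selfCleaning: boolean; noGlobalState: boolean; parallelSafe: boolean;
  explicitAssertions: boolean; meaningfulAssertions: boolean;
  under300Lines: boolean; clearNaming: boolean;
  under90Seconds: boolean;
}

function qualityScore(a: TestAudit): number {
  let score = 0;
  // Determinism (35 points)
  score += (a.hardWaits ? 0 : 10) + (a.conditionals ? 0 : 10)
         + (a.tryCatchFlow ? 0 : 10) + (a.networkFirst ? 5 : 0);
  // Isolation (25 points)
  score += (a.selfCleaning ? 15 : 0) + (a.noGlobalState ? 5 : 0) + (a.parallelSafe ? 5 : 0);
  // Assertions (20 points)
  score += (a.explicitAssertions ? 10 : 0) + (a.meaningfulAssertions ? 10 : 0);
  // Structure (10 points)
  score += (a.under300Lines ? 5 : 0) + (a.clearNaming ? 5 : 0);
  // Performance (10 points)
  score += (a.under90Seconds ? 10 : 0);
  return score;
}
```

A test that passes every check scores 100; each violation subtracts its category's fixed weight.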
**Quality Scoring Breakdown**

```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'fontSize':'14px'}}}%%
pie title Test Quality Score (100 points)
    "Determinism" : 35
    "Isolation" : 25
    "Assertions" : 20
    "Structure" : 10
    "Performance" : 10
```
```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'fontSize':'13px'}}}%%
flowchart LR
    subgraph Det[Determinism - 35 pts]
        D1[No hard waits<br/>10 pts]
        D2[No conditionals<br/>10 pts]
        D3[No try-catch flow<br/>10 pts]
        D4[Network-first<br/>5 pts]
    end

    subgraph Iso[Isolation - 25 pts]
        I1[Self-cleaning<br/>15 pts]
        I2[No global state<br/>5 pts]
        I3[Parallel-safe<br/>5 pts]
    end

    subgraph Assrt[Assertions - 20 pts]
        A1[Explicit in body<br/>10 pts]
        A2[Specific/meaningful<br/>10 pts]
    end

    subgraph Struct[Structure - 10 pts]
        S1[Size < 300 lines<br/>5 pts]
        S2[Clear naming<br/>5 pts]
    end

    subgraph Perf[Performance - 10 pts]
        P1[Time < 1.5 min<br/>10 pts]
    end

    Det --> Total([Total: 100 points])
    Iso --> Total
    Assrt --> Total
    Struct --> Total
    Perf --> Total

    style Det fill:#ffebee,stroke:#c62828,stroke-width:2px
    style Iso fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
    style Assrt fill:#f3e5f5,stroke:#6a1b9a,stroke-width:2px
    style Struct fill:#fff9c4,stroke:#f57f17,stroke-width:2px
    style Perf fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
    style Total fill:#fff,stroke:#000,stroke-width:3px
```
### Score Interpretation

| Score | Interpretation | Action |
| --- | --- | --- |
| **90-100** | Excellent | Production-ready, minimal changes |
| **80-89** | Good | Minor improvements recommended |
| **70-79** | Acceptable | Address recommendations before release |
| **60-69** | Needs Work | Fix critical issues |
| **< 60** | Critical | Significant refactoring needed |
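The thresholds translate directly into a lookup; a hypothetical helper for report tooling (not part of TEA itself):

```typescript
// Map a review score to the interpretation table above (hypothetical helper).
function interpretScore(score: number): string {
  if (score >= 90) return 'Excellent';
  if (score >= 80) return 'Good';
  if (score >= 70) return 'Acceptable';
  if (score >= 60) return 'Needs Work';
  return 'Critical';
}
```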
## Comparison: Good vs Bad Tests

### Example: User Login
**Bad Test (Score: 30/100):**

```typescript
test('login test', async ({ page }) => { // Vague name
  await page.goto('/login');
  await page.waitForTimeout(3000); // -10 (hard wait)

  await page.fill('[name="email"]', 'test@example.com');
  await page.fill('[name="password"]', 'password');

  if (await page.locator('.remember-me').isVisible()) { // -10 (conditional)
    await page.click('.remember-me');
  }

  await page.click('button');

  try { // -10 (try-catch flow)
    await page.waitForURL('/dashboard', { timeout: 5000 });
  } catch (e) {
    // Ignore navigation failure
  }

  // No assertions! -10
  // No cleanup! -10
});
```

**Issues:**

- Determinism: 5/35 (hard wait, conditional, try-catch)
- Isolation: 10/25 (no cleanup)
- Assertions: 0/20 (no assertions!)
- Structure: 10/10 (okay)
- Performance: 5/10 (slow)
- **Total: 30/100**
**Good Test (Score: 95/100):**

```typescript
test('should login with valid credentials and redirect to dashboard', async ({ page, authSession }) => {
  // Use fixture for deterministic auth
  const loginPromise = page.waitForResponse(
    resp => resp.url().includes('/api/auth/login') && resp.ok()
  );

  await page.goto('/login');
  await page.getByLabel('Email').fill('test@example.com');
  await page.getByLabel('Password').fill('password123');
  await page.getByRole('button', { name: 'Sign in' }).click();

  // Wait for actual API response
  const response = await loginPromise;
  const { token } = await response.json();

  // Explicit assertions
  expect(token).toBeDefined();
  await expect(page).toHaveURL('/dashboard');
  await expect(page.getByText('Welcome back')).toBeVisible();

  // Cleanup handled by authSession fixture
});
```

**Quality:**

- Determinism: 35/35 (network-first, no conditionals)
- Isolation: 25/25 (fixture handles cleanup)
- Assertions: 20/20 (explicit and specific)
- Structure: 10/10 (clear name, focused)
- Performance: 5/10 (< 1 min)
- **Total: 95/100**
### Example: API Testing

**Bad Test (Score: 50/100):**

```typescript
test('api test', async ({ request }) => {
  const response = await request.post('/api/users', {
    data: { email: 'test@example.com' } // Hard-coded (conflicts)
  });

  if (response.ok()) { // Conditional
    const user = await response.json();
    // Weak assertion
    expect(user).toBeTruthy();
  }

  // No cleanup - user left in database
});
```
**Good Test (Score: 92/100):**

```typescript
test('should create user with valid data', async ({ apiRequest }) => {
  // Unique test data
  const testEmail = `test-${Date.now()}@example.com`;

  // Create user
  const { status, body } = await apiRequest({
    method: 'POST',
    path: '/api/users',
    body: { email: testEmail, name: 'Test User' }
  });

  // Explicit assertions
  expect(status).toBe(201);
  expect(body.id).toBeDefined();
  expect(body.email).toBe(testEmail);
  expect(body.name).toBe('Test User');

  // Cleanup
  await apiRequest({
    method: 'DELETE',
    path: `/api/users/${body.id}`
  });
});
```
## How TEA Enforces Standards

### During Test Generation (`*atdd`, `*automate`)

TEA generates tests following standards by default:

```typescript
// TEA-generated test (automatically follows standards)
test('should submit contact form', async ({ page }) => {
  // Network-first pattern (no hard waits)
  const submitPromise = page.waitForResponse(
    resp => resp.url().includes('/api/contact') && resp.ok()
  );

  // Accessible selectors (resilient)
  await page.getByLabel('Name').fill('Test User');
  await page.getByLabel('Email').fill('test@example.com');
  await page.getByLabel('Message').fill('Test message');
  await page.getByRole('button', { name: 'Send' }).click();

  const response = await submitPromise;
  const result = await response.json();

  // Explicit assertions
  expect(result.success).toBe(true);
  await expect(page.getByText('Message sent')).toBeVisible();

  // Size: 15 lines (< 300 ✓)
  // Execution: ~2 seconds (< 90s ✓)
});
```
### During Test Review (`*test-review`)

TEA audits tests and flags violations:

```markdown
## Critical Issues

### Hard Wait Detected (tests/login.spec.ts:23)
**Issue:** `await page.waitForTimeout(3000)`
**Score Impact:** -10 (Determinism)
**Fix:** Use network-first pattern

### Conditional Flow Control (tests/profile.spec.ts:45)
**Issue:** `if (await page.locator('.banner').isVisible())`
**Score Impact:** -10 (Determinism)
**Fix:** Make banner presence deterministic

## Recommendations

### Extract Fixture (tests/auth.spec.ts)
**Issue:** Login code repeated 5 times
**Score Impact:** -3 (Structure)
**Fix:** Extract to authSession fixture
```
## Definition of Done Checklist

When is a test "done"?

**Test Quality DoD:**

- [ ] No hard waits (`waitForTimeout`)
- [ ] No conditionals for flow control
- [ ] No try-catch for flow control
- [ ] Network-first patterns used
- [ ] Assertions explicit in test body
- [ ] Test size < 300 lines
- [ ] Clear, descriptive test name
- [ ] Self-cleaning (cleanup in afterEach or test)
- [ ] Unique test data (no hard-coded values)
- [ ] Execution time < 1.5 minutes
- [ ] Can run in parallel
- [ ] Can run in any order

**Code Review DoD:**

- [ ] Test quality score > 80
- [ ] No critical issues from `*test-review`
- [ ] Follows project patterns (fixtures, selectors)
- [ ] Test reviewed by team member
## Common Quality Issues

### Issue: "My test needs conditionals for optional elements"

**Wrong approach:**

```typescript
if (await page.locator('.banner').isVisible()) {
  await page.click('.dismiss');
}
```

**Right approach - Make it deterministic:**

```typescript
// Option 1: Always expect banner
await expect(page.locator('.banner')).toBeVisible();
await page.click('.dismiss');

// Option 2: Test both scenarios separately
test('should show banner for new users', ...);
test('should not show banner for returning users', ...);
```
### Issue: "My test needs try-catch for error handling"

**Wrong approach:**

```typescript
try {
  await page.click('#optional-button');
} catch (e) {
  // Silently continue
}
```

**Right approach - Make failures explicit:**

```typescript
// Option 1: Button should exist
await page.click('#optional-button'); // Fails loudly if missing

// Option 2: Button might not exist (test both)
test('should work with optional button', async ({ page }) => {
  const hasButton = await page.locator('#optional-button').count() > 0;
  if (hasButton) {
    await page.click('#optional-button');
  }
  // But now you're testing optional behavior explicitly
});
```
### Issue: "Hard waits are easier than network patterns"

**Short-term:** Hard waits seem simpler.

**Long-term:** Flaky tests waste far more time than learning network patterns takes.

**Investment:**

- 30 minutes to learn network-first patterns
- Prevents hundreds of hours debugging flaky tests
- Tests run faster (no wasted waits)
- Team trusts the test suite
## Technical Implementation

For detailed test quality patterns, see:

- [Test Quality Fragment](/docs/reference/tea/knowledge-base.md#test-quality)
- [Test Levels Framework Fragment](/docs/reference/tea/knowledge-base.md#test-levels-framework)
- [Complete Knowledge Base Index](/docs/reference/tea/knowledge-base.md)
## Related Concepts

**Core TEA Concepts:**

- [Risk-Based Testing](/docs/explanation/tea/risk-based-testing.md) - Quality scales with risk
- [Knowledge Base System](/docs/explanation/tea/knowledge-base-system.md) - How standards are enforced
- [Engagement Models](/docs/explanation/tea/engagement-models.md) - Quality in different models

**Technical Patterns:**

- [Network-First Patterns](/docs/explanation/tea/network-first-patterns.md) - Determinism explained
- [Fixture Architecture](/docs/explanation/tea/fixture-architecture.md) - Isolation through fixtures

**Overview:**

- [TEA Overview](/docs/explanation/features/tea-overview.md) - Quality standards in lifecycle
- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - Why quality matters

## Practical Guides

**Workflow Guides:**

- [How to Run Test Review](/docs/how-to/workflows/run-test-review.md) - Audit against these standards
- [How to Run ATDD](/docs/how-to/workflows/run-atdd.md) - Generate quality tests
- [How to Run Automate](/docs/how-to/workflows/run-automate.md) - Expand with quality

**Use-Case Guides:**

- [Using TEA with Existing Tests](/docs/how-to/brownfield/use-tea-with-existing-tests.md) - Improve legacy quality
- [Running TEA for Enterprise](/docs/how-to/enterprise/use-tea-for-enterprise.md) - Enterprise quality thresholds

## Reference

- [TEA Command Reference](/docs/reference/tea/commands.md) - `*test-review` command
- [Knowledge Base Index](/docs/reference/tea/knowledge-base.md) - Test quality fragment
- [Glossary](/docs/reference/glossary/index.md#test-architect-tea-concepts) - TEA terminology

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)
---
title: "Using TEA with Existing Tests (Brownfield)"
description: Apply TEA workflows to legacy codebases with existing test suites
---

# Using TEA with Existing Tests (Brownfield)

Use TEA on brownfield projects (existing codebases with legacy tests) to establish coverage baselines, identify gaps, and improve test quality without starting from scratch.
## When to Use This

- Existing codebase with some tests already written
- Legacy test suite needs quality improvement
- Adding features to an existing application
- Need to understand current test coverage
- Want to prevent regression as you add features

## Prerequisites

- BMad Method installed
- TEA agent available
- Existing codebase with tests (even if incomplete or low quality)
- Tests run successfully (or at least can be executed)

**Note:** If your codebase is completely undocumented, run `*document-project` first to create baseline documentation.
## Brownfield Strategy
|
|
||||||
|
|
||||||
### Phase 1: Establish Baseline
|
|
||||||
|
|
||||||
Understand what you have before changing anything.
|
|
||||||
|
|
||||||
#### Step 1: Baseline Coverage with *trace
|
|
||||||
|
|
||||||
Run `*trace` Phase 1 to map existing tests to requirements:
|
|
||||||
|
|
||||||
```
|
|
||||||
*trace
|
|
||||||
```
|
|
||||||
|
|
||||||
**Select:** Phase 1 (Requirements Traceability)
|
|
||||||
|
|
||||||
**Provide:**
|
|
||||||
- Existing requirements docs (PRD, user stories, feature specs)
|
|
||||||
- Test location (`tests/` or wherever tests live)
|
|
||||||
- Focus areas (specific features if large codebase)
|
|
||||||
|
|
||||||
**Output:** `traceability-matrix.md` showing:
|
|
||||||
- Which requirements have tests
|
|
||||||
- Which requirements lack coverage
|
|
||||||
- Coverage classification (FULL/PARTIAL/NONE)
|
|
||||||
- Gap prioritization
|
|
||||||
|
|
||||||
**Example Baseline:**
|
|
||||||
```markdown
|
|
||||||
# Baseline Coverage (Before Improvements)
|
|
||||||
|
|
||||||
**Total Requirements:** 50
|
|
||||||
**Full Coverage:** 15 (30%)
|
|
||||||
**Partial Coverage:** 20 (40%)
|
|
||||||
**No Coverage:** 15 (30%)
|
|
||||||
|
|
||||||
**By Priority:**
|
|
||||||
- P0: 50% coverage (5/10) ❌ Critical gap
|
|
||||||
- P1: 40% coverage (8/20) ⚠️ Needs improvement
|
|
||||||
- P2: 20% coverage (2/10) ✅ Acceptable
|
|
||||||
```
|
|
||||||
|
|
||||||
This baseline becomes your improvement target.
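
The per-priority percentages in a matrix like this are plain ratios over the requirement set. A minimal sketch of the arithmetic (the `Requirement` shape and helper name are illustrative, not TEA's actual data model):

```typescript
// Illustrative only: reproduces the per-priority lines shown in the example above.
type Coverage = 'FULL' | 'PARTIAL' | 'NONE';
type Requirement = { id: string; priority: 'P0' | 'P1' | 'P2'; coverage: Coverage };

function coverageLine(reqs: Requirement[], priority: Requirement['priority']): string {
  const subset = reqs.filter(r => r.priority === priority);
  // Only FULL counts as covered; PARTIAL and NONE are gaps to close.
  const covered = subset.filter(r => r.coverage === 'FULL').length;
  const pct = subset.length ? Math.round((covered / subset.length) * 100) : 0;
  return `${priority}: ${pct}% coverage (${covered}/${subset.length})`;
}
```

With 5 of 10 P0 requirements fully covered, `coverageLine` yields the `P0: 50% coverage (5/10)` line shown above.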

**Step 2: Quality Audit with `*test-review`**

Run `*test-review` on existing tests:

```
*test-review tests/
```

**Output:** `test-review.md` with quality score and issues.

**Common Brownfield Issues:**
- Hard waits everywhere (`page.waitForTimeout(5000)`)
- Fragile CSS selectors (`.class > div:nth-child(3)`)
- No test isolation (tests depend on execution order)
- Try-catch for flow control
- Tests don't clean up (leave test data in DB)

**Example Baseline Quality:**
```markdown
# Quality Score: 55/100

**Critical Issues:** 12
- 8 hard waits
- 4 conditional flow control

**Recommendations:** 25
- Extract fixtures
- Improve selectors
- Add network assertions
```

This shows where to focus improvement efforts.

### Phase 2: Prioritize Improvements

Don't try to fix everything at once.

**Focus on Critical Path First**

**Priority 1: P0 Requirements**
```
Goal: Get P0 coverage to 100%

Actions:
1. Identify P0 requirements with no tests (from trace)
2. Run *automate to generate tests for missing P0 scenarios
3. Fix critical quality issues in P0 tests (from test-review)
```

**Priority 2: Fix Flaky Tests**
```
Goal: Eliminate flakiness

Actions:
1. Identify tests with hard waits (from test-review)
2. Replace with network-first patterns
3. Run burn-in loops to verify stability
```

**Example Modernization:**

**Before (Flaky - Hard Waits):**
```typescript
test('checkout completes', async ({ page }) => {
  await page.click('button[name="checkout"]');
  await page.waitForTimeout(5000); // ❌ Flaky
  await expect(page.locator('.confirmation')).toBeVisible();
});
```

**After (Network-First - Vanilla):**
```typescript
test('checkout completes', async ({ page }) => {
  const checkoutPromise = page.waitForResponse(
    resp => resp.url().includes('/api/checkout') && resp.ok()
  );
  await page.click('button[name="checkout"]');
  await checkoutPromise; // ✅ Deterministic
  await expect(page.locator('.confirmation')).toBeVisible();
});
```

**After (With Playwright Utils - Cleaner API):**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';

test('checkout completes', async ({ page, interceptNetworkCall }) => {
  // Use interceptNetworkCall for cleaner network interception
  const checkoutCall = interceptNetworkCall({
    method: 'POST',
    url: '**/api/checkout'
  });

  await page.click('button[name="checkout"]');

  // Wait for response (automatic JSON parsing)
  const { status, responseJson: order } = await checkoutCall;

  // Validate API response
  expect(status).toBe(200);
  expect(order.status).toBe('confirmed');

  // Validate UI
  await expect(page.locator('.confirmation')).toBeVisible();
});
```

**Playwright Utils Benefits:**
- `interceptNetworkCall` for cleaner network interception
- Automatic JSON parsing (`responseJson` ready to use)
- No manual `await response.json()`
- Glob pattern matching (`**/api/checkout`)
- Cleaner, more maintainable code

**For automatic error detection,** use the `network-error-monitor` fixture separately. See [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md#network-error-monitor).

**Priority 3: P1 Requirements**
```
Goal: Get P1 coverage to 80%+

Actions:
1. Generate tests for highest-risk P1 gaps
2. Improve test quality incrementally
```

**Create Improvement Roadmap**

```markdown
# Test Improvement Roadmap

## Week 1: Critical Path (P0)
- [ ] Add 5 missing P0 tests (Epic 1: Auth)
- [ ] Fix 8 hard waits in auth tests
- [ ] Verify P0 coverage = 100%

## Week 2: Flakiness
- [ ] Replace all hard waits with network-first
- [ ] Fix conditional flow control
- [ ] Run burn-in loops (target: 0 failures in 10 runs)

## Week 3: High-Value Coverage (P1)
- [ ] Add 10 missing P1 tests
- [ ] Improve selector resilience
- [ ] P1 coverage target: 80%

## Week 4: Quality Polish
- [ ] Extract fixtures for common patterns
- [ ] Add network assertions
- [ ] Quality score target: 75+
```

### Phase 3: Incremental Improvement

Apply TEA workflows to new work while improving legacy tests.

**For New Features (Greenfield Within Brownfield)**

**Use full TEA workflow:**
```
1. *test-design (epic-level) - Plan tests for new feature
2. *atdd - Generate failing tests first (TDD)
3. Implement feature
4. *automate - Expand coverage
5. *test-review - Ensure quality
```

**Benefits:**
- New code has high-quality tests from day one
- Gradually raises overall quality
- Team learns good patterns

**For Bug Fixes (Regression Prevention)**

**Add regression tests:**
```
1. Reproduce bug with failing test
2. Fix bug
3. Verify test passes
4. Run *test-review on regression test
5. Add to regression test suite
```

**For Refactoring (Regression Safety)**

**Before refactoring:**
```
1. Run *trace - Baseline coverage
2. Note current coverage %
3. Refactor code
4. Run *trace - Verify coverage maintained
5. No coverage should decrease
```

### Phase 4: Continuous Improvement

Track improvement over time.

**Quarterly Quality Audits**

**Q1 Baseline:**
```
Coverage: 30%
Quality Score: 55/100
Flakiness: 15% fail rate
```

**Q2 Target:**
```
Coverage: 50% (focus on P0)
Quality Score: 65/100
Flakiness: 5%
```

**Q3 Target:**
```
Coverage: 70%
Quality Score: 75/100
Flakiness: 1%
```

**Q4 Target:**
```
Coverage: 85%
Quality Score: 85/100
Flakiness: <0.5%
```
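
The flakiness figures above can be measured mechanically: a test is flaky when repeated runs of the same code disagree. A small sketch of that measurement (function names are illustrative assumptions, not a TEA or Playwright API):

```typescript
// Illustrative only: a test is flaky if it both passed and failed
// across repeated runs (e.g., a 10-run burn-in loop).
function isFlaky(results: boolean[]): boolean {
  return results.includes(true) && results.includes(false);
}

// Suite-level flakiness: share of tests with inconsistent results.
function flakinessPct(suite: Record<string, boolean[]>): number {
  const names = Object.keys(suite);
  const flaky = names.filter(name => isFlaky(suite[name])).length;
  return names.length ? Math.round((flaky / names.length) * 100) : 0;
}
```

A suite where 3 of 20 tests alternate between pass and fail reports 15%, matching the Q1 baseline above; the Week 2 burn-in target (0 failures in 10 runs) drives this toward 0%.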

## Brownfield-Specific Tips

### Don't Rewrite Everything

**Common mistake:**
```
"Our tests are bad, let's delete them all and start over!"
```

**Better approach:**
```
"Our tests are bad, let's:
1. Keep tests that work (even if not perfect)
2. Fix critical quality issues incrementally
3. Add tests for gaps
4. Gradually improve over time"
```

**Why:**
- Rewriting is risky (might lose coverage)
- Incremental improvement is safer
- Team learns gradually
- Business value delivered continuously

### Use Regression Hotspots

**Identify regression-prone areas:**

```markdown
## Regression Hotspots

**Based on:**
- Bug reports (last 6 months)
- Customer complaints
- Code complexity (cyclomatic complexity >10)
- Frequent changes (git log analysis)

**High-Risk Areas:**
1. Authentication flow (12 bugs in 6 months)
2. Checkout process (8 bugs)
3. Payment integration (6 bugs)

**Test Priority:**
- Add regression tests for these areas FIRST
- Ensure P0 coverage before touching code
```

### Quarantine Flaky Tests

Don't let flaky tests block improvement:

```typescript
// Mark flaky tests with .skip temporarily
test.skip('flaky test - needs fixing', async ({ page }) => {
  // TODO: Fix hard wait on line 45
  // TODO: Add network-first pattern
});
```

**Track quarantined tests:**
```markdown
# Quarantined Tests

| Test                | Reason                     | Owner    | Target Fix Date |
| ------------------- | -------------------------- | -------- | --------------- |
| checkout.spec.ts:45 | Hard wait causes flakiness | QA Team  | 2026-01-20      |
| profile.spec.ts:28  | Conditional flow control   | Dev Team | 2026-01-25      |
```

**Fix systematically:**
- Don't accumulate quarantined tests
- Set deadlines for fixes
- Review quarantine list weekly
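
The weekly review is easy to script; a toy sketch of an overdue check against a table like the one above (the `Quarantined` shape is an illustrative assumption):

```typescript
// Illustrative only: flag quarantined tests whose target fix date has passed.
type Quarantined = { test: string; owner: string; targetFixDate: string }; // ISO date

function overdue(list: Quarantined[], today: string): string[] {
  return list
    .filter(q => q.targetFixDate < today) // ISO dates compare lexicographically
    .map(q => `${q.test} (${q.owner})`);
}
```

Run against the table above on 2026-01-22, this would flag only `checkout.spec.ts:45`.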

### Migrate One Directory at a Time

**Large test suite?** Improve incrementally:

**Week 1:** `tests/auth/`
```
1. Run *test-review on auth tests
2. Fix critical issues
3. Re-review
4. Mark directory as "modernized"
```

**Week 2:** `tests/api/`
```
Same process
```

**Week 3:** `tests/e2e/`
```
Same process
```

**Benefits:**
- Focused improvement
- Visible progress
- Team learns patterns
- Lower risk

### Document Migration Status

**Track which tests are modernized:**

```markdown
# Test Suite Status

| Directory          | Tests | Quality Score | Status         | Notes          |
| ------------------ | ----- | ------------- | -------------- | -------------- |
| tests/auth/        | 15    | 85/100        | ✅ Modernized  | Week 1 cleanup |
| tests/api/         | 32    | 78/100        | ⚠️ In Progress | Week 2         |
| tests/e2e/         | 28    | 62/100        | ❌ Legacy      | Week 3 planned |
| tests/integration/ | 12    | 45/100        | ❌ Legacy      | Week 4 planned |

**Legend:**
- ✅ Modernized: Quality >80, no critical issues
- ⚠️ In Progress: Active improvement
- ❌ Legacy: Not yet touched
```
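
The legend is a simple rule over the `*test-review` output; a sketch of the classification (thresholds mirror the legend above, names are illustrative):

```typescript
// Illustrative only: derive the Status column from review results.
type Status = '✅ Modernized' | '⚠️ In Progress' | '❌ Legacy';

function directoryStatus(qualityScore: number, criticalIssues: number, touched: boolean): Status {
  // Modernized: quality above 80 with no critical issues remaining.
  if (qualityScore > 80 && criticalIssues === 0) return '✅ Modernized';
  // Otherwise, anything under active improvement is In Progress; untouched is Legacy.
  return touched ? '⚠️ In Progress' : '❌ Legacy';
}
```

Applied to the table above: `tests/auth/` (85, no critical issues) classifies as Modernized, `tests/api/` as In Progress, and the untouched directories as Legacy.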

## Common Brownfield Challenges

### "We Don't Know What Tests Cover"

**Problem:** No documentation, unclear what tests do.

**Solution:**
```
1. Run *trace - TEA analyzes tests and maps to requirements
2. Review traceability matrix
3. Document findings
4. Use as baseline for improvement
```

TEA reverse-engineers test coverage even without documentation.

### "Tests Are Too Brittle to Touch"

**Problem:** Afraid to modify tests (might break them).

**Solution:**
```
1. Run tests, capture current behavior (baseline)
2. Make small improvement (fix one hard wait)
3. Run tests again
4. If still pass, continue
5. If fail, investigate why

Incremental changes = lower risk
```

### "No One Knows How to Run Tests"

**Problem:** Test documentation is outdated or missing.

**Solution:**
```
1. Document manually or ask TEA to help analyze test structure
2. Create tests/README.md with:
   - How to install dependencies
   - How to run tests (npx playwright test, npm test, etc.)
   - What each test directory contains
   - Common issues and troubleshooting
3. Commit documentation for team
```

**Note:** `*framework` is for new test setup, not existing tests. For brownfield, document what you have.
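
A minimal `tests/README.md` skeleton covering those points (the commands and directory names are illustrative placeholders; adapt them to your repo):

```markdown
# Tests

## Install
`npm ci && npx playwright install`

## Run
- All tests: `npx playwright test`
- One file: `npx playwright test tests/auth/login.spec.ts`

## Layout
- `tests/auth/` - login, registration, session flows
- `tests/api/` - API contract tests

## Troubleshooting
- Browsers missing? Run `npx playwright install`
```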

### "Tests Take Hours to Run"

**Problem:** Full test suite takes 4+ hours.

**Solution:**
```
1. Configure parallel execution (shard tests across workers)
2. Add selective testing (run only affected tests on PR)
3. Run full suite nightly only
4. Optimize slow tests (remove hard waits, improve selectors)

Before: 4 hours sequential
After: 15 minutes with sharding + selective testing
```

**How `*ci` helps:**
- Scaffolds CI configuration with parallel sharding examples
- Provides selective testing script templates
- Documents burn-in and optimization strategies
- But YOU configure workers, test selection, and optimization

**With Playwright Utils burn-in:**
- Smart selective testing based on git diff
- Volume control (run percentage of affected tests)
- See [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md#burn-in)
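
Selective testing based on git diff boils down to mapping changed source paths to the test paths that cover them. A toy sketch of that mapping step (the `src/<area>/` to `tests/<area>/` convention is an assumption, not the playwright-utils implementation):

```typescript
// Illustrative only: map changed files (e.g., from `git diff --name-only`)
// to affected test directories, assuming a src/<area>/ -> tests/<area>/ layout.
function affectedTestDirs(changedFiles: string[]): string[] {
  const dirs = new Set<string>();
  for (const file of changedFiles) {
    const match = file.match(/^src\/([^/]+)\//);
    if (match) dirs.add(`tests/${match[1]}/`);
  }
  return [...dirs].sort();
}
```

The resulting directories would then be passed to the test runner on PRs (e.g., `npx playwright test tests/auth/`), reserving the full suite for the nightly run.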

### "We Have Tests But They Always Fail"

**Problem:** Tests are so flaky they're ignored.

**Solution:**
```
1. Run *test-review to identify flakiness patterns
2. Fix top 5 flaky tests (biggest impact)
3. Quarantine remaining flaky tests
4. Re-enable as you fix them

Don't let perfect be the enemy of good.
```

## Brownfield TEA Workflow

### Recommended Sequence

**1. Documentation (if needed):**
```
*document-project
```

**2. Baseline (Phase 2):**
```
*trace Phase 1 - Establish coverage baseline
*test-review - Establish quality baseline
```

**3. Planning (Phase 2-3):**
```
*prd - Document requirements (if missing)
*architecture - Document architecture (if missing)
*test-design (system-level) - Testability review
```

**4. Infrastructure (Phase 3):**
```
*framework - Modernize test framework (if needed)
*ci - Set up or improve CI/CD
```

**5. Per Epic (Phase 4):**
```
*test-design (epic-level) - Focus on regression hotspots
*automate - Add missing tests
*test-review - Ensure quality
*trace Phase 1 - Refresh coverage
```

**6. Release Gate:**
```
*nfr-assess - Validate NFRs (if enterprise)
*trace Phase 2 - Gate decision
```

## Related Guides

**Workflow Guides:**
- [How to Run Trace](/docs/how-to/workflows/run-trace.md) - Baseline coverage analysis
- [How to Run Test Review](/docs/how-to/workflows/run-test-review.md) - Quality audit
- [How to Run Automate](/docs/how-to/workflows/run-automate.md) - Fill coverage gaps
- [How to Run Test Design](/docs/how-to/workflows/run-test-design.md) - Risk assessment

**Customization:**
- [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md) - Modernize tests with utilities

## Understanding the Concepts

- [Engagement Models](/docs/explanation/tea/engagement-models.md) - Brownfield model explained
- [Test Quality Standards](/docs/explanation/tea/test-quality-standards.md) - What makes tests good
- [Network-First Patterns](/docs/explanation/tea/network-first-patterns.md) - Fix flakiness
- [Risk-Based Testing](/docs/explanation/tea/risk-based-testing.md) - Prioritize improvements

## Reference

- [TEA Command Reference](/docs/reference/tea/commands.md) - All 8 workflows
- [TEA Configuration](/docs/reference/tea/configuration.md) - Config options
- [Knowledge Base Index](/docs/reference/tea/knowledge-base.md) - Testing patterns
- [Glossary](/docs/reference/glossary/index.md#test-architect-tea-concepts) - TEA terminology

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)

@ -1,424 +0,0 @@

---
title: "Enable TEA MCP Enhancements"
description: Configure Playwright MCP servers for live browser verification during TEA workflows
---

# Enable TEA MCP Enhancements

Configure Model Context Protocol (MCP) servers to enable live browser verification, exploratory mode, and recording mode in TEA workflows.

## What are MCP Enhancements?

MCP (Model Context Protocol) servers enable AI agents to interact with live browsers during test generation. This allows TEA to:

- **Explore UIs interactively** - Discover actual functionality through browser automation
- **Verify selectors** - Generate accurate locators from the real DOM
- **Validate behavior** - Confirm test scenarios against live applications
- **Debug visually** - Use trace viewer and screenshots during generation

## When to Use This

**For UI Testing:**
- Want exploratory mode in `*test-design` (browser-based UI discovery)
- Want recording mode in `*atdd` or `*automate` (verify selectors with live browser)
- Want healing mode in `*automate` (fix tests with visual debugging)
- Need accurate selectors from the actual DOM
- Debugging complex UI interactions

**For API Testing:**
- Want healing mode in `*automate` (analyze failures with trace data)
- Need to debug test failures (network responses, request/response data, timing)
- Want to inspect trace files (network traffic, errors, race conditions)

**For Both:**
- Visual debugging (trace viewer shows network + UI)
- Test failure analysis (MCP can run tests and extract errors)
- Understanding complex test failures (network + DOM together)

**Don't use if:**
- You don't have MCP servers configured

## Prerequisites

- BMad Method installed
- TEA agent available
- IDE with MCP support (Cursor, VS Code with Claude extension)
- Node.js v18 or later
- Playwright installed

## Available MCP Servers

**Two Playwright MCP servers** (actively maintained, continuously updated):

### 1. Playwright MCP - Browser Automation

**Command:** `npx @playwright/mcp@latest`

**Capabilities:**
- Navigate to URLs
- Click elements
- Fill forms
- Take screenshots
- Extract DOM information

**Best for:** Exploratory mode, recording mode

### 2. Playwright Test MCP - Test Runner

**Command:** `npx playwright run-test-mcp-server`

**Capabilities:**
- Run test files
- Analyze failures
- Extract error messages
- Show trace files

**Best for:** Healing mode, debugging

### Recommended: Configure Both

Both servers work together to provide full TEA MCP capabilities.

## Setup

### 1. Configure MCP Servers

Add to your IDE's MCP configuration:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "playwright-test": {
      "command": "npx",
      "args": ["playwright", "run-test-mcp-server"]
    }
  }
}
```

See [TEA Overview](/docs/explanation/features/tea-overview.md#playwright-mcp-enhancements) for IDE-specific config locations.

### 2. Enable in BMAD

Answer "Yes" when prompted during installation, or set in config:

```yaml
# _bmad/bmm/config.yaml
tea_use_mcp_enhancements: true
```

### 3. Verify MCPs Running

Ensure your MCP servers are running in your IDE.

## How MCP Enhances TEA Workflows

### *test-design: Exploratory Mode

**Without MCP:**
- TEA infers UI functionality from documentation
- Relies on your description of features
- May miss actual UI behavior

**With MCP:**
TEA can open a live browser to:
```
"Let me explore the profile page to understand the UI"

[TEA navigates to /profile]
[Takes screenshot]
[Extracts accessible elements]

"I see the profile has:
- Name field (editable)
- Email field (editable)
- Avatar upload button
- Save button
- Cancel button

I'll design tests for these interactions."
```

**Benefits:**
- Accurate test design based on the actual UI
- Discovers functionality you might not describe
- Validates that test scenarios are possible

### *atdd: Recording Mode

**Without MCP:**
- TEA generates selectors from best practices
- TEA infers API patterns from documentation

**With MCP (Recording Mode):**

**For UI Tests:**
```
[TEA navigates to /login with live browser]
[Inspects actual form fields]

"I see:
- Email input has label 'Email Address' (not 'Email')
- Password input has label 'Your Password'
- Submit button has text 'Sign In' (not 'Login')

I'll use these exact selectors."
```

**For API Tests:**
```
[TEA analyzes trace files from test runs]
[Inspects network requests/responses]

"I see the API returns:
- POST /api/login → 200 with { token, userId }
- Response time: 150ms
- Required headers: Content-Type, Authorization

I'll validate these in tests."
```

**Benefits:**
- UI: Accurate selectors from real DOM
- API: Validated request/response patterns from trace
- Both: Tests work on first run

### *automate: Healing + Recording Modes

**Without MCP:**
- TEA analyzes test code only
- Suggests fixes based on static analysis
- Generates tests from documentation/code

**With MCP:**

**Healing Mode (UI + API):**
```
[TEA opens trace file]
[Analyzes screenshots + network tab]

UI failures: "Button selector changed from 'Save' to 'Save Changes'"
API failures: "Response structure changed, expected {id} got {userId}"

[TEA makes fixes]
[Verifies with trace analysis]
```

**Recording Mode (UI + API):**
```
UI: [Inspects actual DOM, generates verified selectors]
API: [Analyzes network traffic, validates request/response patterns]

[Generates tests with verified patterns]
[Tests work on first run]
```

**Benefits:**
- Visual debugging + trace analysis (not just UI)
- Verified selectors (UI) + network patterns (API)
- Tests verified against actual application behavior

## Usage Examples

### Exploratory Mode Example

```
User: Run *test-design

TEA: What feature are you designing tests for?

User: The checkout flow at https://shop.example.com

TEA: [With MCP] Let me explore the checkout flow
     [Opens browser]
     [Navigates to checkout]
     [Takes screenshots]
     [Identifies all interactive elements]

TEA: I've explored the checkout flow. I found:
     - 4-step wizard (Cart → Shipping → Payment → Confirm)
     - Shipping options (Standard, Express, Overnight)
     - Payment methods (Credit Card, PayPal)
     - Promo code field
     - Order summary with real-time totals

     Test design will cover all these scenarios.
```

### Recording Mode Example

```
User: Run *atdd for user registration feature

TEA: [With MCP] Let me verify the registration form selectors
     [Opens browser at /register]
     [Inspects form]

TEA: I see the actual form has:
     - "Full Name" label (not "Name")
     - "Email Address" label (not "Email")
     - "Create Password" label (not "Password")
     - "I agree to Terms" checkbox
     - "Create Account" button (not "Register")

     Generating tests with verified selectors...
```

### Healing Mode Example

```
User: Run *automate with healing mode

TEA: [With MCP] Running tests and checking for failures
     [Runs test suite]
     [Test fails: selector not found]

TEA: Test failed on selector '.submit-btn'
     [Opens trace viewer]
     [Sees button class changed to '.submit-button']

     Fixing selector and verifying...
     [Updates test]
     [Re-runs with MCP]
     [Test passes]

     Updated test with corrected selector.
```

## Troubleshooting

### MCP Servers Not Running

**Problem:** TEA says MCP enhancements aren't available.

**Causes:**
1. MCP servers not configured in IDE
2. Config syntax error in JSON
3. IDE not restarted after config change

**Solution:**
```bash
# Verify MCP config file exists
ls ~/.cursor/config.json

# Validate JSON syntax
cat ~/.cursor/config.json | python -m json.tool

# Restart IDE
# Cmd+Q (quit) then reopen
```

### Browser Doesn't Open

**Problem:** MCP enabled but browser never opens.

**Causes:**
1. Playwright browsers not installed
2. Headless mode enabled
3. MCP server crashed

**Solution:**
```bash
# Install browsers
npx playwright install

# Check MCP server logs (in IDE)
# Look for error messages

# Try manual MCP server
npx @playwright/mcp@latest
# Should start without errors
```

### TEA Doesn't Use MCP

**Problem:** `tea_use_mcp_enhancements: true` but TEA doesn't use the browser.

**Causes:**
1. Config not saved
2. Workflow run before config update
3. MCP servers not running

**Solution:**
```bash
# Verify config
grep tea_use_mcp_enhancements _bmad/bmm/config.yaml
# Should show: tea_use_mcp_enhancements: true

# Restart IDE (reload MCP servers)

# Start fresh chat (TEA loads config at start)
```

### Selector Verification Fails

**Problem:** MCP can't find elements TEA is looking for.

**Causes:**
1. Page not fully loaded
2. Element behind modal/overlay
3. Element requires authentication

**Solution:**
TEA will handle this automatically:

- Wait for page load
- Dismiss modals if present
- Handle auth if needed

If the problem persists, give TEA more context:
```
"The element is behind a modal - dismiss the modal first"
"The page requires login - use credentials X"
```

### MCP Slows Down Workflows

**Problem:** Workflows take much longer with MCP enabled.

**Cause:** Browser automation adds overhead.

**Solution:**
Use MCP selectively:

- **Enable for:** Complex UIs, new projects, debugging
- **Disable for:** Simple features, well-known patterns, API-only testing

Toggle quickly:
```yaml
# For this feature (complex UI)
tea_use_mcp_enhancements: true

# For next feature (simple API)
tea_use_mcp_enhancements: false
```
|
|
||||||
|
|
||||||
## Related Guides

**Getting Started:**

- [TEA Lite Quickstart Tutorial](/docs/tutorials/getting-started/tea-lite-quickstart.md) - Learn TEA basics first

**Workflow Guides (MCP-Enhanced):**

- [How to Run Test Design](/docs/how-to/workflows/run-test-design.md) - Exploratory mode with browser
- [How to Run ATDD](/docs/how-to/workflows/run-atdd.md) - Recording mode for accurate selectors
- [How to Run Automate](/docs/how-to/workflows/run-automate.md) - Healing mode for debugging

**Other Customization:**

- [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md) - Production-ready utilities

## Understanding the Concepts

- [TEA Overview](/docs/explanation/features/tea-overview.md) - MCP enhancements in lifecycle
- [Engagement Models](/docs/explanation/tea/engagement-models.md) - When to use MCP enhancements

## Reference

- [TEA Configuration](/docs/reference/tea/configuration.md) - tea_use_mcp_enhancements option
- [TEA Command Reference](/docs/reference/tea/commands.md) - MCP-enhanced workflows
- [Glossary](/docs/reference/glossary/index.md#test-architect-tea-concepts) - MCP Enhancements term

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)

@@ -1,813 +0,0 @@

---
title: "Integrate Playwright Utils with TEA"
description: Add production-ready fixtures and utilities to your TEA-generated tests
---

# Integrate Playwright Utils with TEA

Integrate `@seontechnologies/playwright-utils` with TEA to get production-ready fixtures, utilities, and patterns in your test suite.

## What is Playwright Utils?

A production-ready utility library that provides:

- Typed API request helper
- Authentication session management
- Network recording and replay (HAR)
- Network request interception
- Async polling (recurse)
- Structured logging
- File validation (CSV, PDF, XLSX, ZIP)
- Burn-in testing utilities
- Network error monitoring

**Repository:** [https://github.com/seontechnologies/playwright-utils](https://github.com/seontechnologies/playwright-utils)

**npm Package:** `@seontechnologies/playwright-utils`

## When to Use This

Use Playwright Utils when:

- You want production-ready fixtures (not DIY)
- Your team benefits from standardized patterns
- You need utilities like API testing, auth handling, network mocking
- You want TEA to generate tests using these utilities
- You're building reusable test infrastructure

**Don't use if:**

- You're just learning testing (keep it simple first)
- You have your own fixture library
- You don't need the utilities

## Prerequisites

- BMad Method installed
- TEA agent available
- Test framework setup complete (Playwright)
- Node.js v18 or later

**Note:** Playwright Utils is for Playwright only (not Cypress).

## Installation

### Step 1: Install Package

```bash
npm install -D @seontechnologies/playwright-utils
```

### Step 2: Enable in TEA Config

Edit `_bmad/bmm/config.yaml`:

```yaml
tea_use_playwright_utils: true
```

**Note:** If you enabled this during BMad installation, it's already set.

### Step 3: Verify Installation

```bash
# Check package installed
npm list @seontechnologies/playwright-utils

# Check TEA config
grep tea_use_playwright_utils _bmad/bmm/config.yaml
```

Should show:

```
@seontechnologies/playwright-utils@2.x.x
tea_use_playwright_utils: true
```

## What Changes When Enabled

### `*framework` Workflow

**Vanilla Playwright:**

```typescript
// Basic Playwright fixtures only
import { test, expect } from '@playwright/test';

test('api test', async ({ request }) => {
  const response = await request.get('/api/users');
  const users = await response.json();
  expect(response.status()).toBe(200);
  expect(users.length).toBeGreaterThan(0);
});
```

**With Playwright Utils (Combined Fixtures):**

```typescript
// All utilities available via single import
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';

test('api test', async ({ apiRequest, authToken, log }) => {
  const { status, body } = await apiRequest({
    method: 'GET',
    path: '/api/users',
    headers: { Authorization: `Bearer ${authToken}` }
  });

  log.info('Fetched users', body);
  expect(status).toBe(200);
});
```

**With Playwright Utils (Selective Merge):**

```typescript
import { mergeTests, expect } from '@playwright/test';
import { test as apiRequestFixture } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { test as logFixture } from '@seontechnologies/playwright-utils/log/fixtures';

export const test = mergeTests(apiRequestFixture, logFixture);
export { expect };

test('api test', async ({ apiRequest, log }) => {
  log.info('Fetching users');
  const { status, body } = await apiRequest({
    method: 'GET',
    path: '/api/users'
  });
  expect(status).toBe(200);
});
```

### `*atdd` and `*automate` Workflows

**Without Playwright Utils:**

```typescript
// Manual API calls
import { test } from '@playwright/test';

test('should fetch profile', async ({ page, request }) => {
  const response = await request.get('/api/profile');
  const profile = await response.json();
  // Manual parsing and validation
});
```

**With Playwright Utils:**

```typescript
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { expect } from '@playwright/test';
import { z } from 'zod';

const ProfileSchema = z.object({
  id: z.string(),
  name: z.string(),
  email: z.string().email()
});

test('should fetch profile', async ({ apiRequest }) => {
  const { status, body } = await apiRequest({
    method: 'GET',
    path: '/api/profile' // 'path' not 'url'
  }).validateSchema(ProfileSchema); // Chained validation

  expect(status).toBe(200);
  // body is type-safe: { id: string, name: string, email: string }
});
```

### `*test-review` Workflow

**Without Playwright Utils:**

Reviews against generic Playwright patterns.

**With Playwright Utils:**

Reviews against playwright-utils best practices:

- Fixture composition patterns
- Utility usage (apiRequest, authSession, etc.)
- Network-first patterns
- Structured logging

### `*ci` Workflow

**Without Playwright Utils:**

- Parallel sharding
- Burn-in loops (basic shell scripts)
- CI triggers (PR, push, schedule)
- Artifact collection

**With Playwright Utils:**

Enhanced with smart testing:

- Burn-in utility (git diff-based, volume control)
- Selective testing (skip config/docs/types changes)
- Test prioritization by file changes

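The selective-testing step can be sketched in a few self-contained lines. This is a hypothetical illustration of the idea, not the library's implementation; `globToRegExp` supports only the small `*`/`**` glob subset that the skip patterns use.

```typescript
// Hypothetical sketch: decide which changed files actually justify a test run.
function globToRegExp(pattern: string): RegExp {
  const source = pattern
    .replace(/[.+^${}()|[\]\\]/g, '\\$&') // escape regex metacharacters
    .replace(/\*\*\//g, '\u0000')         // '**/' -> zero or more path segments
    .replace(/\*\*/g, '\u0001')           // bare '**' -> anything
    .replace(/\*/g, '[^/]*')              // '*' stays within one segment
    .replace(/\u0000/g, '(?:[^/]+/)*')
    .replace(/\u0001/g, '.*');
  return new RegExp(`^${source}$`);
}

// Changed files that are NOT covered by a skip pattern still need tests.
function filesNeedingBurnIn(changedFiles: string[], skipPatterns: string[]): string[] {
  const regexes = skipPatterns.map(globToRegExp);
  return changedFiles.filter((file) => !regexes.some((re) => re.test(file)));
}

const skip = ['**/config/**', '**/*.md', '**/*types*'];
console.log(filesNeedingBurnIn(['docs/guide.md', 'src/config/app.ts'], skip)); // all skipped: empty
console.log(filesNeedingBurnIn(['src/checkout.ts', 'README.md'], skip));       // keeps only src/checkout.ts
```

A real setup would feed the output of `git diff --name-only main` into `changedFiles`; the skip patterns here mirror the `skipBurnInPatterns` shown in the burn-in section below.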
## Available Utilities

### api-request

Typed HTTP client with schema validation.

**Official Docs:** <https://seontechnologies.github.io/playwright-utils/api-request.html>

**Why Use This?**

| Vanilla Playwright | api-request Utility |
|-------------------|---------------------|
| Manual `await response.json()` | Automatic JSON parsing |
| `response.status()` + separate body parsing | Returns `{ status, body }` structure |
| No built-in retry | Automatic retry for 5xx errors |
| No schema validation | Single-line `.validateSchema()` |
| Verbose status checking | Clean destructuring |

**Usage:**

```typescript
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { expect } from '@playwright/test';
import { z } from 'zod';

const UserSchema = z.object({
  id: z.string(),
  name: z.string(),
  email: z.string().email()
});

test('should create user', async ({ apiRequest }) => {
  const { status, body } = await apiRequest({
    method: 'POST',
    path: '/api/users', // Note: 'path' not 'url'
    body: { name: 'Test User', email: 'test@example.com' } // Note: 'body' not 'data'
  }).validateSchema(UserSchema); // Chained method (can await separately if needed)

  expect(status).toBe(201);
  expect(body.id).toBeDefined();
  expect(body.email).toBe('test@example.com');
});
```

**Benefits:**

- Returns `{ status, body }` structure
- Schema validation with `.validateSchema()` chained method
- Automatic retry for 5xx errors
- Type-safe response body

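The "automatic retry for 5xx errors" row describes behavior you can picture as a small wrapper around the raw request. A hypothetical sketch of the idea, not the library's code: server errors are retried, client errors (4xx) pass through untouched.

```typescript
// Toy model of retry-on-5xx; 'send' stands in for the underlying HTTP call.
type ApiResult = { status: number; body: unknown };

async function requestWithRetry(
  send: () => Promise<ApiResult>,
  maxRetries = 2
): Promise<ApiResult> {
  let result = await send();
  for (let attempt = 0; attempt < maxRetries && result.status >= 500; attempt++) {
    result = await send(); // only 5xx responses are retried
  }
  return result;
}

// Fake transport: fails once with 503, then succeeds.
let calls = 0;
const flaky = async (): Promise<ApiResult> =>
  ++calls === 1 ? { status: 503, body: null } : { status: 200, body: { ok: true } };

requestWithRetry(flaky).then((r) => console.log(r.status, calls)); // prints "200 2"
```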
### auth-session

Authentication session management with token persistence.

**Official Docs:** <https://seontechnologies.github.io/playwright-utils/auth-session.html>

**Why Use This?**

| Vanilla Playwright Auth | auth-session |
|------------------------|--------------|
| Re-authenticate every test run (slow) | Authenticate once, persist to disk |
| Single user per setup | Multi-user support (roles, accounts) |
| No token expiration handling | Automatic token renewal |
| Manual session management | Provider pattern (flexible auth) |

**Usage:**

```typescript
import { test } from '@seontechnologies/playwright-utils/auth-session/fixtures';
import { expect } from '@playwright/test';

test('should access protected route', async ({ page, authToken }) => {
  // authToken automatically fetched and persisted
  // No manual login needed - handled by fixture

  await page.goto('/dashboard');
  await expect(page).toHaveURL('/dashboard');

  // Token is reused across tests (persisted to disk)
});
```

**Configuration required** (see auth-session docs for provider setup):

```typescript
// global-setup.ts
import { authStorageInit, setAuthProvider, authGlobalInit } from '@seontechnologies/playwright-utils/auth-session';

async function globalSetup() {
  authStorageInit();
  setAuthProvider(myCustomProvider); // Define your auth mechanism
  await authGlobalInit(); // Fetch token once
}

export default globalSetup;
```

**Benefits:**

- Token fetched once, reused across all tests
- Persisted to disk (faster subsequent runs)
- Multi-user support via `authOptions.userIdentifier`
- Automatic token renewal if expired

### network-recorder

Record and replay network traffic (HAR) for offline testing.

**Official Docs:** <https://seontechnologies.github.io/playwright-utils/network-recorder.html>

**Why Use This?**

| Vanilla Playwright HAR | network-recorder |
|------------------------|------------------|
| Manual `routeFromHAR()` configuration | Automatic HAR management with `PW_NET_MODE` |
| Separate record/playback test files | Same test, switch env var |
| No CRUD detection | Stateful mocking (POST/PUT/DELETE work) |
| Manual HAR file paths | Auto-organized by test name |

**Usage:**

```typescript
import { test } from '@seontechnologies/playwright-utils/network-recorder/fixtures';

// Record mode: Set environment variable
process.env.PW_NET_MODE = 'record';

test('should work with recorded traffic', async ({ page, context, networkRecorder }) => {
  // Setup recorder (records or replays based on PW_NET_MODE)
  await networkRecorder.setup(context);

  // Your normal test code
  await page.goto('/dashboard');
  await page.click('#add-item');

  // First run (record): Saves traffic to HAR file
  // Subsequent runs (playback): Uses HAR file, no backend needed
});
```

**Switch modes:**

```bash
# Record traffic
PW_NET_MODE=record npx playwright test

# Playback traffic (offline)
PW_NET_MODE=playback npx playwright test
```

**Benefits:**

- Offline testing (no backend needed)
- Deterministic responses (same every time)
- Faster execution (no network latency)
- Stateful mocking (CRUD operations work)

### intercept-network-call

Spy or stub network requests with automatic JSON parsing.

**Official Docs:** <https://seontechnologies.github.io/playwright-utils/intercept-network-call.html>

**Why Use This?**

| Vanilla Playwright | interceptNetworkCall |
|-------------------|----------------------|
| Route setup + response waiting (separate steps) | Single declarative call |
| Manual `await response.json()` | Automatic JSON parsing (`responseJson`) |
| Complex filter predicates | Simple glob patterns (`**/api/**`) |
| Verbose syntax | Concise, readable API |

**Usage:**

```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';

test('should handle API errors', async ({ page, interceptNetworkCall }) => {
  // Stub API to return error (set up BEFORE navigation)
  const profileCall = interceptNetworkCall({
    method: 'GET',
    url: '**/api/profile',
    fulfillResponse: {
      status: 500,
      body: { error: 'Server error' }
    }
  });

  await page.goto('/profile');

  // Wait for the intercepted response
  const { status, responseJson } = await profileCall;

  expect(status).toBe(500);
  expect(responseJson.error).toBe('Server error');
  await expect(page.getByText('Server error occurred')).toBeVisible();
});
```

**Benefits:**

- Automatic JSON parsing (`responseJson` ready to use)
- Spy mode (observe real traffic) or stub mode (mock responses)
- Glob pattern URL matching
- Returns promise with `{ status, responseJson, requestJson }`

### recurse

Async polling for eventual consistency (Cypress-style).

**Official Docs:** <https://seontechnologies.github.io/playwright-utils/recurse.html>

**Why Use This?**

| Manual Polling | recurse Utility |
|----------------|-----------------|
| `while` loops with `waitForTimeout` | Smart polling with exponential backoff |
| Hard-coded retry logic | Configurable timeout/interval |
| No logging visibility | Optional logging with custom messages |
| Verbose, error-prone | Clean, readable API |

**Usage:**

```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';

test('should wait for async job completion', async ({ apiRequest, recurse }) => {
  // Start async job
  const { body: job } = await apiRequest({
    method: 'POST',
    path: '/api/jobs'
  });

  // Poll until complete (smart waiting)
  const completed = await recurse(
    () => apiRequest({ method: 'GET', path: `/api/jobs/${job.id}` }),
    (result) => result.body.status === 'completed',
    {
      timeout: 30000,
      interval: 2000,
      log: 'Waiting for job to complete'
    }
  );

  expect(completed.body.status).toBe('completed');
});
```

**Benefits:**

- Smart polling with configurable interval
- Handles async jobs, background tasks
- Optional logging for debugging
- Better than hard waits or manual polling loops

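Under the hood, this style of smart polling is essentially a timed retry loop. A minimal self-contained sketch of the idea, not the library's implementation (which adds logging and backoff):

```typescript
// Hypothetical pollUntil: run 'action' until 'predicate' passes or time runs out.
async function pollUntil<T>(
  action: () => Promise<T>,
  predicate: (result: T) => boolean,
  { timeout = 30_000, interval = 2_000 } = {}
): Promise<T> {
  const deadline = Date.now() + timeout;
  for (;;) {
    const result = await action();
    if (predicate(result)) return result;              // condition met
    if (Date.now() + interval > deadline) {
      throw new Error(`Condition not met within ${timeout}ms`);
    }
    await new Promise((r) => setTimeout(r, interval)); // wait before retrying
  }
}

// Simulated async job that completes on the third poll.
let polls = 0;
const checkJob = async () => ({ status: ++polls >= 3 ? 'completed' : 'pending' });

pollUntil(checkJob, (job) => job.status === 'completed', { timeout: 1_000, interval: 10 })
  .then((job) => console.log(job.status)); // prints "completed"
```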
### log

Structured logging that integrates with Playwright reports.

**Official Docs:** <https://seontechnologies.github.io/playwright-utils/log.html>

**Why Use This?**

| Console.log / print | log Utility |
|--------------------|-------------|
| Not in test reports | Integrated with Playwright reports |
| No step visualization | `.step()` shows in Playwright UI |
| Manual object formatting | Logs objects seamlessly |
| No structured output | JSON artifacts for debugging |

**Usage:**

```typescript
import { log } from '@seontechnologies/playwright-utils';
import { test } from '@playwright/test';

test('should login', async ({ page }) => {
  await log.info('Starting login test');

  await page.goto('/login');
  await log.step('Navigated to login page'); // Shows in Playwright UI

  await page.getByLabel('Email').fill('test@example.com');
  await log.debug('Filled email field');

  await log.success('Login completed');
  // Logs appear in test output and Playwright reports
});
```

**Benefits:**

- Direct import (no fixture needed for basic usage)
- Structured logs in test reports
- `.step()` shows in Playwright UI
- Logs objects seamlessly (no special handling needed)
- Trace test execution

### file-utils

Read and validate CSV, PDF, XLSX, ZIP files.

**Official Docs:** <https://seontechnologies.github.io/playwright-utils/file-utils.html>

**Why Use This?**

| Vanilla Playwright | file-utils |
|-------------------|------------|
| ~80 lines per CSV flow | ~10 lines end-to-end |
| Manual download event handling | `handleDownload()` encapsulates all |
| External parsing libraries | Auto-parsing (CSV, XLSX, PDF, ZIP) |
| No validation helpers | Built-in validation (headers, row count) |

**Usage:**

```typescript
import { handleDownload, readCSV } from '@seontechnologies/playwright-utils/file-utils';
import { test, expect } from '@playwright/test';
import path from 'node:path';

const DOWNLOAD_DIR = path.join(__dirname, '../downloads');

test('should export valid CSV', async ({ page }) => {
  // Handle download and get file path
  const downloadPath = await handleDownload({
    page,
    downloadDir: DOWNLOAD_DIR,
    trigger: () => page.click('button:has-text("Export")')
  });

  // Read and parse CSV
  const csvResult = await readCSV({ filePath: downloadPath });
  const { data, headers } = csvResult.content;

  // Validate structure
  expect(headers).toEqual(['Name', 'Email', 'Status']);
  expect(data.length).toBeGreaterThan(0);
  expect(data[0]).toMatchObject({
    Name: expect.any(String),
    Email: expect.any(String),
    Status: expect.any(String)
  });
});
```

**Benefits:**

- Handles downloads automatically
- Auto-parses CSV, XLSX, PDF, ZIP
- Type-safe access to parsed data
- Returns structured `{ headers, data }`

### burn-in

Smart test selection with git diff analysis for CI optimization.

**Official Docs:** <https://seontechnologies.github.io/playwright-utils/burn-in.html>

**Why Use This?**

| Playwright `--only-changed` | burn-in Utility |
|-----------------------------|-----------------|
| Config changes trigger all tests | Smart filtering (skip configs, types, docs) |
| All or nothing | Volume control (run percentage) |
| No customization | Custom dependency analysis |
| Slow CI on minor changes | Fast CI with intelligent selection |

**Usage:**

```typescript
// scripts/burn-in-changed.ts
import { runBurnIn } from '@seontechnologies/playwright-utils/burn-in';

async function main() {
  await runBurnIn({
    configPath: 'playwright.burn-in.config.ts',
    baseBranch: 'main'
  });
}

main().catch(console.error);
```

**Config:**

```typescript
// playwright.burn-in.config.ts
import type { BurnInConfig } from '@seontechnologies/playwright-utils/burn-in';

const config: BurnInConfig = {
  skipBurnInPatterns: [
    '**/config/**',
    '**/*.md',
    '**/*types*'
  ],
  burnInTestPercentage: 0.3,
  burnIn: {
    repeatEach: 3,
    retries: 1
  }
};

export default config;
```

**Package script:**

```json
{
  "scripts": {
    "test:burn-in": "tsx scripts/burn-in-changed.ts"
  }
}
```

**Benefits:**

- **Ensure flake-free tests upfront** - Never deal with test flake again
- Smart filtering (skip config, types, docs changes)
- Volume control (run percentage of affected tests)
- Git diff-based test selection
- Faster CI feedback

### network-error-monitor

Automatically detect HTTP 4xx/5xx errors during tests.

**Official Docs:** <https://seontechnologies.github.io/playwright-utils/network-error-monitor.html>

**Why Use This?**

| Vanilla Playwright | network-error-monitor |
|-------------------|----------------------|
| UI passes, backend 500 ignored | Auto-fails on any 4xx/5xx |
| Manual error checking | Zero boilerplate (auto-enabled) |
| Silent failures slip through | Acts like Sentry for tests |
| No domino effect prevention | Limits cascading failures |

**Usage:**

```typescript
import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';

// That's it! Network monitoring is automatically enabled
test('should not have API errors', async ({ page }) => {
  await page.goto('/dashboard');
  await page.click('button');

  // Test fails automatically if any HTTP 4xx/5xx errors occur
  // Error message shows: "Network errors detected: 2 request(s) failed"
  //   GET 500 https://api.example.com/users
  //   POST 503 https://api.example.com/metrics
});
```

**Opt-out for validation tests:**

```typescript
import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';
import { expect } from '@playwright/test';

// When testing error scenarios, opt-out with annotation
test('should show error message on 404',
  { annotation: [{ type: 'skipNetworkMonitoring' }] }, // Array format
  async ({ page }) => {
    await page.goto('/invalid-page'); // Will 404
    await expect(page.getByText('Page not found')).toBeVisible();
    // Test won't fail on 404 because of annotation
  }
);

// Or opt-out entire describe block
test.describe('error handling',
  { annotation: [{ type: 'skipNetworkMonitoring' }] },
  () => {
    test('handles 404', async ({ page }) => {
      // Monitoring disabled for all tests in block
    });
  }
);
```

**Benefits:**

- Auto-enabled (zero setup)
- Catches silent backend failures (500, 503, 504)
- **Prevents domino effect** (limits cascading failures from one bad endpoint)
- Opt-out with annotations for validation tests
- Structured error reporting (JSON artifacts)

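Conceptually, the monitor is just a response listener that collects failing statuses and fails the test at teardown. A toy model of the idea, not the actual fixture (the error-message shape below only echoes the example output above):

```typescript
// Hypothetical monitor: collect 4xx/5xx responses, raise one summary error.
class NetworkErrorMonitor {
  private errors: string[] = [];

  record(method: string, status: number, url: string): void {
    if (status >= 400) this.errors.push(`${method} ${status} ${url}`);
  }

  // Called at teardown: throw if any request failed during the test.
  assertClean(): void {
    if (this.errors.length > 0) {
      throw new Error(
        `Network errors detected: ${this.errors.length} request(s) failed\n` +
          this.errors.join('\n')
      );
    }
  }
}

const monitor = new NetworkErrorMonitor();
monitor.record('GET', 200, 'https://api.example.com/ok');       // ignored
monitor.record('GET', 500, 'https://api.example.com/users');    // collected
monitor.record('POST', 503, 'https://api.example.com/metrics'); // collected

try {
  monitor.assertClean();
} catch (e) {
  console.log((e as Error).message.split('\n')[0]); // prints "Network errors detected: 2 request(s) failed"
}
```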
## Fixture Composition

**Option 1: Use Package's Combined Fixtures (Simplest)**

```typescript
// Import all utilities at once
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { log } from '@seontechnologies/playwright-utils';
import { expect } from '@playwright/test';

test('api test', async ({ apiRequest }) => {
  await log.info('Fetching users');

  const { status, body } = await apiRequest({
    method: 'GET',
    path: '/api/users'
  });

  expect(status).toBe(200);
});
```

**Option 2: Create Custom Merged Fixtures (Selective)**

**File 1: support/merged-fixtures.ts**

```typescript
import { test as base, mergeTests } from '@playwright/test';
import { test as apiRequest } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { test as interceptNetworkCall } from '@seontechnologies/playwright-utils/intercept-network-call/fixtures';
import { test as networkErrorMonitor } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';
import { log } from '@seontechnologies/playwright-utils';

// Merge only what you need
export const test = mergeTests(
  base,
  apiRequest,
  interceptNetworkCall,
  networkErrorMonitor
);

export const expect = base.expect;
export { log };
```

**File 2: tests/api/users.spec.ts**

```typescript
import { test, expect, log } from '../support/merged-fixtures';

test('api test', async ({ apiRequest }) => {
  await log.info('Fetching users');

  const { status, body } = await apiRequest({
    method: 'GET',
    path: '/api/users'
  });

  expect(status).toBe(200);
});
```

**Contrast:**

- Option 1: All utilities available, zero setup
- Option 2: Pick utilities you need, one central file

**See working examples:** <https://github.com/seontechnologies/playwright-utils/tree/main/playwright/support>

## Troubleshooting

### Import Errors

**Problem:** Cannot find module '@seontechnologies/playwright-utils/api-request'

**Solution:**

```bash
# Verify package installed
npm list @seontechnologies/playwright-utils

# Check package.json has the correct version:
#   "@seontechnologies/playwright-utils": "^2.0.0"

# Reinstall if needed
npm install -D @seontechnologies/playwright-utils
```

### TEA Not Using Utilities

**Problem:** TEA generates tests without playwright-utils.

**Causes:**

1. Config not set: `tea_use_playwright_utils: false`
2. Workflow run before config change
3. Package not installed

**Solution:**

```bash
# Check config
grep tea_use_playwright_utils _bmad/bmm/config.yaml
# Should show: tea_use_playwright_utils: true

# Start fresh chat (TEA loads config at start)
```

### Type Errors with apiRequest

**Problem:** TypeScript errors on apiRequest response.

**Cause:** No schema validation.

**Solution:**

```typescript
// Add Zod schema for type safety
import { z } from 'zod';

const ProfileSchema = z.object({
  id: z.string(),
  name: z.string(),
  email: z.string().email()
});

const { status, body } = await apiRequest({
  method: 'GET',
  path: '/api/profile' // 'path' not 'url'
}).validateSchema(ProfileSchema); // Chained method

expect(status).toBe(200);
// body is typed as { id: string, name: string, email: string }
```

## Related Guides
|
|
||||||
|
|
||||||
**Getting Started:**
|
|
||||||
- [TEA Lite Quickstart Tutorial](/docs/tutorials/getting-started/tea-lite-quickstart.md) - Learn TEA basics
|
|
||||||
- [How to Set Up Test Framework](/docs/how-to/workflows/setup-test-framework.md) - Initial framework setup
|
|
||||||
|
|
||||||
**Workflow Guides:**
|
|
||||||
- [How to Run ATDD](/docs/how-to/workflows/run-atdd.md) - Generate tests with utilities
|
|
||||||
- [How to Run Automate](/docs/how-to/workflows/run-automate.md) - Expand coverage with utilities
|
|
||||||
- [How to Run Test Review](/docs/how-to/workflows/run-test-review.md) - Review against PW-Utils patterns
|
|
||||||
|
|
||||||
**Other Customization:**
|
|
||||||
- [Enable MCP Enhancements](/docs/how-to/customization/enable-tea-mcp-enhancements.md) - Live browser verification
|
|
||||||
|
|
||||||
## Understanding the Concepts
|
|
||||||
|
|
||||||
- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - **Why Playwright Utils matters** (part of TEA's three-part solution)
|
|
||||||
- [Fixture Architecture](/docs/explanation/tea/fixture-architecture.md) - Pure function → fixture pattern
|
|
||||||
- [Network-First Patterns](/docs/explanation/tea/network-first-patterns.md) - Network utilities explained
|
|
||||||
- [Test Quality Standards](/docs/explanation/tea/test-quality-standards.md) - Patterns PW-Utils enforces
|
|
||||||
|
|
||||||
## Reference
|
|
||||||
|
|
||||||
- [TEA Configuration](/docs/reference/tea/configuration.md) - tea_use_playwright_utils option
|
|
||||||
- [Knowledge Base Index](/docs/reference/tea/knowledge-base.md) - Playwright Utils fragments
|
|
||||||
- [Glossary](/docs/reference/glossary/index.md#test-architect-tea-concepts) - Playwright Utils term
|
|
||||||
- [Official PW-Utils Docs](https://seontechnologies.github.io/playwright-utils/) - Complete API reference
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)
|
|
||||||
|
|
@ -1,526 +0,0 @@
---
title: "Running TEA for Enterprise Projects"
description: Use TEA with compliance, security, and regulatory requirements in enterprise environments
---

# Running TEA for Enterprise Projects

Use TEA on enterprise projects with compliance, security, audit, and regulatory requirements. This guide covers NFR assessment, audit trails, and evidence collection.

## When to Use This

- Enterprise track projects (not Quick Flow or simple BMad Method)
- Compliance requirements (SOC 2, HIPAA, GDPR, etc.)
- Security-critical applications (finance, healthcare, government)
- Audit trail requirements
- Strict NFR thresholds (performance, security, reliability)

## Prerequisites

- BMad Method installed (Enterprise track selected)
- TEA agent available
- Compliance requirements documented
- Stakeholders identified (who approves gates)

## Enterprise-Specific TEA Workflows

### NFR Assessment (*nfr-assess)

**Purpose:** Validate non-functional requirements with evidence.

**When:** Phase 2 (early) and Release Gate

**Why Enterprise Needs This:**
- Compliance mandates specific thresholds
- Audit trails required for certification
- Security requirements are non-negotiable
- Performance SLAs are contractual

**Example:**
```
*nfr-assess

Categories: Security, Performance, Reliability, Maintainability

Security thresholds:
- Zero critical vulnerabilities (required by SOC 2)
- All endpoints require authentication
- Data encrypted at rest (FIPS 140-2)
- Audit logging on all data access

Evidence:
- Security scan: reports/nessus-scan.pdf
- Penetration test: reports/pentest-2026-01.pdf
- Compliance audit: reports/soc2-evidence.zip
```

**Output:** NFR assessment with PASS/CONCERNS/FAIL for each category.

### Trace with Audit Evidence (*trace)

**Purpose:** Requirements traceability with audit trail.

**When:** Phase 2 (baseline), Phase 4 (refresh), Release Gate

**Why Enterprise Needs This:**
- Auditors require requirements-to-test mapping
- Compliance certifications need traceability
- Regulatory bodies want evidence

**Example:**
```
*trace Phase 1

Requirements: PRD.md (with compliance requirements)
Test location: tests/

Output: traceability-matrix.md with:
- Requirement-to-test mapping
- Compliance requirement coverage
- Gap prioritization
- Recommendations
```
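
At its core, the matrix is a requirement-to-test join. A minimal sketch of the gap-detection idea in TypeScript (the data shapes here are illustrative assumptions, not TEA's internal format):

```typescript
// Hypothetical shapes for illustration; TEA's actual matrix format may differ.
interface Requirement {
  id: string;
  priority: 'P0' | 'P1' | 'P2';
}

interface TestCase {
  id: string;
  covers: string[]; // requirement ids this test exercises
}

// Requirements with no covering test, highest priority first.
function coverageGaps(reqs: Requirement[], tests: TestCase[]): Requirement[] {
  const covered = new Set(tests.flatMap((t) => t.covers));
  return reqs
    .filter((r) => !covered.has(r.id))
    .sort((a, b) => a.priority.localeCompare(b.priority));
}
```

Uncovered P0 requirements sort to the top, which is exactly the prioritization the matrix output reports.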

**For Release Gate:**
```
*trace Phase 2

Generate gate-decision-{gate_type}-{story_id}.md with:
- Evidence references
- Approver signatures
- Compliance checklist
- Decision rationale
```

### Test Design with Compliance Focus (*test-design)

**Purpose:** Risk assessment with compliance and security focus.

**When:** Phase 3 (system-level), Phase 4 (epic-level)

**Why Enterprise Needs This:**
- Security architecture alignment required
- Compliance requirements must be testable
- Performance requirements are contractual

**Example:**
```
*test-design

Mode: System-level

Focus areas:
- Security architecture (authentication, authorization, encryption)
- Performance requirements (SLA: P99 <200ms)
- Compliance (HIPAA PHI handling, audit logging)

Output: test-design-system.md with:
- Security testing strategy
- Compliance requirement → test mapping
- Performance testing plan
- Audit logging validation
```

## Enterprise TEA Lifecycle

### Phase 1: Discovery (Optional but Recommended)

**Research compliance requirements:**
```
Analyst: *research

Topics:
- Industry compliance (SOC 2, HIPAA, GDPR)
- Security standards (OWASP Top 10)
- Performance benchmarks (industry P99)
```

### Phase 2: Planning (Required)

**1. Define NFRs early:**
```
PM: *prd

Include in PRD:
- Security requirements (authentication, encryption)
- Performance SLAs (response time, throughput)
- Reliability targets (uptime, RTO, RPO)
- Compliance mandates (data retention, audit logs)
```

**2. Assess NFRs:**
```
TEA: *nfr-assess

Categories: All (Security, Performance, Reliability, Maintainability)

Output: nfr-assessment.md
- NFR requirements documented
- Acceptance criteria defined
- Test strategy planned
```

**3. Baseline (brownfield only):**
```
TEA: *trace Phase 1

Establish baseline coverage before new work
```

### Phase 3: Solutioning (Required)

**1. Architecture with testability review:**
```
Architect: *architecture

TEA: *test-design (system-level)

Focus:
- Security architecture testability
- Performance testing strategy
- Compliance requirement mapping
```

**2. Test infrastructure:**
```
TEA: *framework

Requirements:
- Separate test environments (dev, staging, prod-mirror)
- Secure test data handling (PHI, PII)
- Audit logging in tests
```

**3. CI/CD with compliance:**
```
TEA: *ci

Requirements:
- Secrets management (Vault, AWS Secrets Manager)
- Test isolation (no cross-contamination)
- Artifact retention (compliance audit trail)
- Access controls (who can run production tests)
```
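
As one illustration of those requirements, a GitHub Actions fragment might look like the sketch below. Step names, the workflow shape, and the 90-day figure are assumptions; map retention to your compliance policy and inject secrets from your secrets manager rather than committing them:

```yaml
# Sketch only - adapt to your pipeline and compliance policy
- name: Run tests
  run: npx playwright test
  env:
    API_TOKEN: ${{ secrets.API_TOKEN }} # injected, never committed
- name: Upload compliance artifacts
  if: always() # keep evidence even when tests fail
  uses: actions/upload-artifact@v4
  with:
    name: test-results
    path: test-results/
    retention-days: 90 # extend per audit requirements
```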

### Phase 4: Implementation (Required)

**Per epic:**
```
1. TEA: *test-design (epic-level)
   Focus: Compliance, security, performance for THIS epic

2. TEA: *atdd (optional)
   Generate tests including security/compliance scenarios

3. DEV: Implement story

4. TEA: *automate
   Expand coverage including compliance edge cases

5. TEA: *test-review
   Audit quality (score >80 per epic, rises to >85 at release)

6. TEA: *trace Phase 1
   Refresh coverage, verify compliance requirements tested
```

### Release Gate (Required)

**1. Final NFR assessment:**
```
TEA: *nfr-assess

All categories (if not done earlier)
Latest evidence (performance tests, security scans)
```

**2. Final quality audit:**
```
TEA: *test-review tests/

Full suite review
Quality target: >85 for enterprise
```

**3. Gate decision:**
```
TEA: *trace Phase 2

Evidence required:
- traceability-matrix.md (from Phase 1)
- test-review.md (from quality audit)
- nfr-assessment.md (from NFR assessment)
- Test execution results (the suite must have been run)

Decision: PASS/CONCERNS/FAIL/WAIVED

Archive all artifacts for compliance audit
```

**Note:** Phase 2 requires test execution results. If results aren't available, Phase 2 will be skipped.
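
The decision logic can be summarized as "worst category wins". A small TypeScript sketch of that aggregation (category names follow the NFR assessment; WAIVED is excluded here because waivers are a human sign-off, not a computed result):

```typescript
// Worst-category-wins aggregation for a gate decision. Illustrative only;
// the real gate also weighs evidence and approver judgment.
type Status = 'PASS' | 'CONCERNS' | 'FAIL';

function gateDecision(categories: Record<string, Status>): Status {
  const statuses = Object.values(categories);
  if (statuses.includes('FAIL')) return 'FAIL';
  if (statuses.includes('CONCERNS')) return 'CONCERNS';
  return 'PASS';
}
```

A single FAIL category fails the gate regardless of how the others score, matching the non-negotiable framing above.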

**4. Archive for audit:**
```
Archive:
- All test results
- Coverage reports
- NFR assessments
- Gate decisions
- Approver signatures

Retention: Per compliance requirements (7 years for HIPAA)
```

## Enterprise-Specific Requirements

### Evidence Collection

**Required artifacts:**
- Requirements traceability matrix
- Test execution results (with timestamps)
- NFR assessment reports
- Security scan results
- Performance test results
- Gate decision records
- Approver signatures

**Storage:**
```
compliance/
├── 2026-Q1/
│   ├── release-1.2.0/
│   │   ├── traceability-matrix.md
│   │   ├── test-review.md
│   │   ├── nfr-assessment.md
│   │   ├── gate-decision-release-v1.2.0.md
│   │   ├── test-results/
│   │   ├── security-scans/
│   │   └── approvals.pdf
```

**Retention:** 7 years (HIPAA), 3 years (SOC 2), per your compliance needs
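
A tiny sketch of turning those retention periods into an earliest-safe-deletion date for an archive (periods quoted from this guide; confirm the actual obligations with your compliance counsel):

```typescript
// Retention periods in years, as quoted above. Verify against your own
// compliance requirements before deleting anything.
const RETENTION_YEARS: Record<string, number> = { HIPAA: 7, 'SOC 2': 3 };

// Earliest date an archived release may be deleted under a given regime.
function deleteAfter(archivedOn: Date, regime: string): Date {
  const years = RETENTION_YEARS[regime];
  if (years === undefined) throw new Error(`Unknown regime: ${regime}`);
  const expiry = new Date(archivedOn.getTime());
  expiry.setFullYear(expiry.getFullYear() + years);
  return expiry;
}
```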

### Approver Workflows

**Multi-level approval required:**

```markdown
## Gate Approvals Required

### Technical Approval
- [ ] QA Lead - Test coverage adequate
- [ ] Tech Lead - Technical quality acceptable
- [ ] Security Lead - Security requirements met

### Business Approval
- [ ] Product Manager - Business requirements met
- [ ] Compliance Officer - Regulatory requirements met

### Executive Approval (for major releases)
- [ ] VP Engineering - Overall quality acceptable
- [ ] CTO - Architecture approved for production
```

### Compliance Checklists

**SOC 2 Example:**
```markdown
## SOC 2 Compliance Checklist

### Access Controls
- [ ] All API endpoints require authentication
- [ ] Authorization tested for all protected resources
- [ ] Session management secure (token expiration tested)

### Audit Logging
- [ ] All data access logged
- [ ] Logs immutable (append-only)
- [ ] Log retention policy enforced

### Data Protection
- [ ] Data encrypted at rest (tested)
- [ ] Data encrypted in transit (HTTPS enforced)
- [ ] PII handling compliant (masking tested)

### Testing Evidence
- [ ] Test coverage >80% (verified)
- [ ] Security tests passing (100%)
- [ ] Traceability matrix complete
```

**HIPAA Example:**
```markdown
## HIPAA Compliance Checklist

### PHI Protection
- [ ] PHI encrypted at rest (AES-256)
- [ ] PHI encrypted in transit (TLS 1.3)
- [ ] PHI access logged (audit trail)

### Access Controls
- [ ] Role-based access control (RBAC tested)
- [ ] Minimum necessary access (tested)
- [ ] Strong authentication (MFA tested)

### Breach Notification
- [ ] Breach detection tested
- [ ] Notification workflow tested
- [ ] Incident response plan tested
```

## Enterprise Tips

### Start with Security

**Priority 1:** Security requirements
```
1. Document all security requirements
2. Generate security tests with *atdd
3. Run security test suite
4. Pass security audit BEFORE moving forward
```

**Why:** Security failures block everything in enterprise environments.

**Example: RBAC Testing**

**Vanilla Playwright:**
```typescript
import { test, expect } from '@playwright/test';

test('should enforce role-based access', async ({ request }) => {
  // Login as regular user
  const userResp = await request.post('/api/auth/login', {
    data: { email: 'user@example.com', password: 'pass' }
  });
  const { token: userToken } = await userResp.json();

  // Try to access admin endpoint
  const adminResp = await request.get('/api/admin/users', {
    headers: { Authorization: `Bearer ${userToken}` }
  });

  expect(adminResp.status()).toBe(403); // Forbidden
});
```

**With Playwright Utils (Cleaner, Reusable):**
```typescript
import { test as base, expect, mergeTests } from '@playwright/test';
import { test as apiRequestFixture } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { createAuthFixtures } from '@seontechnologies/playwright-utils/auth-session';

const authFixtureTest = base.extend(createAuthFixtures());
export const testWithAuth = mergeTests(apiRequestFixture, authFixtureTest);

testWithAuth('should enforce role-based access', async ({ apiRequest, authToken }) => {
  // Auth token from fixture (configured for 'user' role)
  const { status } = await apiRequest({
    method: 'GET',
    path: '/api/admin/users', // Admin endpoint
    headers: { Authorization: `Bearer ${authToken}` }
  });

  expect(status).toBe(403); // Regular user denied
});

testWithAuth('admin can access admin endpoint', async ({ apiRequest, authToken, authOptions }) => {
  // Override to admin role
  authOptions.userIdentifier = 'admin';

  const { status, body } = await apiRequest({
    method: 'GET',
    path: '/api/admin/users',
    headers: { Authorization: `Bearer ${authToken}` }
  });

  expect(status).toBe(200); // Admin allowed
  expect(body).toBeInstanceOf(Array);
});
```

**Note:** Auth-session requires provider setup in global-setup.ts. See [auth-session configuration](https://seontechnologies.github.io/playwright-utils/auth-session.html).

**Playwright Utils Benefits for Compliance:**
- Multi-user auth testing (regular, admin, etc.)
- Token persistence (faster test execution)
- Consistent auth patterns (audit trail)
- Automatic cleanup

### Set Higher Quality Thresholds

**Enterprise quality targets:**
- Test coverage: >85% (vs 80% for non-enterprise)
- Quality score: >85 (vs 75 for non-enterprise)
- P0 coverage: 100% (non-negotiable)
- P1 coverage: >95% (vs 90% for non-enterprise)

**Rationale:** Enterprise systems affect more users and carry higher stakes.
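
These targets are easy to encode as a mechanical gate check. A sketch in TypeScript (the numbers are copied from the list above; the metric names are assumptions, not a TEA API):

```typescript
// Enterprise gate targets from this guide; metric names are illustrative.
interface QualityMetrics {
  coverage: number;     // % test coverage
  qualityScore: number; // test-review score
  p0Coverage: number;   // % of P0 scenarios covered
  p1Coverage: number;   // % of P1 scenarios covered
}

function meetsEnterpriseBar(m: QualityMetrics): boolean {
  return (
    m.coverage > 85 &&
    m.qualityScore > 85 &&
    m.p0Coverage === 100 && // non-negotiable
    m.p1Coverage > 95
  );
}
```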

### Document Everything

**Auditors need:**
- Why decisions were made (rationale)
- Who approved (signatures)
- When (timestamps)
- What evidence (test results, scan reports)

**Use TEA's structured outputs:**
- Reports have timestamps
- Decisions have rationale
- Evidence is referenced
- Audit trail is automatic

### Budget for Compliance Testing

**Enterprise testing costs more:**
- Penetration testing: $10k-50k
- Security audits: $5k-20k
- Performance testing tools: $500-5k/month
- Compliance consulting: $200-500/hour

**Plan accordingly:**
- Budget in project cost
- Schedule early (3+ months for SOC 2)
- Don't skip (non-negotiable for compliance)

### Use External Validators

**Don't self-certify:**
- Penetration testing: Hire external firm
- Security audits: Independent auditor
- Compliance: Certification body
- Performance: Load testing service

**TEA's role:** Prepare for external validation, don't replace it.

## Related Guides

**Workflow Guides:**
- [How to Run NFR Assessment](/docs/how-to/workflows/run-nfr-assess.md) - Deep dive on NFRs
- [How to Run Trace](/docs/how-to/workflows/run-trace.md) - Gate decisions with evidence
- [How to Run Test Review](/docs/how-to/workflows/run-test-review.md) - Quality audits
- [How to Run Test Design](/docs/how-to/workflows/run-test-design.md) - Compliance-focused planning

**Use-Case Guides:**
- [Using TEA with Existing Tests](/docs/how-to/brownfield/use-tea-with-existing-tests.md) - Brownfield patterns

**Customization:**
- [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md) - Production-ready utilities

## Understanding the Concepts

- [Engagement Models](/docs/explanation/tea/engagement-models.md) - Enterprise model explained
- [Risk-Based Testing](/docs/explanation/tea/risk-based-testing.md) - Probability × impact scoring
- [Test Quality Standards](/docs/explanation/tea/test-quality-standards.md) - Enterprise quality thresholds
- [TEA Overview](/docs/explanation/features/tea-overview.md) - Complete TEA lifecycle

## Reference

- [TEA Command Reference](/docs/reference/tea/commands.md) - All 8 workflows
- [TEA Configuration](/docs/reference/tea/configuration.md) - Enterprise config options
- [Knowledge Base Index](/docs/reference/tea/knowledge-base.md) - Testing patterns
- [Glossary](/docs/reference/glossary/index.md#test-architect-tea-concepts) - TEA terminology

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)
@ -1,436 +0,0 @@
---
title: "How to Run ATDD with TEA"
description: Generate failing acceptance tests before implementation using TEA's ATDD workflow
---

# How to Run ATDD with TEA

Use TEA's `*atdd` workflow to generate failing acceptance tests BEFORE implementation. This is the TDD (Test-Driven Development) red phase: tests fail first, guide development, then pass.

## When to Use This

- You're about to implement a NEW feature (feature doesn't exist yet)
- You want to follow the TDD workflow (red → green → refactor)
- You want tests to guide your implementation
- You're practicing acceptance test-driven development

**Don't use this if:**
- The feature already exists (use `*automate` instead)
- You want tests that pass immediately

## Prerequisites

- BMad Method installed
- TEA agent available
- Test framework setup complete (run `*framework` if needed)
- Story or feature defined with acceptance criteria

**Note:** This guide uses Playwright examples. If using Cypress, commands and syntax will differ (e.g., `cy.get()` instead of `page.locator()`).

## Steps

### 1. Load TEA Agent

Start a fresh chat and load TEA:

```
*tea
```

### 2. Run the ATDD Workflow

```
*atdd
```

### 3. Provide Context

TEA will ask for:

**Story/Feature Details:**
```
We're adding a user profile page where users can:
- View their profile information
- Edit their name and email
- Upload a profile picture
- Save changes with validation
```

**Acceptance Criteria:**
```
Given I'm logged in
When I navigate to /profile
Then I see my current name and email

Given I'm on the profile page
When I click "Edit Profile"
Then I can modify my name and email

Given I've edited my profile
When I click "Save"
Then my changes are persisted
And I see a success message

Given I upload an invalid file type
When I try to save
Then I see an error message
And changes are not saved
```

**Reference Documents** (optional):
- Point to your story file
- Reference PRD or tech spec
- Link to test design (if you ran `*test-design` first)

### 4. Specify Test Levels

TEA will ask what test levels to generate:

**Options:**
- E2E tests (browser-based, full user journey)
- API tests (backend only, faster)
- Component tests (UI components in isolation)
- Mix of levels (see [API Tests First, E2E Later](#api-tests-first-e2e-later) tip)

### Component Testing by Framework

TEA generates component tests using framework-appropriate tools:

| Your Framework | Component Testing Tool                      |
| -------------- | ------------------------------------------- |
| **Cypress**    | Cypress Component Testing (*.cy.tsx)        |
| **Playwright** | Vitest + React Testing Library (*.test.tsx) |

**Example response:**
```
Generate:
- API tests for profile CRUD operations
- E2E tests for the complete profile editing flow
- Component tests for ProfileForm validation (if using Cypress or Vitest)
- Focus on P0 and P1 scenarios
```

### 5. Review Generated Tests

TEA generates **failing tests** in appropriate directories:

#### API Tests (`tests/api/profile.spec.ts`):

**Vanilla Playwright:**
```typescript
import { test, expect } from '@playwright/test';

test.describe('Profile API', () => {
  test('should fetch user profile', async ({ request }) => {
    const response = await request.get('/api/profile');

    expect(response.status()).toBe(200);
    const profile = await response.json();
    expect(profile).toHaveProperty('name');
    expect(profile).toHaveProperty('email');
    expect(profile).toHaveProperty('avatarUrl');
  });

  test('should update user profile', async ({ request }) => {
    const response = await request.patch('/api/profile', {
      data: {
        name: 'Updated Name',
        email: 'updated@example.com'
      }
    });

    expect(response.status()).toBe(200);
    const updated = await response.json();
    expect(updated.name).toBe('Updated Name');
    expect(updated.email).toBe('updated@example.com');
  });

  test('should validate email format', async ({ request }) => {
    const response = await request.patch('/api/profile', {
      data: {
        email: 'invalid-email'
      }
    });

    expect(response.status()).toBe(400);
    const error = await response.json();
    expect(error.message).toContain('Invalid email format');
  });
});
```

**With Playwright Utils:**
```typescript
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { expect } from '@playwright/test';
import { z } from 'zod';

const ProfileSchema = z.object({
  name: z.string(),
  email: z.string().email(),
  avatarUrl: z.string().url()
});

test.describe('Profile API', () => {
  test('should fetch user profile', async ({ apiRequest }) => {
    const { status, body } = await apiRequest({
      method: 'GET',
      path: '/api/profile'
    }).validateSchema(ProfileSchema); // Chained validation

    expect(status).toBe(200);
    // Schema already validated, type-safe access
    expect(body.name).toBeDefined();
    expect(body.email).toContain('@');
  });

  test('should update user profile', async ({ apiRequest }) => {
    const { status, body } = await apiRequest({
      method: 'PATCH',
      path: '/api/profile',
      body: {
        name: 'Updated Name',
        email: 'updated@example.com'
      }
    }).validateSchema(ProfileSchema); // Chained validation

    expect(status).toBe(200);
    expect(body.name).toBe('Updated Name');
    expect(body.email).toBe('updated@example.com');
  });

  test('should validate email format', async ({ apiRequest }) => {
    const { status, body } = await apiRequest({
      method: 'PATCH',
      path: '/api/profile',
      body: { email: 'invalid-email' }
    });

    expect(status).toBe(400);
    expect(body.message).toContain('Invalid email format');
  });
});
```

**Key Benefits:**
- Returns `{ status, body }` (cleaner than `response.status()` + `await response.json()`)
- Automatic schema validation with Zod
- Type-safe response bodies
- Automatic retry for 5xx errors
- Less boilerplate

#### E2E Tests (`tests/e2e/profile.spec.ts`):

```typescript
import { test, expect } from '@playwright/test';

test('should edit and save profile', async ({ page }) => {
  // Login first
  await page.goto('/login');
  await page.getByLabel('Email').fill('test@example.com');
  await page.getByLabel('Password').fill('password123');
  await page.getByRole('button', { name: 'Sign in' }).click();

  // Navigate to profile
  await page.goto('/profile');

  // Edit profile
  await page.getByRole('button', { name: 'Edit Profile' }).click();
  await page.getByLabel('Name').fill('Updated Name');
  await page.getByRole('button', { name: 'Save' }).click();

  // Verify success
  await expect(page.getByText('Profile updated')).toBeVisible();
});
```

TEA generates additional E2E tests for display, validation errors, etc., based on the acceptance criteria.

#### Implementation Checklist

TEA also provides an implementation checklist:

```markdown
## Implementation Checklist

### Backend
- [ ] Create `GET /api/profile` endpoint
- [ ] Create `PATCH /api/profile` endpoint
- [ ] Add email validation middleware
- [ ] Add profile picture upload handling
- [ ] Write API unit tests

### Frontend
- [ ] Create ProfilePage component
- [ ] Implement profile form with validation
- [ ] Add file upload for avatar
- [ ] Handle API errors gracefully
- [ ] Add loading states

### Tests
- [x] API tests generated (failing)
- [x] E2E tests generated (failing)
- [ ] Run tests after implementation (should pass)
```
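
The failing email-format test implies some server-side rule for the validation middleware to enforce. A minimal sketch of that rule (the error message is copied from the test above; the function name is hypothetical and the regex is a simplification, not a full RFC 5322 validator):

```typescript
// Hypothetical server-side check the 'should validate email format' test expects.
function validateProfilePatch(patch: { name?: string; email?: string }): {
  status: number;
  message?: string;
} {
  // Simplified email shape check: something@something.tld, no whitespace.
  if (patch.email !== undefined && !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(patch.email)) {
    return { status: 400, message: 'Invalid email format' };
  }
  return { status: 200 };
}
```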
|
|
||||||
|
|
||||||
### 6. Verify Tests Fail

This is the TDD red phase - tests MUST fail before implementation.

**For Playwright:**
```bash
npx playwright test
```

**For Cypress:**
```bash
npx cypress run
```

Expected output:
```
Running 6 tests using 1 worker

✗ tests/api/profile.spec.ts:3:3 › should fetch user profile
  Error: expect(received).toBe(expected)
  Expected: 200
  Received: 404

✗ tests/e2e/profile.spec.ts:10:3 › should display current profile information
  Error: page.goto: net::ERR_ABORTED
```

**All tests should fail!** This confirms:
- Feature doesn't exist yet
- Tests will guide implementation
- You have clear success criteria
### 7. Implement the Feature

Now implement the feature following the test guidance:

1. Start with API tests (backend first)
2. Make API tests pass
3. Move to E2E tests (frontend)
4. Make E2E tests pass
5. Refactor with confidence (tests protect you)
### 8. Verify Tests Pass

After implementation, run your test suite.

**For Playwright:**
```bash
npx playwright test
```

**For Cypress:**
```bash
npx cypress run
```

Expected output:
```
Running 6 tests using 1 worker

✓ tests/api/profile.spec.ts:3:3 › should fetch user profile (850ms)
✓ tests/api/profile.spec.ts:15:3 › should update user profile (1.2s)
✓ tests/api/profile.spec.ts:30:3 › should validate email format (650ms)
✓ tests/e2e/profile.spec.ts:10:3 › should display current profile (2.1s)
✓ tests/e2e/profile.spec.ts:18:3 › should edit and save profile (3.2s)
✓ tests/e2e/profile.spec.ts:35:3 › should show validation error (1.8s)

6 passed (9.8s)
```

**Green!** You've completed the TDD cycle: red → green → refactor.
## What You Get

### Failing Tests
- API tests for backend endpoints
- E2E tests for user workflows
- Component tests (if requested)
- All tests fail initially (red phase)

### Implementation Guidance
- Clear checklist of what to build
- Acceptance criteria translated to assertions
- Edge cases and error scenarios identified

### TDD Workflow Support
- Tests guide implementation
- Confidence to refactor
- Living documentation of features
## Tips

### Start with Test Design

Run `*test-design` before `*atdd` for better results:

```
*test-design   # Risk assessment and priorities
*atdd          # Generate tests based on design
```

### MCP Enhancements (Optional)

If you have MCP servers configured (`tea_use_mcp_enhancements: true`), TEA can use them during `*atdd`.

**Note:** ATDD is for features that don't exist yet, so recording mode (verify selectors with live UI) only applies if you have skeleton/mockup UI already implemented. For typical ATDD (no UI yet), TEA infers selectors from best practices.

See [Enable MCP Enhancements](/docs/how-to/customization/enable-tea-mcp-enhancements.md) for setup.
### Focus on P0/P1 Scenarios

Don't generate tests for everything at once:

```
Generate tests for:
- P0: Critical path (happy path)
- P1: High value (validation, errors)

Skip P2/P3 for now - add later with *automate
```
### API Tests First, E2E Later

Recommended order:
1. Generate API tests with `*atdd`
2. Implement backend (make API tests pass)
3. Generate E2E tests with `*atdd` (or `*automate`)
4. Implement frontend (make E2E tests pass)

This "outside-in" approach is faster and more reliable.
### Keep Tests Deterministic

TEA generates deterministic tests by default:
- No hard waits (`waitForTimeout`)
- Network-first patterns (wait for responses)
- Explicit assertions (no conditionals)

Don't modify these patterns - they prevent flakiness!
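The network-first rule can be seen framework-free: block on the actual signal, bounded by a timeout, instead of sleeping a fixed interval. A sketch under that framing — the helper name is illustrative, and in real tests Playwright's `waitForResponse` plays this role:

```typescript
// Race the real signal against a bounded timeout. The test resumes the instant
// the response arrives (fast) or fails loudly (deterministic) - never a silent sleep.
async function waitForSignal<T>(signal: Promise<T>, timeoutMs: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`no response within ${timeoutMs}ms`)), timeoutMs);
  });
  try {
    return await Promise.race([signal, timeout]);
  } finally {
    clearTimeout(timer);
  }
}

// Resolves as soon as the "response" lands, not after a fixed sleep.
waitForSignal(Promise.resolve('profile updated'), 1000)
  .then(body => console.log(body)); // logs: profile updated
```

A hard wait spends its full duration even when the response arrived early, and still races when the response is late; waiting on the event itself has neither failure mode.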
## Related Guides

- [How to Run Test Design](/docs/how-to/workflows/run-test-design.md) - Plan before generating
- [How to Run Automate](/docs/how-to/workflows/run-automate.md) - Tests for existing features
- [How to Set Up Test Framework](/docs/how-to/workflows/setup-test-framework.md) - Initial setup

## Understanding the Concepts

- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - **Why TEA generates quality tests** (foundational)
- [Risk-Based Testing](/docs/explanation/tea/risk-based-testing.md) - Why P0 vs P3 matters
- [Test Quality Standards](/docs/explanation/tea/test-quality-standards.md) - What makes tests good
- [Network-First Patterns](/docs/explanation/tea/network-first-patterns.md) - Avoiding flakiness

## Reference

- [Command: *atdd](/docs/reference/tea/commands.md#atdd) - Full command reference
- [TEA Configuration](/docs/reference/tea/configuration.md) - MCP and Playwright Utils options

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)
@ -1,653 +0,0 @@
---
title: "How to Run Automate with TEA"
description: Expand test automation coverage after implementation using TEA's automate workflow
---

# How to Run Automate with TEA

Use TEA's `*automate` workflow to generate comprehensive tests for existing features. Unlike `*atdd`, these tests pass immediately because the feature already exists.

## When to Use This

- Feature already exists and works
- Want to add test coverage to existing code
- Need tests that pass immediately
- Expanding existing test suite
- Adding tests to legacy code

**Don't use this if:**
- Feature doesn't exist yet (use `*atdd` instead)
- Want failing tests to guide development (use `*atdd` for TDD)

## Prerequisites

- BMad Method installed
- TEA agent available
- Test framework setup complete (run `*framework` if needed)
- Feature implemented and working

**Note:** This guide uses Playwright examples. If using Cypress, commands and syntax will differ.

## Steps

### 1. Load TEA Agent

Start a fresh chat and load TEA:

```
*tea
```

### 2. Run the Automate Workflow

```
*automate
```
### 3. Provide Context

TEA will ask for context about what you're testing.

#### Option A: BMad-Integrated Mode (Recommended)

If you have BMad artifacts (stories, test designs, PRDs):

**What are you testing?**
```
I'm testing the user profile feature we just implemented.
Story: story-profile-management.md
Test Design: test-design-epic-1.md
```

**Reference documents:**
- Story file with acceptance criteria
- Test design document (if available)
- PRD sections relevant to this feature
- Tech spec (if available)

**Existing tests:**
```
We have basic tests in tests/e2e/profile-view.spec.ts
Avoid duplicating that coverage
```

TEA will analyze your artifacts and generate comprehensive tests that:
- Cover acceptance criteria from the story
- Follow priorities from test design (P0 → P1 → P2)
- Avoid duplicating existing tests
- Include edge cases and error scenarios

#### Option B: Standalone Mode

If you're using TEA Solo or don't have BMad artifacts:

**What are you testing?**
```
TodoMVC React application at https://todomvc.com/examples/react/
Features: Create todos, mark as complete, filter by status, delete todos
```

**Specific scenarios to cover:**
```
- Creating todos (happy path)
- Marking todos as complete/incomplete
- Filtering (All, Active, Completed)
- Deleting todos
- Edge cases (empty input, long text)
```

TEA will analyze the application and generate tests based on your description.
### 4. Specify Test Levels

TEA will ask which test levels to generate:

**Options:**
- **E2E tests** - Full browser-based user workflows
- **API tests** - Backend endpoint testing (faster, more reliable)
- **Component tests** - UI component testing in isolation (framework-dependent)
- **Mix** - Combination of levels (recommended)

**Example response:**
```
Generate:
- API tests for all CRUD operations
- E2E tests for critical user workflows (P0)
- Focus on P0 and P1 scenarios
- Skip P3 (low priority edge cases)
```
### 5. Review Generated Tests

TEA generates a comprehensive test suite with multiple test levels.

#### API Tests (`tests/api/profile.spec.ts`):

**Vanilla Playwright:**
```typescript
import { test, expect } from '@playwright/test';

test.describe('Profile API', () => {
  let authToken: string;

  test.beforeAll(async ({ request }) => {
    // Manual auth token fetch
    const response = await request.post('/api/auth/login', {
      data: { email: 'test@example.com', password: 'password123' }
    });
    const { token } = await response.json();
    authToken = token;
  });

  test('should fetch user profile', async ({ request }) => {
    const response = await request.get('/api/profile', {
      headers: { Authorization: `Bearer ${authToken}` }
    });

    expect(response.ok()).toBeTruthy();
    const profile = await response.json();
    expect(profile).toMatchObject({
      id: expect.any(String),
      name: expect.any(String),
      email: expect.any(String)
    });
  });

  test('should update profile successfully', async ({ request }) => {
    const response = await request.patch('/api/profile', {
      headers: { Authorization: `Bearer ${authToken}` },
      data: {
        name: 'Updated Name',
        bio: 'Test bio'
      }
    });

    expect(response.ok()).toBeTruthy();
    const updated = await response.json();
    expect(updated.name).toBe('Updated Name');
    expect(updated.bio).toBe('Test bio');
  });

  test('should validate email format', async ({ request }) => {
    const response = await request.patch('/api/profile', {
      headers: { Authorization: `Bearer ${authToken}` },
      data: { email: 'invalid-email' }
    });

    expect(response.status()).toBe(400);
    const error = await response.json();
    expect(error.message).toContain('Invalid email');
  });

  test('should require authentication', async ({ request }) => {
    const response = await request.get('/api/profile');
    expect(response.status()).toBe(401);
  });
});
```
**With Playwright Utils:**
```typescript
import { test as base, expect } from '@playwright/test';
import { test as apiRequestFixture } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { createAuthFixtures } from '@seontechnologies/playwright-utils/auth-session';
import { mergeTests } from '@playwright/test';
import { z } from 'zod';

const ProfileSchema = z.object({
  id: z.string(),
  name: z.string(),
  email: z.string().email()
});

// Merge API and auth fixtures
const authFixtureTest = base.extend(createAuthFixtures());
export const testWithAuth = mergeTests(apiRequestFixture, authFixtureTest);

testWithAuth.describe('Profile API', () => {
  testWithAuth('should fetch user profile', async ({ apiRequest, authToken }) => {
    const { status, body } = await apiRequest({
      method: 'GET',
      path: '/api/profile',
      headers: { Authorization: `Bearer ${authToken}` }
    }).validateSchema(ProfileSchema); // Chained validation

    expect(status).toBe(200);
    // Schema already validated, type-safe access
    expect(body.name).toBeDefined();
  });

  testWithAuth('should update profile successfully', async ({ apiRequest, authToken }) => {
    const { status, body } = await apiRequest({
      method: 'PATCH',
      path: '/api/profile',
      body: { name: 'Updated Name', bio: 'Test bio' },
      headers: { Authorization: `Bearer ${authToken}` }
    }).validateSchema(ProfileSchema); // Chained validation

    expect(status).toBe(200);
    expect(body.name).toBe('Updated Name');
  });

  testWithAuth('should validate email format', async ({ apiRequest, authToken }) => {
    const { status, body } = await apiRequest({
      method: 'PATCH',
      path: '/api/profile',
      body: { email: 'invalid-email' },
      headers: { Authorization: `Bearer ${authToken}` }
    });

    expect(status).toBe(400);
    expect(body.message).toContain('Invalid email');
  });
});
```

**Key Differences:**
- `authToken` fixture (persisted, reused across tests)
- `apiRequest` returns `{ status, body }` (cleaner)
- Schema validation with Zod (type-safe)
- Automatic retry for 5xx errors
- Less boilerplate (no manual `await response.json()` everywhere)
#### E2E Tests (`tests/e2e/profile.spec.ts`):

```typescript
import { test, expect } from '@playwright/test';

test('should edit profile', async ({ page }) => {
  // Login
  await page.goto('/login');
  await page.getByLabel('Email').fill('test@example.com');
  await page.getByLabel('Password').fill('password123');
  await page.getByRole('button', { name: 'Sign in' }).click();

  // Edit profile
  await page.goto('/profile');
  await page.getByRole('button', { name: 'Edit Profile' }).click();
  await page.getByLabel('Name').fill('New Name');
  await page.getByRole('button', { name: 'Save' }).click();

  // Verify success
  await expect(page.getByText('Profile updated')).toBeVisible();
});
```

TEA generates additional tests for validation, edge cases, etc. based on priorities.
#### Fixtures (`tests/support/fixtures/profile.ts`):

**Vanilla Playwright:**
```typescript
import { test as base, Page } from '@playwright/test';

type ProfileFixtures = {
  authenticatedPage: Page;
  testProfile: {
    name: string;
    email: string;
    bio: string;
  };
};

export const test = base.extend<ProfileFixtures>({
  authenticatedPage: async ({ page }, use) => {
    // Manual login flow
    await page.goto('/login');
    await page.getByLabel('Email').fill('test@example.com');
    await page.getByLabel('Password').fill('password123');
    await page.getByRole('button', { name: 'Sign in' }).click();
    await page.waitForURL(/\/dashboard/);

    await use(page);
  },

  testProfile: async ({ request }, use) => {
    // Static test data
    const profile = {
      name: 'Test User',
      email: 'test@example.com',
      bio: 'Test bio'
    };

    await use(profile);
  }
});
```
**With Playwright Utils:**
```typescript
import { test as base } from '@playwright/test';
import { createAuthFixtures } from '@seontechnologies/playwright-utils/auth-session';
import { mergeTests } from '@playwright/test';
import { faker } from '@faker-js/faker';

type ProfileFixtures = {
  testProfile: {
    name: string;
    email: string;
    bio: string;
  };
};

// Merge auth fixtures with custom fixtures
const authTest = base.extend(createAuthFixtures());
const profileTest = base.extend<ProfileFixtures>({
  testProfile: async ({}, use) => {
    // Dynamic test data with faker
    const profile = {
      name: faker.person.fullName(),
      email: faker.internet.email(),
      bio: faker.person.bio()
    };

    await use(profile);
  }
});

export const test = mergeTests(authTest, profileTest);
export { expect } from '@playwright/test';
```

**Usage:**
```typescript
import { test, expect } from '../support/fixtures/profile';

test('should update profile', async ({ page, authToken, testProfile }) => {
  // authToken from auth-session (automatic, persisted)
  // testProfile from custom fixture (dynamic data)

  await page.goto('/profile');
  // Test with dynamic, unique data
});
```

**Key Benefits:**
- `authToken` fixture (persisted token, no manual login)
- Dynamic test data with faker (no conflicts)
- Fixture composition with mergeTests
- Reusable across test files
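The "no conflicts" benefit is easy to see in isolation: give every test run data that cannot collide, so parallel workers never fight over the same record. A dependency-free sketch of the idea faker serves here — the helper name and counter scheme are illustrative:

```typescript
// Monotonic counter plus a time component: unique within a process run,
// so two tests created in the same millisecond still get distinct records.
let seq = 0;
function uniqueProfile(prefix = 'user'): { name: string; email: string } {
  seq += 1;
  const suffix = `${Date.now().toString(36)}-${seq}`;
  return {
    name: `${prefix}-${suffix}`,
    email: `${prefix}-${suffix}@example.com`,
  };
}

const a = uniqueProfile();
const b = uniqueProfile();
console.log(a.email === b.email); // false
```

Faker adds realistic-looking values on top of this uniqueness, which matters when the UI renders the data.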
### 6. Review Additional Artifacts

TEA also generates:

#### Updated README (`tests/README.md`):

```markdown
# Test Suite

## Running Tests

### All Tests
npm test

### Specific Levels
npm run test:api     # API tests only
npm run test:e2e     # E2E tests only
npm run test:smoke   # Smoke tests (@smoke tag)

### Single File
npx playwright test tests/api/profile.spec.ts

## Test Structure

tests/
├── api/        # API tests (fast, reliable)
├── e2e/        # E2E tests (full workflows)
├── fixtures/   # Shared test utilities
└── README.md

## Writing Tests

Follow the patterns in existing tests:
- Use fixtures for authentication
- Network-first patterns (no hard waits)
- Explicit assertions
- Self-cleaning tests
```

#### Definition of Done Summary:

```markdown
## Test Quality Checklist

✅ All tests pass on first run
✅ No hard waits (waitForTimeout)
✅ No conditionals for flow control
✅ Assertions are explicit
✅ Tests clean up after themselves
✅ Tests can run in parallel
✅ Execution time < 1.5 minutes per test
✅ Test files < 300 lines
```
### 7. Run the Tests

All tests should pass immediately since the feature exists:

**For Playwright:**
```bash
npx playwright test
```

**For Cypress:**
```bash
npx cypress run
```

Expected output:
```
Running 15 tests using 4 workers

✓ tests/api/profile.spec.ts (4 tests) - 2.1s
✓ tests/e2e/profile-workflow.spec.ts (2 tests) - 5.3s

15 passed (7.4s)
```

**All green!** Tests pass because the feature already exists.
### 8. Review Test Coverage

Check which scenarios are covered:

```bash
# View test report
npx playwright show-report

# Check coverage (if configured)
npm run test:coverage
```

Compare against:
- Acceptance criteria from story
- Test priorities from test design
- Edge cases and error scenarios
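Part of that comparison can be mechanical: flag any acceptance-criterion ID that never appears in a test title. A sketch — the `AC-n` naming convention and function name are illustrative assumptions, not a TEA feature:

```typescript
// Flag acceptance criteria whose ID never appears in any test title.
function uncoveredCriteria(criteriaIds: string[], testTitles: string[]): string[] {
  return criteriaIds.filter(id => !testTitles.some(title => title.includes(id)));
}

const criteria = ['AC-1', 'AC-2', 'AC-3'];
const titles = [
  'AC-1 should fetch user profile',
  'AC-2 should update profile successfully',
];
console.log(uncoveredCriteria(criteria, titles).join(',')); // AC-3
```

Feed it the output of `npx playwright test --list` and the gaps surface without reading every spec file.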
## What You Get

### Comprehensive Test Suite
- **API tests** - Fast, reliable backend testing
- **E2E tests** - Critical user workflows
- **Component tests** - UI component testing (if requested)
- **Fixtures** - Shared utilities and setup

### Component Testing by Framework

TEA supports component testing using framework-appropriate tools:

| Your Framework | Component Testing Tool | Tests Location |
| -------------- | ------------------------------ | ----------------------------------------- |
| **Cypress** | Cypress Component Testing | `tests/component/` |
| **Playwright** | Vitest + React Testing Library | `tests/component/` or `src/**/*.test.tsx` |

**Note:** Component tests use separate tooling from E2E tests:
- Cypress users: TEA generates Cypress Component Tests
- Playwright users: TEA generates Vitest + React Testing Library tests

### Quality Features
- **Network-first patterns** - Wait for actual responses, not timeouts
- **Deterministic tests** - No flakiness, no conditionals
- **Self-cleaning** - Tests don't leave test data behind
- **Parallel-safe** - Can run all tests concurrently

### Documentation
- **Updated README** - How to run tests
- **Test structure explanation** - Where tests live
- **Definition of Done** - Quality standards
## Tips

### Start with Test Design

Run `*test-design` before `*automate` for better results:

```
*test-design   # Risk assessment, priorities
*automate      # Generate tests based on priorities
```

TEA will focus on P0/P1 scenarios and skip low-value tests.

### Prioritize Test Levels

Not everything needs E2E tests:

**Good strategy:**
```
- P0 scenarios: API + E2E tests
- P1 scenarios: API tests only
- P2 scenarios: API tests (happy path)
- P3 scenarios: Skip or add later
```

**Why?**
- API tests are 10x faster than E2E
- API tests are more reliable (no browser flakiness)
- E2E tests reserved for critical user journeys
### Avoid Duplicate Coverage

Tell TEA about existing tests:

```
We already have tests in:
- tests/e2e/profile-view.spec.ts (viewing profile)
- tests/api/auth.spec.ts (authentication)

Don't duplicate that coverage
```

TEA will analyze existing tests and only generate new scenarios.

### MCP Enhancements (Optional)

If you have MCP servers configured (`tea_use_mcp_enhancements: true`), TEA can use them during `*automate` for:

- **Healing mode:** Fix broken selectors, update assertions, enhance with trace analysis
- **Recording mode:** Verify selectors with live browser, capture network requests

No prompts needed - TEA uses MCPs automatically when available. See [Enable MCP Enhancements](/docs/how-to/customization/enable-tea-mcp-enhancements.md) for setup.
### Generate Tests Incrementally

Don't generate all tests at once:

**Iteration 1:**
```
Generate P0 tests only (critical path)
Run: *automate
```

**Iteration 2:**
```
Generate P1 tests (high value scenarios)
Run: *automate
Tell TEA to avoid P0 coverage
```

**Iteration 3:**
```
Generate P2 tests (if time permits)
Run: *automate
```

This iterative approach:
- Provides fast feedback
- Allows validation before proceeding
- Keeps test generation focused
## Common Issues

### Tests Pass But Coverage Is Incomplete

**Problem:** Tests pass but don't cover all scenarios.

**Cause:** TEA wasn't given complete context.

**Solution:** Provide more details:
```
Generate tests for:
- All acceptance criteria in story-profile.md
- Error scenarios (validation, authorization)
- Edge cases (empty fields, long inputs)
```

### Too Many Tests Generated

**Problem:** TEA generated 50 tests for a simple feature.

**Cause:** Didn't specify priorities or scope.

**Solution:** Be specific:
```
Generate ONLY:
- P0 and P1 scenarios
- API tests for all scenarios
- E2E tests only for critical workflows
- Skip P2/P3 for now
```

### Tests Duplicate Existing Coverage

**Problem:** New tests cover the same scenarios as existing tests.

**Cause:** Didn't tell TEA about existing tests.

**Solution:** Specify existing coverage:
```
We already have these tests:
- tests/api/profile.spec.ts (GET /api/profile)
- tests/e2e/profile-view.spec.ts (viewing profile)

Generate tests for scenarios NOT covered by those files
```
### MCP Enhancements for Better Selectors

If you have MCP servers configured, TEA verifies selectors against a live browser. Otherwise, TEA generates accessible selectors (`getByRole`, `getByLabel`) by default.

Setup: answer "Yes" to MCPs in the BMad installer and configure MCP servers in your IDE. See [Enable MCP Enhancements](/docs/how-to/customization/enable-tea-mcp-enhancements.md).

## Related Guides

- [How to Run Test Design](/docs/how-to/workflows/run-test-design.md) - Plan before generating
- [How to Run ATDD](/docs/how-to/workflows/run-atdd.md) - Failing tests before implementation
- [How to Run Test Review](/docs/how-to/workflows/run-test-review.md) - Audit generated quality

## Understanding the Concepts

- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - **Why TEA generates quality tests** (foundational)
- [Risk-Based Testing](/docs/explanation/tea/risk-based-testing.md) - Why prioritize P0 over P3
- [Test Quality Standards](/docs/explanation/tea/test-quality-standards.md) - What makes tests good
- [Fixture Architecture](/docs/explanation/tea/fixture-architecture.md) - Reusable test patterns

## Reference

- [Command: *automate](/docs/reference/tea/commands.md#automate) - Full command reference
- [TEA Configuration](/docs/reference/tea/configuration.md) - MCP and Playwright Utils options

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)
@ -1,679 +0,0 @@
---
title: "How to Run NFR Assessment with TEA"
description: Validate non-functional requirements for security, performance, reliability, and maintainability using TEA
---

# How to Run NFR Assessment with TEA

Use TEA's `*nfr-assess` workflow to validate non-functional requirements (NFRs) with evidence-based assessment across security, performance, reliability, and maintainability.

## When to Use This

- Enterprise projects with compliance requirements
- Projects with strict NFR thresholds
- Before production release
- When NFRs are critical to project success
- Security or performance is mission-critical

**Best for:**
- Enterprise track projects
- Compliance-heavy industries (finance, healthcare, government)
- High-traffic applications
- Security-critical systems

## Prerequisites

- BMad Method installed
- TEA agent available
- NFRs defined in PRD or requirements doc
- Evidence preferred but not required (test results, security scans, performance metrics)

**Note:** You can run NFR assessment without complete evidence. TEA will mark categories as CONCERNS where evidence is missing and document what's needed.

## Steps

### 1. Run the NFR Assessment Workflow

Start a fresh chat and run:

```
*nfr-assess
```

This loads TEA and starts the NFR assessment workflow.
### 2. Specify NFR Categories

TEA will ask which NFR categories to assess.

**Available Categories:**

| Category | Focus Areas |
|----------|-------------|
| **Security** | Authentication, authorization, encryption, vulnerabilities, security headers, input validation |
| **Performance** | Response time, throughput, resource usage, database queries, frontend load time |
| **Reliability** | Error handling, recovery mechanisms, availability, failover, data backup |
| **Maintainability** | Code quality, test coverage, technical debt, documentation, dependency health |

**Example Response:**
```
Assess:
- Security (critical for user data)
- Performance (API must be fast)
- Reliability (99.9% uptime requirement)

Skip maintainability for now
```

### 3. Provide NFR Thresholds

TEA will ask for specific thresholds for each category.

**Critical Principle: Never guess thresholds.**

If you don't know the exact requirement, tell TEA to mark as CONCERNS and request clarification from stakeholders.

**Security Thresholds**

**Example:**
```
Requirements:
- All endpoints require authentication: YES
- Data encrypted at rest: YES (PostgreSQL TDE)
- Zero critical vulnerabilities: YES (npm audit)
- Input validation on all endpoints: YES (Zod schemas)
- Security headers configured: YES (helmet.js)
```

**Performance Thresholds**

**Example:**
```
Requirements:
- API response time P99: < 200ms
- API response time P95: < 150ms
- Throughput: > 1000 requests/second
- Frontend initial load: < 2 seconds
- Database query time P99: < 50ms
```

**Reliability Thresholds**

**Example:**
```
Requirements:
- Error handling: All endpoints return structured errors
- Availability: 99.9% uptime
- Recovery time: < 5 minutes (RTO)
- Data backup: Daily automated backups
- Failover: Automatic with < 30s downtime
```

**Maintainability Thresholds**

**Example:**
```
Requirements:
- Test coverage: > 80%
- Code quality: SonarQube grade A
- Documentation: All APIs documented
- Dependency age: < 6 months outdated
- Technical debt: < 10% of codebase
```
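
Threshold checks like the ones above are mechanical enough to script. The sketch below is a hypothetical helper (`checkThreshold` is not part of TEA) that compares a measured value against a `<` or `>` target and returns CONCERNS when evidence is missing, matching the "never guess" principle:

```typescript
type Comparator = "lt" | "gt";

interface Threshold {
  name: string;
  target: number;      // e.g. 200 for "API response P99: < 200ms"
  comparator: Comparator;
  measured?: number;   // undefined when no evidence has been gathered yet
}

// PASS when the measurement meets the target; CONCERNS when it misses
// or when there is no evidence (never guess a threshold).
function checkThreshold(t: Threshold): "PASS" | "CONCERNS" {
  if (t.measured === undefined) return "CONCERNS";
  const met = t.comparator === "lt" ? t.measured < t.target : t.measured > t.target;
  return met ? "PASS" : "CONCERNS";
}
```

Feeding each requirement through a helper like this gives a per-row status for the assessment tables that follow.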

### 4. Provide Evidence

TEA will ask where to find evidence for each requirement.

**Evidence Sources:**

| Category | Evidence Type | Location |
|----------|---------------|----------|
| Security | Security scan reports | `/reports/security-scan.pdf` |
| Security | Vulnerability scan | `npm audit`, `snyk test` results |
| Security | Auth test results | Test reports showing auth coverage |
| Performance | Load test results | `/reports/k6-load-test.json` |
| Performance | APM data | Datadog, New Relic dashboards |
| Performance | Lighthouse scores | `/reports/lighthouse.json` |
| Reliability | Error rate metrics | Production monitoring dashboards |
| Reliability | Uptime data | StatusPage, PagerDuty logs |
| Maintainability | Coverage reports | `/reports/coverage/index.html` |
| Maintainability | Code quality | SonarQube dashboard |

**Example Response:**

```
Evidence:
- Security: npm audit results (clean), auth tests 15/15 passing
- Performance: k6 load test at /reports/k6-results.json
- Reliability: Error rate 0.01% in staging (logs in Datadog)

Don't have:
- Uptime data (new system, no baseline)
- Mark as CONCERNS and request monitoring setup
```

### 5. Review NFR Assessment Report

TEA generates a comprehensive assessment report.

**Assessment Report (`nfr-assessment.md`):**
````markdown
# Non-Functional Requirements Assessment

**Date:** 2026-01-13
**Epic:** User Profile Management
**Release:** v1.2.0
**Overall Decision:** CONCERNS ⚠️

## Executive Summary

| Category | Status | Critical Issues |
|----------|--------|-----------------|
| Security | PASS ✅ | 0 |
| Performance | CONCERNS ⚠️ | 2 |
| Reliability | PASS ✅ | 0 |
| Maintainability | PASS ✅ | 0 |

**Decision Rationale:**
Performance metrics are below target (P99 latency, throughput), with a mitigation plan in place. Security, reliability, and maintainability meet all requirements.

---

## Security Assessment

**Status:** PASS ✅

### Requirements Met

| Requirement | Target | Actual | Status |
|-------------|--------|--------|--------|
| Authentication required | All endpoints | 100% enforced | ✅ |
| Data encryption at rest | PostgreSQL TDE | Enabled | ✅ |
| Critical vulnerabilities | 0 | 0 | ✅ |
| Input validation | All endpoints | Zod schemas on 100% | ✅ |
| Security headers | Configured | helmet.js enabled | ✅ |

### Evidence

**Security Scan:**
```bash
$ npm audit
found 0 vulnerabilities
```

**Authentication Tests:**
- 15/15 auth tests passing
- Tested unauthorized access (401 responses)
- Token validation working

**Penetration Testing:**
- Report: `/reports/pentest-2026-01.pdf`
- Findings: 0 critical, 2 low (addressed)

**Conclusion:** All security requirements met. No blockers.

---

## Performance Assessment

**Status:** CONCERNS ⚠️

### Requirements Status

| Metric | Target | Actual | Status |
|--------|--------|--------|--------|
| API response P99 | < 200ms | 350ms | ❌ Exceeds |
| API response P95 | < 150ms | 180ms | ⚠️ Exceeds |
| Throughput | > 1000 rps | 850 rps | ⚠️ Below |
| Frontend load | < 2s | 1.8s | ✅ Met |
| DB query P99 | < 50ms | 85ms | ❌ Exceeds |

### Issues Identified

#### Issue 1: P99 Latency Exceeds Target

**Measured:** 350ms P99 (target: <200ms)
**Root Cause:** Database queries not optimized
- Missing indexes on profile queries
- N+1 query problem in profile endpoint

**Impact:** User experience degraded for 1% of requests

**Mitigation Plan:**
- Add composite index on `(user_id, profile_id)` - backend team, 2 days
- Refactor profile endpoint to use joins instead of multiple queries - backend team, 3 days
- Re-run load tests after optimization - QA team, 1 day

**Owner:** Backend team lead
**Deadline:** Before release (January 20, 2026)

#### Issue 2: Throughput Below Target

**Measured:** 850 rps (target: >1000 rps)
**Root Cause:** Connection pool size too small
- PostgreSQL max_connections = 100 (too low)
- No connection pooling in application

**Impact:** System cannot handle expected traffic

**Mitigation Plan:**
- Increase PostgreSQL max_connections to 500 - DevOps, 1 day
- Implement connection pooling with pg-pool - backend team, 2 days
- Re-run load tests - QA team, 1 day

**Owner:** DevOps + Backend team
**Deadline:** Before release (January 20, 2026)

### Evidence

**Load Testing:**
```
Tool: k6
Duration: 10 minutes
Virtual Users: 500 concurrent
Report: /reports/k6-load-test.json
```

**Results:**
```
scenarios: (100.00%) 1 scenario, 500 max VUs, 10m30s max duration
✓ http_req_duration..............: avg=150ms min=45ms med=120ms max=2.1s p(95)=180ms p(99)=350ms
  http_reqs......................: 85000 (850/s)
  http_req_failed................: 0.1%
```

**APM Data:**
- Tool: Datadog
- Dashboard: <https://app.datadoghq.com/dashboard/abc123>

**Conclusion:** Performance issues identified with a mitigation plan. Re-assess after optimization.

---

## Reliability Assessment

**Status:** PASS ✅

### Requirements Met

| Requirement | Target | Actual | Status |
|-------------|--------|--------|--------|
| Error handling | Structured errors | 100% endpoints | ✅ |
| Availability | 99.9% uptime | 99.95% (staging) | ✅ |
| Recovery time | < 5 min (RTO) | 3 min (tested) | ✅ |
| Data backup | Daily | Automated daily | ✅ |
| Failover | < 30s downtime | 15s (tested) | ✅ |

### Evidence

**Error Handling Tests:**
- All endpoints return structured JSON errors
- Error codes standardized (400, 401, 403, 404, 500)
- Error messages user-friendly (no stack traces)

**Chaos Engineering:**
- Tested database failover: 15s downtime ✅
- Tested service crash recovery: 3 min ✅
- Tested network partition: Graceful degradation ✅

**Monitoring:**
- Staging uptime (30 days): 99.95%
- Error rate: 0.01% (target: <0.1%)
- P50 availability: 100%

**Conclusion:** All reliability requirements exceeded. No issues.

---

## Maintainability Assessment

**Status:** PASS ✅

### Requirements Met

| Requirement | Target | Actual | Status |
|-------------|--------|--------|--------|
| Test coverage | > 80% | 85% | ✅ |
| Code quality | Grade A | Grade A | ✅ |
| Documentation | All APIs | 100% documented | ✅ |
| Outdated dependencies | < 6 months | 3 months avg | ✅ |
| Technical debt | < 10% | 7% | ✅ |

### Evidence

**Test Coverage:**
```
Statements : 85.2% ( 1205/1414 )
Branches   : 82.1% ( 412/502 )
Functions  : 88.5% ( 201/227 )
Lines      : 85.2% ( 1205/1414 )
```

**Code Quality:**
- SonarQube: Grade A
- Maintainability rating: A
- Technical debt ratio: 7%
- Code smells: 12 (all minor)

**Documentation:**
- API docs: 100% coverage (OpenAPI spec)
- README: Complete and up-to-date
- Architecture docs: ADRs for all major decisions

**Conclusion:** All maintainability requirements met. Codebase is healthy.

---

## Overall Gate Decision

### Decision: CONCERNS ⚠️

**Rationale:**
- **Blockers:** None
- **Concerns:** Performance metrics below target (P99 latency, throughput)
- **Mitigation:** Plan in place with clear owners and deadlines (5 days total)
- **Passing:** Security, reliability, maintainability all green

### Actions Required Before Release

1. **Optimize database queries** (backend team, 3 days)
   - Add indexes
   - Fix N+1 queries
   - Implement connection pooling

2. **Re-run performance tests** (QA team, 1 day)
   - Validate P99 < 200ms
   - Validate throughput > 1000 rps

3. **Update this assessment** (TEA, 1 hour)
   - Re-run `*nfr-assess` with new results
   - Confirm PASS status

### Waiver Option (If Business Approves)

If the business decides to deploy with current performance:

**Waiver Justification:**
```markdown
## Performance Waiver

**Waived By:** VP Engineering, Product Manager
**Date:** 2026-01-15
**Reason:** Business priority to launch by Q1
**Conditions:**
- Set monitoring alerts for P99 > 300ms
- Plan optimization for v1.3 (February release)
- Document known performance limitations in release notes

**Accepted Risk:**
- 1% of users experience slower response (350ms vs 200ms)
- System can handle current traffic (850 rps sufficient for launch)
- Optimization planned for next release
```

### Approvals

- [ ] Product Manager - Review business impact
- [ ] Tech Lead - Review mitigation plan
- [ ] QA Lead - Validate test evidence
- [ ] DevOps - Confirm infrastructure ready

---

## Monitoring Plan Post-Release

**Performance Alerts:**
- P99 latency > 400ms (critical)
- Throughput < 700 rps (warning)
- Error rate > 1% (critical)

**Review Cadence:**
- Daily: Check performance dashboards
- Weekly: Review alert trends
- Monthly: Re-assess NFRs
````
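
The overall decision in a report like this can be derived mechanically from the per-category statuses. A minimal sketch (a hypothetical helper, not TEA's actual implementation): the worst status wins, and WAIVED categories count as business-accepted risk that does not degrade the overall decision:

```typescript
type CategoryStatus = "PASS" | "CONCERNS" | "FAIL" | "WAIVED";

// Worst-status-wins: FAIL > CONCERNS > PASS. WAIVED categories are
// excluded because the business has already accepted their risk.
function overallDecision(statuses: CategoryStatus[]): CategoryStatus {
  const active = statuses.filter(s => s !== "WAIVED");
  if (active.includes("FAIL")) return "FAIL";
  if (active.includes("CONCERNS")) return "CONCERNS";
  return active.length > 0 ? "PASS" : "WAIVED";
}
```

For the example above, `overallDecision(["PASS", "CONCERNS", "PASS", "PASS"])` yields CONCERNS, matching the report.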

## What You Get

### NFR Assessment Report

- Category-by-category analysis (Security, Performance, Reliability, Maintainability)
- Requirements status (target vs actual)
- Evidence for each requirement
- Issues identified with root cause analysis

### Gate Decision

- **PASS** ✅ - All NFRs met, ready to release
- **CONCERNS** ⚠️ - Some NFRs not met, mitigation plan exists
- **FAIL** ❌ - Critical NFRs not met, blocks release
- **WAIVED** ⏭️ - Business-approved waiver with documented risk

### Mitigation Plans

- Specific actions to address concerns
- Owners and deadlines
- Re-assessment criteria

### Monitoring Plan

- Post-release monitoring strategy
- Alert thresholds
- Review cadence

## Tips

### Run NFR Assessment Early

**Phase 2 (Enterprise):**
Run `*nfr-assess` during planning to:

- Identify NFR requirements early
- Plan for performance testing
- Budget for security audits
- Set up monitoring infrastructure

**Phase 4 or Gate:**
Re-run before release to validate all requirements are met.

### Never Guess Thresholds

If you don't know the NFR target:

**Don't:**
```
API response time should probably be under 500ms
```

**Do:**
```
Mark as CONCERNS - Request threshold from stakeholders
"What is the acceptable API response time?"
```

### Collect Evidence Beforehand

Before running `*nfr-assess`, gather:

**Security:**
```bash
npm audit                     # Vulnerability scan
snyk test                     # Alternative security scan
npm run test:security         # Security test suite
```
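
Because `npm audit --json` emits machine-readable counts, the "zero critical vulnerabilities" requirement can be gated in CI. A sketch, assuming the `metadata.vulnerabilities` count object that recent npm versions emit:

```typescript
interface AuditMetadata {
  vulnerabilities: { info: number; low: number; moderate: number; high: number; critical: number };
}

// Gate on severity: critical findings always block; optionally block on high too.
function auditPasses(meta: AuditMetadata, blockHigh = false): boolean {
  const v = meta.vulnerabilities;
  if (v.critical > 0) return false;
  if (blockHigh && v.high > 0) return false;
  return true;
}

// Example shape of the parsed `metadata` section of `npm audit --json` output:
const clean: AuditMetadata = {
  vulnerabilities: { info: 2, low: 1, moderate: 0, high: 0, critical: 0 },
};
```

Pipe the real audit output through `JSON.parse` and pass its `metadata` field to a check like this.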

**Performance:**
```bash
npm run test:load             # k6 or artillery load tests
npm run test:lighthouse       # Frontend performance
npm run test:db-performance   # Database query analysis
```

**Reliability:**
- Production error rate (last 30 days)
- Uptime data (StatusPage, PagerDuty)
- Incident response times

**Maintainability:**
```bash
npm run test:coverage         # Test coverage report
npm run lint                  # Code quality check
npm outdated                  # Dependency freshness
```
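
Coverage output can be gated the same way. A sketch that parses Istanbul-style text-summary lines (the format is assumed; adapt the regex to your reporter) and checks each metric against the > 80% requirement:

```typescript
// Parse lines like "Statements : 85.2% ( 1205/1414 )" from a coverage
// text summary and verify every metric meets the threshold.
function coverageMeetsThreshold(summary: string, threshold = 80): boolean {
  const matches = [...summary.matchAll(/^\s*(Statements|Branches|Functions|Lines)\s*:\s*([\d.]+)%/gm)];
  if (matches.length === 0) return false; // no evidence: treat as not met
  return matches.every(m => parseFloat(m[2]) >= threshold);
}

const sample = `
Statements : 85.2% ( 1205/1414 )
Branches   : 82.1% ( 412/502 )
Functions  : 88.5% ( 201/227 )
Lines      : 85.2% ( 1205/1414 )
`;
```

Note that missing output fails the check rather than passing silently, consistent with "mark as CONCERNS where evidence is missing."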

### Use Real Data, Not Assumptions

**Don't:**
```
System is probably fast enough
Security seems fine
```

**Do:**
```
Load test results show P99 = 350ms
npm audit shows 0 vulnerabilities
Test coverage report shows 85%
```

Evidence-based decisions prevent surprises in production.

### Document Waivers Thoroughly

If the business approves a waiver:

**Required:**
- Who approved (name, role, date)
- Why (business justification)
- Conditions (monitoring, future plans)
- Accepted risk (quantified impact)

**Example:**
```markdown
Waived by: CTO, VP Product (2026-01-15)
Reason: Q1 launch critical for investor demo
Conditions: Optimize in v1.3, monitor closely
Risk: 1% of users experience 350ms latency (acceptable for launch)
```
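
A waiver record can be validated for completeness before it is accepted. A hypothetical sketch of the four required fields (this `Waiver` shape is illustrative, not a TEA type):

```typescript
interface Waiver {
  waivedBy: string[];   // who approved (name, role)
  date: string;
  reason: string;       // business justification
  conditions: string[]; // monitoring, future plans
  acceptedRisk: string; // quantified impact
}

// Return the required fields that are still missing; an empty
// array means the waiver is complete enough to record.
function missingWaiverFields(w: Partial<Waiver>): string[] {
  const missing: string[] = [];
  if (!w.waivedBy?.length) missing.push("waivedBy");
  if (!w.date) missing.push("date");
  if (!w.reason) missing.push("reason");
  if (!w.conditions?.length) missing.push("conditions");
  if (!w.acceptedRisk) missing.push("acceptedRisk");
  return missing;
}
```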

### Re-Assess After Fixes

After implementing mitigations:

```
1. Fix performance issues
2. Run load tests again
3. Run *nfr-assess with new evidence
4. Verify PASS status
```

Don't deploy with CONCERNS without mitigation or waiver.

### Integrate with Release Checklist

```markdown
## Release Checklist

### Pre-Release
- [ ] All tests passing
- [ ] Test coverage > 80%
- [ ] Run *nfr-assess
- [ ] NFR status: PASS or WAIVED

### Performance
- [ ] Load tests completed
- [ ] P99 latency meets threshold
- [ ] Throughput meets threshold

### Security
- [ ] Security scan clean
- [ ] Auth tests passing
- [ ] Penetration test complete

### Post-Release
- [ ] Monitoring alerts configured
- [ ] Dashboards updated
- [ ] Incident response plan ready
```
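
A checklist like this can be enforced mechanically. The sketch below (a hypothetical CI helper, not part of BMad) extracts the items that are still unchecked so a release job can block until the list is complete:

```typescript
// Scan a markdown checklist and return the items still unchecked.
function uncheckedItems(markdown: string): string[] {
  return [...markdown.matchAll(/^\s*- \[ \] (.+)$/gm)].map(m => m[1].trim());
}

const checklist = `
- [x] All tests passing
- [ ] Run *nfr-assess
- [x] Load tests completed
- [ ] Monitoring alerts configured
`;
```

A release script can then fail when `uncheckedItems(...)` is non-empty and print the remaining items.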

## Common Issues

### No Evidence Available

**Problem:** You don't have performance data, security scans, or other evidence yet.

**Solution:**
```
Mark as CONCERNS for categories without evidence
Document what evidence is needed
Set up tests/scans before re-assessment
```

**Don't block on missing evidence** - document what's needed and proceed.

### Thresholds Too Strict

**Problem:** The team can't meet unrealistic thresholds.

**Symptoms:**
- P99 < 50ms (impossible for complex queries)
- 100% test coverage (impractical)
- Zero technical debt (unrealistic)

**Solution:**
```
Negotiate thresholds with stakeholders:
- "P99 < 50ms is unrealistic for our DB queries"
- "Propose P99 < 200ms based on industry standards"
- "Show evidence from load tests"
```

Use data to negotiate realistic requirements.

### Assessment Takes Too Long

**Problem:** Gathering evidence for all categories is time-consuming.

**Solution:** Focus on critical categories first.

**For most projects:**
```
Priority 1: Security (always critical)
Priority 2: Performance (if high-traffic)
Priority 3: Reliability (if uptime critical)
Priority 4: Maintainability (nice to have)
```

Assess categories incrementally, not all at once.
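
The priority order above can be encoded as a small helper; `ProjectTraits` is a hypothetical shape for illustration, not a TEA type:

```typescript
interface ProjectTraits {
  highTraffic: boolean;     // performance matters early
  uptimeCritical: boolean;  // reliability matters early
}

// Security always first, maintainability last; performance and
// reliability are pulled forward only when the project profile demands it.
function assessmentOrder(traits: ProjectTraits): string[] {
  const order = ["security"];
  if (traits.highTraffic) order.push("performance");
  if (traits.uptimeCritical) order.push("reliability");
  order.push("maintainability");
  return order;
}
```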

### CONCERNS vs FAIL - When to Block?

**CONCERNS** ⚠️:
- Issues exist but are not critical
- Mitigation plan in place
- Business accepts risk (with waiver)
- Can deploy with monitoring

**FAIL** ❌:
- Critical security vulnerability (critical CVE)
- System unusable (error rate >10%)
- Data loss risk (no backups)
- No mitigation possible

**Rule of thumb:** If you can mitigate or monitor, use CONCERNS. Reserve FAIL for absolute blockers.
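
The rule of thumb reduces to a few lines of logic. A hypothetical sketch:

```typescript
interface Issue {
  critical: boolean;     // e.g. critical CVE, data loss risk, error rate >10%
  canMitigate: boolean;  // a mitigation plan exists
  canMonitor: boolean;   // deployable behind monitoring alerts
}

// FAIL only for critical issues with no mitigation and no monitoring
// option; everything else is CONCERNS.
function gateFor(issue: Issue): "CONCERNS" | "FAIL" {
  if (issue.critical && !issue.canMitigate && !issue.canMonitor) return "FAIL";
  return "CONCERNS";
}
```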

## Related Guides

- [How to Run Trace](/docs/how-to/workflows/run-trace.md) - Gate decision complements NFR
- [How to Run Test Review](/docs/how-to/workflows/run-test-review.md) - Quality complements NFR
- [Run TEA for Enterprise](/docs/how-to/enterprise/use-tea-for-enterprise.md) - Enterprise workflow

## Understanding the Concepts

- [Risk-Based Testing](/docs/explanation/tea/risk-based-testing.md) - Risk assessment principles
- [TEA Overview](/docs/explanation/features/tea-overview.md) - NFR in release gates

## Reference

- [Command: *nfr-assess](/docs/reference/tea/commands.md#nfr-assess) - Full command reference
- [TEA Configuration](/docs/reference/tea/configuration.md) - Enterprise config options

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)

---
title: "How to Run Test Review with TEA"
description: Audit test quality using TEA's comprehensive knowledge base and get 0-100 scoring
---

# How to Run Test Review with TEA

Use TEA's `*test-review` workflow to audit test quality with objective scoring and actionable feedback. TEA reviews tests against its knowledge base of best practices.

## When to Use This

- Want to validate test quality objectively
- Need quality metrics for release gates
- Preparing for production deployment
- Reviewing team-written tests
- Auditing AI-generated tests
- Onboarding new team members (show good patterns)

## Prerequisites

- BMad Method installed
- TEA agent available
- Tests written (to review)
- Test framework configured

## Steps

### 1. Load TEA Agent

Start a fresh chat and load TEA:

```
*tea
```

### 2. Run the Test Review Workflow

```
*test-review
```

### 3. Specify Review Scope

TEA will ask what to review.

**Option A: Single File**

Review one test file:

```
tests/e2e/checkout.spec.ts
```

**Best for:**
- Reviewing specific failing tests
- Quick feedback on new tests
- Learning from specific examples

**Option B: Directory**

Review all tests in a directory:

```
tests/e2e/
```

**Best for:**
- Reviewing E2E test suite
- Comparing test quality across files
- Finding patterns of issues

**Option C: Entire Suite**

Review all tests:

```
tests/
```

**Best for:**
- Release gate quality check
- Comprehensive audit
- Establishing baseline metrics

### 4. Review the Quality Report

TEA generates a comprehensive quality report with scoring.

**Report Structure (`test-review.md`):**
````markdown
# Test Quality Review Report

**Date:** 2026-01-13
**Scope:** tests/e2e/
**Overall Score:** 76/100

## Summary

- **Tests Reviewed:** 12
- **Passing Quality:** 9 tests (75%)
- **Needs Improvement:** 3 tests (25%)
- **Critical Issues:** 2
- **Recommendations:** 6

## Critical Issues

### 1. Hard Waits Detected

**File:** `tests/e2e/checkout.spec.ts:45`
**Issue:** Using `page.waitForTimeout(3000)`
**Impact:** Test is flaky and unnecessarily slow
**Severity:** Critical

**Current Code:**
```typescript
await page.click('button[type="submit"]');
await page.waitForTimeout(3000); // ❌ Hard wait
await expect(page.locator('.success')).toBeVisible();
```

**Fix:**
```typescript
await page.click('button[type="submit"]');
// Wait for the API response that triggers the success message
await page.waitForResponse(resp =>
  resp.url().includes('/api/checkout') && resp.ok()
);
await expect(page.locator('.success')).toBeVisible();
```

**Why This Matters:**
- Hard waits are fixed timeouts that don't wait for actual conditions
- Tests fail intermittently on slower machines
- Wastes time waiting even when the response is fast
- Network-first patterns are more reliable
---

### 2. Conditional Flow Control

**File:** `tests/e2e/profile.spec.ts:28`
**Issue:** Using if/else to handle optional elements
**Impact:** Non-deterministic test behavior
**Severity:** Critical

**Current Code:**
```typescript
if (await page.locator('.banner').isVisible()) {
  await page.click('.dismiss');
}
// ❌ Test behavior changes based on banner presence
```

**Fix:**
```typescript
// Option 1: Make banner presence deterministic
await expect(page.locator('.banner')).toBeVisible();
await page.click('.dismiss');

// Option 2: Test both scenarios separately
test('should show banner for new users', async ({ page }) => {
  // Test with banner
});

test('should not show banner for returning users', async ({ page }) => {
  // Test without banner
});
```

**Why This Matters:**
- Tests should be deterministic (same result every run)
- Conditionals hide bugs (what if the banner should always show?)
- Makes debugging harder
- Violates the test isolation principle

## Recommendations

### 1. Extract Repeated Setup

**File:** `tests/e2e/profile.spec.ts`
**Issue:** Login code duplicated in every test
**Severity:** Medium
**Impact:** Maintenance burden, test verbosity

**Current:**
```typescript
test('test 1', async ({ page }) => {
  await page.goto('/login');
  await page.fill('[name="email"]', 'test@example.com');
  await page.fill('[name="password"]', 'password');
  await page.click('button[type="submit"]');
  // Test logic...
});

test('test 2', async ({ page }) => {
  // Same login code repeated
});
```

**Fix (Vanilla Playwright):**
```typescript
// Create fixture in tests/support/fixtures/auth.ts
import { test as base, Page } from '@playwright/test';

export const test = base.extend<{ authenticatedPage: Page }>({
  authenticatedPage: async ({ page }, use) => {
    await page.goto('/login');
    await page.getByLabel('Email').fill('test@example.com');
    await page.getByLabel('Password').fill('password');
    await page.getByRole('button', { name: 'Sign in' }).click();
    await page.waitForURL(/\/dashboard/);
    await use(page);
  }
});

// Use in tests
test('test 1', async ({ authenticatedPage }) => {
  // Already logged in
});
```

**Better (With Playwright Utils):**
```typescript
// Use the built-in auth-session fixture
import { test as base } from '@playwright/test';
import { createAuthFixtures } from '@seontechnologies/playwright-utils/auth-session';

export const test = base.extend(createAuthFixtures());

// Use in tests - even simpler
test('test 1', async ({ page, authToken }) => {
  // authToken already available (persisted, reused)
  await page.goto('/dashboard');
  // Already authenticated via authToken
});
```

**Playwright Utils Benefits:**
- Token persisted to disk (faster subsequent runs)
- Multi-user support out of the box
- Automatic token renewal if expired
- No manual login flow needed

---
|
|
||||||
|
|
||||||
### 2. Add Network Assertions

**File:** `tests/e2e/api-calls.spec.ts`
**Issue:** No verification of API responses
**Severity:** Low
**Impact:** Tests don't catch API errors

**Current:**
```typescript
await page.click('button[name="save"]');
await expect(page.locator('.success')).toBeVisible();
// ❌ What if API returned 500 but UI shows cached success?
```

**Enhancement:**
```typescript
const responsePromise = page.waitForResponse(
  resp => resp.url().includes('/api/profile') && resp.status() === 200
);
await page.click('button[name="save"]');
const response = await responsePromise;

// Verify API response
const data = await response.json();
expect(data.success).toBe(true);

// Verify UI
await expect(page.locator('.success')).toBeVisible();
```

### 3. Improve Test Names

**File:** `tests/e2e/checkout.spec.ts`
**Issue:** Vague test names
**Severity:** Low
**Impact:** Hard to understand test purpose

**Current:**
```typescript
test('should work', async ({ page }) => { });
test('test checkout', async ({ page }) => { });
```

**Better:**
```typescript
test('should complete checkout with valid credit card', async ({ page }) => { });
test('should show validation error for expired card', async ({ page }) => { });
```

## Quality Scores by Category

| Category | Score | Target | Status |
|----------|-------|--------|--------|
| **Determinism** | 26/35 | 30/35 | ⚠️ Needs Improvement |
| **Isolation** | 22/25 | 20/25 | ✅ Good |
| **Assertions** | 18/20 | 16/20 | ✅ Good |
| **Structure** | 7/10 | 8/10 | ⚠️ Minor Issues |
| **Performance** | 3/10 | 8/10 | ❌ Critical |

### Scoring Breakdown

**Determinism (35 points max):**
- No hard waits: 0/10 ❌ (found 3 instances)
- No conditionals: 8/10 ⚠️ (found 2 instances)
- No try-catch flow control: 10/10 ✅
- Network-first patterns: 8/15 ⚠️ (some tests missing)

**Isolation (25 points max):**
- Self-cleaning: 20/20 ✅
- No global state: 5/5 ✅
- Parallel-safe: 0/0 ✅ (not tested)

**Assertions (20 points max):**
- Explicit in test body: 15/15 ✅
- Specific and meaningful: 3/5 ⚠️ (some weak assertions)

**Structure (10 points max):**
- Test size < 300 lines: 5/5 ✅
- Clear names: 2/5 ⚠️ (some vague names)

**Performance (10 points max):**
- Execution time < 1.5 min: 3/10 ❌ (3 tests exceed limit)

## Files Reviewed

| File | Score | Issues | Status |
|------|-------|--------|--------|
| `tests/e2e/checkout.spec.ts` | 65/100 | 4 | ❌ Needs Work |
| `tests/e2e/profile.spec.ts` | 72/100 | 3 | ⚠️ Needs Improvement |
| `tests/e2e/search.spec.ts` | 88/100 | 1 | ✅ Good |
| `tests/api/profile.spec.ts` | 92/100 | 0 | ✅ Excellent |

## Next Steps

### Immediate (Fix Critical Issues)
1. Remove hard waits in `checkout.spec.ts` (lines 45, 67, 89)
2. Fix conditional in `profile.spec.ts` (line 28)
3. Optimize slow tests in `checkout.spec.ts`

### Short-term (Apply Recommendations)
4. Extract login fixture from `profile.spec.ts`
5. Add network assertions to `api-calls.spec.ts`
6. Improve test names in `checkout.spec.ts`

### Long-term (Continuous Improvement)
7. Re-run `*test-review` after fixes (target: 85/100)
8. Add performance budgets to CI
9. Document test patterns for team

## Knowledge Base References

TEA reviewed against these patterns:
- [test-quality.md](/docs/reference/tea/knowledge-base.md#test-quality) - Execution limits, isolation
- [network-first.md](/docs/reference/tea/knowledge-base.md#network-first) - Deterministic waits
- [timing-debugging.md](/docs/reference/tea/knowledge-base.md#timing-debugging) - Race conditions
- [selector-resilience.md](/docs/reference/tea/knowledge-base.md#selector-resilience) - Robust selectors
```

## Understanding the Scores

### What Do Scores Mean?

| Score Range | Interpretation | Action |
|-------------|----------------|--------|
| **90-100** | Excellent | Minimal changes needed, production-ready |
| **80-89** | Good | Minor improvements recommended |
| **70-79** | Acceptable | Address recommendations before release |
| **60-69** | Needs Improvement | Fix critical issues, apply recommendations |
| **< 60** | Critical | Significant refactoring needed |

### Scoring Criteria

**Determinism (35 points):**
- Tests produce same result every run
- No random failures (flakiness)
- No environment-dependent behavior

**Isolation (25 points):**
- Tests don't depend on each other
- Can run in any order
- Clean up after themselves

**Assertions (20 points):**
- Verify actual behavior
- Specific and meaningful
- Not abstracted away in helpers

**Structure (10 points):**
- Readable and maintainable
- Appropriate size
- Clear naming

**Performance (10 points):**
- Fast execution
- Efficient selectors
- No unnecessary waits

## What You Get

### Quality Report
- Overall score (0-100)
- Category scores (Determinism, Isolation, etc.)
- File-by-file breakdown

### Critical Issues
- Specific line numbers
- Code examples (current vs. fixed)
- Explanation of why it matters
- Impact assessment

### Recommendations
- Actionable improvements
- Code examples
- Priority/severity levels

### Next Steps
- Immediate actions (fix critical issues)
- Short-term improvements
- Long-term quality goals

## Tips

### Review Before Release

Make test review part of your release checklist:

```markdown
## Release Checklist
- [ ] All tests passing
- [ ] Test review score > 80
- [ ] Critical issues resolved
- [ ] Performance within budget
```

### Review After AI Generation

Always review AI-generated tests:

1. Run `*atdd` or `*automate`
2. Run `*test-review` on the generated tests
3. Fix critical issues
4. Commit tests

### Set Quality Gates

Use scores as quality gates:

```yaml
# .github/workflows/test.yml
- name: Review test quality
  run: |
    # Run test review, then parse the overall
    # score from the report into $SCORE
    if [ $SCORE -lt 80 ]; then
      echo "Test quality below threshold"
      exit 1
    fi
```
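
The score-parsing step is left abstract above because it depends on your report format. A minimal sketch, assuming the generated `test-review.md` writes the score as `NN/100` (this format is an assumption; adjust the pattern to your actual report):

```typescript
// Hypothetical helper for a CI gate script; the "NN/100" report
// line format is an assumption, not a documented TEA output contract.
function parseReviewScore(report: string): number | null {
  const match = report.match(/(\d{1,3})\s*\/\s*100/);
  return match ? Number(match[1]) : null;
}
```

Feed it the report file contents and compare the result against your threshold before failing the job.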

### Review Regularly

Schedule periodic reviews:

- **Per story:** Optional (spot-check new tests)
- **Per epic:** Recommended (ensure consistency)
- **Per release:** Recommended for quality gates (required if using a formal gate process)
- **Quarterly:** Audit the entire suite

### Focus Reviews

For large suites, review incrementally:

**Week 1:** Review E2E tests
**Week 2:** Review API tests
**Week 3:** Review component tests (Cypress CT or Vitest)
**Week 4:** Apply fixes across all suites

**Component Testing Note:** TEA reviews component tests using framework-specific knowledge:
- **Cypress:** Reviews Cypress Component Testing specs (`*.cy.tsx`)
- **Playwright:** Reviews Vitest component tests (`*.test.tsx`)

### Use Reviews for Learning

Share reports with the team. In a team meeting:

- Review `test-review.md`
- Discuss critical issues
- Agree on patterns
- Update team guidelines

### Compare Over Time

Track improvement:

```markdown
## Quality Trend

| Date | Score | Critical Issues | Notes |
|------|-------|-----------------|-------|
| 2026-01-01 | 65 | 5 | Baseline |
| 2026-01-15 | 72 | 2 | Fixed hard waits |
| 2026-02-01 | 84 | 0 | All critical resolved |
```

## Common Issues

### Low Determinism Score

**Symptoms:**
- Tests fail randomly
- "Works on my machine"
- CI failures that don't reproduce locally

**Common Causes:**
- Hard waits (`waitForTimeout`)
- Conditional flow control (`if/else`)
- Try-catch for flow control
- Missing network-first patterns

**Fix:** Review the report's determinism section and apply network-first patterns.

### Low Performance Score

**Symptoms:**
- Tests take > 1.5 minutes each
- Test suite takes hours
- CI times out

**Common Causes:**
- Unnecessary waits (hard timeouts)
- Inefficient selectors (XPath, complex CSS)
- Not using parallelization
- Heavy setup in every test

**Fix:** Optimize waits, improve selectors, use fixtures.

### Low Isolation Score

**Symptoms:**
- Tests fail when run in a different order
- Tests fail in parallel
- Test data conflicts

**Common Causes:**
- Shared global state
- Tests don't clean up
- Hard-coded test data
- Database not reset between tests

**Fix:** Use fixtures, clean up in `afterEach`, use unique test data.
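
Unique test data is the simplest isolation win. A minimal sketch of the idea (the helper names here are illustrative, not a TEA or Playwright Utils API): generate collision-free data per test and record it so an `afterEach` hook can delete it.

```typescript
// Illustrative helpers for self-cleaning, parallel-safe test data.
const created: string[] = [];
let seq = 0;

// Each call yields a distinct email, so parallel tests never collide.
function uniqueEmail(prefix = 'qa'): string {
  const email = `${prefix}-${Date.now()}-${seq++}@example.com`;
  created.push(email);
  return email;
}

// Call from afterEach: remove everything this worker created.
async function cleanupCreated(remove: (email: string) => Promise<void>): Promise<number> {
  const removed = created.length;
  await Promise.all(created.map(remove));
  created.length = 0;
  return removed;
}
```

In a Playwright suite, `remove` would typically delete the record through your API (for example with `request.delete(...)`).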

### "Too Many Issues to Fix"

**Problem:** The report shows 50+ issues, which is overwhelming.

**Solution:** Prioritize:
1. Fix all critical issues first
2. Apply the top 3 recommendations
3. Re-run the review
4. Iterate

Don't try to fix everything at once.

### Reviews Take Too Long

**Problem:** Reviewing the entire suite takes hours.

**Solution:** Review incrementally:
- Review new tests during PR review
- Schedule weekly directory reviews
- Run a full-suite review quarterly

## Related Guides

- [How to Run ATDD](/docs/how-to/workflows/run-atdd.md) - Generate tests to review
- [How to Run Automate](/docs/how-to/workflows/run-automate.md) - Expand coverage to review
- [How to Run Trace](/docs/how-to/workflows/run-trace.md) - Coverage complements quality

## Understanding the Concepts

- [Test Quality Standards](/docs/explanation/tea/test-quality-standards.md) - What makes tests good
- [Network-First Patterns](/docs/explanation/tea/network-first-patterns.md) - Avoiding flakiness
- [Fixture Architecture](/docs/explanation/tea/fixture-architecture.md) - Reusable patterns

## Reference

- [Command: *test-review](/docs/reference/tea/commands.md#test-review) - Full command reference
- [Knowledge Base Index](/docs/reference/tea/knowledge-base.md) - Patterns TEA reviews against

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)

---
title: "How to Run Trace with TEA"
description: Map requirements to tests and make quality gate decisions using TEA's trace workflow
---

# How to Run Trace with TEA

Use TEA's `*trace` workflow for requirements traceability and quality gate decisions. This is a two-phase workflow: Phase 1 analyzes coverage, Phase 2 makes the go/no-go decision.

## When to Use This

### Phase 1: Requirements Traceability
- Map acceptance criteria to implemented tests
- Identify coverage gaps
- Prioritize missing tests
- Refresh coverage after each story/epic

### Phase 2: Quality Gate Decision
- Make go/no-go decision for release
- Validate coverage meets thresholds
- Document gate decision with evidence
- Support business-approved waivers

## Prerequisites

- BMad Method installed
- TEA agent available
- Requirements defined (stories, acceptance criteria, test design)
- Tests implemented
- For brownfield: Existing codebase with tests

## Steps

### 1. Run the Trace Workflow

```
*trace
```

### 2. Specify Phase

TEA will ask which phase you're running.

**Phase 1: Requirements Traceability**
- Analyze coverage
- Identify gaps
- Generate recommendations

**Phase 2: Quality Gate Decision**
- Make PASS/CONCERNS/FAIL/WAIVED decision
- Requires Phase 1 complete

**Typical flow:** Run Phase 1 first, review gaps, then run Phase 2 for the gate decision.

## Phase 1: Requirements Traceability

### 3. Provide Requirements Source

TEA will ask where requirements are defined.

**Options:**

| Source | Example | Best For |
|--------|---------|----------|
| **Story file** | `story-profile-management.md` | Single story coverage |
| **Test design** | `test-design-epic-1.md` | Epic coverage |
| **PRD** | `PRD.md` | System-level coverage |
| **Multiple** | All of the above | Comprehensive analysis |

**Example Response:**
```
Requirements:
- story-profile-management.md (acceptance criteria)
- test-design-epic-1.md (test priorities)
```

### 4. Specify Test Location

TEA will ask where tests are located.

**Example:**
```
Test location: tests/
Include:
- tests/api/
- tests/e2e/
```

### 5. Specify Focus Areas (Optional)

**Example:**
```
Focus on:
- Profile CRUD operations
- Validation scenarios
- Authorization checks
```

### 6. Review Coverage Matrix

TEA generates a comprehensive traceability matrix.

**Traceability Matrix (`traceability-matrix.md`):**

```markdown
# Requirements Traceability Matrix

**Date:** 2026-01-13
**Scope:** Epic 1 - User Profile Management
**Phase:** Phase 1 (Traceability Analysis)

## Coverage Summary

| Metric | Count | Percentage |
|--------|-------|------------|
| **Total Requirements** | 15 | 100% |
| **Full Coverage** | 11 | 73% |
| **Partial Coverage** | 3 | 20% |
| **No Coverage** | 1 | 7% |

### By Priority

| Priority | Total | Covered | Percentage |
|----------|-------|---------|------------|
| **P0** | 5 | 5 | 100% ✅ |
| **P1** | 6 | 5 | 83% ⚠️ |
| **P2** | 3 | 1 | 33% ⚠️ |
| **P3** | 1 | 0 | 0% ✅ (acceptable) |

---

## Detailed Traceability

### ✅ Requirement 1: User can view their profile (P0)

**Acceptance Criteria:**
- User navigates to /profile
- Profile displays name, email, avatar
- Data is current (not cached)

**Test Coverage:** FULL ✅

**Tests:**
- `tests/e2e/profile-view.spec.ts:15` - "should display profile page with current data"
  - ✅ Navigates to /profile
  - ✅ Verifies name, email visible
  - ✅ Verifies avatar displayed
  - ✅ Validates data freshness via API assertion

- `tests/api/profile.spec.ts:8` - "should fetch user profile via API"
  - ✅ Calls GET /api/profile
  - ✅ Validates response schema
  - ✅ Confirms all fields present

---

### ⚠️ Requirement 2: User can edit profile (P0)

**Acceptance Criteria:**
- User clicks "Edit Profile"
- Can modify name, email, bio
- Can upload avatar
- Changes are persisted
- Success message shown

**Test Coverage:** PARTIAL ⚠️

**Tests:**
- `tests/e2e/profile-edit.spec.ts:22` - "should edit and save profile"
  - ✅ Clicks edit button
  - ✅ Modifies name and email
  - ⚠️ **Does NOT test bio field**
  - ❌ **Does NOT test avatar upload**
  - ✅ Verifies persistence
  - ✅ Verifies success message

- `tests/api/profile.spec.ts:25` - "should update profile via PATCH"
  - ✅ Calls PATCH /api/profile
  - ✅ Validates update response
  - ⚠️ **Only tests name/email, not bio/avatar**

**Missing Coverage:**
- Bio field not tested in E2E or API
- Avatar upload not tested

**Gap Severity:** HIGH (P0 requirement, critical path)

---

### ✅ Requirement 3: Invalid email shows validation error (P1)

**Acceptance Criteria:**
- Enter invalid email format
- See error message
- Cannot save changes

**Test Coverage:** FULL ✅

**Tests:**
- `tests/e2e/profile-edit.spec.ts:45` - "should show validation error for invalid email"
- `tests/api/profile.spec.ts:50` - "should return 400 for invalid email"

---

### ❌ Requirement 15: Profile export as PDF (P2)

**Acceptance Criteria:**
- User clicks "Export Profile"
- PDF downloads with profile data

**Test Coverage:** NONE ❌

**Gap Analysis:**
- **Priority:** P2 (medium)
- **Risk:** Low (non-critical feature)
- **Recommendation:** Add in next iteration (not blocking for release)

---

## Gap Prioritization

### Critical Gaps (Must Fix Before Release)

| Gap | Requirement | Priority | Risk | Recommendation |
|-----|-------------|----------|------|----------------|
| 1 | Bio field not tested | P0 | High | Add E2E + API tests |
| 2 | Avatar upload not tested | P0 | High | Add E2E + API tests |

**Estimated Effort:** 3 hours
**Owner:** QA team
**Deadline:** Before release

### Non-Critical Gaps (Can Defer)

| Gap | Requirement | Priority | Risk | Recommendation |
|-----|-------------|----------|------|----------------|
| 3 | Profile export not tested | P2 | Low | Add in v1.3 release |

**Estimated Effort:** 2 hours
**Owner:** QA team
**Deadline:** Next release (February)

---

## Recommendations

### 1. Add Bio Field Tests

**Tests Needed (Vanilla Playwright):**
```typescript
// tests/e2e/profile-edit.spec.ts
test('should edit bio field', async ({ page }) => {
  await page.goto('/profile');
  await page.getByRole('button', { name: 'Edit' }).click();
  await page.getByLabel('Bio').fill('New bio text');
  await page.getByRole('button', { name: 'Save' }).click();
  await expect(page.getByText('New bio text')).toBeVisible();
});

// tests/api/profile.spec.ts
test('should update bio via API', async ({ request }) => {
  const response = await request.patch('/api/profile', {
    data: { bio: 'Updated bio' }
  });
  expect(response.ok()).toBeTruthy();
  const { bio } = await response.json();
  expect(bio).toBe('Updated bio');
});
```

**With Playwright Utils:**
```typescript
// tests/e2e/profile-edit.spec.ts
import { test } from '../support/fixtures'; // Composed with authToken

test('should edit bio field', async ({ page, authToken }) => {
  await page.goto('/profile');
  await page.getByRole('button', { name: 'Edit' }).click();
  await page.getByLabel('Bio').fill('New bio text');
  await page.getByRole('button', { name: 'Save' }).click();
  await expect(page.getByText('New bio text')).toBeVisible();
});

// tests/api/profile.spec.ts
import { test as base, expect } from '@playwright/test';
import { test as apiRequestFixture } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { createAuthFixtures } from '@seontechnologies/playwright-utils/auth-session';
import { mergeTests } from '@playwright/test';

// Merge API request + auth fixtures
const authFixtureTest = base.extend(createAuthFixtures());
const test = mergeTests(apiRequestFixture, authFixtureTest);

test('should update bio via API', async ({ apiRequest, authToken }) => {
  const { status, body } = await apiRequest({
    method: 'PATCH',
    path: '/api/profile',
    body: { bio: 'Updated bio' },
    headers: { Authorization: `Bearer ${authToken}` }
  });

  expect(status).toBe(200);
  expect(body.bio).toBe('Updated bio');
});
```

**Note:** `authToken` requires auth-session fixture setup. See [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md#auth-session).

### 2. Add Avatar Upload Tests

**Tests Needed:**
```typescript
// tests/e2e/profile-edit.spec.ts
test('should upload avatar image', async ({ page }) => {
  await page.goto('/profile');
  await page.getByRole('button', { name: 'Edit' }).click();

  // Upload file
  await page.setInputFiles('[type="file"]', 'fixtures/avatar.png');
  await page.getByRole('button', { name: 'Save' }).click();

  // Verify uploaded image displays
  await expect(page.locator('img[alt="Profile avatar"]')).toBeVisible();
});

// tests/api/profile.spec.ts
import { test, expect } from '@playwright/test';
import fs from 'fs/promises';

test('should accept valid image upload', async ({ request }) => {
  const response = await request.post('/api/profile/avatar', {
    multipart: {
      file: {
        name: 'avatar.png',
        mimeType: 'image/png',
        buffer: await fs.readFile('fixtures/avatar.png')
      }
    }
  });
  expect(response.ok()).toBeTruthy();
});
```

---

## Next Steps

After reviewing traceability:

1. **Fix critical gaps** - Add tests for P0/P1 requirements
2. **Run `*test-review`** - Ensure new tests meet quality standards
3. **Run Phase 2** - Make the gate decision after gaps are addressed
```

## Phase 2: Quality Gate Decision

After Phase 1 coverage analysis is complete, run Phase 2 for the gate decision.

**Prerequisites:**
- Phase 1 traceability matrix complete
- Test execution results available

**Note:** Phase 2 is skipped if test execution results aren't provided; the workflow needs actual test-run results to make a gate decision.

### 7. Run Phase 2

```
*trace
```

Select "Phase 2: Quality Gate Decision".

### 8. Provide Additional Context

TEA will ask for:

**Gate Type:**
- Story gate (small release)
- Epic gate (larger release)
- Release gate (production deployment)
- Hotfix gate (emergency fix)

**Decision Mode:**
- **Deterministic** - Rule-based (coverage %, quality scores)
- **Manual** - Team decision with TEA guidance

**Example:**
```
Gate type: Epic gate
Decision mode: Deterministic
```

### 9. Provide Supporting Evidence

TEA will request:

**Phase 1 Results:**
```
traceability-matrix.md (from Phase 1)
```

**Test Quality (Optional):**
```
test-review.md (from *test-review)
```

**NFR Assessment (Optional):**
```
nfr-assessment.md (from *nfr-assess)
```

### 10. Review Gate Decision

TEA makes an evidence-based gate decision and writes it to a separate file.

**Gate Decision (`gate-decision-{gate_type}-{story_id}.md`):**

```markdown
---

# Phase 2: Quality Gate Decision

**Gate Type:** Epic Gate
**Decision:** PASS ✅
**Date:** 2026-01-13
**Approvers:** Product Manager, Tech Lead, QA Lead

## Decision Summary

**Verdict:** Ready to release

**Evidence:**
- P0 coverage: 100% (5/5 requirements)
- P1 coverage: 100% (6/6 requirements)
- P2 coverage: 33% (1/3 requirements) - acceptable
- Test quality score: 84/100
- NFR assessment: PASS

## Coverage Analysis

| Priority | Required Coverage | Actual Coverage | Status |
|----------|-------------------|-----------------|--------|
| **P0** | 100% | 100% | ✅ PASS |
| **P1** | 90% | 100% | ✅ PASS |
| **P2** | 50% | 33% | ⚠️ Below (acceptable) |
| **P3** | 20% | 0% | ✅ PASS (low priority) |

**Rationale:**
- All critical path (P0) requirements fully tested
- All high-value (P1) requirements fully tested
- P2 gap (profile export) is low risk and deferred to next release

## Quality Metrics

| Metric | Threshold | Actual | Status |
|--------|-----------|--------|--------|
| P0/P1 Coverage | >95% | 100% | ✅ |
| Test Quality Score | >80 | 84 | ✅ |
| NFR Status | PASS | PASS | ✅ |

## Risks and Mitigations

### Accepted Risks

**Risk 1: Profile export not tested (P2)**
- **Impact:** Medium (users can't export profile)
- **Mitigation:** Feature flag disabled by default
- **Plan:** Add tests in v1.3 release (February)
- **Monitoring:** Track feature flag usage

## Approvals

- [x] **Product Manager** - Business requirements met (Approved: 2026-01-13)
- [x] **Tech Lead** - Technical quality acceptable (Approved: 2026-01-13)
- [x] **QA Lead** - Test coverage sufficient (Approved: 2026-01-13)

## Next Steps

### Deployment
1. Merge to main branch
2. Deploy to staging
3. Run smoke tests in staging
4. Deploy to production
5. Monitor for 24 hours

### Monitoring
- Set alerts for profile endpoint (P99 > 200ms)
- Track error rates (target: <0.1%)
- Monitor profile export feature flag usage

### Future Work
- Add profile export tests (v1.3)
- Expand P2 coverage to 50%
```

### Gate Decision Rules

TEA uses deterministic rules when `decision_mode` is `"deterministic"`:

| P0 Coverage | P1 Coverage | Overall Coverage | Decision |
|-------------|-------------|------------------|----------|
| 100% | ≥90% | ≥80% | **PASS** ✅ |
| 100% | 80-89% | ≥80% | **CONCERNS** ⚠️ |
| <100% | Any | Any | **FAIL** ❌ |
| Any | <80% | Any | **FAIL** ❌ |
| Any | Any | <80% | **FAIL** ❌ |
| Any | Any | Any | **WAIVED** ⏭️ (with approval) |

**Detailed Rules:**
- **PASS:** P0=100%, P1≥90%, Overall≥80%
- **CONCERNS:** P0=100%, P1 80-89%, Overall≥80% (below threshold but not critical)
- **FAIL:** P0<100% OR P1<80% OR Overall<80% (critical gaps)
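
Expressed as code, the deterministic rules reduce to a few comparisons. A sketch mirroring the table above (this is not TEA's actual implementation, just the published rules):

```typescript
type GateDecision = 'PASS' | 'CONCERNS' | 'FAIL' | 'WAIVED';

interface Coverage {
  p0: number;      // percent covered, 0-100
  p1: number;
  overall: number;
}

// Mirrors the deterministic rules table; waivers need business approval.
function decideGate(c: Coverage, approvedWaiver = false): GateDecision {
  if (c.p0 === 100 && c.p1 >= 90 && c.overall >= 80) return 'PASS';
  if (c.p0 === 100 && c.p1 >= 80 && c.overall >= 80) return 'CONCERNS';
  return approvedWaiver ? 'WAIVED' : 'FAIL';
}
```

For example, `decideGate({ p0: 100, p1: 85, overall: 88 })` falls in the 80-89% P1 band and returns `'CONCERNS'`.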

**PASS** ✅: All criteria met, ready to release

**CONCERNS** ⚠️: Some criteria not met, but:
- Mitigation plan exists
- Risk is acceptable
- Team approves proceeding
- Monitoring in place

**FAIL** ❌: Critical criteria not met:
- P0 requirements not tested
- Critical security vulnerabilities
- System is broken
- Cannot deploy

**WAIVED** ⏭️: Business approves proceeding despite concerns:
- Documented business justification
- Accepted risks quantified
- Approver signatures
- Future plans documented

### Example CONCERNS Decision

```markdown
## Decision Summary

**Verdict:** CONCERNS ⚠️ - Proceed with monitoring

**Evidence:**
- P0 coverage: 100%
- P1 coverage: 85% (below 90% target)
- Test quality: 78/100 (below 80 target)

**Gaps:**
- 1 P1 requirement not tested (avatar upload)
- Test quality score slightly below threshold

**Mitigation:**
- Avatar upload not critical for v1.2 launch
- Test quality issues are minor (no flakiness)
- Monitoring alerts configured

**Approvals:**
- Product Manager: APPROVED (business priority to launch)
- Tech Lead: APPROVED (technical risk acceptable)
```

### Example FAIL Decision

```markdown
## Decision Summary

**Verdict:** FAIL ❌ - Cannot release

**Evidence:**

- P0 coverage: 60% (below 100% requirement)
- Critical security vulnerability (CVE-2024-12345)
- Test quality: 55/100

**Blockers:**

1. **Login flow not tested** (P0 requirement)
   - Critical path completely untested
   - Must add E2E and API tests

2. **SQL injection vulnerability**
   - Critical security issue
   - Must fix before deployment

**Actions Required:**

1. Add login tests (QA team, 2 days)
2. Fix SQL injection (backend team, 1 day)
3. Re-run security scan (DevOps, 1 hour)
4. Re-run *trace after fixes

**Cannot proceed until all blockers resolved.**
```

## What You Get

### Phase 1: Traceability Matrix

- Requirement-to-test mapping
- Coverage classification (FULL/PARTIAL/NONE)
- Gap identification with priorities
- Actionable recommendations

### Phase 2: Gate Decision

- Go/no-go verdict (PASS/CONCERNS/FAIL/WAIVED)
- Evidence summary
- Approval signatures
- Next steps and monitoring plan

## Usage Patterns

### Greenfield Projects

**Phase 3** - after architecture complete:

1. Run *test-design (system-level)
2. Run *trace Phase 1 (baseline)
3. Use for implementation-readiness gate

**Phase 4** - after each epic/story:

1. Run *trace Phase 1 (refresh coverage)
2. Identify gaps
3. Add missing tests

**Release Gate** - before deployment:

1. Run *trace Phase 1 (final coverage check)
2. Run *trace Phase 2 (make gate decision)
3. Get approvals
4. Deploy (if PASS or WAIVED)

### Brownfield Projects

**Phase 2** - before planning new work:

1. Run *trace Phase 1 (establish baseline)
2. Understand existing coverage
3. Plan testing strategy

**Phase 4** - after each epic/story:

1. Run *trace Phase 1 (refresh)
2. Compare to baseline
3. Track coverage improvement

**Release Gate** - before deployment:

1. Run *trace Phase 1 (final check)
2. Run *trace Phase 2 (gate decision)
3. Compare to baseline
4. Deploy if coverage maintained or improved

## Tips

### Run Phase 1 Frequently

Don't wait until the release gate:

- After Story 1: *trace Phase 1 (identify gaps early)
- After Story 2: *trace Phase 1 (refresh)
- After Story 3: *trace Phase 1 (refresh)
- Before Release: *trace Phase 1 + Phase 2 (final gate)

**Benefit:** Catch gaps early when they're cheap to fix.

### Use Coverage Trends

Track improvement over time:

```markdown
## Coverage Trend

| Date | Epic | P0/P1 Coverage | Quality Score | Status |
| ---------- | -------- | -------------- | ------------- | -------------- |
| 2026-01-01 | Baseline | 45% | - | Starting point |
| 2026-01-08 | Epic 1 | 78% | 72 | Improving |
| 2026-01-15 | Epic 2 | 92% | 84 | Near target |
| 2026-01-20 | Epic 3 | 100% | 88 | Ready! |
```

### Set Coverage Targets by Priority

Don't aim for 100% across all priorities:

**Recommended Targets:**

- **P0:** 100% (critical path must be tested)
- **P1:** 90% (high-value scenarios)
- **P2:** 50% (nice-to-have features)
- **P3:** 20% (low-value edge cases)

### Use Classification Strategically

**FULL** ✅: Requirement completely tested

- E2E test covers full user workflow
- API test validates backend behavior
- All acceptance criteria covered

**PARTIAL** ⚠️: Some aspects tested

- E2E test exists but missing scenarios
- API test exists but incomplete
- Some acceptance criteria not covered

**NONE** ❌: No tests exist

- Requirement identified but not tested
- May be intentional (low priority) or oversight

**Classification helps prioritize:**

- Fix NONE coverage for P0/P1 requirements first
- Enhance PARTIAL coverage for P0 requirements
- Accept PARTIAL or NONE for P2/P3 if time-constrained

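The classification and prioritization above can be expressed as a small sketch. Names (`classify`, `gapOrder`) and the requirement shape are assumptions for illustration, not TEA's data model:

```typescript
// Illustrative sketch: FULL/PARTIAL/NONE from acceptance-criteria counts,
// plus the gap ordering suggested above (NONE on P0/P1 first, then PARTIAL P0).
type CoverageClass = 'FULL' | 'PARTIAL' | 'NONE';
type Priority = 'P0' | 'P1' | 'P2' | 'P3';

interface Requirement {
  id: string;
  priority: Priority;
  criteriaTotal: number;  // acceptance criteria defined
  criteriaTested: number; // acceptance criteria with at least one test
}

function classify(req: Requirement): CoverageClass {
  if (req.criteriaTested === 0) return 'NONE';
  return req.criteriaTested >= req.criteriaTotal ? 'FULL' : 'PARTIAL';
}

function gapOrder(reqs: Requirement[]): Requirement[] {
  // Lower rank = fix first
  const rank = (r: Requirement): number => {
    const cls = classify(r);
    if (cls === 'FULL') return 3;
    if ((r.priority === 'P0' || r.priority === 'P1') && cls === 'NONE') return 0;
    if (r.priority === 'P0') return 1; // PARTIAL coverage on P0
    return 2;                          // everything else (P2/P3 gaps last)
  };
  return [...reqs].sort((a, b) => rank(a) - rank(b));
}
```
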
### Automate Gate Decisions

Use traceability in CI:

```yaml
# .github/workflows/gate-check.yml
- name: Check coverage
  run: |
    # Run trace Phase 1
    # Parse coverage percentages
    if [ "$P0_COVERAGE" -lt 100 ]; then
      echo "P0 coverage below 100%"
      exit 1
    fi
```

### Document Waivers Clearly

If proceeding with WAIVED:

**Required:**

```markdown
## Waiver Documentation

**Waived By:** VP Engineering, Product Lead
**Date:** 2026-01-15
**Gate Type:** Release Gate v1.2

**Justification:**
Business critical to launch by Q1 for investor demo.
Performance concerns acceptable for initial user base.

**Conditions:**

- Set monitoring alerts for P99 > 300ms
- Plan optimization for v1.3 (due February 28)
- Monitor user feedback closely

**Accepted Risks:**

- 1% of users may experience 350ms latency
- Avatar upload feature incomplete
- Profile export deferred to next release

**Quantified Impact:**

- Affects <100 users at current scale
- Workaround exists (manual export)
- Monitoring will catch issues early

**Approvals:**

- VP Engineering: [Signature] Date: 2026-01-15
- Product Lead: [Signature] Date: 2026-01-15
- QA Lead: [Signature] Date: 2026-01-15
```

## Common Issues

### Too Many Gaps to Fix

**Problem:** Phase 1 shows 50 uncovered requirements.

**Solution:** Prioritize ruthlessly:

1. Fix all P0 gaps (critical path)
2. Fix high-risk P1 gaps
3. Accept low-risk P1 gaps with mitigation
4. Defer all P2/P3 gaps

**Don't try to fix everything** - focus on what matters for release.

### Can't Find Test Coverage

**Problem:** Tests exist but TEA can't map them to requirements.

**Cause:** Tests don't reference requirements.

**Solution:** Add traceability comments:

```typescript
test('should display profile', async ({ page }) => {
  // Covers: Requirement 1 - User can view profile
  // Acceptance criteria: Navigate to /profile, see name/email
  await page.goto('/profile');
  await expect(page.getByText('Test User')).toBeVisible();
});
```

Or use test IDs:

```typescript
test('[REQ-1] should display profile', async ({ page }) => {
  // Test code...
});
```

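Tagged titles like `[REQ-1]` can then be collected mechanically into a requirement-to-test map. A minimal sketch - the helper name and the exact tag format are assumptions, not TEA output:

```typescript
// Collect every [REQ-<n>] tag found in test titles into a requirement -> tests map.
function traceabilityMap(testTitles: string[]): Map<string, string[]> {
  const map = new Map<string, string[]>();
  for (const title of testTitles) {
    // A title may cover several requirements, e.g. "[REQ-1] [REQ-2] ..."
    for (const m of title.matchAll(/\[(REQ-\d+)\]/g)) {
      const reqId = m[1];
      const tests = map.get(reqId) ?? [];
      tests.push(title);
      map.set(reqId, tests);
    }
  }
  return map;
}

const titles = [
  '[REQ-1] should display profile',
  '[REQ-1] [REQ-2] should persist profile edits',
  'untagged smoke test',
];
console.log(traceabilityMap(titles).get('REQ-1')?.length); // 2
```

Requirements with no entry in the map are NONE-coverage candidates for the Phase 1 gap list.
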
### Unclear What "FULL" vs "PARTIAL" Means

**FULL** ✅: All acceptance criteria tested

```
Requirement: User can edit profile
Acceptance criteria:
- Can modify name ✅ Tested
- Can modify email ✅ Tested
- Can upload avatar ✅ Tested
- Changes persist ✅ Tested

Result: FULL coverage
```

**PARTIAL** ⚠️: Some criteria tested, some not

```
Requirement: User can edit profile
Acceptance criteria:
- Can modify name ✅ Tested
- Can modify email ✅ Tested
- Can upload avatar ❌ Not tested
- Changes persist ✅ Tested

Result: PARTIAL coverage (3/4 criteria)
```

### Gate Decision Unclear

**Problem:** Not sure if PASS or CONCERNS is appropriate.

**Guideline:**

**Use PASS** ✅ if:

- All P0 requirements 100% covered
- P1 requirements ≥90% covered
- No critical issues
- NFRs met

**Use CONCERNS** ⚠️ if:

- P1 coverage 80-89% (close to threshold)
- Minor quality issues (score 70-79)
- NFRs have mitigation plans
- Team agrees risk is acceptable

**Use FAIL** ❌ if:

- P0 coverage <100% (critical path gaps)
- P1 coverage <80%
- Critical security/performance issues
- No mitigation possible

**When in doubt, use CONCERNS** and document the risk.

## Related Guides

- [How to Run Test Design](/docs/how-to/workflows/run-test-design.md) - Provides requirements for traceability
- [How to Run Test Review](/docs/how-to/workflows/run-test-review.md) - Quality scores feed gate
- [How to Run NFR Assessment](/docs/how-to/workflows/run-nfr-assess.md) - NFR status feeds gate

## Understanding the Concepts

- [Risk-Based Testing](/docs/explanation/tea/risk-based-testing.md) - Why P0 vs P3 matters
- [TEA Overview](/docs/explanation/features/tea-overview.md) - Gate decisions in context

## Reference

- [Command: *trace](/docs/reference/tea/commands.md#trace) - Full command reference
- [TEA Configuration](/docs/reference/tea/configuration.md) - Config options

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)

@ -1,712 +0,0 @@
---
title: "How to Set Up CI Pipeline with TEA"
description: Configure automated test execution with selective testing and burn-in loops using TEA
---

# How to Set Up CI Pipeline with TEA

Use TEA's `*ci` workflow to scaffold production-ready CI/CD configuration for automated test execution with selective testing, parallel sharding, and flakiness detection.

## When to Use This

- Need to automate test execution in CI/CD
- Want selective testing (only run affected tests)
- Need parallel execution for faster feedback
- Want burn-in loops for flakiness detection
- Setting up new CI/CD pipeline
- Optimizing existing CI/CD workflow

## Prerequisites

- BMad Method installed
- TEA agent available
- Test framework configured (run `*framework` first)
- Tests written (have something to run in CI)
- CI/CD platform access (GitHub Actions, GitLab CI, etc.)

## Steps

### 1. Load TEA Agent

Start a fresh chat and load TEA:

```
*tea
```

### 2. Run the CI Workflow

```
*ci
```

### 3. Select CI/CD Platform

TEA will ask which platform you're using.

**Supported Platforms:**

- **GitHub Actions** (most common)
- **GitLab CI**
- **CircleCI**
- **Jenkins**
- **Other** (TEA provides a generic template)

**Example:**

```
GitHub Actions
```

### 4. Configure Test Strategy

TEA will ask about your test execution strategy.

**Repository Structure**

**Question:** "What's your repository structure?"

**Options:**

- **Single app** - One application in root
- **Monorepo** - Multiple apps/packages
- **Monorepo with affected detection** - Only test changed packages

**Example:**

```
Monorepo with multiple apps
Need selective testing for changed packages only
```

**Parallel Execution**

**Question:** "Want to shard tests for parallel execution?"

**Options:**

- **No sharding** - Run tests sequentially
- **Shard by workers** - Split across N workers
- **Shard by file** - Each file runs in parallel

**Example:**

```
Yes, shard across 4 workers for faster execution
```

**Why Shard?**

- **4 workers:** 20-minute suite → 5 minutes
- **Better resource usage:** Utilize CI runners efficiently
- **Faster feedback:** Developers wait less

**Burn-In Loops**

**Question:** "Want burn-in loops for flakiness detection?"

**Options:**

- **No burn-in** - Run tests once
- **PR burn-in** - Run tests multiple times on PRs
- **Nightly burn-in** - Dedicated flakiness detection job

**Example:**

```
Yes, run tests 5 times on PRs to catch flaky tests early
```

**Why Burn-In?**

- Catches flaky tests before they merge
- Prevents intermittent CI failures
- Builds confidence in test suite

### 5. Review Generated CI Configuration

TEA generates platform-specific workflow files.

**GitHub Actions** (`.github/workflows/test.yml`):

```yaml
name: Test Suite

on:
  pull_request:
  push:
    branches: [main, develop]
  schedule:
    - cron: '0 2 * * *' # Nightly at 2 AM

jobs:
  # Main test job with sharding
  test:
    name: Test (Shard ${{ matrix.shard }})
    runs-on: ubuntu-latest
    timeout-minutes: 15

    strategy:
      fail-fast: false
      matrix:
        shard: [1, 2, 3, 4]

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version-file: '.nvmrc'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Install Playwright browsers
        run: npx playwright install --with-deps

      - name: Run tests
        run: npx playwright test --shard=${{ matrix.shard }}/4

      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: test-results-${{ matrix.shard }}
          path: test-results/
          retention-days: 7

      - name: Upload test report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: playwright-report-${{ matrix.shard }}
          path: playwright-report/
          retention-days: 7

  # Burn-in job for flakiness detection (PRs only)
  burn-in:
    name: Burn-In (Flakiness Detection)
    runs-on: ubuntu-latest
    if: github.event_name == 'pull_request'
    timeout-minutes: 30

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version-file: '.nvmrc'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Install Playwright browsers
        run: npx playwright install --with-deps

      - name: Run burn-in loop
        run: |
          for i in {1..5}; do
            echo "=== Burn-in iteration $i/5 ==="
            npx playwright test --grep-invert "@skip" || exit 1
          done

      - name: Upload burn-in results
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: burn-in-failures
          path: test-results/

  # Selective testing (changed files only)
  selective:
    name: Selective Tests
    runs-on: ubuntu-latest
    if: github.event_name == 'pull_request'

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Full history for git diff

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version-file: '.nvmrc'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Install Playwright browsers
        run: npx playwright install --with-deps

      - name: Run selective tests
        run: npm run test:changed
```

**GitLab CI** (`.gitlab-ci.yml`):

```yaml
variables:
  NODE_VERSION: "18"

stages:
  - test
  - burn-in

# Test job with parallel execution
test:
  stage: test
  image: node:$NODE_VERSION
  parallel: 4
  script:
    - npm ci
    - npx playwright install --with-deps
    - npx playwright test --shard=$CI_NODE_INDEX/$CI_NODE_TOTAL
  artifacts:
    when: always
    paths:
      - test-results/
      - playwright-report/
    expire_in: 7 days
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

# Burn-in job for flakiness detection
burn-in:
  stage: burn-in
  image: node:$NODE_VERSION
  script:
    - npm ci
    - npx playwright install --with-deps
    - |
      for i in {1..5}; do
        echo "=== Burn-in iteration $i/5 ==="
        npx playwright test || exit 1
      done
  artifacts:
    when: on_failure
    paths:
      - test-results/
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
```

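The `selective` job in the GitHub Actions workflow assumes an `npm run test:changed` script exists. One way such a script could be backed is a pure mapping from `git diff --name-only origin/main...HEAD` output to spec files; the `src/*.ts` → `tests/*.spec.ts` convention below is an invented assumption for illustration:

```typescript
// Sketch: map changed files to the Playwright spec files that should rerun.
// Changed spec files always rerun; changed sources map to a conventional
// tests/ path (hypothetical convention - adapt to your repo layout).
function affectedSpecs(changedFiles: string[]): string[] {
  const specs = new Set<string>();
  for (const file of changedFiles) {
    if (/\.spec\.ts$/.test(file)) {
      specs.add(file);
    } else {
      const match = file.match(/^src\/(.+)\.ts$/);
      if (match) specs.add(`tests/${match[1]}.spec.ts`);
    }
  }
  return [...specs].sort();
}

// A wrapper script would feed it the diff and invoke Playwright, roughly:
//   const changed = execSync('git diff --name-only origin/main...HEAD')
//     .toString().trim().split('\n').filter(Boolean);
//   execSync(`npx playwright test ${affectedSpecs(changed).join(' ')}`);
console.log(affectedSpecs(['src/profile/edit.ts', 'tests/login.spec.ts', 'README.md']));
```

Files that map to no spec (docs, configs) simply produce no entries, so doc-only PRs can skip the suite entirely.
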
**Burn-In Testing**

**Option 1: Classic Burn-In (Playwright Built-In)**

```json
{
  "scripts": {
    "test": "playwright test",
    "test:burn-in": "playwright test --repeat-each=5 --retries=0"
  }
}
```

**How it works:**

- Runs every test 5 times
- Fails if any iteration fails
- Detects flakiness before merge

**Use when:** Small test suite, want to run everything multiple times

**Option 2: Smart Burn-In (Playwright Utils)**

If `tea_use_playwright_utils: true`:

**scripts/burn-in-changed.ts:**

```typescript
import { runBurnIn } from '@seontechnologies/playwright-utils/burn-in';

await runBurnIn({
  configPath: 'playwright.burn-in.config.ts',
  baseBranch: 'main'
});
```

**playwright.burn-in.config.ts:**

```typescript
import type { BurnInConfig } from '@seontechnologies/playwright-utils/burn-in';

const config: BurnInConfig = {
  skipBurnInPatterns: ['**/config/**', '**/*.md', '**/*types*'],
  burnInTestPercentage: 0.3,
  burnIn: { repeatEach: 5, retries: 0 }
};

export default config;
```

**package.json:**

```json
{
  "scripts": {
    "test:burn-in": "tsx scripts/burn-in-changed.ts"
  }
}
```

**How it works:**

- Git diff analysis (only affected tests)
- Smart filtering (skip configs, docs, types)
- Volume control (run 30% of affected tests)
- Each test runs 5 times

**Use when:** Large test suite, want intelligent selection

**Comparison:**

| Feature | Classic Burn-In | Smart Burn-In (PW-Utils) |
|---------|----------------|--------------------------|
| Changed 1 file | Runs all 500 tests × 5 = 2500 runs | Runs 3 affected tests × 5 = 15 runs |
| Config change | Runs all tests | Skips (no tests affected) |
| Type change | Runs all tests | Skips (no runtime impact) |
| Setup | Zero config | Requires config file |

**Recommendation:** Start with classic (simple), upgrade to smart (faster) when suite grows.

### 6. Configure Secrets

TEA provides a secrets checklist.

**Required Secrets** (add to CI/CD platform):

```markdown
## GitHub Actions Secrets

Repository Settings → Secrets and variables → Actions

### Required
- None (tests run without external auth)

### Optional
- `TEST_USER_EMAIL` - Test user credentials
- `TEST_USER_PASSWORD` - Test user password
- `API_BASE_URL` - API endpoint for tests
- `DATABASE_URL` - Test database (if needed)
```

**How to Add Secrets:**

**GitHub Actions:**

1. Go to repo Settings → Secrets → Actions
2. Click "New repository secret"
3. Add name and value
4. Use in workflow: `${{ secrets.TEST_USER_EMAIL }}`

**GitLab CI:**

1. Go to Project Settings → CI/CD → Variables
2. Add variable name and value
3. Use in workflow: `$TEST_USER_EMAIL`

### 7. Test the CI Pipeline

**Push and Verify**

**Commit the workflow file:**

```bash
git add .github/workflows/test.yml
git commit -m "ci: add automated test pipeline"
git push
```

**Watch the CI run:**

- GitHub Actions: Go to the Actions tab
- GitLab CI: Go to CI/CD → Pipelines
- CircleCI: Go to Pipelines

**Expected Result:**

```
✓ test (shard 1/4) - 3m 24s
✓ test (shard 2/4) - 3m 18s
✓ test (shard 3/4) - 3m 31s
✓ test (shard 4/4) - 3m 15s
✓ burn-in - 15m 42s
```

**Test on Pull Request**

**Create test PR:**

```bash
git checkout -b test-ci-setup
echo "# Test" > test.md
git add test.md
git commit -m "test: verify CI setup"
git push -u origin test-ci-setup
```

**Open PR and verify:**

- Tests run automatically
- Burn-in runs (if configured for PRs)
- Selective tests run (if applicable)
- All checks pass ✓

## What You Get

### Automated Test Execution

- **On every PR** - Catch issues before merge
- **On every push to main** - Protect production
- **Nightly** - Comprehensive regression testing

### Parallel Execution

- **4x faster feedback** - Shard across multiple workers
- **Efficient resource usage** - Maximize CI runner utilization

### Selective Testing

- **Run only affected tests** - Git diff-based selection
- **Faster PR feedback** - Don't run entire suite every time

### Flakiness Detection

- **Burn-in loops** - Run tests multiple times
- **Early detection** - Catch flaky tests in PRs
- **Confidence building** - Know tests are reliable

### Artifact Collection

- **Test results** - Saved for 7 days
- **Screenshots** - On test failures
- **Videos** - Full test recordings
- **Traces** - Playwright trace files for debugging

## Tips

### Start Simple, Add Complexity

**Week 1:** Basic pipeline

- Run tests on PR
- Single worker (no sharding)

**Week 2:** Add parallelization

- Shard across 4 workers
- Faster feedback

**Week 3:** Add selective testing

- Git diff-based selection
- Skip unaffected tests

**Week 4:** Add burn-in

- Detect flaky tests
- Run on PR and nightly

### Optimize for Feedback Speed

**Goal:** PR feedback in < 5 minutes

**Strategies:**

- Shard tests across workers (4 workers = 4x faster)
- Use selective testing (run 20% of tests, not 100%)
- Cache dependencies (`actions/cache`, `cache: 'npm'`)
- Run smoke tests first, full suite after

**Example fast workflow:**

```yaml
jobs:
  smoke:
    # Run critical path tests (2 min)
    runs-on: ubuntu-latest
    steps:
      - run: npm run test:smoke

  full:
    # Run full suite only if smoke passes (10 min)
    needs: smoke
    runs-on: ubuntu-latest
    steps:
      - run: npm test
```

### Use Test Tags

Tag tests for selective execution:

```typescript
// Critical path tests (always run)
test('@critical should login', async ({ page }) => { });

// Smoke tests (run first)
test('@smoke should load homepage', async ({ page }) => { });

// Slow tests (run nightly only)
test('@slow should process large file', async ({ page }) => { });

// Skip in CI
test('@local-only should use local service', async ({ page }) => { });
```

**In CI:**

```bash
# PR: Run critical and smoke only
npx playwright test --grep "@critical|@smoke"

# Nightly: Run everything except local-only
npx playwright test --grep-invert "@local-only"
```

### Monitor CI Performance

Track metrics:

```markdown
## CI Metrics

| Metric | Target | Current | Status |
|--------|--------|---------|--------|
| PR feedback time | < 5 min | 3m 24s | ✅ |
| Full suite time | < 15 min | 12m 18s | ✅ |
| Flakiness rate | < 1% | 0.3% | ✅ |
| CI cost/month | < $100 | $75 | ✅ |
```

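The flakiness-rate metric can be derived from repeated-run history: a test that both passed and failed across identical runs is flaky. A sketch - the `RunRecord` shape is an assumption, not a Playwright report format:

```typescript
// Sketch: % of tests that both passed and failed across repeated runs.
interface RunRecord {
  test: string;
  passed: boolean;
}

function flakinessRate(runs: RunRecord[]): number {
  const byTest = new Map<string, { pass: boolean; fail: boolean }>();
  for (const r of runs) {
    const t = byTest.get(r.test) ?? { pass: false, fail: false };
    if (r.passed) t.pass = true; else t.fail = true;
    byTest.set(r.test, t);
  }
  const tests = [...byTest.values()];
  if (tests.length === 0) return 0;
  const flaky = tests.filter((t) => t.pass && t.fail).length;
  return (flaky / tests.length) * 100;
}

const history: RunRecord[] = [
  { test: 'login', passed: true },
  { test: 'login', passed: true },
  { test: 'checkout', passed: true },
  { test: 'checkout', passed: false }, // passed once, failed once -> flaky
];
console.log(flakinessRate(history)); // 50
```

Feeding it the aggregated results of the burn-in job gives the number to track against the < 1% target.
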
### Handle Flaky Tests

When burn-in detects flakiness:

1. **Quarantine flaky test:**

   ```typescript
   test.skip('flaky test - investigating', async ({ page }) => {
     // TODO: Fix flakiness
   });
   ```

2. **Investigate with trace viewer:**

   ```bash
   npx playwright show-trace test-results/trace.zip
   ```

3. **Fix root cause:**
   - Add network-first patterns
   - Remove hard waits
   - Fix race conditions

4. **Verify fix:**

   ```bash
   npx playwright test tests/flaky.spec.ts --repeat-each=20 --retries=0
   ```

### Secure Secrets

**Don't commit secrets to code:**

```yaml
# ❌ Bad
- run: API_KEY=sk-1234... npm test

# ✅ Good
- run: npm test
  env:
    API_KEY: ${{ secrets.API_KEY }}
```

**Use environment-specific secrets:**

- `STAGING_API_URL`
- `PROD_API_URL`
- `TEST_API_URL`

### Cache Aggressively

Speed up CI with caching:

```yaml
# Cache node_modules
- uses: actions/setup-node@v4
  with:
    cache: 'npm'

# Cache Playwright browsers
- name: Cache Playwright browsers
  uses: actions/cache@v4
  with:
    path: ~/.cache/ms-playwright
    key: playwright-${{ hashFiles('package-lock.json') }}
```

## Common Issues

### Tests Pass Locally, Fail in CI

**Symptoms:**
- Green locally, red in CI
- "Works on my machine"

**Common Causes:**
- Different Node version
- Different browser version
- Missing environment variables
- Timezone differences
- Race conditions (CI is slower)

**Solutions:**

```yaml
# Pin Node version
- uses: actions/setup-node@v4
  with:
    node-version-file: '.nvmrc'

# Pin browser versions
- run: npx playwright install --with-deps chromium@1.40.0

# Set timezone
env:
  TZ: 'America/New_York'
```
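For the timezone cause specifically, assertions can also be made deterministic inside the tests by always formatting with an explicit `timeZone` instead of relying on the runner's `TZ` setting. A generic sketch:

```typescript
// Format a timestamp in a fixed time zone so the result does not
// depend on which TZ the local machine or CI runner happens to use.
function formatInZone(iso: string, timeZone: string): string {
  return new Intl.DateTimeFormat("en-US", {
    timeZone,
    year: "numeric",
    month: "2-digit",
    day: "2-digit",
  }).format(new Date(iso));
}
```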
### CI Takes Too Long

**Problem:** CI takes 30+ minutes, and developers wait too long.

**Solutions:**
1. **Shard tests:** 4 workers = 4x faster
2. **Selective testing:** only run affected tests on PRs
3. **Smoke tests first:** run the critical path (2 min), full suite after
4. **Cache dependencies:** `npm ci` with cache
5. **Optimize tests:** remove slow tests and hard waits
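Sharding (solution 1) works by splitting the test-file list deterministically across workers so every file runs exactly once across the shards. A round-robin sketch of the idea (not Playwright's actual `--shard` implementation):

```typescript
// Shard `index` (0-based) out of `total` runs the files whose position
// modulo `total` matches its index; shards are disjoint and together
// cover every file.
function shard<T>(files: T[], index: number, total: number): T[] {
  return files.filter((_, i) => i % total === index);
}
```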
### Burn-In Always Fails

**Problem:** The burn-in job fails every time.

**Cause:** The test suite is flaky.

**Solution:**
1. Identify the flaky tests (check which iteration fails)
2. Fix flaky tests using `*test-review`
3. Re-run burn-in on specific files:

   ```bash
   npm run test:burn-in tests/flaky.spec.ts
   ```
### Out of CI Minutes

**Problem:** You are using too many CI minutes and hitting your plan limit.

**Solutions:**
1. Run the full suite only on the main branch
2. Use selective testing on PRs
3. Run expensive tests nightly only
4. Self-host runners (for GitHub Actions)
## Related Guides

- [How to Set Up Test Framework](/docs/how-to/workflows/setup-test-framework.md) - Run first
- [How to Run Test Review](/docs/how-to/workflows/run-test-review.md) - Audit CI tests
- [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md) - Burn-in utility

## Understanding the Concepts

- [Test Quality Standards](/docs/explanation/tea/test-quality-standards.md) - Why determinism matters
- [Network-First Patterns](/docs/explanation/tea/network-first-patterns.md) - Avoid CI flakiness

## Reference

- [Command: *ci](/docs/reference/tea/commands.md#ci) - Full command reference
- [TEA Configuration](/docs/reference/tea/configuration.md) - CI-related config options

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)
@ -67,7 +67,7 Type "exit" or "done" to conclude the session. Participating agents will say per

## Example Party Compositions

| Topic | Typical Agents |
|-------|---------------|
| **Product Strategy** | PM + Innovation Strategist (CIS) + Analyst |
| **Technical Design** | Architect + Creative Problem Solver (CIS) + Game Architect |
| **User Experience** | UX Designer + Design Thinking Coach (CIS) + Storyteller (CIS) |

@ -78,6 +78,7 Type "exit" or "done" to conclude the session. Participating agents will say per

- **Intelligent agent selection** — Selects based on expertise needed
- **Authentic personalities** — Each agent maintains their unique voice
- **Natural cross-talk** — Agents reference and build on each other
- **Optional TTS** — Voice configurations for each agent
- **Graceful exit** — Personalized farewells

## Tips
@ -7,9 +7,9 Terminology reference for the BMad Method.

## Core Concepts

| Term | Definition |
|------|------------|
| **Agent** | Specialized AI persona with specific expertise (PM, Architect, SM, DEV, TEA) that guides users through workflows and creates deliverables. |
| **BMad** | Breakthrough Method of Agile AI Driven Development — AI-driven agile framework with specialized agents, guided workflows, and scale-adaptive intelligence. |
| **BMad Method** | Complete methodology for AI-assisted software development, encompassing planning, architecture, implementation, and quality assurance workflows that adapt to project complexity. |
| **BMM** | BMad Method Module — core orchestration system providing comprehensive lifecycle management through specialized agents and workflows. |
| **Scale-Adaptive System** | Intelligent workflow orchestration that adjusts planning depth and documentation requirements based on project needs through three planning tracks. |
@ -18,7 +18,7 Terminology reference for the BMad Method.

## Scale and Complexity

| Term | Definition |
|------|------------|
| **BMad Method Track** | Full product planning track using PRD + Architecture + UX. Best for products, platforms, and complex features. Typical range: 10-50+ stories. |
| **Enterprise Method Track** | Extended planning track adding Security Architecture, DevOps Strategy, and Test Strategy. Best for compliance needs and multi-tenant systems. Typical range: 30+ stories. |
| **Planning Track** | Methodology path (Quick Flow, BMad Method, or Enterprise) chosen based on planning needs and complexity, not story count alone. |
@ -27,7 +27,7 Terminology reference for the BMad Method.

## Planning Documents

| Term | Definition |
|------|------------|
| **Architecture Document** | *BMad Method/Enterprise.* System-wide design document defining structure, components, data models, integration patterns, security, and deployment. |
| **Epics** | High-level feature groupings containing multiple related stories. Typically 5-15 stories each representing cohesive functionality. |
| **Game Brief** | *BMGD.* Document capturing game's core vision, pillars, target audience, and scope. Foundation for the GDD. |
@ -39,7 +39,7 Terminology reference for the BMad Method.

## Workflow and Phases

| Term | Definition |
|------|------------|
| **Phase 0: Documentation** | *Brownfield.* Conditional prerequisite phase creating codebase documentation before planning. Only required if existing docs are insufficient. |
| **Phase 1: Analysis** | Discovery phase including brainstorming, research, and product brief creation. Optional for Quick Flow, recommended for BMad Method. |
| **Phase 2: Planning** | Required phase creating formal requirements. Routes to tech-spec (Quick Flow) or PRD (BMad Method/Enterprise). |
@ -52,7 +52,7 Terminology reference for the BMad Method.

## Agents and Roles

| Term | Definition |
|------|------------|
| **Analyst** | Agent that initializes workflows, conducts research, creates product briefs, and tracks progress. Often the entry point for new projects. |
| **Architect** | Agent designing system architecture, creating architecture documents, and validating designs. Primary agent for Phase 3. |
| **BMad Master** | Meta-level orchestrator from BMad Core facilitating party mode and providing high-level guidance across all modules. |
@ -69,7 +69,7 Terminology reference for the BMad Method.

## Status and Tracking

| Term | Definition |
|------|------------|
| **bmm-workflow-status.yaml** | *Phases 1-3.* Tracking file showing current phase, completed workflows, and next recommended actions. |
| **DoD** | Definition of Done — criteria for marking a story complete: implementation done, tests passing, code reviewed, docs updated. |
| **Epic Status Progression** | `backlog → in-progress → done` — lifecycle states for epics during implementation. |
@ -81,7 +81,7 Terminology reference for the BMad Method.

## Project Types

| Term | Definition |
|------|------------|
| **Brownfield** | Existing project with established codebase and patterns. Requires understanding existing architecture and planning integration. |
| **Convention Detection** | *Quick Flow.* Feature auto-detecting existing code style, naming conventions, and frameworks from brownfield codebases. |
| **document-project** | *Brownfield.* Workflow analyzing and documenting existing codebase with three scan levels: quick, deep, exhaustive. |
@ -92,7 +92,7 Terminology reference for the BMad Method.

## Implementation Terms

| Term | Definition |
|------|------------|
| **Context Engineering** | Loading domain-specific standards into AI context automatically via manifests, ensuring consistent outputs regardless of prompt variation. |
| **Correct Course** | Workflow for navigating significant changes when implementation is off-track. Analyzes impact and recommends adjustments. |
| **Shard / Sharding** | Splitting large planning documents into section-based files for LLM optimization. Phase 4 workflows load only needed sections. |
@ -106,7 +106,7 Terminology reference for the BMad Method.

## Game Development Terms

| Term | Definition |
|------|------------|
| **Core Fantasy** | *BMGD.* The emotional experience players seek from your game — what they want to FEEL. |
| **Core Loop** | *BMGD.* Fundamental cycle of actions players repeat throughout gameplay. The heart of your game. |
| **Design Pillar** | *BMGD.* Core principle guiding all design decisions. Typically 3-5 pillars define a game's identity. |
@ -120,40 +120,3 Terminology reference for the BMad Method.

| **Player Agency** | *BMGD.* Degree to which players can make meaningful choices affecting outcomes. |
| **Procedural Generation** | *BMGD.* Algorithmic creation of game content (levels, items, characters) rather than hand-crafted. |
| **Roguelike** | *BMGD.* Genre featuring procedural generation, permadeath, and run-based progression. |
## Test Architect (TEA) Concepts

| Term | Definition |
|------|------------|
| **ATDD** | Acceptance Test-Driven Development — generating failing acceptance tests BEFORE implementation (TDD red phase). |
| **Burn-in Testing** | Running tests multiple times (typically 5-10 iterations) to detect flakiness and intermittent failures. |
| **Component Testing** | Testing UI components in isolation using framework-specific tools (Cypress Component Testing or Vitest + React Testing Library). |
| **Coverage Traceability** | Mapping acceptance criteria to implemented tests with classification (FULL/PARTIAL/NONE) to identify gaps and measure completeness. |
| **Epic-Level Test Design** | Test planning per epic (Phase 4) focusing on risk assessment, priorities, and coverage strategy for that specific epic. |
| **Fixture Architecture** | Pattern of building pure functions first, then wrapping them in framework-specific fixtures for testability, reusability, and composition. |
| **Gate Decision** | Go/no-go decision for release with four outcomes: PASS ✅ (ready), CONCERNS ⚠️ (proceed with mitigation), FAIL ❌ (blocked), WAIVED ⏭️ (approved despite issues). |
| **Knowledge Fragment** | Individual markdown file in TEA's knowledge base covering a specific testing pattern or practice (33 fragments total). |
| **MCP Enhancements** | Model Context Protocol servers enabling live browser verification during test generation (exploratory, recording, and healing modes). |
| **Network-First Pattern** | Testing pattern that waits for actual network responses instead of fixed timeouts to avoid race conditions and flakiness. |
| **NFR Assessment** | Validation of non-functional requirements (security, performance, reliability, maintainability) with evidence-based decisions. |
| **Playwright Utils** | Optional package (`@seontechnologies/playwright-utils`) providing production-ready fixtures and utilities for Playwright tests. |
| **Risk-Based Testing** | Testing approach where depth scales with business impact using probability × impact scoring (1-9 scale). |
| **System-Level Test Design** | Test planning at architecture level (Phase 3) focusing on testability review, ADR mapping, and test infrastructure needs. |
| **tea-index.csv** | Manifest file tracking all knowledge fragments, their descriptions, tags, and which workflows load them. |
| **TEA Integrated** | Full BMad Method integration with TEA workflows across all phases (Phase 2, 3, 4, and Release Gate). |
| **TEA Lite** | Beginner approach using just the `*automate` workflow to test existing features (the simplest way to use TEA). |
| **TEA Solo** | Standalone engagement model using TEA without full BMad Method integration (bring your own requirements). |
| **Test Priorities** | Classification system for test importance: P0 (critical path), P1 (high value), P2 (medium value), P3 (low value). |
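The probability × impact scoring mentioned under Risk-Based Testing can be sketched directly; assuming both factors sit on a 1-3 scale (an assumption here — the glossary states only the 1-9 result range), the product lands in the stated 1-9 range:

```typescript
// Risk score sketch: probability × impact, each assumed to be 1-3,
// so the product spans the 1-9 scale mentioned in the glossary.
function riskScore(probability: number, impact: number): number {
  return probability * impact;
}

// Testing depth then scales with the score, e.g. 9 = deepest coverage.
```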
---

## See Also

- [TEA Overview](/docs/explanation/features/tea-overview.md) - Complete TEA capabilities
- [TEA Knowledge Base](/docs/reference/tea/knowledge-base.md) - Fragment index
- [TEA Command Reference](/docs/reference/tea/commands.md) - Workflow reference
- [TEA Configuration](/docs/reference/tea/configuration.md) - Config options

---

Generated with [BMad Method](https://bmad-method.org)
@ -1,254 +0,0

---
title: "TEA Command Reference"
description: Quick reference for all 8 TEA workflows - inputs, outputs, and links to detailed guides
---

# TEA Command Reference

Quick reference for all 8 TEA (Test Architect) workflows. For detailed step-by-step guides, see the how-to documentation.

## Quick Index

- [*framework](#framework) - Scaffold test framework
- [*ci](#ci) - Set up CI/CD pipeline
- [*test-design](#test-design) - Risk-based test planning
- [*atdd](#atdd) - Acceptance TDD
- [*automate](#automate) - Test automation
- [*test-review](#test-review) - Quality audit
- [*nfr-assess](#nfr-assess) - NFR assessment
- [*trace](#trace) - Coverage traceability

---
## *framework

**Purpose:** Scaffold a production-ready test framework (Playwright or Cypress)

**Phase:** Phase 3 (Solutioning)

**Frequency:** Once per project

**Key Inputs:**
- Tech stack, test framework choice, testing scope

**Key Outputs:**
- `tests/` directory with `support/fixtures/` and `support/helpers/`
- `playwright.config.ts` or `cypress.config.ts`
- `.env.example`, `.nvmrc`
- Sample tests with best practices

**How-To Guide:** [Setup Test Framework](/docs/how-to/workflows/setup-test-framework.md)

---
## *ci

**Purpose:** Set up a CI/CD pipeline with selective testing and burn-in

**Phase:** Phase 3 (Solutioning)

**Frequency:** Once per project

**Key Inputs:**
- CI platform (GitHub Actions, GitLab CI, etc.)
- Sharding strategy, burn-in preferences

**Key Outputs:**
- Platform-specific CI workflow (`.github/workflows/test.yml`, etc.)
- Parallel execution configuration
- Burn-in loops for flakiness detection
- Secrets checklist

**How-To Guide:** [Setup CI Pipeline](/docs/how-to/workflows/setup-ci.md)

---
## *test-design

**Purpose:** Risk-based test planning with a coverage strategy

**Phase:** Phase 3 (system-level), Phase 4 (epic-level)

**Frequency:** Once (system-level), per epic (epic-level)

**Modes:**
- **System-level:** Architecture testability review
- **Epic-level:** Per-epic risk assessment

**Key Inputs:**
- Architecture/epic, requirements, ADRs

**Key Outputs:**
- `test-design-system.md` or `test-design-epic-N.md`
- Risk assessment (probability × impact scores)
- Test priorities (P0-P3)
- Coverage strategy

**MCP Enhancement:** Exploratory mode (live browser UI discovery)

**How-To Guide:** [Run Test Design](/docs/how-to/workflows/run-test-design.md)

---
## *atdd

**Purpose:** Generate failing acceptance tests BEFORE implementation (TDD red phase)

**Phase:** Phase 4 (Implementation)

**Frequency:** Per story (optional)

**Key Inputs:**
- Story with acceptance criteria, test design, test levels

**Key Outputs:**
- Failing tests (`tests/api/`, `tests/e2e/`)
- Implementation checklist
- All tests fail initially (red phase)

**MCP Enhancement:** Recording mode (for skeleton UI only - rare)

**How-To Guide:** [Run ATDD](/docs/how-to/workflows/run-atdd.md)

---
## *automate

**Purpose:** Expand test coverage after implementation

**Phase:** Phase 4 (Implementation)

**Frequency:** Per story/feature

**Key Inputs:**
- Feature description, test design, existing tests to avoid duplication

**Key Outputs:**
- Comprehensive test suite (`tests/e2e/`, `tests/api/`)
- Updated fixtures, README
- Definition of Done summary

**MCP Enhancement:** Healing + Recording modes (fix tests, verify selectors)

**How-To Guide:** [Run Automate](/docs/how-to/workflows/run-automate.md)

---
## *test-review

**Purpose:** Audit test quality with 0-100 scoring

**Phase:** Phase 4 (optional per story), Release Gate

**Frequency:** Per epic or before release

**Key Inputs:**
- Test scope (file, directory, or entire suite)

**Key Outputs:**
- `test-review.md` with quality score (0-100)
- Critical issues with fixes
- Recommendations
- Category scores (Determinism, Isolation, Assertions, Structure, Performance)

**Scoring Categories:**
- Determinism: 35 points
- Isolation: 25 points
- Assertions: 20 points
- Structure: 10 points
- Performance: 10 points
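The category weights above sum to 100, so the review score follows directly from how much of each category's maximum a suite earns. A hypothetical aggregation for illustration, not the actual TEA scoring code:

```typescript
// Category weights from the scoring breakdown above (sum = 100).
const WEIGHTS = {
  determinism: 35,
  isolation: 25,
  assertions: 20,
  structure: 10,
  performance: 10,
} as const;

// Each category score is a fraction 0..1 of its maximum points.
function qualityScore(fractions: Record<keyof typeof WEIGHTS, number>): number {
  return Object.entries(WEIGHTS).reduce(
    (sum, [category, max]) => sum + max * fractions[category as keyof typeof WEIGHTS],
    0,
  );
}
```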
**How-To Guide:** [Run Test Review](/docs/how-to/workflows/run-test-review.md)

---
## *nfr-assess

**Purpose:** Validate non-functional requirements with evidence

**Phase:** Phase 2 (enterprise), Release Gate

**Frequency:** Per release (enterprise projects)

**Key Inputs:**
- NFR categories (Security, Performance, Reliability, Maintainability)
- Thresholds, evidence location

**Key Outputs:**
- `nfr-assessment.md`
- Category assessments (PASS/CONCERNS/FAIL)
- Mitigation plans
- Gate decision inputs

**How-To Guide:** [Run NFR Assessment](/docs/how-to/workflows/run-nfr-assess.md)

---
## *trace

**Purpose:** Requirements traceability + quality gate decision

**Phase:** Phase 2/4 (traceability), Release Gate (decision)

**Frequency:** Baseline, per-epic refresh, release gate

**Two-Phase Workflow:**

**Phase 1: Traceability**
- Requirements → test mapping
- Coverage classification (FULL/PARTIAL/NONE)
- Gap prioritization
- Output: `traceability-matrix.md`

**Phase 2: Gate Decision**
- PASS/CONCERNS/FAIL/WAIVED decision
- Evidence-based (coverage %, quality scores, NFRs)
- Output: `gate-decision-{gate_type}-{story_id}.md`

**Gate Rules:**
- P0 coverage: 100% required
- P1 coverage: ≥90% for PASS, 80-89% for CONCERNS, <80% for FAIL
- Overall coverage: ≥80% required
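The gate rules above can be encoded directly (coverage values in percent). This is a sketch of the decision logic, not the workflow's implementation; WAIVED is a manual override, so it is not computed:

```typescript
type Gate = "PASS" | "CONCERNS" | "FAIL";

// P0 must be fully covered; P1 below 80% or overall below 80% fails;
// P1 in the 80-89% band downgrades the gate to CONCERNS.
function gateDecision(p0Coverage: number, p1Coverage: number, overall: number): Gate {
  if (p0Coverage < 100 || p1Coverage < 80 || overall < 80) return "FAIL";
  if (p1Coverage < 90) return "CONCERNS";
  return "PASS";
}
```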
**How-To Guide:** [Run Trace](/docs/how-to/workflows/run-trace.md)

---
## Summary Table

| Command | Phase | Frequency | Primary Output |
|---------|-------|-----------|----------------|
| `*framework` | 3 | Once | Test infrastructure |
| `*ci` | 3 | Once | CI/CD pipeline |
| `*test-design` | 3, 4 | System + per epic | Test design doc |
| `*atdd` | 4 | Per story (optional) | Failing tests |
| `*automate` | 4 | Per story | Passing tests |
| `*test-review` | 4, Gate | Per epic/release | Quality report |
| `*nfr-assess` | 2, Gate | Per release | NFR assessment |
| `*trace` | 2, 4, Gate | Baseline + refresh + gate | Coverage matrix + decision |

---

## See Also

**How-To Guides (Detailed Instructions):**
- [Setup Test Framework](/docs/how-to/workflows/setup-test-framework.md)
- [Setup CI Pipeline](/docs/how-to/workflows/setup-ci.md)
- [Run Test Design](/docs/how-to/workflows/run-test-design.md)
- [Run ATDD](/docs/how-to/workflows/run-atdd.md)
- [Run Automate](/docs/how-to/workflows/run-automate.md)
- [Run Test Review](/docs/how-to/workflows/run-test-review.md)
- [Run NFR Assessment](/docs/how-to/workflows/run-nfr-assess.md)
- [Run Trace](/docs/how-to/workflows/run-trace.md)

**Explanation:**
- [TEA Overview](/docs/explanation/features/tea-overview.md) - Complete TEA lifecycle
- [Engagement Models](/docs/explanation/tea/engagement-models.md) - When to use which workflows

**Reference:**
- [TEA Configuration](/docs/reference/tea/configuration.md) - Config options
- [Knowledge Base Index](/docs/reference/tea/knowledge-base.md) - Pattern fragments

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)
@ -1,678 +0,0 @@
|
||||||
---
|
|
||||||
title: "TEA Configuration Reference"
|
|
||||||
description: Complete reference for TEA configuration options and file locations
|
|
||||||
---
|
|
||||||
|
|
||||||
# TEA Configuration Reference
|
|
||||||
|
|
||||||
Complete reference for all TEA (Test Architect) configuration options.
|
|
||||||
|
|
||||||
## Configuration File Locations
### User Configuration (Installer-Generated)

**Location:** `_bmad/bmm/config.yaml`

**Purpose:** Project-specific configuration values for your repository

**Created By:** BMad installer

**Status:** Typically gitignored (user-specific values)

**Usage:** Edit this file to change TEA behavior in your project

**Example:**

```yaml
# _bmad/bmm/config.yaml
project_name: my-awesome-app
user_skill_level: intermediate
output_folder: _bmad-output
tea_use_playwright_utils: true
tea_use_mcp_enhancements: false
```

### Canonical Schema (Source of Truth)

**Location:** `src/modules/bmm/module.yaml`

**Purpose:** Defines available configuration keys, defaults, and installer prompts

**Created By:** BMAD maintainers (part of BMAD repo)

**Status:** Versioned in BMAD repository

**Usage:** Reference only (do not edit unless contributing to BMAD)

**Note:** The installer reads `module.yaml` to prompt for config values, then writes user choices to `_bmad/bmm/config.yaml` in your project.
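The installer flow described in the note above can be sketched as follows. This is a minimal illustration, not the real installer: the `schema` dict stands in for parsed `module.yaml`, and its shape is an assumption.

```python
# Minimal sketch of the installer flow: take schema defaults, apply the
# user's answers, and emit a flat _bmad/bmm/config.yaml document.
# The schema dict below is a hypothetical stand-in for parsed module.yaml.
schema = {
    "tea_use_playwright_utils": {"default": False},
    "tea_use_mcp_enhancements": {"default": False},
    "output_folder": {"default": "_bmad-output"},
}

def build_config(answers: dict) -> dict:
    """Merge the user's answers over the schema defaults."""
    config = {key: spec["default"] for key, spec in schema.items()}
    config.update({k: v for k, v in answers.items() if k in schema})
    return config

def to_yaml(config: dict) -> str:
    """Render a flat 'key: value' YAML document (booleans lowercased)."""
    lines = []
    for key, value in config.items():
        rendered = str(value).lower() if isinstance(value, bool) else str(value)
        lines.append(f"{key}: {rendered}")
    return "\n".join(lines) + "\n"

print(to_yaml(build_config({"tea_use_playwright_utils": True})))
```

Keys not declared in the schema are dropped, which mirrors the idea that `module.yaml` is the source of truth for what may appear in the user config.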
## TEA Configuration Options

### tea_use_playwright_utils

Enable Playwright Utils integration for production-ready fixtures and utilities.

**Schema Location:** `src/modules/bmm/module.yaml:52-56`

**User Config:** `_bmad/bmm/config.yaml`

**Type:** `boolean`

**Default:** `false` (set via installer prompt during installation)

**Installer Prompt:**

```
Are you using playwright-utils (@seontechnologies/playwright-utils) in your project?
You must install packages yourself, or use test architect's *framework command.
```

**Purpose:** Enables TEA to:

- Include playwright-utils in the `*framework` scaffold
- Generate tests using playwright-utils fixtures
- Review tests against playwright-utils patterns
- Configure CI with burn-in and selective testing utilities

**Affects Workflows:**

- `*framework` - Includes playwright-utils imports and fixture examples
- `*atdd` - Uses fixtures like `apiRequest` and `authSession` in generated tests
- `*automate` - Leverages utilities for test patterns
- `*test-review` - Reviews against playwright-utils best practices
- `*ci` - Includes the burn-in utility and selective testing

**Example (Enable):**

```yaml
tea_use_playwright_utils: true
```

**Example (Disable):**

```yaml
tea_use_playwright_utils: false
```

**Prerequisites:**

```bash
npm install -D @seontechnologies/playwright-utils
```

**Related:**

- [Integrate Playwright Utils Guide](/docs/how-to/customization/integrate-playwright-utils.md)
- [Playwright Utils on npm](https://www.npmjs.com/package/@seontechnologies/playwright-utils)
### tea_use_mcp_enhancements

Enable Playwright MCP servers for live browser verification during test generation.

**Schema Location:** `src/modules/bmm/module.yaml:47-50`

**User Config:** `_bmad/bmm/config.yaml`

**Type:** `boolean`

**Default:** `false`

**Installer Prompt:**

```
Test Architect Playwright MCP capabilities (healing, exploratory, verification) are optionally available.
You will have to setup your MCPs yourself; refer to https://docs.bmad-method.org/explanation/features/tea-overview for configuration examples.
Would you like to enable MCP enhancements in Test Architect?
```

**Purpose:** Enables TEA to use Model Context Protocol servers for:

- Live browser automation during test design
- Selector verification with the actual DOM
- Interactive UI discovery
- Visual debugging and healing

**Affects Workflows:**

- `*test-design` - Enables exploratory mode (browser-based UI discovery)
- `*atdd` - Enables recording mode (verify selectors with a live browser)
- `*automate` - Enables healing mode (fix tests with visual debugging)

**MCP Servers Required:**

**Two Playwright MCP servers** (actively maintained, continuously updated):

- `playwright` - Browser automation (`npx @playwright/mcp@latest`)
- `playwright-test` - Test runner with failure analysis (`npx playwright run-test-mcp-server`)

**Configuration example:**

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "playwright-test": {
      "command": "npx",
      "args": ["playwright", "run-test-mcp-server"]
    }
  }
}
```

**Configuration:** Refer to your AI agent's documentation for MCP server setup instructions.

**Example (Enable):**

```yaml
tea_use_mcp_enhancements: true
```

**Example (Disable):**

```yaml
tea_use_mcp_enhancements: false
```

**Prerequisites:**

1. MCP servers installed in IDE configuration
2. `@playwright/mcp` package available globally or locally
3. Browser binaries installed (`npx playwright install`)

**Related:**

- [Enable MCP Enhancements Guide](/docs/how-to/customization/enable-tea-mcp-enhancements.md)
- [TEA Overview - MCP Section](/docs/explanation/features/tea-overview.md#playwright-mcp-enhancements)
- [Playwright MCP on npm](https://www.npmjs.com/package/@playwright/mcp)
## Core BMM Configuration (Inherited by TEA)

TEA also uses core BMM configuration options from `_bmad/bmm/config.yaml`:

### output_folder

**Type:** `string`

**Default:** `_bmad-output`

**Purpose:** Where TEA writes output files (test designs, reports, traceability matrices)

**Example:**

```yaml
output_folder: _bmad-output
```

**TEA Output Files:**

- `test-design-system.md` (from `*test-design` system-level)
- `test-design-epic-N.md` (from `*test-design` epic-level)
- `test-review.md` (from `*test-review`)
- `traceability-matrix.md` (from `*trace` Phase 1)
- `gate-decision-{gate_type}-{story_id}.md` (from `*trace` Phase 2)
- `nfr-assessment.md` (from `*nfr-assess`)
- `automation-summary.md` (from `*automate`)
- `atdd-checklist-{story_id}.md` (from `*atdd`)

### user_skill_level

**Type:** `enum`

**Options:** `beginner` | `intermediate` | `expert`

**Default:** `intermediate`

**Purpose:** Affects how TEA explains concepts in chat responses

**Example:**

```yaml
user_skill_level: beginner
```

**Impact on TEA:**

- **Beginner:** More detailed explanations, links to concepts, verbose guidance
- **Intermediate:** Balanced explanations, assumes basic knowledge
- **Expert:** Concise, technical, minimal hand-holding

### project_name

**Type:** `string`

**Default:** Directory name

**Purpose:** Used in TEA-generated documentation and reports

**Example:**

```yaml
project_name: my-awesome-app
```

**Used in:**

- Report headers
- Documentation titles
- CI configuration comments

### communication_language

**Type:** `string`

**Default:** `english`

**Purpose:** Language for TEA chat responses

**Example:**

```yaml
communication_language: english
```

**Supported:** Any language (TEA responds in the specified language)

### document_output_language

**Type:** `string`

**Default:** `english`

**Purpose:** Language for TEA-generated documents (test designs, reports)

**Example:**

```yaml
document_output_language: english
```

**Note:** Can differ from `communication_language` - chat in Spanish, generate docs in English.
## Environment Variables

TEA workflows may use environment variables for test configuration.

### Test Framework Variables

**Playwright:**

```bash
# .env
BASE_URL=https://todomvc.com/examples/react/
API_BASE_URL=https://api.example.com
TEST_USER_EMAIL=test@example.com
TEST_USER_PASSWORD=password123
```

**Cypress:**

```bash
# cypress.env.json or .env
CYPRESS_BASE_URL=https://example.com
CYPRESS_API_URL=https://api.example.com
```

### CI/CD Variables

Set in your CI platform (GitHub Actions secrets, GitLab CI variables):

```yaml
# .github/workflows/test.yml
env:
  BASE_URL: ${{ secrets.STAGING_URL }}
  API_KEY: ${{ secrets.API_KEY }}
  TEST_USER_EMAIL: ${{ secrets.TEST_USER }}
```
## Configuration Patterns

### Development vs Production

**Separate configs for environments:**

```yaml
# _bmad/bmm/config.yaml
output_folder: _bmad-output
```

```bash
# .env.development
BASE_URL=http://localhost:3000
API_BASE_URL=http://localhost:4000

# .env.staging
BASE_URL=https://staging.example.com
API_BASE_URL=https://api-staging.example.com

# .env.production (read-only tests only!)
BASE_URL=https://example.com
API_BASE_URL=https://api.example.com
```

### Team vs Individual

**Team config (committed):**

```yaml
# _bmad/bmm/config.yaml.example (committed to repo)
project_name: team-project
output_folder: _bmad-output
tea_use_playwright_utils: true
tea_use_mcp_enhancements: false
```

**Individual config (typically gitignored):**

```yaml
# _bmad/bmm/config.yaml (user adds to .gitignore)
user_name: John Doe
user_skill_level: expert
tea_use_mcp_enhancements: true # Individual preference
```

### Monorepo Configuration

**Root config:**

```yaml
# _bmad/bmm/config.yaml (root)
project_name: monorepo-parent
output_folder: _bmad-output
```

**Package-specific:**

```yaml
# packages/web-app/_bmad/bmm/config.yaml
project_name: web-app
output_folder: ../../_bmad-output/web-app
tea_use_playwright_utils: true
```

```yaml
# packages/mobile-app/_bmad/bmm/config.yaml
project_name: mobile-app
output_folder: ../../_bmad-output/mobile-app
tea_use_playwright_utils: false
```
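A package-relative `output_folder` like the ones above resolves against the package's own directory, which is how both packages land under the repo-root `_bmad-output/`. A minimal sketch (the `resolve_output` helper is illustrative; BMAD's actual path handling may differ):

```python
import posixpath

def resolve_output(package_dir: str, output_folder: str) -> str:
    """Join a package-relative output folder onto the package dir and normalize.

    posixpath keeps the example platform-independent for forward-slash paths.
    """
    return posixpath.normpath(posixpath.join(package_dir, output_folder))

# The ../../ prefix climbs out of packages/web-app back to the repo root.
print(resolve_output("packages/web-app", "../../_bmad-output/web-app"))
```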
## Configuration Best Practices

### 1. Use Version Control Wisely

**Commit:**

```
_bmad/bmm/config.yaml.example   # Template for team
.nvmrc                          # Node version
package.json                    # Dependencies
```

**Recommended for .gitignore:**

```
_bmad/bmm/config.yaml   # User-specific values
.env                    # Secrets
.env.local              # Local overrides
```

### 2. Document Required Setup

**In your README:**

```markdown
## Setup

1. Install BMad

2. Copy config template:
   cp _bmad/bmm/config.yaml.example _bmad/bmm/config.yaml

3. Edit config with your values:
   - Set user_name
   - Enable tea_use_playwright_utils if using playwright-utils
   - Enable tea_use_mcp_enhancements if MCPs configured
```

### 3. Validate Configuration

**Check config is valid:**

```bash
# Check TEA config is set
grep tea_use _bmad/bmm/config.yaml

# Verify playwright-utils installed (if enabled)
npm list @seontechnologies/playwright-utils

# Verify MCP servers configured (if enabled)
# Check your IDE's MCP settings
```
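Beyond the shell checks above, a small script can sanity-check the flat `key: value` pairs. This is a minimal sketch under the assumption that the config stays flat scalars as shown throughout this document; it is not a full YAML validator.

```python
# Sanity-check flat "key: value" lines in _bmad/bmm/config.yaml.
# Only handles the flat scalar layout shown in this document, not full YAML.
EXPECTED_BOOLS = {"tea_use_playwright_utils", "tea_use_mcp_enhancements"}

def check_config(text: str) -> list[str]:
    """Return human-readable problems found in the config text."""
    problems = []
    seen = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line or ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        seen.add(key)
        if key in EXPECTED_BOOLS and value not in ("true", "false"):
            problems.append(f"{key} should be true/false, got {value!r}")
    for key in EXPECTED_BOOLS - seen:
        problems.append(f"missing key: {key}")
    return problems

# "yes" is not a valid boolean here, and the MCP key is missing entirely.
print(check_config("tea_use_playwright_utils: yes\n"))
```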
### 4. Keep Config Minimal

**Don't over-configure:**

```yaml
# ❌ Bad - overriding everything unnecessarily
project_name: my-project
user_name: John Doe
user_skill_level: expert
output_folder: custom/path
planning_artifacts: custom/planning
implementation_artifacts: custom/implementation
project_knowledge: custom/docs
tea_use_playwright_utils: true
tea_use_mcp_enhancements: true
communication_language: english
document_output_language: english
# Overriding 11 config options when most can use defaults

# ✅ Good - only essential overrides
tea_use_playwright_utils: true
output_folder: docs/testing
# Only override what differs from defaults
```

**Use defaults when possible** - only override what you actually need to change.
## Troubleshooting

### Configuration Not Loaded

**Problem:** TEA doesn't use my config values.

**Causes:**

1. Config file in the wrong location
2. YAML syntax error
3. Typo in a config key

**Solution:**

```bash
# Check the file exists
ls -la _bmad/bmm/config.yaml

# Validate YAML syntax
npm install -g js-yaml
js-yaml _bmad/bmm/config.yaml

# List your key names, then compare them against src/modules/bmm/module.yaml
grep -oE '^[a-z_]+' _bmad/bmm/config.yaml
```

### Playwright Utils Not Working

**Problem:** `tea_use_playwright_utils: true` but TEA doesn't use the utilities.

**Causes:**

1. Package not installed
2. Config file not saved
3. Workflow run before the config update

**Solution:**

```bash
# Verify the package is installed
npm list @seontechnologies/playwright-utils

# Check the config value
grep tea_use_playwright_utils _bmad/bmm/config.yaml

# Re-run the workflow in a fresh chat
# (TEA loads config at workflow start)
```

### MCP Enhancements Not Working

**Problem:** `tea_use_mcp_enhancements: true` but no browser opens.

**Causes:**

1. MCP servers not configured in the IDE
2. MCP package not installed
3. Browser binaries missing

**Solution:**

```bash
# Check the MCP package is available
npx @playwright/mcp@latest --version

# Install browsers
npx playwright install

# Verify the IDE MCP config
# Check ~/.cursor/config.json or VS Code settings
```

### Config Changes Not Applied

**Problem:** Updated the config but TEA still uses old values.

**Cause:** TEA loads config at workflow start.

**Solution:**

1. Save `_bmad/bmm/config.yaml`
2. Start a fresh chat
3. Run the TEA workflow
4. Config will be reloaded

**TEA doesn't reload config mid-chat** - always start a fresh chat after config changes.
## Configuration Examples

### Recommended Setup (Full Stack)

```yaml
# _bmad/bmm/config.yaml
project_name: my-project
user_skill_level: beginner # or intermediate/expert
output_folder: _bmad-output
tea_use_playwright_utils: true # Recommended
tea_use_mcp_enhancements: true # Recommended
```

**Why recommended:**

- Playwright Utils: Production-ready fixtures and utilities
- MCP enhancements: Live browser verification, visual debugging
- Together: The three-part stack (see [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md))

**Prerequisites:**

```bash
npm install -D @seontechnologies/playwright-utils
# Configure MCP servers in IDE (see Enable MCP Enhancements guide)
```

**Best for:** Everyone (beginners learn good patterns from day one)

### Minimal Setup (Learning Only)

```yaml
# _bmad/bmm/config.yaml
project_name: my-project
output_folder: _bmad-output
tea_use_playwright_utils: false
tea_use_mcp_enhancements: false
```

**Best for:**

- First-time TEA users (keep it simple initially)
- Quick experiments
- Learning the basics before adding integrations

**Note:** You can enable the integrations later as you learn.

### Monorepo Setup

**Root config:**

```yaml
# _bmad/bmm/config.yaml (root)
project_name: monorepo
output_folder: _bmad-output
tea_use_playwright_utils: true
```

**Package configs:**

```yaml
# apps/web/_bmad/bmm/config.yaml
project_name: web-app
output_folder: ../../_bmad-output/web
```

```yaml
# apps/api/_bmad/bmm/config.yaml
project_name: api-service
output_folder: ../../_bmad-output/api
tea_use_playwright_utils: false # Using vanilla Playwright only
```

### Team Template

**Commit this template:**

```yaml
# _bmad/bmm/config.yaml.example
# Copy to config.yaml and fill in your values

project_name: your-project-name
user_name: Your Name
user_skill_level: intermediate # beginner | intermediate | expert
output_folder: _bmad-output
planning_artifacts: _bmad-output/planning-artifacts
implementation_artifacts: _bmad-output/implementation-artifacts
project_knowledge: docs

# TEA Configuration (Recommended: Enable both for full stack)
tea_use_playwright_utils: true # Recommended - production-ready utilities
tea_use_mcp_enhancements: true # Recommended - live browser verification

# Languages
communication_language: english
document_output_language: english
```

**Team instructions:**

```markdown
## Setup for New Team Members

1. Clone the repo
2. Copy the config template:
   cp _bmad/bmm/config.yaml.example _bmad/bmm/config.yaml
3. Edit it with your name and preferences
4. Install dependencies:
   npm install
5. (Optional) Enable playwright-utils:
   npm install -D @seontechnologies/playwright-utils
   Set tea_use_playwright_utils: true
```
## See Also

### How-To Guides

- [Set Up Test Framework](/docs/how-to/workflows/setup-test-framework.md)
- [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md)
- [Enable MCP Enhancements](/docs/how-to/customization/enable-tea-mcp-enhancements.md)

### Reference

- [TEA Command Reference](/docs/reference/tea/commands.md)
- [Knowledge Base Index](/docs/reference/tea/knowledge-base.md)
- [Glossary](/docs/reference/glossary/index.md)

### Explanation

- [TEA Overview](/docs/explanation/features/tea-overview.md)
- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md)

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)
---
title: "TEA Knowledge Base Index"
description: Complete index of TEA's 33 knowledge fragments for context engineering
---

# TEA Knowledge Base Index

TEA uses 33 specialized knowledge fragments for context engineering. These fragments are loaded dynamically based on workflow needs via the `tea-index.csv` manifest.

## What is Context Engineering?

**Context engineering** is the practice of loading domain-specific standards into AI context automatically rather than relying on prompts alone.

Instead of asking AI to "write good tests" every time, TEA:

1. Reads `tea-index.csv` to identify relevant fragments for the workflow
2. Loads only the fragments needed (keeps context focused)
3. Operates with domain-specific standards, not generic knowledge
4. Produces consistent, production-ready tests across projects

**Example:**

```
User runs: *test-design

TEA reads tea-index.csv:
- Loads: test-quality.md, test-priorities-matrix.md, risk-governance.md
- Skips: network-recorder.md, burn-in.md (not needed for test design)

Result: Focused context, consistent quality standards
```
## How Knowledge Loading Works

### 1. Workflow Trigger

The user runs a TEA workflow (e.g., `*test-design`).

### 2. Manifest Lookup

TEA reads `src/modules/bmm/testarch/tea-index.csv`:

```csv
id,name,description,tags,fragment_file
test-quality,Test Quality,Execution limits and isolation rules,quality;standards,knowledge/test-quality.md
risk-governance,Risk Governance,Risk scoring and gate decisions,risk;governance,knowledge/risk-governance.md
```

### 3. Dynamic Loading

Only the fragments needed for the workflow are loaded into context.

### 4. Consistent Output

The AI operates with established patterns, producing consistent results.
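The manifest lookup above can be sketched as a tag filter over the CSV. This is a minimal illustration: the real loader lives in the BMAD tooling, and the tag set used in the example call is hypothetical, not the actual mapping for `*test-design`.

```python
import csv
import io

# Stand-in for src/modules/bmm/testarch/tea-index.csv (format shown above).
MANIFEST = """\
id,name,description,tags,fragment_file
test-quality,Test Quality,Execution limits and isolation rules,quality;standards,knowledge/test-quality.md
risk-governance,Risk Governance,Risk scoring and gate decisions,risk;governance,knowledge/risk-governance.md
"""

def fragments_for(wanted_tags: set[str], manifest_csv: str) -> list[str]:
    """Return fragment files whose semicolon-separated tags intersect wanted_tags."""
    rows = csv.DictReader(io.StringIO(manifest_csv))
    return [
        row["fragment_file"]
        for row in rows
        if wanted_tags & set(row["tags"].split(";"))
    ]

# Hypothetical tag set: quality standards plus risk scoring.
print(fragments_for({"quality", "risk"}, MANIFEST))
```

Fragments whose tags don't intersect the requested set are simply skipped, which is what keeps the loaded context focused.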
## Fragment Categories
|
|
||||||
|
|
||||||
### Architecture & Fixtures
|
|
||||||
|
|
||||||
Core patterns for test infrastructure and fixture composition.
|
|
||||||
|
|
||||||
| Fragment | Description | Key Topics |
|
|
||||||
|----------|-------------|-----------|
|
|
||||||
| [fixture-architecture](../../../src/modules/bmm/testarch/knowledge/fixture-architecture.md) | Pure function → Fixture → mergeTests composition with auto-cleanup | Testability, composition, reusability |
|
|
||||||
| [network-first](../../../src/modules/bmm/testarch/knowledge/network-first.md) | Intercept-before-navigate workflow, HAR capture, deterministic waits | Flakiness prevention, network patterns |
|
|
||||||
| [playwright-config](../../../src/modules/bmm/testarch/knowledge/playwright-config.md) | Environment switching, timeout standards, artifact outputs | Configuration, environments, CI |
|
|
||||||
| [fixtures-composition](../../../src/modules/bmm/testarch/knowledge/fixtures-composition.md) | mergeTests composition patterns for combining utilities | Fixture merging, utility composition |
|
|
||||||
|
|
||||||
**Used in:** `*framework`, `*test-design`, `*atdd`, `*automate`, `*test-review`
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Data & Setup
|
|
||||||
|
|
||||||
Patterns for test data generation, authentication, and setup.
|
|
||||||
|
|
||||||
| Fragment | Description | Key Topics |
|
|
||||||
|----------|-------------|-----------|
|
|
||||||
| [data-factories](../../../src/modules/bmm/testarch/knowledge/data-factories.md) | Factory patterns with faker, overrides, API seeding, cleanup | Test data, factories, cleanup |
|
|
||||||
| [email-auth](../../../src/modules/bmm/testarch/knowledge/email-auth.md) | Magic link extraction, state preservation, negative flows | Authentication, email testing |
|
|
||||||
| [auth-session](../../../src/modules/bmm/testarch/knowledge/auth-session.md) | Token persistence, multi-user, API/browser authentication | Auth patterns, session management |
|
|
||||||
|
|
||||||
**Used in:** `*framework`, `*atdd`, `*automate`, `*test-review`
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Network & Reliability
|
|
||||||
|
|
||||||
Network interception, error handling, and reliability patterns.
|
|
||||||
|
|
||||||
| Fragment | Description | Key Topics |
|
|
||||||
|----------|-------------|-----------|
|
|
||||||
| [network-recorder](../../../src/modules/bmm/testarch/knowledge/network-recorder.md) | HAR record/playback, CRUD detection for offline testing | Offline testing, network replay |
|
|
||||||
| [intercept-network-call](../../../src/modules/bmm/testarch/knowledge/intercept-network-call.md) | Network spy/stub, JSON parsing for UI tests | Mocking, interception, stubbing |
|
|
||||||
| [error-handling](../../../src/modules/bmm/testarch/knowledge/error-handling.md) | Scoped exception handling, retry validation, telemetry logging | Error patterns, resilience |
|
|
||||||
| [network-error-monitor](../../../src/modules/bmm/testarch/knowledge/network-error-monitor.md) | HTTP 4xx/5xx detection for UI tests | Error detection, monitoring |
|
|
||||||
|
|
||||||
**Used in:** `*atdd`, `*automate`, `*test-review`
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Test Execution & CI
|
|
||||||
|
|
||||||
CI/CD patterns, burn-in testing, and selective test execution.
|
|
||||||
|
|
||||||
| Fragment | Description | Key Topics |
|
|
||||||
|----------|-------------|-----------|
|
|
||||||
| [ci-burn-in](../../../src/modules/bmm/testarch/knowledge/ci-burn-in.md) | Staged jobs, shard orchestration, burn-in loops | CI/CD, flakiness detection |
|
|
||||||
| [burn-in](../../../src/modules/bmm/testarch/knowledge/burn-in.md) | Smart test selection, git diff for CI optimization | Test selection, performance |
|
|
||||||
| [selective-testing](../../../src/modules/bmm/testarch/knowledge/selective-testing.md) | Tag/grep usage, spec filters, diff-based runs | Test filtering, optimization |
|
|
||||||
|
|
||||||
**Used in:** `*ci`, `*test-review`
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Quality & Standards
|
|
||||||
|
|
||||||
Test quality standards, test level selection, and TDD patterns.
|
|
||||||
|
|
||||||
| Fragment | Description | Key Topics |
|
|
||||||
|----------|-------------|-----------|
|
|
||||||
| [test-quality](../../../src/modules/bmm/testarch/knowledge/test-quality.md) | Execution limits, isolation rules, green criteria | DoD, best practices, anti-patterns |
|
|
||||||
| [test-levels-framework](../../../src/modules/bmm/testarch/knowledge/test-levels-framework.md) | Guidelines for unit, integration, E2E selection | Test pyramid, level selection |
|
|
||||||
| [test-priorities-matrix](../../../src/modules/bmm/testarch/knowledge/test-priorities-matrix.md) | P0-P3 criteria, coverage targets, execution ordering | Prioritization, risk-based testing |
|
|
||||||
| [test-healing-patterns](../../../src/modules/bmm/testarch/knowledge/test-healing-patterns.md) | Common failure patterns and automated fixes | Debugging, healing, fixes |
|
|
||||||
| [component-tdd](../../../src/modules/bmm/testarch/knowledge/component-tdd.md) | Red→green→refactor workflow, provider isolation | TDD, component testing |
|
|
||||||
|
|
||||||
**Used in:** `*test-design`, `*atdd`, `*automate`, `*test-review`, `*trace`
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Risk & Gates
|
|
||||||
|
|
||||||
Risk assessment, governance, and gate decision frameworks.
|
|
||||||
|
|
||||||
| Fragment | Description | Key Topics |
|
|
||||||
|----------|-------------|-----------|
|
|
||||||
| [risk-governance](../../../src/modules/bmm/testarch/knowledge/risk-governance.md) | Scoring matrix, category ownership, gate decision rules | Risk assessment, governance |
|
|
||||||
| [probability-impact](../../../src/modules/bmm/testarch/knowledge/probability-impact.md) | Probability × impact scale for scoring matrix | Risk scoring, impact analysis |
|
|
||||||
| [nfr-criteria](../../../src/modules/bmm/testarch/knowledge/nfr-criteria.md) | Security, performance, reliability, maintainability status | NFRs, compliance, enterprise |
|
|
||||||
|
|
||||||
**Used in:** `*test-design`, `*nfr-assess`, `*trace`
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Selectors & Timing

Selector resilience, race condition debugging, and visual debugging.

| Fragment | Description | Key Topics |
|----------|-------------|-----------|
| [selector-resilience](../../../src/modules/bmm/testarch/knowledge/selector-resilience.md) | Robust selector strategies and debugging | Selectors, locators, resilience |
| [timing-debugging](../../../src/modules/bmm/testarch/knowledge/timing-debugging.md) | Race condition identification and deterministic fixes | Race conditions, timing issues |
| [visual-debugging](../../../src/modules/bmm/testarch/knowledge/visual-debugging.md) | Trace viewer usage, artifact expectations | Debugging, trace viewer, artifacts |

**Used in:** `*atdd`, `*automate`, `*test-review`

---
### Feature Flags & Testing Patterns

Feature flag testing, contract testing, and API testing patterns.

| Fragment | Description | Key Topics |
|----------|-------------|-----------|
| [feature-flags](../../../src/modules/bmm/testarch/knowledge/feature-flags.md) | Enum management, targeting helpers, cleanup, checklists | Feature flags, toggles |
| [contract-testing](../../../src/modules/bmm/testarch/knowledge/contract-testing.md) | Pact publishing, provider verification, resilience | Contract testing, Pact |
| [api-testing-patterns](../../../src/modules/bmm/testarch/knowledge/api-testing-patterns.md) | Pure API patterns without browser | API testing, backend testing |

**Used in:** `*test-design`, `*atdd`, `*automate`

---
### Playwright-Utils Integration

Patterns for using the `@seontechnologies/playwright-utils` package (9 utilities).

| Fragment | Description | Key Topics |
|----------|-------------|-----------|
| [api-request](../../../src/modules/bmm/testarch/knowledge/api-request.md) | Typed HTTP client, schema validation, retry logic | API calls, HTTP, validation |
| [auth-session](../../../src/modules/bmm/testarch/knowledge/auth-session.md) | Token persistence, multi-user, API/browser authentication | Auth patterns, session management |
| [network-recorder](../../../src/modules/bmm/testarch/knowledge/network-recorder.md) | HAR record/playback, CRUD detection for offline testing | Offline testing, network replay |
| [intercept-network-call](../../../src/modules/bmm/testarch/knowledge/intercept-network-call.md) | Network spy/stub, JSON parsing for UI tests | Mocking, interception, stubbing |
| [recurse](../../../src/modules/bmm/testarch/knowledge/recurse.md) | Async polling for API responses, background jobs | Polling, eventual consistency |
| [log](../../../src/modules/bmm/testarch/knowledge/log.md) | Structured logging for API and UI tests | Logging, debugging, reporting |
| [file-utils](../../../src/modules/bmm/testarch/knowledge/file-utils.md) | CSV/XLSX/PDF/ZIP handling with download support | File validation, exports |
| [burn-in](../../../src/modules/bmm/testarch/knowledge/burn-in.md) | Smart test selection with git diff analysis | CI optimization, selective testing |
| [network-error-monitor](../../../src/modules/bmm/testarch/knowledge/network-error-monitor.md) | Auto-detect HTTP 4xx/5xx errors during tests | Error monitoring, silent failures |

**Note:** `fixtures-composition` is listed under Architecture & Fixtures (general Playwright `mergeTests` pattern, applies to all fixtures).

**Used in:** `*framework` (if `tea_use_playwright_utils: true`), `*atdd`, `*automate`, `*test-review`, `*ci`

**Official Docs:** <https://seontechnologies.github.io/playwright-utils/>

---
## Fragment Manifest (tea-index.csv)

**Location:** `src/modules/bmm/testarch/tea-index.csv`

**Purpose:** Tracks all knowledge fragments and their usage in workflows

**Structure:**

```csv
id,name,description,tags,fragment_file
test-quality,Test Quality,Execution limits and isolation rules,quality;standards,knowledge/test-quality.md
risk-governance,Risk Governance,Risk scoring and gate decisions,risk;governance,knowledge/risk-governance.md
```

**Columns:**

- `id` - Unique fragment identifier (kebab-case)
- `name` - Human-readable fragment name
- `description` - What the fragment covers
- `tags` - Searchable tags (semicolon-separated)
- `fragment_file` - Relative path to fragment markdown file

**Fragment Location:** `src/modules/bmm/testarch/knowledge/` (all 33 fragments in a single directory)

---
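Because the manifest is plain CSV, a workflow can select fragments by tag with a few lines of code. The helper below is an illustrative sketch, not TEA's actual loader (which is not part of this reference); the naive `split(",")` also assumes descriptions contain no embedded commas.

```typescript
// Hypothetical tea-index.csv loader; TEA's real loading logic is not shown here.
interface Fragment {
  id: string;
  name: string;
  description: string;
  tags: string[];
  fragmentFile: string;
}

function parseTeaIndex(csv: string): Fragment[] {
  const [, ...rows] = csv.trim().split("\n"); // skip the header row
  return rows.map((row) => {
    // Naive split: assumes no commas inside fields.
    const [id, name, description, tags, fragmentFile] = row.split(",");
    return { id, name, description, tags: tags.split(";"), fragmentFile };
  });
}

function fragmentsByTag(fragments: Fragment[], tag: string): Fragment[] {
  return fragments.filter((f) => f.tags.includes(tag));
}

const index = `id,name,description,tags,fragment_file
test-quality,Test Quality,Execution limits and isolation rules,quality;standards,knowledge/test-quality.md
risk-governance,Risk Governance,Risk scoring and gate decisions,risk;governance,knowledge/risk-governance.md`;

console.log(fragmentsByTag(parseTeaIndex(index), "risk").map((f) => f.id));
```

Tag-based filtering like this is why the `tags` column uses a semicolon separator: it keeps each row a single CSV field while still allowing multiple search terms per fragment.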
## Workflow Fragment Loading

Each TEA workflow loads specific fragments:
### *framework

**Key Fragments:**

- fixture-architecture.md
- playwright-config.md
- fixtures-composition.md

**Purpose:** Test infrastructure patterns and fixture composition

**Note:** Loads additional fragments based on framework choice (Playwright/Cypress) and config (`tea_use_playwright_utils`).

---
### *test-design

**Key Fragments:**

- test-quality.md
- test-priorities-matrix.md
- test-levels-framework.md
- risk-governance.md
- probability-impact.md

**Purpose:** Risk assessment and test planning standards

**Note:** Loads additional fragments based on mode (system-level vs epic-level) and focus areas.

---
### *atdd

**Key Fragments:**

- test-quality.md
- component-tdd.md
- fixture-architecture.md
- network-first.md
- data-factories.md
- selector-resilience.md
- timing-debugging.md
- test-healing-patterns.md

**Purpose:** TDD patterns and test generation standards

**Note:** Loads auth, network, and utility fragments based on feature requirements.

---
### *automate

**Key Fragments:**

- test-quality.md
- test-levels-framework.md
- test-priorities-matrix.md
- fixture-architecture.md
- network-first.md
- selector-resilience.md
- test-healing-patterns.md
- timing-debugging.md

**Purpose:** Comprehensive test generation with quality standards

**Note:** Loads additional fragments for data factories, auth, and network utilities based on test needs.

---
### *test-review

**Key Fragments:**

- test-quality.md
- test-healing-patterns.md
- selector-resilience.md
- timing-debugging.md
- visual-debugging.md
- network-first.md
- test-levels-framework.md
- fixture-architecture.md

**Purpose:** Comprehensive quality review against all standards

**Note:** Loads all applicable playwright-utils fragments when `tea_use_playwright_utils: true`.

---
### *ci

**Key Fragments:**

- ci-burn-in.md
- burn-in.md
- selective-testing.md
- playwright-config.md

**Purpose:** CI/CD best practices and optimization

---
### *nfr-assess

**Key Fragments:**

- nfr-criteria.md
- risk-governance.md
- probability-impact.md

**Purpose:** NFR assessment frameworks and decision rules

---
### *trace

**Key Fragments:**

- test-priorities-matrix.md
- risk-governance.md
- test-quality.md

**Purpose:** Traceability and gate decision standards

**Note:** Loads nfr-criteria.md if NFR assessment is part of the gate decision.

---
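The per-workflow fragment lists above amount to a lookup table from workflow name to fragment ids, which the manifest then resolves to files. A minimal sketch under stated assumptions (the mapping shape and the path-resolution helper below are hypothetical; the real selection engine also loads extra fragments based on mode and config):

```typescript
// Hypothetical workflow → key-fragment-id mapping; illustrative only.
const workflowFragments: Record<string, string[]> = {
  "*ci": ["ci-burn-in", "burn-in", "selective-testing", "playwright-config"],
  "*nfr-assess": ["nfr-criteria", "risk-governance", "probability-impact"],
  "*trace": ["test-priorities-matrix", "risk-governance", "test-quality"],
};

// Resolve a workflow's fragment ids to paths in the knowledge directory.
function fragmentPaths(workflow: string): string[] {
  const ids = workflowFragments[workflow] ?? [];
  return ids.map((id) => `src/modules/bmm/testarch/knowledge/${id}.md`);
}

console.log(fragmentPaths("*trace"));
```

Keeping the mapping data-driven like this is what lets notes such as "loads nfr-criteria.md if NFR assessment is part of the gate decision" stay conditional: extra ids can be appended at runtime without changing the resolution logic.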
## Related

- [TEA Overview](/docs/explanation/features/tea-overview.md) - How the knowledge base fits in TEA
- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - Context engineering philosophy
- [TEA Command Reference](/docs/reference/tea/commands.md) - Workflows that use fragments

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)
@@ -1,463 +0,0 @@
---
title: "Getting Started with TEA (Test Architect) - TEA Lite"
description: Learn TEA fundamentals by generating and running tests for an existing demo app in 30 minutes
---

# Getting Started with TEA (Test Architect) - TEA Lite

Welcome! **TEA Lite** is the simplest way to get started with TEA - just use `*automate` to generate tests for existing features. Perfect for beginners who want to learn TEA fundamentals quickly.

## What You'll Build

By the end of this 30-minute tutorial, you'll have:

- A working Playwright test framework
- Your first risk-based test plan
- Passing tests for an existing demo app feature

## Prerequisites

- Node.js installed (v18 or later)
- 30 minutes of focused time
- We'll use TodoMVC (<https://todomvc.com/examples/react/>) as our demo app

## TEA Approaches Explained

Before we start, understand the three ways to use TEA:

- **TEA Lite** (this tutorial): Beginner using just `*automate` to test existing features
- **TEA Solo**: Using TEA standalone without full BMad Method integration
- **TEA Integrated**: Full BMad Method with all TEA workflows across phases

This tutorial focuses on **TEA Lite** - the fastest way to see TEA in action.

---
## Step 0: Setup (2 minutes)

We'll test TodoMVC, a standard demo app used across testing documentation.

**Demo App:** <https://todomvc.com/examples/react/>

No installation needed - TodoMVC runs in your browser. Open the link above and:

1. Add a few todos (type and press Enter)
2. Mark some as complete (click the checkbox)
3. Try the "All", "Active", "Completed" filters

You've just explored the features we'll test!

---
## Step 1: Install BMad and Scaffold Framework (10 minutes)

### Install BMad Method

Install BMad (see the installation guide for the latest command).

When prompted:

- **Select modules:** Choose "BMM: BMad Method" (press Space, then Enter)
- **Project name:** Keep default or enter your project name
- **Experience level:** Choose "beginner" for this tutorial
- **Planning artifacts folder:** Keep default
- **Implementation artifacts folder:** Keep default
- **Project knowledge folder:** Keep default
- **Enable TEA Playwright MCP enhancements?** Choose "No" for now (we'll explore this later)
- **Using playwright-utils?** Choose "No" for now (we'll explore this later)

BMad is now installed! You'll see a `_bmad/` folder in your project.

### Load TEA Agent

Start a new chat with your AI assistant (Claude, etc.) and type:

```
*tea
```

This loads the Test Architect agent. You'll see TEA's menu with available workflows.

### Scaffold Test Framework

In your chat, run:

```
*framework
```

TEA will ask you questions:

**Q: What's your tech stack?**
A: "We're testing a React web application (TodoMVC)"

**Q: Which test framework?**
A: "Playwright"

**Q: Testing scope?**
A: "E2E testing for web application"

**Q: CI/CD platform?**
A: "GitHub Actions" (or your preference)

TEA will generate:

- `tests/` directory with Playwright config
- `playwright.config.ts` with base configuration
- Sample test structure
- `.env.example` for environment variables
- `.nvmrc` for Node version
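The exact contents of the generated `playwright.config.ts` depend on your answers; as a rough sketch, it might look something like the following (every option here is an illustrative assumption, not TEA's literal output):

```typescript
import { defineConfig } from '@playwright/test';

// Illustrative baseline only; TEA tailors the real config to your answers.
export default defineConfig({
  testDir: './tests',
  timeout: 30_000, // per-test timeout
  retries: process.env.CI ? 2 : 0, // retry flaky tests in CI only
  use: {
    baseURL: 'https://todomvc.com/examples/react/',
    trace: 'on-first-retry', // keep traces when a retry is triggered
  },
});
```

Setting `baseURL` here is what later lets tests call `page.goto('/')` instead of repeating the full URL.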
**Verify the setup:**

```bash
npm install
npx playwright install
```

You now have a production-ready test framework!

---
## Step 2: Your First Test Design (5 minutes)

Test design is where TEA shines - risk-based planning before writing tests.

### Run Test Design

In your chat with TEA, run:

```
*test-design
```

**Q: System-level or epic-level?**
A: "Epic-level - I want to test TodoMVC's basic functionality"

**Q: What feature are you testing?**
A: "TodoMVC's core CRUD operations - creating, completing, and deleting todos"

**Q: Any specific risks or concerns?**
A: "We want to ensure the filter buttons (All, Active, Completed) work correctly"

TEA will analyze and create `test-design-epic-1.md` with:

1. **Risk Assessment**
   - Probability × Impact scoring
   - Risk categories (TECH, SEC, PERF, DATA, BUS, OPS)
   - High-risk areas identified

2. **Test Priorities**
   - P0: Critical path (creating and displaying todos)
   - P1: High value (completing todos, filters)
   - P2: Medium value (deleting todos)
   - P3: Low value (edge cases)

3. **Coverage Strategy**
   - E2E tests for user workflows
   - Which scenarios need testing
   - Suggested test structure
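The probability × impact scoring behind those priorities can be sketched in a few lines. The 1-3 scales and the score-to-priority thresholds below are illustrative assumptions; TEA's actual matrix lives in the probability-impact knowledge fragment:

```typescript
// Illustrative risk scoring; scales and thresholds are assumptions,
// not TEA's exact matrix.
type Risk = { probability: 1 | 2 | 3; impact: 1 | 2 | 3 };

function priority(risk: Risk): 'P0' | 'P1' | 'P2' | 'P3' {
  const score = risk.probability * risk.impact; // ranges from 1 to 9
  if (score >= 6) return 'P0'; // critical path
  if (score >= 4) return 'P1'; // high value
  if (score >= 2) return 'P2'; // medium value
  return 'P3'; // edge cases
}

// A likely failure with severe impact lands in P0.
console.log(priority({ probability: 3, impact: 3 }));
```

The point of multiplying rather than adding is that depth of testing scales with both how likely a failure is and how much it would hurt: a rare but catastrophic risk and a common but minor one can land in the same band.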
**Review the test design file** - notice how TEA provides a systematic approach to what needs testing and why.

---
## Step 3: Generate Tests for Existing Features (5 minutes)

Now the magic happens - TEA generates tests based on your test design.

### Run Automate

In your chat with TEA, run:

```
*automate
```

**Q: What are you testing?**
A: "TodoMVC React app at <https://todomvc.com/examples/react/> - focus on the test design we just created"

**Q: Reference existing docs?**
A: "Yes, use test-design-epic-1.md"

**Q: Any specific test scenarios?**
A: "Cover the P0 and P1 scenarios from the test design"

TEA will generate:

**`tests/e2e/todomvc.spec.ts`** with tests like:

```typescript
import { test, expect } from '@playwright/test';

test.describe('TodoMVC - Core Functionality', () => {
  test.beforeEach(async ({ page }) => {
    await page.goto('https://todomvc.com/examples/react/');
  });

  test('should create a new todo', async ({ page }) => {
    // TodoMVC uses a simple input without placeholder or test IDs
    const todoInput = page.locator('.new-todo');
    await todoInput.fill('Buy groceries');
    await todoInput.press('Enter');

    // Verify todo appears in list
    await expect(page.locator('.todo-list li')).toContainText('Buy groceries');
  });

  test('should mark todo as complete', async ({ page }) => {
    // Create a todo
    const todoInput = page.locator('.new-todo');
    await todoInput.fill('Complete tutorial');
    await todoInput.press('Enter');

    // Mark as complete using the toggle checkbox
    await page.locator('.todo-list li .toggle').click();

    // Verify completed state
    await expect(page.locator('.todo-list li')).toHaveClass(/completed/);
  });

  test('should filter todos by status', async ({ page }) => {
    // Create multiple todos
    const todoInput = page.locator('.new-todo');
    await todoInput.fill('Buy groceries');
    await todoInput.press('Enter');
    await todoInput.fill('Write tests');
    await todoInput.press('Enter');

    // Complete the first todo ("Buy groceries")
    await page.locator('.todo-list li .toggle').first().click();

    // Test Active filter (shows only incomplete todos)
    await page.locator('.filters a[href="#/active"]').click();
    await expect(page.locator('.todo-list li')).toHaveCount(1);
    await expect(page.locator('.todo-list li')).toContainText('Write tests');

    // Test Completed filter (shows only completed todos)
    await page.locator('.filters a[href="#/completed"]').click();
    await expect(page.locator('.todo-list li')).toHaveCount(1);
    await expect(page.locator('.todo-list li')).toContainText('Buy groceries');
  });
});
```
TEA also creates:

- **`tests/README.md`** - How to run tests, project conventions
- **Definition of Done summary** - What makes a test "good"

### With Playwright Utils (Optional Enhancement)

If you have `tea_use_playwright_utils: true` in your config, TEA generates tests using production-ready utilities:

**Vanilla Playwright:**

```typescript
test('should mark todo as complete', async ({ page, request }) => {
  // Manual API call
  const response = await request.post('/api/todos', {
    data: { title: 'Complete tutorial' }
  });
  const todo = await response.json();

  await page.goto('/');
  await page.locator(`.todo-list li:has-text("${todo.title}") .toggle`).click();
  await expect(page.locator('.todo-list li')).toHaveClass(/completed/);
});
```

**With Playwright Utils:**

```typescript
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { expect } from '@playwright/test';

test('should mark todo as complete', async ({ page, apiRequest }) => {
  // Typed API call with cleaner syntax
  const { status, body: todo } = await apiRequest({
    method: 'POST',
    path: '/api/todos',
    body: { title: 'Complete tutorial' }
  });

  expect(status).toBe(201);
  await page.goto('/');
  await page.locator(`.todo-list li:has-text("${todo.title}") .toggle`).click();
  await expect(page.locator('.todo-list li')).toHaveClass(/completed/);
});
```

**Benefits:**

- Type-safe API responses (`{ status, body }`)
- Automatic retry for 5xx errors
- Built-in schema validation
- Cleaner, more maintainable code

See [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md) to enable this.

---
## Step 4: Run and Validate (5 minutes)

Time to see your tests in action!

### Run the Tests

```bash
npx playwright test
```

You should see:

```
Running 3 tests using 1 worker

✓ tests/e2e/todomvc.spec.ts:7:3 › should create a new todo (2s)
✓ tests/e2e/todomvc.spec.ts:15:3 › should mark todo as complete (2s)
✓ tests/e2e/todomvc.spec.ts:30:3 › should filter todos by status (3s)

3 passed (7s)
```

All green! Your tests are passing against the existing TodoMVC app.

### View Test Report

```bash
npx playwright show-report
```

This opens an HTML report showing:

- Test execution timeline
- Screenshots (if any failures)
- Trace viewer for debugging

### What Just Happened?

You used **TEA Lite** to:

1. Scaffold a production-ready test framework (`*framework`)
2. Create a risk-based test plan (`*test-design`)
3. Generate comprehensive tests (`*automate`)
4. Run tests against an existing application

All in 30 minutes!

---
## What You Learned

Congratulations! You've completed the TEA Lite tutorial. You learned:

### TEA Workflows

- `*framework` - Scaffold test infrastructure
- `*test-design` - Risk-based test planning
- `*automate` - Generate tests for existing features

### TEA Principles

- **Risk-based testing** - Depth scales with impact (P0 vs P3)
- **Test design first** - Plan before generating
- **Network-first patterns** - Tests wait for actual responses (no hard waits)
- **Production-ready from day one** - Not toy examples

### Key Takeaway

TEA Lite (just `*automate`) is perfect for:

- Beginners learning TEA fundamentals
- Testing existing applications
- Quick test coverage expansion
- Teams wanting fast results

---
## Understanding ATDD vs Automate

This tutorial used `*automate` to generate tests for **existing features** (tests pass immediately).

**When to use `*automate`:**

- Feature already exists
- Want to add test coverage
- Tests should pass on first run

**When to use `*atdd`:**

- Feature doesn't exist yet (TDD workflow)
- Want failing tests BEFORE implementation
- Following the red → green → refactor cycle

See [How to Run ATDD](/docs/how-to/workflows/run-atdd.md) for the TDD approach.

---
## Next Steps

### Level Up Your TEA Skills

**How-To Guides** (task-oriented):

- [How to Run Test Design](/docs/how-to/workflows/run-test-design.md) - Deep dive into risk assessment
- [How to Run ATDD](/docs/how-to/workflows/run-atdd.md) - Generate failing tests first (TDD)
- [How to Set Up CI Pipeline](/docs/how-to/workflows/setup-ci.md) - Automate test execution
- [How to Review Test Quality](/docs/how-to/workflows/run-test-review.md) - Audit test quality

**Explanation** (understanding-oriented):

- [TEA Overview](/docs/explanation/features/tea-overview.md) - Complete TEA capabilities
- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - **Why TEA exists** (problem + solution)
- [Risk-Based Testing](/docs/explanation/tea/risk-based-testing.md) - How risk scoring works

**Reference** (quick lookup):

- [TEA Command Reference](/docs/reference/tea/commands.md) - All 8 TEA workflows
- [TEA Configuration](/docs/reference/tea/configuration.md) - Config options
- [Glossary](/docs/reference/glossary/index.md) - TEA terminology

### Try TEA Solo

Ready for standalone usage without the full BMad Method? Use TEA Solo:

- Run any TEA workflow independently
- Bring your own requirements
- Use on non-BMad projects

See [TEA Overview](/docs/explanation/features/tea-overview.md) for engagement models.

### Go Full TEA Integrated

Want the complete quality operating model? Try TEA Integrated with BMad Method:

- Phase 2: Planning with NFR assessment
- Phase 3: Architecture testability review
- Phase 4: Per-epic test design → ATDD → automate
- Release Gate: Coverage traceability and gate decisions

See [BMad Method Documentation](/) for the full workflow.

---
## Troubleshooting

### Tests Failing?

**Problem:** Tests can't find elements

**Solution:** TodoMVC doesn't use test IDs or accessible roles consistently. The selectors in this tutorial use CSS classes that match TodoMVC's actual structure:

```typescript
// TodoMVC uses these CSS classes:
page.locator('.new-todo')      // Input field
page.locator('.todo-list li')  // Todo items
page.locator('.toggle')        // Checkbox

// If testing your own app, prefer accessible selectors:
page.getByRole('textbox')
page.getByRole('listitem')
page.getByRole('checkbox')
```

**Note:** In production code, use accessible selectors (`getByRole`, `getByLabel`, `getByText`) for better resilience. TodoMVC is used here for learning, not as a selector best-practice example.

**Problem:** Network timeout

**Solution:** Increase the test timeout in `playwright.config.ts`. Note that the per-test `timeout` is a top-level option, not part of the `use` block:

```typescript
export default defineConfig({
  timeout: 30000, // 30 seconds per test
  // ...
});
```

### Need Help?

- **Documentation:** <https://docs.bmad-method.org>
- **GitHub Issues:** <https://github.com/bmad-code-org/bmad-method/issues>
- **Discord:** Join the BMAD community

---

## Feedback

Found this tutorial helpful? Have suggestions? Open an issue on GitHub!

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)
@@ -9,7 +9,6 @@
   "version": "6.0.0-alpha.23",
   "license": "MIT",
   "dependencies": {
-    "@clack/prompts": "^0.11.0",
     "@kayvan/markdown-tree-parser": "^1.6.1",
     "boxen": "^5.1.2",
     "chalk": "^4.1.2",
@@ -34,6 +33,7 @@
   "devDependencies": {
     "@astrojs/sitemap": "^3.6.0",
     "@astrojs/starlight": "^0.37.0",
+    "@clack/prompts": "^0.11.0",
     "@eslint/js": "^9.33.0",
     "archiver": "^7.0.1",
     "astro": "^5.16.0",
@@ -759,6 +759,7 @@
   "version": "0.5.0",
   "resolved": "https://registry.npmjs.org/@clack/core/-/core-0.5.0.tgz",
   "integrity": "sha512-p3y0FIOwaYRUPRcMO7+dlmLh8PSRcrjuTndsiA0WAFbWES0mLZlrjVoBRZ9DzkPFJZG6KGkJmoEAY0ZcVWTkow==",
+  "dev": true,
   "license": "MIT",
   "dependencies": {
     "picocolors": "^1.0.0",
@@ -769,6 +770,7 @@
   "version": "0.11.0",
   "resolved": "https://registry.npmjs.org/@clack/prompts/-/prompts-0.11.0.tgz",
   "integrity": "sha512-pMN5FcrEw9hUkZA4f+zLlzivQSeQf5dRGJjSUbvVYDLvpKCdQx5OaknvKzgbtXOizhP+SJJJjqEbOe55uKKfAw==",
+  "dev": true,
   "license": "MIT",
   "dependencies": {
     "@clack/core": "0.5.0",
@@ -12149,6 +12151,7 @@
   "version": "1.1.1",
   "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.1.1.tgz",
   "integrity": "sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA==",
+  "dev": true,
   "license": "ISC"
 },
 "node_modules/picomatch": {
@@ -13395,6 +13398,7 @@
   "version": "1.0.5",
   "resolved": "https://registry.npmjs.org/sisteransi/-/sisteransi-1.0.5.tgz",
   "integrity": "sha512-bLGGlR1QxBcynn2d5YmDX4MGjlZvy2MRBDRNHLJ8VI6l6+9FUiyTFNJ0IveOSP0bcXgVDPRcfGqA0pjaqUpfVg==",
+  "dev": true,
   "license": "MIT"
 },
 "node_modules/sitemap": {
@@ -34,7 +34,6 @@
   "flatten": "node tools/flattener/main.js",
   "format:check": "prettier --check \"**/*.{js,cjs,mjs,json,yaml}\"",
   "format:fix": "prettier --write \"**/*.{js,cjs,mjs,json,yaml}\"",
-  "format:fix:staged": "prettier --write",
   "install:bmad": "node tools/cli/bmad-cli.js install",
   "lint": "eslint . --ext .js,.cjs,.mjs,.yaml --max-warnings=0",
   "lint:fix": "eslint . --ext .js,.cjs,.mjs,.yaml --fix",
@@ -54,14 +53,14 @@
   "lint-staged": {
     "*.{js,cjs,mjs}": [
       "npm run lint:fix",
-      "npm run format:fix:staged"
+      "npm run format:fix"
     ],
     "*.yaml": [
       "eslint --fix",
-      "npm run format:fix:staged"
+      "npm run format:fix"
     ],
     "*.json": [
-      "npm run format:fix:staged"
+      "npm run format:fix"
     ],
     "*.md": [
       "markdownlint-cli2"
@@ -18,6 +18,7 @@ agent:

 critical_actions:
   - "Load into memory {project-root}/_bmad/core/config.yaml and set variable project_name, output_folder, user_name, communication_language"
+  - "Remember the users name is {user_name}"
   - "ALWAYS communicate in {communication_language}"

 menu:
```diff
@@ -130,6 +130,7 @@ After agent loading and introduction:
 
 - Handle missing or incomplete agent entries gracefully
 - Cross-reference manifest with actual agent files
 - Prepare agent selection logic for intelligent conversation routing
+- Set up TTS voice configurations for each agent
 
 ## NEXT STEP:
```
```diff
@@ -6,6 +6,7 @@
 - 🎯 SELECT RELEVANT AGENTS based on topic analysis and expertise matching
 - 📋 MAINTAIN CHARACTER CONSISTENCY using merged agent personalities
 - 🔍 ENABLE NATURAL CROSS-TALK between agents for dynamic conversation
+- 💬 INTEGRATE TTS for each agent response immediately after text
 - ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`
 
 ## EXECUTION PROTOCOLS:
```
```diff
@@ -20,6 +21,7 @@
 
 - Complete agent roster with merged personalities is available
 - User topic and conversation history guide agent selection
+- Party mode is active with TTS integration enabled
 - Exit triggers: `*exit`, `goodbye`, `end party`, `quit`
 
 ## YOUR TASK:
```
```diff
@@ -114,9 +116,19 @@ Allow natural back-and-forth within the same response round for dynamic interaction.
 
 ### 6. Response Round Completion
 
-After generating all agent responses for the round, let the user know he can speak naturally with the agents, an then show this menu opion"
+After generating all agent responses for the round:
 
-`[E] Exit Party Mode - End the collaborative session`
+**Presentation Format:**
+
+[Agent 1 Response with TTS]
+[Empty line for readability]
+[Agent 2 Response with TTS, potentially referencing Agent 1]
+[Empty line for readability]
+[Agent 3 Response with TTS, building on or offering new perspective]
+
+**Continue Option:**
+"[Agents have contributed their perspectives. Ready for more discussion?]
+
+[E] Exit Party Mode - End the collaborative session"
 
 ### 7. Exit Condition Checking
```
```diff
@@ -130,19 +142,23 @@ Check for exit conditions before continuing:
 
 **Natural Conclusion:**
 
 - Conversation seems naturally concluding
-- Confirm if the user wants to exit party mode and go back to where they were or continue chatting. Do it in a conversational way with an agent in the party.
+- Ask user: "Would you like to continue the discussion or end party mode?"
+- Respect user choice to continue or exit
 
 ### 8. Handle Exit Selection
 
 #### If 'E' (Exit Party Mode):
 
-- Load read and execute: `./step-03-graceful-exit.md`
+- Update frontmatter: `stepsCompleted: [1, 2]`
+- Set `party_active: false`
+- Load: `./step-03-graceful-exit.md`
 
 ## SUCCESS METRICS:
 
 ✅ Intelligent agent selection based on topic analysis
 ✅ Authentic in-character responses maintained consistently
 ✅ Natural cross-talk and agent interactions enabled
+✅ TTS integration working for all agent responses
 ✅ Question handling protocol followed correctly
 ✅ [E] exit option presented after each response round
 ✅ Conversation context and state maintained throughout
```
```diff
@@ -152,6 +168,7 @@ Check for exit conditions before continuing:
 
 ❌ Generic responses without character consistency
 ❌ Poor agent selection not matching topic expertise
+❌ Missing TTS integration for agent responses
 ❌ Ignoring user questions or exit triggers
 ❌ Not enabling natural agent cross-talk and interactions
 ❌ Continuing conversation without user input when questions asked
```
```diff
@@ -106,6 +106,7 @@ workflow_completed: true
 
 - Clear any active conversation state
 - Reset agent selection cache
+- Finalize TTS session cleanup
 - Mark party mode workflow as completed
 
 ### 6. Exit Workflow
```
```diff
@@ -121,6 +122,7 @@ Thank you for using BMAD Party Mode for collaborative multi-agent discussions!"
 ✅ Satisfying agent farewells generated in authentic character voices
 ✅ Session highlights and contributions acknowledged meaningfully
 ✅ Positive and appreciative closure atmosphere maintained
+✅ TTS integration working for farewell messages
 ✅ Frontmatter properly updated with workflow completion
 ✅ All workflow state cleaned up appropriately
 ✅ User left with positive impression of collaborative experience
```
```diff
@@ -178,6 +178,18 @@ If conversation naturally concludes:
 
 ---
 
+## TTS INTEGRATION
+
+Party mode includes Text-to-Speech for each agent response:
+
+**TTS Protocol:**
+
+- Trigger TTS immediately after each agent's text response
+- Use agent's merged voice configuration from manifest
+- Format: `Bash: .claude/hooks/bmad-speak.sh "[Agent Name]" "[Their response]"`
+
+---
+
 ## MODERATION NOTES
 
 **Quality Control:**
```
```diff
@@ -33,7 +33,7 @@ agent:
   menu:
     - trigger: WS or fuzzy match on workflow-status
       workflow: "{project-root}/_bmad/bmm/workflows/workflow-status/workflow.yaml"
-      description: "[WS] Start here or resume - show workflow status and next best step"
+      description: "[WS] Get workflow status or initialize a workflow if not already done (optional)"
 
    - trigger: TF or fuzzy match on test-framework
      workflow: "{project-root}/_bmad/bmm/workflows/testarch/framework/workflow.yaml"
```
```diff
@@ -121,8 +121,6 @@ Parse these fields from YAML comments and metadata:
 - {{workflow_name}} ({{agent}}) - {{status}}
 {{/each}}
 {{/if}}
 
-**Tip:** For guardrail tests, run TEA `*automate` after `dev-story`. If you lose context, TEA workflows resume from artifacts in `{{output_folder}}`.
-
 </output>
 </step>
```
```diff
@@ -1,5 +1,6 @@
 <rules>
 <r>ALWAYS communicate in {communication_language} UNLESS contradicted by communication_style.</r>
+<!-- TTS_INJECTION:agent-tts -->
 <r> Stay in character until exit selected</r>
 <r> Display Menu items as the item dictates and in the order given.</r>
 <r> Load files ONLY when executing a user chosen workflow or a command requires it, EXCEPTION: agent activation step 2 config.yaml</r>
```
```diff
@@ -62,6 +62,40 @@ module.exports = {
 
     // Check if installation succeeded
     if (result && result.success) {
+      // Run AgentVibes installer if needed
+      if (result.needsAgentVibes) {
+        // Add some spacing before AgentVibes setup
+        console.log('');
+        console.log(chalk.magenta('🎙️ AgentVibes TTS Setup'));
+        console.log(chalk.cyan('AgentVibes provides voice synthesis for BMAD agents with:'));
+        console.log(chalk.dim('  • ElevenLabs AI (150+ premium voices)'));
+        console.log(chalk.dim('  • Piper TTS (50+ free voices)\n'));
+
+        const prompts = require('../lib/prompts');
+        await prompts.text({
+          message: chalk.green('Press Enter to start AgentVibes installer...'),
+        });
+
+        console.log('');
+
+        // Run AgentVibes installer
+        const { execSync } = require('node:child_process');
+        try {
+          execSync('npx agentvibes@latest install', {
+            cwd: result.projectDir,
+            stdio: 'inherit',
+            shell: true,
+          });
+          console.log(chalk.green('\n✓ AgentVibes installation complete'));
+          console.log(chalk.cyan('\n✨ BMAD with TTS is ready to use!'));
+        } catch {
+          console.log(chalk.yellow('\n⚠ AgentVibes installation was interrupted or failed'));
+          console.log(chalk.cyan('You can run it manually later with:'));
+          console.log(chalk.green(`  cd ${result.projectDir}`));
+          console.log(chalk.green('  npx agentvibes install\n'));
+        }
+      }
+
       // Display version-specific end message from install-messages.yaml
       const { MessageLoader } = require('../installers/lib/message-loader');
       const messageLoader = new MessageLoader();
```
```diff
@@ -34,6 +34,7 @@ class Installer {
     this.configCollector = new ConfigCollector();
     this.ideConfigManager = new IdeConfigManager();
     this.installedFiles = new Set(); // Track all installed files
+    this.ttsInjectedFiles = []; // Track files with TTS injection applied
     this.bmadFolderName = BMAD_FOLDER_NAME;
   }
```
```diff
@@ -68,7 +69,7 @@ class Installer {
   /**
    * @function copyFileWithPlaceholderReplacement
    * @intent Copy files from BMAD source to installation directory with dynamic content transformation
-   * @why Enables installation-time customization: _bmad replacement
+   * @why Enables installation-time customization: _bmad replacement + optional AgentVibes TTS injection
    * @param {string} sourcePath - Absolute path to source file in BMAD repository
    * @param {string} targetPath - Absolute path to destination file in user's project
    * @param {string} bmadFolderName - User's chosen bmad folder name (default: 'bmad')
@@ -76,9 +77,24 @@ class Installer {
    * @sideeffects Writes transformed file to targetPath, creates parent directories if needed
    * @edgecases Binary files bypass transformation, falls back to raw copy if UTF-8 read fails
    * @calledby installCore(), installModule(), IDE installers during file vendoring
-   * @calls fs.readFile(), fs.writeFile(), fs.copy()
+   * @calls processTTSInjectionPoints(), fs.readFile(), fs.writeFile(), fs.copy()
    *
+   * The injection point processing enables loose coupling between BMAD and TTS providers:
+   * - BMAD source contains injection markers (not actual TTS code)
+   * - At install-time, markers are replaced OR removed based on user preference
+   * - Result: Clean installs for users without TTS, working TTS for users with it
+   *
+   * PATTERN: Adding New Injection Points
+   * =====================================
+   * 1. Add HTML comment marker in BMAD source file:
+   *    <!-- TTS_INJECTION:feature-name -->
+   *
+   * 2. Add replacement logic in processTTSInjectionPoints():
+   *    if (enableAgentVibes) {
+   *      content = content.replace(/<!-- TTS_INJECTION:feature-name -->/g, 'actual code');
+   *    } else {
+   *      content = content.replace(/<!-- TTS_INJECTION:feature-name -->\n?/g, '');
+   *    }
    *
    * 3. Document marker in instructions.md (if applicable)
    */
```
```diff
@@ -93,6 +109,9 @@ class Installer {
     // Read the file content
     let content = await fs.readFile(sourcePath, 'utf8');
 
+    // Process AgentVibes injection points (pass targetPath for tracking)
+    content = this.processTTSInjectionPoints(content, targetPath);
+
     // Write to target with replaced content
     await fs.ensureDir(path.dirname(targetPath));
     await fs.writeFile(targetPath, content, 'utf8');
```
```diff
@@ -106,6 +125,116 @@ class Installer {
     }
   }
 
+  /**
+   * @function processTTSInjectionPoints
+   * @intent Transform TTS injection markers based on user's installation choice
+   * @why Enables optional TTS integration without tight coupling between BMAD and TTS providers
+   * @param {string} content - Raw file content containing potential injection markers
+   * @returns {string} Transformed content with markers replaced (if enabled) or stripped (if disabled)
+   * @sideeffects None - pure transformation function
+   * @edgecases Returns content unchanged if no markers present, safe to call on all files
+   * @calledby copyFileWithPlaceholderReplacement() during every file copy operation
+   * @calls String.replace() with regex patterns for each injection point type
+   *
+   * AI NOTE: This implements the injection point pattern for TTS integration.
+   * Key architectural decisions:
+   *
+   * 1. **Why Injection Points vs Direct Integration?**
+   *    - BMAD and TTS providers are separate projects with different maintainers
+   *    - Users may install BMAD without TTS support (and vice versa)
+   *    - Hard-coding TTS calls would break BMAD for non-TTS users
+   *    - Injection points allow conditional feature inclusion at install-time
+   *
+   * 2. **How It Works:**
+   *    - BMAD source contains markers: <!-- TTS_INJECTION:feature-name -->
+   *    - During installation, user is prompted: "Enable AgentVibes TTS?"
+   *    - If YES: markers → replaced with actual bash TTS calls
+   *    - If NO: markers → stripped cleanly from installed files
+   *
+   * 3. **State Management:**
+   *    - this.enableAgentVibes set in install() method from config.enableAgentVibes
+   *    - config.enableAgentVibes comes from ui.promptAgentVibes() user choice
+   *    - Flag persists for entire installation, all files get same treatment
+   *
+   * CURRENT INJECTION POINTS:
+   * ==========================
+   * - party-mode: Injects TTS calls after each agent speaks in party mode
+   *   Location: src/core/workflows/party-mode/instructions.md
+   *   Marker: <!-- TTS_INJECTION:party-mode -->
+   *   Replacement: Bash call to .claude/hooks/bmad-speak.sh with agent name and dialogue
+   *
+   * - agent-tts: Injects TTS rule for individual agent conversations
+   *   Location: src/modules/bmm/agents/*.md (all agent files)
+   *   Marker: <!-- TTS_INJECTION:agent-tts -->
+   *   Replacement: Rule instructing agent to call bmad-speak.sh with agent ID and response
+   *
+   * ADDING NEW INJECTION POINTS:
+   * =============================
+   * 1. Add new case in this function:
+   *    content = content.replace(
+   *      /<!-- TTS_INJECTION:new-feature -->/g,
+   *      `code to inject when enabled`
+   *    );
+   *
+   * 2. Add marker to BMAD source file at injection location
+   *
+   * 3. Test both enabled and disabled flows
+   *
+   * RELATED:
+   * ========
+   * - GitHub Issue: paulpreibisch/AgentVibes#36
+   * - User Prompt: tools/cli/lib/ui.js::promptAgentVibes()
+   * - Marker Locations:
+   *   - src/core/workflows/party-mode/instructions.md:101
+   *   - src/modules/bmm/agents/*.md (rules sections)
+   * - TTS Hook: .claude/hooks/bmad-speak.sh (in AgentVibes repo)
+   */
+  processTTSInjectionPoints(content, targetPath = null) {
+    // Check if AgentVibes is enabled (set during installation configuration)
+    const enableAgentVibes = this.enableAgentVibes || false;
+
+    // Check if content contains any TTS injection markers
+    const hasPartyMode = content.includes('<!-- TTS_INJECTION:party-mode -->');
+    const hasAgentTTS = content.includes('<!-- TTS_INJECTION:agent-tts -->');
+
+    if (enableAgentVibes) {
+      // Replace party-mode injection marker with actual TTS call
+      // Use single quotes to prevent shell expansion of special chars like !
+      content = content.replaceAll(
+        '<!-- TTS_INJECTION:party-mode -->',
+        `<critical>IMPORTANT: Always use PROJECT hooks (.claude/hooks/), NEVER global hooks (~/.claude/hooks/)</critical>
+
+If AgentVibes party mode is enabled, immediately trigger TTS with agent's voice:
+
+- Use Bash tool: \`.claude/hooks/bmad-speak.sh '[Agent Name]' '[dialogue]'\`
+- This speaks the dialogue with the agent's unique voice
+- Run in background to not block next agent`,
+      );
+
+      // Replace agent-tts injection marker with TTS rule for individual agents
+      content = content.replaceAll(
+        '<!-- TTS_INJECTION:agent-tts -->',
+        `- When responding to user messages, speak your responses using TTS:
+  Call: \`.claude/hooks/bmad-speak.sh '{agent-id}' '{response-text}'\` after each response
+  Replace {agent-id} with YOUR agent ID from <agent id="..."> tag at top of this file
+  Replace {response-text} with the text you just output to the user
+  IMPORTANT: Use single quotes as shown - do NOT escape special characters like ! or $ inside single quotes
+  Run in background (&) to avoid blocking`,
+      );
+
+      // Track files that had TTS injection applied
+      if (targetPath && (hasPartyMode || hasAgentTTS)) {
+        const injectionType = hasPartyMode ? 'party-mode' : 'agent-tts';
+        this.ttsInjectedFiles.push({ path: targetPath, type: injectionType });
+      }
+    } else {
+      // Strip injection markers cleanly when AgentVibes is disabled
+      content = content.replaceAll(/<!-- TTS_INJECTION:party-mode -->\n?/g, '');
+      content = content.replaceAll(/<!-- TTS_INJECTION:agent-tts -->\n?/g, '');
+    }
+
+    return content;
+  }
+
   /**
    * Collect Tool/IDE configurations after module configuration
    * @param {string} projectDir - Project directory
```
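Stripped of installer state, the marker handling added above reduces to a small pure function: replace the marker when the feature is enabled, strip it (including its trailing newline) when disabled, and pass files without markers through untouched. A simplified standalone sketch, with illustrative names rather than the shipped method:

```javascript
// Model of install-time injection-point processing for a single marker.
const MARKER = '<!-- TTS_INJECTION:agent-tts -->';

function applyInjection(content, enabled, replacement) {
  if (!content.includes(MARKER)) return content; // untouched files pass through

  if (enabled) {
    // Enabled: every marker occurrence becomes the injected content
    return content.replaceAll(MARKER, replacement);
  }
  // Disabled: remove the marker and its trailing newline so no blank line
  // is left behind in the installed file
  return content.replaceAll(MARKER + '\n', '').replaceAll(MARKER, '');
}
```

Both branches are worth testing, since a leftover marker (or a stray blank line) ends up verbatim in every installed agent file.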
```diff
@@ -122,7 +251,7 @@ class Installer {
       // Fallback: prompt for tool selection (backwards compatibility)
       const { UI } = require('../../../lib/ui');
       const ui = new UI();
-      toolConfig = await ui.promptToolSelection(projectDir);
+      toolConfig = await ui.promptToolSelection(projectDir, selectedModules);
     } else {
       // IDEs were already selected during initial prompts
       toolConfig = {
```
```diff
@@ -381,6 +510,9 @@ class Installer {
       }
     }
 
+    // Store AgentVibes configuration for injection point processing
+    this.enableAgentVibes = config.enableAgentVibes || false;
+
     // Set bmad folder name on module manager and IDE manager for placeholder replacement
     this.moduleManager.setBmadFolderName(BMAD_FOLDER_NAME);
     this.moduleManager.setCoreConfig(moduleConfigs.core || {});
```
```diff
@@ -1102,6 +1234,8 @@ class Installer {
         modules: config.modules,
         ides: config.ides,
         customFiles: customFiles.length > 0 ? customFiles : undefined,
+        ttsInjectedFiles: this.enableAgentVibes && this.ttsInjectedFiles.length > 0 ? this.ttsInjectedFiles : undefined,
+        agentVibesEnabled: this.enableAgentVibes || false,
       });
 
       return {
```
```diff
@@ -1109,6 +1243,7 @@ class Installer {
         path: bmadDir,
         modules: config.modules,
         ides: config.ides,
+        needsAgentVibes: this.enableAgentVibes && !config.agentVibesInstalled,
         projectDir: projectDir,
       };
     } catch (error) {
```
```diff
@@ -345,7 +345,7 @@ class AntigravitySetup extends BaseIdeSetup {
     };
 
     const selected = await prompts.multiselect({
-      message: `Select subagents to install ${chalk.dim('(↑/↓ navigates multiselect, SPACE toggles, A to toggles All, ENTER confirm)')}:`,
+      message: `Select subagents to install ${chalk.dim('(↑/↓ navigate, SPACE select, ENTER confirm)')}:`,
       choices: subagentConfig.files.map((file) => ({
         name: `${file.replace('.md', '')} - ${subagentInfo[file] || 'Specialized assistant'}`,
         value: file,
```
```diff
@@ -353,7 +353,7 @@ class ClaudeCodeSetup extends BaseIdeSetup {
     };
 
     const selected = await prompts.multiselect({
-      message: `Select subagents to install ${chalk.dim('(↑/↓ navigates multiselect, SPACE toggles, A to toggles All, ENTER confirm)')}:`,
+      message: `Select subagents to install ${chalk.dim('(↑/↓ navigate, SPACE select, ENTER confirm)')}:`,
       options: subagentConfig.files.map((file) => ({
         label: `${file.replace('.md', '')} - ${subagentInfo[file] || 'Specialized assistant'}`,
         value: file,
```
```diff
@@ -119,8 +119,7 @@ class KiloSetup extends BaseIdeSetup {
       modeEntry += `  name: '${icon} ${title}'\n`;
       modeEntry += `  roleDefinition: ${roleDefinition}\n`;
       modeEntry += `  whenToUse: ${whenToUse}\n`;
-      modeEntry += `  customInstructions: |\n`;
-      modeEntry += `    ${activationHeader} Read the full YAML from ${relativePath} start activation to alter your state of being follow startup section instructions stay in this being until told to exit this mode\n`;
+      modeEntry += `  customInstructions: ${activationHeader} Read the full YAML from ${relativePath} start activation to alter your state of being follow startup section instructions stay in this being until told to exit this mode\n`;
       modeEntry += `  groups:\n`;
       modeEntry += `    - read\n`;
      modeEntry += `    - edit\n`;
```
```diff
@@ -108,10 +108,7 @@ async function resolveSubagentFiles(handlerBaseDir, subagentConfig, subagentChoices) {
   const resolved = [];
 
   for (const file of filesToCopy) {
-    // Use forward slashes for glob pattern (works on both Windows and Unix)
-    // Convert backslashes to forward slashes for glob compatibility
-    const normalizedSourceDir = sourceDir.replaceAll('\\', '/');
-    const pattern = `${normalizedSourceDir}/**/${file}`;
+    const pattern = path.join(sourceDir, '**', file);
     const matches = await glob(pattern);
 
     if (matches.length > 0) {
```
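The removed lines above normalized separators because glob patterns treat `\` as an escape character, so a pattern built with Windows-style separators (as `path.join` produces on Windows) can silently match nothing. A sketch of the normalization approach, with a hypothetical helper name:

```javascript
// Build a cross-platform glob pattern from a directory and filename.
// glob interprets '\' as an escape, so Windows separators must become '/'.
function toGlobPattern(sourceDir, file) {
  const normalized = sourceDir.replaceAll('\\', '/');
  return `${normalized}/**/${file}`;
}
```

POSIX paths pass through unchanged, so the same helper is safe on every platform.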
```diff
@@ -845,8 +845,14 @@ class ModuleManager {
       // Compile with customizations if any
       const { xml } = await compileAgent(yamlContent, answers, agentName, relativePath, { config: this.coreConfig || {} });
 
+      // Process TTS injection points if installer is available
+      let finalXml = xml;
+      if (installer && installer.processTTSInjectionPoints) {
+        finalXml = installer.processTTSInjectionPoints(xml, targetMdPath);
+      }
+
       // Write the compiled agent
-      await fs.writeFile(targetMdPath, xml, 'utf8');
+      await fs.writeFile(targetMdPath, finalXml, 'utf8');
 
       // Handle sidecar copying if present
       if (hasSidecar) {
```
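The hunk above guards the transform behind an existence check so compilation still works when no installer (or an older installer without the hook) is passed in. The pattern in isolation, with illustrative names:

```javascript
// Apply an optional collaborator's transform only when the collaborator
// exists and exposes the expected method; otherwise pass content through.
function applyOptionalTransform(xml, installer, targetPath) {
  if (installer && typeof installer.processTTSInjectionPoints === 'function') {
    return installer.processTTSInjectionPoints(xml, targetPath);
  }
  return xml;
}
```

This keeps the module manager decoupled: it never requires the installer's class, only duck-types the single method it needs.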
```diff
@@ -478,10 +478,39 @@ function filterCustomizationData(data) {
   return filtered;
 }
 
+/**
+ * Process TTS injection markers in content
+ * @param {string} content - Content to process
+ * @param {boolean} enableAgentVibes - Whether AgentVibes is enabled
+ * @returns {Object} { content: string, hadInjection: boolean }
+ */
+function processTTSInjectionPoints(content, enableAgentVibes) {
+  const hasAgentTTS = content.includes('<!-- TTS_INJECTION:agent-tts -->');
+
+  if (enableAgentVibes && hasAgentTTS) {
+    // Replace agent-tts injection marker with TTS rule
+    content = content.replaceAll(
+      '<!-- TTS_INJECTION:agent-tts -->',
+      `- When responding to user messages, speak your responses using TTS:
+Call: \`.claude/hooks/bmad-speak.sh '{agent-id}' '{response-text}'\` after each response
+Replace {agent-id} with YOUR agent ID from <agent id="..."> tag at top of this file
+Replace {response-text} with the text you just output to the user
+IMPORTANT: Use single quotes as shown - do NOT escape special characters like ! or $ inside single quotes
+Run in background (&) to avoid blocking`,
+    );
+    return { content, hadInjection: true };
+  } else if (!enableAgentVibes && hasAgentTTS) {
+    // Strip injection markers when disabled
+    content = content.replaceAll(/<!-- TTS_INJECTION:agent-tts -->\n?/g, '');
+  }
+
+  return { content, hadInjection: false };
+}
+
 /**
  * Compile agent file to .md
  * @param {string} yamlPath - Path to agent YAML file
- * @param {Object} options - { answers: {}, outputPath: string }
+ * @param {Object} options - { answers: {}, outputPath: string, enableAgentVibes: boolean }
  * @returns {Object} Compilation result
  */
 function compileAgentFile(yamlPath, options = {}) {
```
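The contract of the marker processing in the hunk above (inject when enabled, strip when disabled, report whether anything was injected) can be illustrated with a standalone sketch. This is not the project's module — the injected rule text here is a one-line stand-in for the real multi-line TTS rule:

```javascript
// Simplified sketch of the TTS marker-processing contract (illustrative only).
const MARKER = '<!-- TTS_INJECTION:agent-tts -->';

function processMarkers(content, enabled) {
  if (!content.includes(MARKER)) return { content, hadInjection: false };
  if (enabled) {
    // Enabled: every marker becomes the injected rule text
    return { content: content.replaceAll(MARKER, '- Speak responses via TTS'), hadInjection: true };
  }
  // Disabled: markers (and a trailing newline, if present) are stripped entirely
  return { content: content.replaceAll(new RegExp(`${MARKER}\\n?`, 'g'), ''), hadInjection: false };
}

const src = `persona rules\n${MARKER}\nmore rules`;
console.log(processMarkers(src, true).hadInjection); // true
console.log(processMarkers(src, false).content.includes(MARKER)); // false
```

Either way the marker never survives into compiled output, which is why the caller only needs the `hadInjection` flag for reporting.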
```diff
@@ -497,6 +526,15 @@ function compileAgentFile(yamlPath, options = {}) {
     outputPath = path.join(dir, `${basename}.md`);
   }
 
+  // Process TTS injection points if enableAgentVibes option is provided
+  let xml = result.xml;
+  let ttsInjected = false;
+  if (options.enableAgentVibes !== undefined) {
+    const ttsResult = processTTSInjectionPoints(xml, options.enableAgentVibes);
+    xml = ttsResult.content;
+    ttsInjected = ttsResult.hadInjection;
+  }
+
   // Write compiled XML
   fs.writeFileSync(outputPath, xml, 'utf8');
 
```
```diff
@@ -505,6 +543,7 @@ function compileAgentFile(yamlPath, options = {}) {
     xml,
     outputPath,
     sourcePath: yamlPath,
+    ttsInjected,
   };
 }
 
```
```diff
@@ -184,7 +184,6 @@ async function groupMultiselect(options) {
     options: options.options,
     initialValues: options.initialValues,
     required: options.required || false,
-    selectableGroups: options.selectableGroups || false,
   });
 
   await handleCancel(result);
```
```diff
@@ -171,6 +171,32 @@ class UI {
     // Check if there's an existing BMAD installation (after any folder renames)
     const hasExistingInstall = await fs.pathExists(bmadDir);
 
+    // Collect IDE tool selection early - we need this to know if we should ask about TTS
+    let toolSelection;
+    let agentVibesConfig = { enabled: false, alreadyInstalled: false };
+    let claudeCodeSelected = false;
+
+    if (!hasExistingInstall) {
+      // For new installations, collect IDE selection first
+      // We don't have modules yet, so pass empty array
+      toolSelection = await this.promptToolSelection(confirmedDirectory, []);
+
+      // Check if Claude Code was selected
+      claudeCodeSelected = toolSelection.ides && toolSelection.ides.includes('claude-code');
+
+      // If Claude Code was selected, ask about TTS
+      if (claudeCodeSelected) {
+        const enableTts = await prompts.confirm({
+          message: 'Claude Code supports TTS (Text-to-Speech). Would you like to enable it?',
+          default: false,
+        });
+
+        if (enableTts) {
+          agentVibesConfig = { enabled: true, alreadyInstalled: false };
+        }
+      }
+    }
+
     let customContentConfig = { hasCustomContent: false };
     if (!hasExistingInstall) {
       customContentConfig._shouldAsk = true;
```
```diff
@@ -298,8 +324,20 @@ class UI {
     }
 
     // Get tool selection
-    const toolSelection = await this.promptToolSelection(confirmedDirectory);
+    const toolSelection = await this.promptToolSelection(confirmedDirectory, selectedModules);
 
+    // TTS configuration - ask right after tool selection (matches new install flow)
+    const hasClaudeCode = toolSelection.ides && toolSelection.ides.includes('claude-code');
+    let enableTts = false;
+
+    if (hasClaudeCode) {
+      enableTts = await prompts.confirm({
+        message: 'Claude Code supports TTS (Text-to-Speech). Would you like to enable it?',
+        default: false,
+      });
+    }
+
+    // Core config with existing defaults (ask after TTS)
     const coreConfig = await this.collectCoreConfig(confirmedDirectory);
 
     return {
```
```diff
@@ -311,6 +349,8 @@ class UI {
       skipIde: toolSelection.skipIde,
       coreConfig: coreConfig,
       customContent: customModuleResult.customContentConfig,
+      enableAgentVibes: enableTts,
+      agentVibesInstalled: false,
     };
   }
 }
```
```diff
@@ -332,7 +372,7 @@ class UI {
 
     // Ask about custom content
     const wantsCustomContent = await prompts.confirm({
-      message: 'Would you like to install a locally stored custom module (this includes custom agents and workflows also)?',
+      message: 'Would you like to install a local custom module (this includes custom agents and workflows also)?',
       default: false,
     });
 
```
```diff
@@ -351,10 +391,19 @@ class UI {
       selectedModules = [...selectedModules, ...customContentConfig.selectedModuleIds];
     }
 
+    // Remove core if it's in the list (it's always installed)
     selectedModules = selectedModules.filter((m) => m !== 'core');
-    let toolSelection = await this.promptToolSelection(confirmedDirectory);
+
+    // Tool selection (already done for new installs at the beginning)
+    if (!toolSelection) {
+      toolSelection = await this.promptToolSelection(confirmedDirectory, selectedModules);
+    }
 
+    // Collect configurations for new installations
     const coreConfig = await this.collectCoreConfig(confirmedDirectory);
 
+    // TTS already handled at the beginning for new installs
+
     return {
       actionType: 'install',
       directory: confirmedDirectory,
```
```diff
@@ -364,15 +413,18 @@ class UI {
       skipIde: toolSelection.skipIde,
       coreConfig: coreConfig,
       customContent: customContentConfig,
+      enableAgentVibes: agentVibesConfig.enabled,
+      agentVibesInstalled: agentVibesConfig.alreadyInstalled,
     };
   }
 
   /**
    * Prompt for tool/IDE selection (called after module configuration)
    * @param {string} projectDir - Project directory to check for existing IDEs
+   * @param {Array} selectedModules - Selected modules from configuration
    * @returns {Object} Tool configuration
    */
-  async promptToolSelection(projectDir) {
+  async promptToolSelection(projectDir, selectedModules) {
     // Check for existing configured IDEs - use findBmadDir to detect custom folder names
     const { Detector } = require('../installers/lib/core/detector');
     const { Installer } = require('../installers/lib/core/installer');
```
```diff
@@ -395,7 +447,7 @@ class UI {
     const processedIdes = new Set();
     const initialValues = [];
 
-    // First, add previously configured IDEs, marked with ✅
+    // First, add previously configured IDEs at the top, marked with ✅
     if (configuredIdes.length > 0) {
       const configuredGroup = [];
       for (const ideValue of configuredIdes) {
```
```diff
@@ -447,33 +499,42 @@ class UI {
       }));
     }
 
-    // Add standalone "None" option at the end
-    groupedOptions[' '] = [
-      {
-        label: '⚠ None - I am not installing any tools',
-        value: '__NONE__',
-      },
-    ];
-
     let selectedIdes = [];
-    selectedIdes = await prompts.groupMultiselect({
-      message: `Select tools to configure ${chalk.dim('(↑/↓ navigates multiselect, SPACE toggles, A to toggles All, ENTER confirm)')}:`,
-      options: groupedOptions,
-      initialValues: initialValues.length > 0 ? initialValues : undefined,
-      required: true,
-      selectableGroups: false,
-    });
+    let userConfirmedNoTools = false;
 
-    // If user selected both "__NONE__" and other tools, honor the "None" choice
-    if (selectedIdes && selectedIdes.includes('__NONE__') && selectedIdes.length > 1) {
-      console.log();
-      console.log(chalk.yellow('⚠️ "None - I am not installing any tools" was selected, so no tools will be configured.'));
-      console.log();
-      selectedIdes = [];
-    } else if (selectedIdes && selectedIdes.includes('__NONE__')) {
-      // Only "__NONE__" was selected
-      selectedIdes = [];
+    // Loop until user selects at least one tool OR explicitly confirms no tools
+    while (!userConfirmedNoTools) {
+      selectedIdes = await prompts.groupMultiselect({
+        message: `Select tools to configure ${chalk.dim('(↑/↓ navigate, SPACE select, ENTER confirm)')}:`,
+        options: groupedOptions,
+        initialValues: initialValues.length > 0 ? initialValues : undefined,
+        required: false,
+      });
+
+      // If tools were selected, we're done
+      if (selectedIdes && selectedIdes.length > 0) {
+        break;
+      }
+
+      // Warn that no tools were selected - users often miss the spacebar requirement
+      console.log();
+      console.log(chalk.red.bold('⚠️ WARNING: No tools were selected!'));
+      console.log(chalk.red(' You must press SPACE to select items, then ENTER to confirm.'));
+      console.log(chalk.red(' Simply highlighting an item does NOT select it.'));
+      console.log();
+
+      const goBack = await prompts.confirm({
+        message: chalk.yellow('Would you like to go back and select at least one tool?'),
+        default: true,
+      });
+
+      if (goBack) {
+        // Re-display a message before looping back
+        console.log();
+      } else {
+        // User explicitly chose to proceed without tools
+        userConfirmedNoTools = true;
+      }
     }
 
     return {
```
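The control flow introduced in the hunk above (re-prompt until the user either selects something or explicitly confirms an empty selection) can be sketched independently of the interactive UI. The prompt calls below are stubbed as plain async functions with hypothetical names; the real code uses the project's `prompts.groupMultiselect` and `prompts.confirm` wrappers:

```javascript
// Sketch of the retry-until-confirmed selection loop (prompts stubbed out).
async function selectWithConfirmation(multiselect, confirmProceedEmpty) {
  let selected = [];
  let confirmedEmpty = false;
  while (!confirmedEmpty) {
    selected = await multiselect();
    if (selected.length > 0) break; // something was picked: done
    // Empty selection: ask whether to proceed anyway or go back
    if (!(await confirmProceedEmpty())) continue; // go back and re-prompt
    confirmedEmpty = true; // user explicitly chose to proceed with nothing
  }
  return selected;
}

// Example: first attempt is empty, user goes back, second attempt picks a tool.
(async () => {
  const attempts = [[], ['claude-code']];
  const result = await selectWithConfirmation(
    async () => attempts.shift(),
    async () => false, // decline to proceed empty → loop again
  );
  console.log(result); // ['claude-code']
})();
```

The point of the loop is that an empty result is never silently accepted: it must either be replaced by a real selection or explicitly confirmed.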
```diff
@@ -500,6 +561,27 @@ class UI {
     return { backupFirst, preserveCustomizations };
   }
 
+  /**
+   * Prompt for module selection
+   * @param {Array} modules - Available modules
+   * @returns {Array} Selected modules
+   */
+  async promptModules(modules) {
+    const choices = modules.map((mod) => ({
+      name: `${mod.name} - ${mod.description}`,
+      value: mod.id,
+      checked: false,
+    }));
+
+    const selectedModules = await prompts.multiselect({
+      message: `Select modules to add ${chalk.dim('(↑/↓ navigate, SPACE select, ENTER confirm)')}:`,
+      choices,
+      required: true,
+    });
+
+    return selectedModules;
+  }
+
   /**
    * Confirm action
    * @param {string} message - Confirmation message
```
```diff
@@ -526,6 +608,25 @@ class UI {
     if (result.modules && result.modules.length > 0) {
       console.log(chalk.dim(`Modules: ${result.modules.join(', ')}`));
     }
+    if (result.agentVibesEnabled) {
+      console.log(chalk.dim(`TTS: Enabled`));
+    }
+
+    // TTS injection info (simplified)
+    if (result.ttsInjectedFiles && result.ttsInjectedFiles.length > 0) {
+      console.log(chalk.dim(`\n💡 TTS enabled for ${result.ttsInjectedFiles.length} agent(s)`));
+      console.log(chalk.dim(' Agents will now speak when using AgentVibes'));
+    }
+
+    console.log(chalk.yellow('\nThank you for helping test the early release version of the new BMad Core and BMad Method!'));
+    console.log(chalk.cyan('Stable Beta coming soon - please read the full README.md and linked documentation to get started!'));
+
+    // Add changelog link at the end
+    console.log(
+      chalk.magenta(
+        "\n📋 Want to see what's new? Check out the changelog: https://github.com/bmad-code-org/BMAD-METHOD/blob/main/CHANGELOG.md",
+      ),
+    );
   }
 
   /**
```
```diff
@@ -667,40 +768,20 @@ class UI {
    * @param {Array} moduleChoices - Available module choices
    * @returns {Array} Selected module IDs
    */
-  async selectModules(moduleChoices, defaultSelections = null) {
-    // If defaultSelections is provided, use it to override checked state
-    // Otherwise preserve the checked state from moduleChoices (set by getModuleChoices)
+  async selectModules(moduleChoices, defaultSelections = []) {
+    // Mark choices as checked based on defaultSelections
     const choicesWithDefaults = moduleChoices.map((choice) => ({
       ...choice,
-      ...(defaultSelections === null ? {} : { checked: defaultSelections.includes(choice.value) }),
+      checked: defaultSelections.includes(choice.value),
     }));
 
-    // Add a "None" option at the end for users who changed their mind
-    const choicesWithSkipOption = [
-      ...choicesWithDefaults,
-      {
-        value: '__NONE__',
-        label: '⚠ None / I changed my mind - skip module installation',
-        checked: false,
-      },
-    ];
-
     const selected = await prompts.multiselect({
-      message: `Select modules to install ${chalk.dim('(↑/↓ navigates multiselect, SPACE toggles, A to toggles All, ENTER confirm)')}:`,
-      choices: choicesWithSkipOption,
-      required: true,
+      message: `Select modules to install ${chalk.dim('(↑/↓ navigate, SPACE select, ENTER confirm)')}:`,
+      choices: choicesWithDefaults,
+      required: false,
     });
 
-    // If user selected both "__NONE__" and other items, honor the "None" choice
-    if (selected && selected.includes('__NONE__') && selected.length > 1) {
-      console.log();
-      console.log(chalk.yellow('⚠️ "None / I changed my mind" was selected, so no modules will be installed.'));
-      console.log();
-      return [];
-    }
-
-    // Filter out the special '__NONE__' value
-    return selected ? selected.filter((m) => m !== '__NONE__') : [];
+    return selected || [];
   }
 
   /**
```
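The simplification in the hunk above changes `defaultSelections` from a nullable override into a plain array, so every choice's `checked` state is derived one way. The mapping on its own behaves like this (standalone sketch with made-up module ids):

```javascript
// Derive checked state purely from defaultSelections (the new, simpler behavior).
function applyDefaults(choices, defaultSelections = []) {
  return choices.map((choice) => ({
    ...choice,
    checked: defaultSelections.includes(choice.value),
  }));
}

const choices = [
  { value: 'bmm', checked: true }, // any pre-set checked state is overwritten
  { value: 'cis', checked: false },
];
console.log(applyDefaults(choices, ['cis']));
// [ { value: 'bmm', checked: false }, { value: 'cis', checked: true } ]
```

With the default `[]`, nothing starts checked, which pairs naturally with `required: false` and the `selected || []` return.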
```diff
@@ -980,6 +1061,136 @@ class UI {
     return path.resolve(expanded);
   }
 
+  /**
+   * @function promptAgentVibes
+   * @intent Ask user if they want AgentVibes TTS integration during BMAD installation
+   * @why Enables optional voice features without forcing TTS on users who don't want it
+   * @param {string} projectDir - Absolute path to user's project directory
+   * @returns {Promise<Object>} Configuration object: { enabled: boolean, alreadyInstalled: boolean }
+   * @sideeffects None - pure user input collection, no files written
+   * @edgecases Shows warning if user enables TTS but AgentVibes not detected
+   * @calledby promptInstall() during installation flow, after core config, before IDE selection
+   * @calls checkAgentVibesInstalled(), prompts.select(), chalk.green/yellow/dim()
+   *
+   * AI NOTE: This prompt is strategically positioned in installation flow:
+   * - AFTER core config (user_name, etc)
+   * - BEFORE IDE selection (which can hang on Windows/PowerShell)
+   *
+   * Flow Logic:
+   * 1. Auto-detect if AgentVibes already installed (checks for hook files)
+   * 2. Show detection status to user (green checkmark or gray "not detected")
+   * 3. Prompt: "Enable AgentVibes TTS?" (defaults to true if detected)
+   * 4. If user says YES but AgentVibes NOT installed:
+   *    → Show warning with installation link (graceful degradation)
+   * 5. Return config to promptInstall(), which passes to installer.install()
+   *
+   * State Flow:
+   *   promptAgentVibes() → { enabled, alreadyInstalled }
+   *     ↓
+   *   promptInstall() → config.enableAgentVibes
+   *     ↓
+   *   installer.install() → this.enableAgentVibes
+   *     ↓
+   *   processTTSInjectionPoints() → injects OR strips markers
+   *
+   * RELATED:
+   * ========
+   * - Detection: checkAgentVibesInstalled() - looks for bmad-speak.sh and play-tts.sh
+   * - Processing: installer.js::processTTSInjectionPoints()
+   * - Markers: src/core/workflows/party-mode/instructions.md:101, src/modules/bmm/agents/*.md
+   * - GitHub Issue: paulpreibisch/AgentVibes#36
+   */
+  async promptAgentVibes(projectDir) {
+    CLIUtils.displaySection('🎤 Voice Features', 'Enable TTS for multi-agent conversations');
+
+    // Check if AgentVibes is already installed
+    const agentVibesInstalled = await this.checkAgentVibesInstalled(projectDir);
+
+    if (agentVibesInstalled) {
+      console.log(chalk.green(' ✓ AgentVibes detected'));
+    } else {
+      console.log(chalk.dim(' AgentVibes not detected'));
+    }
+
+    const enableTts = await prompts.confirm({
+      message: 'Enable Agents to Speak Out loud (powered by Agent Vibes? Claude Code only currently)',
+      default: false,
+    });
+
+    if (enableTts && !agentVibesInstalled) {
+      console.log(chalk.yellow('\n ⚠️ AgentVibes not installed'));
+      console.log(chalk.dim(' Install AgentVibes separately to enable TTS:'));
+      console.log(chalk.dim(' https://github.com/paulpreibisch/AgentVibes\n'));
+    }
+
+    return {
+      enabled: enableTts,
+      alreadyInstalled: agentVibesInstalled,
+    };
+  }
+
+  /**
+   * @function checkAgentVibesInstalled
+   * @intent Detect if AgentVibes TTS hooks are present in user's project
+   * @why Allows auto-enabling TTS and showing helpful installation guidance
+   * @param {string} projectDir - Absolute path to user's project directory
+   * @returns {Promise<boolean>} true if both required AgentVibes hooks exist, false otherwise
+   * @sideeffects None - read-only file existence checks
+   * @edgecases Returns false if either hook missing (both required for functional TTS)
+   * @calledby promptAgentVibes() to determine default value and show detection status
+   * @calls fs.pathExists() twice (bmad-speak.sh, play-tts.sh)
+   *
+   * AI NOTE: This checks for the MINIMUM viable AgentVibes installation.
+   *
+   * Required Files:
+   * ===============
+   * 1. .claude/hooks/bmad-speak.sh
+   *    - Maps agent display names → agent IDs → voice profiles
+   *    - Calls play-tts.sh with agent's assigned voice
+   *    - Created by AgentVibes installer
+   *
+   * 2. .claude/hooks/play-tts.sh
+   *    - Core TTS router (ElevenLabs or Piper)
+   *    - Provider-agnostic interface
+   *    - Required by bmad-speak.sh
+   *
+   * Why Both Required:
+   * ==================
+   * - bmad-speak.sh alone: No TTS backend
+   * - play-tts.sh alone: No BMAD agent voice mapping
+   * - Both together: Full party mode TTS integration
+   *
+   * Detection Strategy:
+   * ===================
+   * We use simple file existence (not version checks) because:
+   * - Fast and reliable
+   * - Works across all AgentVibes versions
+   * - User will discover version issues when TTS runs (fail-fast)
+   *
+   * PATTERN: Adding New Detection Criteria
+   * =======================================
+   * If future AgentVibes features require additional files:
+   * 1. Add new pathExists check to this function
+   * 2. Update documentation in promptAgentVibes()
+   * 3. Consider: should missing file prevent detection or just log warning?
+   *
+   * RELATED:
+   * ========
+   * - AgentVibes Installer: creates these hooks
+   * - bmad-speak.sh: calls play-tts.sh with agent voices
+   * - Party Mode: uses bmad-speak.sh for agent dialogue
+   */
+  async checkAgentVibesInstalled(projectDir) {
+    const fs = require('fs-extra');
+    const path = require('node:path');
+
+    // Check for AgentVibes hook files
+    const hookPath = path.join(projectDir, '.claude', 'hooks', 'bmad-speak.sh');
+    const playTtsPath = path.join(projectDir, '.claude', 'hooks', 'play-tts.sh');
+
+    return (await fs.pathExists(hookPath)) && (await fs.pathExists(playTtsPath));
+  }
+
   /**
    * Load existing configurations to use as defaults
    * @param {string} directory - Installation directory
```
```diff
@@ -990,6 +1201,7 @@ class UI {
       hasCustomContent: false,
       coreConfig: {},
       ideConfig: { ides: [], skipIde: false },
+      agentVibesConfig: { enabled: false, alreadyInstalled: false },
     };
 
     try {
```
```diff
@@ -1003,6 +1215,10 @@ class UI {
         configs.ideConfig.skipIde = false;
       }
 
+      // Load AgentVibes configuration
+      const agentVibesInstalled = await this.checkAgentVibesInstalled(directory);
+      configs.agentVibesConfig = { enabled: agentVibesInstalled, alreadyInstalled: agentVibesInstalled };
+
       return configs;
     } catch {
       // If loading fails, return empty configs
```
```diff
@@ -1245,32 +1461,12 @@ class UI {
           checked: m.checked,
         }));
 
-        // Add "None / I changed my mind" option at the end
-        const choicesWithSkip = [
-          ...selectChoices,
-          {
-            name: '⚠ None / I changed my mind - keep no custom modules',
-            value: '__NONE__',
-            checked: false,
-          },
-        ];
-
         const keepModules = await prompts.multiselect({
-          message: `Select custom modules to keep ${chalk.dim('(↑/↓ navigates multiselect, SPACE toggles, A to toggles All, ENTER confirm)')}:`,
-          choices: choicesWithSkip,
-          required: true,
+          message: `Select custom modules to keep ${chalk.dim('(↑/↓ navigate, SPACE select, ENTER confirm)')}:`,
+          choices: selectChoices,
+          required: false,
         });
 
-        // If user selected both "__NONE__" and other modules, honor the "None" choice
-        if (keepModules && keepModules.includes('__NONE__') && keepModules.length > 1) {
-          console.log();
-          console.log(chalk.yellow('⚠️ "None / I changed my mind" was selected, so no custom modules will be kept.'));
-          console.log();
-          result.selectedCustomModules = [];
-        } else {
-          // Filter out the special '__NONE__' value
-          result.selectedCustomModules = keepModules ? keepModules.filter((m) => m !== '__NONE__') : [];
-        }
+        result.selectedCustomModules = keepModules || [];
         break;
       }
```