Convert Phase 2/3 workflows to MD

Dicky Moore 2026-02-05 14:19:05 +00:00
parent 8d0702551f
commit c769f1b44d
6 changed files with 2309 additions and 0 deletions


@@ -0,0 +1,128 @@
# Create Data Flow Diagram - Workflow Instructions
```xml
<critical>This workflow creates data flow diagrams (DFD) in Excalidraw format.</critical>
<workflow>
<step n="0" goal="Contextual Analysis">
<action>Review user's request and extract: DFD level, processes, data stores, external entities</action>
<check if="ALL requirements clear"><action>Skip to Step 4</action></check>
</step>
<step n="1" goal="Identify DFD Level" elicit="true">
<action>Ask: "What level of DFD do you need?"</action>
<action>Present options:
1. Context Diagram (Level 0) - Single process showing system boundaries
2. Level 1 DFD - Major processes and data flows
3. Level 2 DFD - Detailed sub-processes
4. Custom - Specify your requirements
</action>
<action>WAIT for selection</action>
</step>
<step n="2" goal="Gather Requirements" elicit="true">
<action>Ask: "Describe the processes, data stores, and external entities in your system"</action>
<action>WAIT for user description</action>
<action>Summarize what will be included and confirm with user</action>
</step>
<step n="3" goal="Theme Setup" elicit="true">
<action>Check for an existing theme.json; if one exists, ask whether to use it</action>
<check if="no existing theme">
<action>Ask: "Choose a DFD color scheme:"</action>
<action>Present numbered options:
1. Standard DFD
- Process: #e3f2fd (light blue)
- Data Store: #e8f5e9 (light green)
- External Entity: #f3e5f5 (light purple)
- Border: #1976d2 (blue)
2. Colorful DFD
- Process: #fff9c4 (light yellow)
- Data Store: #c5e1a5 (light lime)
- External Entity: #ffccbc (light coral)
- Border: #f57c00 (orange)
3. Minimal DFD
- Process: #f5f5f5 (light gray)
- Data Store: #eeeeee (gray)
- External Entity: #e0e0e0 (medium gray)
- Border: #616161 (dark gray)
4. Custom - Define your own colors
</action>
<action>WAIT for selection</action>
<action>Create theme.json based on selection</action>
</check>
</step>
<step n="4" goal="Plan DFD Structure">
<action>List all processes with numbers (1.0, 2.0, etc.)</action>
<action>List all data stores (D1, D2, etc.)</action>
<action>List all external entities</action>
<action>Map all data flows with labels</action>
<action>Show planned structure, confirm with user</action>
</step>
<step n="5" goal="Load Resources">
<action>Load {{templates}} and extract `dataflow` section</action>
<action>Load {{library}}</action>
<action>Load theme.json</action>
<action>Load {{helpers}}</action>
</step>
<step n="6" goal="Build DFD Elements">
<critical>Follow standard DFD notation from {{helpers}}</critical>
<substep>Build Order:
1. External entities (rectangles, bold border)
2. Processes (circles/ellipses with numbers)
3. Data stores (parallel lines or rectangles)
4. Data flows (labeled arrows)
</substep>
<substep>DFD Rules:
- Processes: Numbered (1.0, 2.0), verb phrases
- Data stores: Named (D1, D2), noun phrases
- External entities: Named, noun phrases
- Data flows: Labeled with data names, arrows show direction
- No direct flow between external entities
- No direct flow between data stores
</substep>
<substep>Layout:
- External entities at edges
- Processes in center
- Data stores between processes
- Minimize crossing flows
- Left-to-right or top-to-bottom flow
</substep>
</step>
<step n="7" goal="Optimize and Save">
<action>Verify DFD rules compliance</action>
<action>Strip unused elements and elements with isDeleted: true</action>
<action>Save to {{default_output_file}}</action>
</step>
<step n="8" goal="Validate JSON Syntax">
<critical>NEVER delete the file if validation fails - always fix syntax errors</critical>
<action>Run: node -e "JSON.parse(require('fs').readFileSync('{{default_output_file}}', 'utf8')); console.log('✓ Valid JSON')"</action>
<check if="validation fails (exit code 1)">
<action>Read the error message carefully - it shows the syntax error and position</action>
<action>Open the file and navigate to the error location</action>
<action>Fix the syntax error (add missing comma, bracket, or quote as indicated)</action>
<action>Save the file</action>
<action>Re-run validation with the same command</action>
<action>Repeat until validation passes</action>
</check>
<action>Once validation passes, confirm with user</action>
</step>
<step n="9" goal="Validate Content">
<invoke-task>Validate against {{validation}}</invoke-task>
</step>
</workflow>
```
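The Step 8 check can also be expressed as a small script rather than a one-liner. The sketch below assumes the file content has already been read into memory; `validateJsonText` is a hypothetical helper name, not part of the workflow's templates:

```javascript
// Minimal sketch of the Step 8 syntax check. Returning the parser's
// message (instead of deleting the file) is exactly what the
// <critical> rule above requires.
function validateJsonText(text) {
  try {
    JSON.parse(text);
    return { valid: true };
  } catch (err) {
    // V8's SyntaxError message includes the failing position
    // (e.g. "... in JSON at position 42"), which is the location
    // Step 8 asks the agent to navigate to and fix.
    return { valid: false, message: err.message };
  }
}

// Example: a missing value after a colon is reported, not fatal.
console.log(validateJsonText('{"type": "excalidraw", "elements": }'));
```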


@@ -0,0 +1,139 @@
# Create Diagram - Workflow Instructions
```xml
<critical>This workflow creates system architecture diagrams, ERDs, UML diagrams, or general technical diagrams in Excalidraw format.</critical>
<workflow>
<step n="0" goal="Contextual Analysis">
<action>Review user's request and extract: diagram type, components/entities, relationships, notation preferences</action>
<check if="ALL requirements clear"><action>Skip to Step 5</action></check>
<check if="SOME requirements clear"><action>Only ask about missing info in Steps 1-2</action></check>
</step>
<step n="1" goal="Identify Diagram Type" elicit="true">
<action>Ask: "What type of technical diagram do you need?"</action>
<action>Present options:
1. System Architecture
2. Entity-Relationship Diagram (ERD)
3. UML Class Diagram
4. UML Sequence Diagram
5. UML Use Case Diagram
6. Network Diagram
7. Other
</action>
<action>WAIT for selection</action>
</step>
<step n="2" goal="Gather Requirements" elicit="true">
<action>Ask: "Describe the components/entities and their relationships"</action>
<action>Ask: "What notation standard? (Standard/Simplified/Strict UML-ERD)"</action>
<action>WAIT for user input</action>
<action>Summarize what will be included and confirm with user</action>
</step>
<step n="3" goal="Check for Existing Theme" elicit="true">
<action>Check if theme.json exists at output location</action>
<check if="exists"><action>Ask whether to use it; load it if yes, otherwise proceed to Step 4</action></check>
<check if="not exists"><action>Proceed to Step 4</action></check>
</step>
<step n="4" goal="Create Theme" elicit="true">
<action>Ask: "Choose a color scheme for your diagram:"</action>
<action>Present numbered options:
1. Professional
- Component: #e3f2fd (light blue)
- Database: #e8f5e9 (light green)
- Service: #fff3e0 (light orange)
- Border: #1976d2 (blue)
2. Colorful
- Component: #e1bee7 (light purple)
- Database: #c5e1a5 (light lime)
- Service: #ffccbc (light coral)
- Border: #7b1fa2 (purple)
3. Minimal
- Component: #f5f5f5 (light gray)
- Database: #eeeeee (gray)
- Service: #e0e0e0 (medium gray)
- Border: #616161 (dark gray)
4. Custom - Define your own colors
</action>
<action>WAIT for selection</action>
<action>Create theme.json based on selection</action>
<action>Show preview and confirm</action>
</step>
<step n="5" goal="Plan Diagram Structure">
<action>List all components/entities</action>
<action>Map all relationships</action>
<action>Show planned layout</action>
<action>Ask: "Structure looks correct? (yes/no)"</action>
<check if="no"><action>Adjust and repeat</action></check>
</step>
<step n="6" goal="Load Resources">
<action>Load {{templates}} and extract `diagram` section</action>
<action>Load {{library}}</action>
<action>Load theme.json and merge with template</action>
<action>Load {{helpers}} for guidelines</action>
</step>
<step n="7" goal="Build Diagram Elements">
<critical>Follow {{helpers}} for proper element creation</critical>
<substep>For Each Component:
- Generate unique IDs (component-id, text-id, group-id)
- Create shape with groupIds
- Calculate text width
- Create text with containerId and matching groupIds
- Add boundElements
</substep>
<substep>For Each Connection:
- Determine arrow type (straight/elbow)
- Create with startBinding and endBinding
- Update boundElements on both components
</substep>
<substep>Build Order by Type:
- Architecture: Services → Databases → Connections → Labels
- ERD: Entities → Attributes → Relationships → Cardinality
- UML Class: Classes → Attributes → Methods → Relationships
- UML Sequence: Actors → Lifelines → Messages → Returns
- UML Use Case: Actors → Use Cases → Relationships
</substep>
<substep>Alignment:
- Snap to 20px grid
- Space: 40px between components, 60px between sections
</substep>
</step>
<step n="8" goal="Optimize and Save">
<action>Strip unused elements and elements with isDeleted: true</action>
<action>Save to {{default_output_file}}</action>
</step>
<step n="9" goal="Validate JSON Syntax">
<critical>NEVER delete the file if validation fails - always fix syntax errors</critical>
<action>Run: node -e "JSON.parse(require('fs').readFileSync('{{default_output_file}}', 'utf8')); console.log('✓ Valid JSON')"</action>
<check if="validation fails (exit code 1)">
<action>Read the error message carefully - it shows the syntax error and position</action>
<action>Open the file and navigate to the error location</action>
<action>Fix the syntax error (add missing comma, bracket, or quote as indicated)</action>
<action>Save the file</action>
<action>Re-run validation with the same command</action>
<action>Repeat until validation passes</action>
</check>
<action>Once validation passes, confirm: "Diagram created at {{default_output_file}}. Open to view?"</action>
</step>
<step n="10" goal="Validate Content">
<invoke-task>Validate against {{validation}} using {_bmad}/core/tasks/validate-workflow.xml</invoke-task>
</step>
</workflow>
```


@@ -0,0 +1,239 @@
# Create Flowchart - Workflow Instructions
```xml
<critical>This workflow creates a flowchart visualization in Excalidraw format for processes, pipelines, or logic flows.</critical>
<workflow>
<step n="0" goal="Contextual Analysis (Smart Elicitation)">
<critical>Before asking any questions, analyze what the user has already told you</critical>
<action>Review the user's initial request and conversation history</action>
<action>Extract any mentioned: flowchart type, complexity, decision points, save location</action>
<check if="ALL requirements are clear from context">
<action>Summarize your understanding</action>
<action>Skip directly to Step 4 (Plan Flowchart Layout)</action>
</check>
<check if="SOME requirements are clear">
<action>Note what you already know</action>
<action>Only ask about missing information in Step 1</action>
</check>
<check if="requirements are unclear or minimal">
<action>Proceed with full elicitation in Step 1</action>
</check>
</step>
<step n="1" goal="Gather Requirements" elicit="true">
<action>Ask Question 1: "What type of process flow do you need to visualize?"</action>
<action>Present numbered options:
1. Business Process Flow - Document business workflows, approval processes, or operational procedures
2. Algorithm/Logic Flow - Visualize code logic, decision trees, or computational processes
3. User Journey Flow - Map user interactions, navigation paths, or experience flows
4. Data Processing Pipeline - Show data transformation, ETL processes, or processing stages
5. Other - Describe your specific flowchart needs
</action>
<action>WAIT for user selection (1-5)</action>
<action>Ask Question 2: "How many main steps are in this flow?"</action>
<action>Present numbered options:
1. Simple (3-5 steps) - Quick process with few decision points
2. Medium (6-10 steps) - Standard workflow with some branching
3. Complex (11-20 steps) - Detailed process with multiple decision points
4. Very Complex (20+ steps) - Comprehensive workflow requiring careful layout
</action>
<action>WAIT for user selection (1-4)</action>
<action>Store selection in {{complexity}}</action>
<action>Ask Question 3: "Does your flow include decision points (yes/no branches)?"</action>
<action>Present numbered options:
1. No decisions - Linear flow from start to end
2. Few decisions (1-2) - Simple branching with yes/no paths
3. Multiple decisions (3-5) - Several conditional branches
4. Complex decisions (6+) - Extensive branching logic
</action>
<action>WAIT for user selection (1-4)</action>
<action>Store selection in {{decision_points}}</action>
<action>Ask Question 4: "Where should the flowchart be saved?"</action>
<action>Present numbered options:
1. Default location - docs/flowcharts/[auto-generated-name].excalidraw
2. Custom path - Specify your own file path
3. Project root - Save in main project directory
4. Specific folder - Choose from existing folders
</action>
<action>WAIT for user selection (1-4)</action>
<check if="selection is 2 or 4">
<action>Ask for specific path</action>
<action>WAIT for user input</action>
</check>
<action>Store final path in {{default_output_file}}</action>
</step>
<step n="2" goal="Check for Existing Theme" elicit="true">
<action>Check if theme.json exists at output location</action>
<check if="theme.json exists">
<action>Ask: "Found existing theme. Use it? (yes/no)"</action>
<action>WAIT for user response</action>
<check if="user says yes">
<action>Load and use existing theme</action>
<action>Skip to Step 4</action>
</check>
<check if="user says no">
<action>Proceed to Step 3</action>
</check>
</check>
<check if="theme.json does not exist">
<action>Proceed to Step 3</action>
</check>
</step>
<step n="3" goal="Create Theme" elicit="true">
<action>Ask: "Let's create a theme for your flowchart. Choose a color scheme:"</action>
<action>Present numbered options:
1. Professional Blue
- Primary Fill: #e3f2fd (light blue)
- Accent/Border: #1976d2 (blue)
- Decision: #fff3e0 (light orange)
- Text: #1e1e1e (dark gray)
2. Success Green
- Primary Fill: #e8f5e9 (light green)
- Accent/Border: #388e3c (green)
- Decision: #fff9c4 (light yellow)
- Text: #1e1e1e (dark gray)
3. Neutral Gray
- Primary Fill: #f5f5f5 (light gray)
- Accent/Border: #616161 (gray)
- Decision: #e0e0e0 (medium gray)
- Text: #1e1e1e (dark gray)
4. Warm Orange
- Primary Fill: #fff3e0 (light orange)
- Accent/Border: #f57c00 (orange)
- Decision: #ffe0b2 (peach)
- Text: #1e1e1e (dark gray)
5. Custom Colors - Define your own color palette
</action>
<action>WAIT for user selection (1-5)</action>
<action>Store selection in {{theme_choice}}</action>
<check if="selection is 5 (Custom)">
<action>Ask: "Primary fill color (hex code)?"</action>
<action>WAIT for user input</action>
<action>Store in {{custom_colors.primary_fill}}</action>
<action>Ask: "Accent/border color (hex code)?"</action>
<action>WAIT for user input</action>
<action>Store in {{custom_colors.accent}}</action>
<action>Ask: "Decision color (hex code)?"</action>
<action>WAIT for user input</action>
<action>Store in {{custom_colors.decision}}</action>
</check>
<action>Create theme.json with selected colors</action>
<action>Show theme preview with all colors</action>
<action>Ask: "Theme looks good?"</action>
<action>Present numbered options:
1. Yes, use this theme - Proceed with theme
2. No, adjust colors - Modify color selections
3. Start over - Choose different preset
</action>
<action>WAIT for selection (1-3)</action>
<check if="selection is 2 or 3">
<action>Repeat Step 3</action>
</check>
</step>
<step n="4" goal="Plan Flowchart Layout">
<action>List all steps and decision points based on gathered requirements</action>
<action>Show user the planned structure</action>
<action>Ask: "Structure looks correct? (yes/no)"</action>
<action>WAIT for user response</action>
<check if="user says no">
<action>Adjust structure based on feedback</action>
<action>Repeat this step</action>
</check>
</step>
<step n="5" goal="Load Template and Resources">
<action>Load {{templates}} file</action>
<action>Extract `flowchart` section from YAML</action>
<action>Load {{library}} file</action>
<action>Load theme.json and merge colors with template</action>
<action>Load {{helpers}} for element creation guidelines</action>
</step>
<step n="6" goal="Build Flowchart Elements">
<critical>Follow guidelines from {{helpers}} for proper element creation</critical>
<action>Build ONE section at a time following these rules:</action>
<substep>For Each Shape with Label:
1. Generate unique IDs (shape-id, text-id, group-id)
2. Create shape with groupIds: [group-id]
3. Calculate text width: (text.length × fontSize × 0.6) + 20, round to nearest 10
4. Create text element with:
- containerId: shape-id
- groupIds: [group-id] (SAME as shape)
- textAlign: "center"
- verticalAlign: "middle"
- width: calculated width
5. Add boundElements to shape referencing text
</substep>
<substep>For Each Arrow:
1. Determine arrow type needed:
- Straight: For forward flow (left-to-right, top-to-bottom)
- Elbow: For upward flow, backward flow, or complex routing
2. Create arrow with startBinding and endBinding
3. Set startBinding.elementId to source shape ID
4. Set endBinding.elementId to target shape ID
5. Set gap: 10 for both bindings
6. If elbow arrow, add intermediate points for direction changes
7. Update boundElements on both connected shapes
</substep>
<substep>Alignment:
- Snap all x, y to 20px grid
- Align shapes vertically (same x for vertical flow)
- Space elements: 60px between shapes
</substep>
<substep>Build Order:
1. Start point (circle) with label
2. Each process step (rectangle) with label
3. Each decision point (diamond) with label
4. End point (circle) with label
5. Connect all with bound arrows
</substep>
</step>
<step n="7" goal="Optimize and Save">
<action>Strip unused elements and elements with isDeleted: true</action>
<action>Save to {{default_output_file}}</action>
</step>
<step n="8" goal="Validate JSON Syntax">
<critical>NEVER delete the file if validation fails - always fix syntax errors</critical>
<action>Run: node -e "JSON.parse(require('fs').readFileSync('{{default_output_file}}', 'utf8')); console.log('✓ Valid JSON')"</action>
<check if="validation fails (exit code 1)">
<action>Read the error message carefully - it shows the syntax error and position</action>
<action>Open the file and navigate to the error location</action>
<action>Fix the syntax error (add missing comma, bracket, or quote as indicated)</action>
<action>Save the file</action>
<action>Re-run validation with the same command</action>
<action>Repeat until validation passes</action>
</check>
<action>Once validation passes, confirm with user: "Flowchart created at {{default_output_file}}. Open to view?"</action>
</step>
<step n="9" goal="Validate Content">
<invoke-task>Validate against checklist at {{validation}} using {_bmad}/core/tasks/validate-workflow.xml</invoke-task>
</step>
</workflow>
```
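The arrow rules in Step 6 can be sketched like this: bind the arrow to both shapes with `gap: 10`, then register the arrow in each shape's `boundElements` so Excalidraw re-routes it when shapes move. `bindArrow` is a hypothetical helper, not part of the workflow's templates, and the `focus: 0` value is an assumption:

```javascript
// Bind an arrow between two shapes and back-reference it on both,
// per Step 6's "For Each Arrow" rules.
function bindArrow(arrow, source, target) {
  arrow.startBinding = { elementId: source.id, focus: 0, gap: 10 };
  arrow.endBinding = { elementId: target.id, focus: 0, gap: 10 };
  for (const shape of [source, target]) {
    shape.boundElements = shape.boundElements || [];
    shape.boundElements.push({ id: arrow.id, type: 'arrow' });
  }
  return arrow;
}

// Usage: connect the start circle to the first process step.
const start = { id: 'start-1', type: 'ellipse', boundElements: [] };
const step1 = { id: 'step-1', type: 'rectangle', boundElements: [] };
const arrow = bindArrow({ id: 'arrow-1', type: 'arrow' }, start, step1);
```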


@@ -0,0 +1,131 @@
# Create Wireframe - Workflow Instructions
```xml
<critical>This workflow creates website or app wireframes in Excalidraw format.</critical>
<workflow>
<step n="0" goal="Contextual Analysis">
<action>Review user's request and extract: wireframe type, fidelity level, screen count, device type, save location</action>
<check if="ALL requirements clear"><action>Skip to Step 5</action></check>
</step>
<step n="1" goal="Identify Wireframe Type" elicit="true">
<action>Ask: "What type of wireframe do you need?"</action>
<action>Present options:
1. Website (Desktop)
2. Mobile App (iOS/Android)
3. Web App (Responsive)
4. Tablet App
5. Multi-platform
</action>
<action>WAIT for selection</action>
</step>
<step n="2" goal="Gather Requirements" elicit="true">
<action>Ask fidelity level (Low/Medium/High)</action>
<action>Ask screen count (Single/Few 2-3/Multiple 4-6/Many 7+)</action>
<action>Ask device dimensions or use standard</action>
<action>Ask save location</action>
</step>
<step n="3" goal="Check Theme" elicit="true">
<action>Check for an existing theme.json; if one exists, ask whether to use it</action>
</step>
<step n="4" goal="Create Theme" elicit="true">
<action>Ask: "Choose a wireframe style:"</action>
<action>Present numbered options:
1. Classic Wireframe
- Background: #ffffff (white)
- Container: #f5f5f5 (light gray)
- Border: #9e9e9e (gray)
- Text: #424242 (dark gray)
2. High Contrast
- Background: #ffffff (white)
- Container: #eeeeee (light gray)
- Border: #212121 (black)
- Text: #000000 (black)
3. Blueprint Style
- Background: #1a237e (dark blue)
- Container: #3949ab (blue)
- Border: #7986cb (light blue)
- Text: #ffffff (white)
4. Custom - Define your own colors
</action>
<action>WAIT for selection</action>
<action>Create theme.json based on selection</action>
<action>Confirm with user</action>
</step>
<step n="5" goal="Plan Wireframe Structure">
<action>List all screens and their purposes</action>
<action>Map navigation flow between screens</action>
<action>Identify key UI elements for each screen</action>
<action>Show planned structure, confirm with user</action>
</step>
<step n="6" goal="Load Resources">
<action>Load {{templates}} and extract `wireframe` section</action>
<action>Load {{library}}</action>
<action>Load theme.json</action>
<action>Load {{helpers}}</action>
</step>
<step n="7" goal="Build Wireframe Elements">
<critical>Follow {{helpers}} for proper element creation</critical>
<substep>For Each Screen:
- Create container/frame
- Add header section
- Add content areas
- Add navigation elements
- Add interactive elements (buttons, inputs)
- Add labels and annotations
</substep>
<substep>Build Order:
1. Screen containers
2. Layout sections (header, content, footer)
3. Navigation elements
4. Content blocks
5. Interactive elements
6. Labels and annotations
7. Flow indicators (if multi-screen)
</substep>
<substep>Fidelity Guidelines:
- Low: Basic shapes, minimal detail, placeholder text
- Medium: More defined elements, some styling, representative content
- High: Detailed elements, realistic sizing, actual content examples
</substep>
</step>
<step n="8" goal="Optimize and Save">
<action>Strip unused elements and elements with isDeleted: true</action>
<action>Save to {{default_output_file}}</action>
</step>
<step n="9" goal="Validate JSON Syntax">
<critical>NEVER delete the file if validation fails - always fix syntax errors</critical>
<action>Run: node -e "JSON.parse(require('fs').readFileSync('{{default_output_file}}', 'utf8')); console.log('✓ Valid JSON')"</action>
<check if="validation fails (exit code 1)">
<action>Read the error message carefully - it shows the syntax error and position</action>
<action>Open the file and navigate to the error location</action>
<action>Fix the syntax error (add missing comma, bracket, or quote as indicated)</action>
<action>Save the file</action>
<action>Re-run validation with the same command</action>
<action>Repeat until validation passes</action>
</check>
<action>Once validation passes, confirm with user</action>
</step>
<step n="10" goal="Validate Content">
<invoke-task>Validate against {{validation}}</invoke-task>
</step>
</workflow>
```
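The "Optimize and Save" step (strip unused elements and elements with `isDeleted: true`) can be sketched as a two-pass filter. `pruneElements` is a hypothetical helper; the real cleanup may differ:

```javascript
// Drop elements flagged isDeleted, then drop text labels whose
// container was removed in the first pass (otherwise they would
// render as orphaned floating text).
function pruneElements(elements) {
  const live = elements.filter((el) => !el.isDeleted);
  const liveIds = new Set(live.map((el) => el.id));
  return live.filter(
    (el) => el.type !== 'text' || !el.containerId || liveIds.has(el.containerId)
  );
}
```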


@@ -0,0 +1,642 @@
# Requirements Traceability & Gate Decision - Validation Checklist
**Workflow:** `testarch-trace`
**Purpose:** Ensure a complete traceability matrix with actionable gap analysis AND make a deployment-readiness decision (PASS/CONCERNS/FAIL/WAIVED)
This checklist covers **two sequential phases**:
- **PHASE 1**: Requirements Traceability (always executed)
- **PHASE 2**: Quality Gate Decision (executed if `enable_gate_decision: true`)
---
# PHASE 1: REQUIREMENTS TRACEABILITY
## Prerequisites Validation
- [ ] Acceptance criteria are available (from story file OR inline)
- [ ] Test suite exists (or gaps are acknowledged and documented)
- [ ] If tests are missing, recommend `*atdd` (trace does not run it automatically)
- [ ] Test directory path is correct (`test_dir` variable)
- [ ] Story file is accessible (if using BMad mode)
- [ ] Knowledge base is loaded (test-priorities, traceability, risk-governance)
---
## Context Loading
- [ ] Story file read successfully (if applicable)
- [ ] Acceptance criteria extracted correctly
- [ ] Story ID identified (e.g., 1.3)
- [ ] `test-design.md` loaded (if available)
- [ ] `tech-spec.md` loaded (if available)
- [ ] `PRD.md` loaded (if available)
- [ ] Relevant knowledge fragments loaded from `tea-index.csv`
---
## Test Discovery and Cataloging
- [ ] Tests auto-discovered using multiple strategies (test IDs, describe blocks, file paths)
- [ ] Tests categorized by level (E2E, API, Component, Unit)
- [ ] Test metadata extracted:
- [ ] Test IDs (e.g., 1.3-E2E-001)
- [ ] Describe/context blocks
- [ ] It blocks (individual test cases)
- [ ] Given-When-Then structure (if BDD)
- [ ] Priority markers (P0/P1/P2/P3)
- [ ] All relevant test files found (no tests missed due to naming conventions)
---
## Criteria-to-Test Mapping
- [ ] Each acceptance criterion mapped to tests (or marked as NONE)
- [ ] Explicit references found (test IDs, describe blocks mentioning criterion)
- [ ] Test level documented (E2E, API, Component, Unit)
- [ ] Given-When-Then narrative verified for alignment
- [ ] Traceability matrix table generated:
- [ ] Criterion ID
- [ ] Description
- [ ] Test ID
- [ ] Test File
- [ ] Test Level
- [ ] Coverage Status
---
## Coverage Classification
- [ ] Coverage status classified for each criterion:
- [ ] **FULL** - All scenarios validated at appropriate level(s)
- [ ] **PARTIAL** - Some coverage but missing edge cases or levels
- [ ] **NONE** - No test coverage at any level
- [ ] **UNIT-ONLY** - Only unit tests (missing integration/E2E validation)
- [ ] **INTEGRATION-ONLY** - Only API/Component tests (missing unit confidence)
- [ ] Classification justifications provided
- [ ] Edge cases considered in FULL vs PARTIAL determination
---
## Duplicate Coverage Detection
- [ ] Duplicate coverage checked across test levels
- [ ] Acceptable overlap identified (defense in depth for critical paths)
- [ ] Unacceptable duplication flagged (same validation at multiple levels)
- [ ] Recommendations provided for consolidation
- [ ] Selective testing principles applied
---
## Gap Analysis
- [ ] Coverage gaps identified:
- [ ] Criteria with NONE status
- [ ] Criteria with PARTIAL status
- [ ] Criteria with UNIT-ONLY status
- [ ] Criteria with INTEGRATION-ONLY status
- [ ] Gaps prioritized by risk level using test-priorities framework:
- [ ] **CRITICAL** - P0 criteria without FULL coverage (BLOCKER)
- [ ] **HIGH** - P1 criteria without FULL coverage (PR blocker)
- [ ] **MEDIUM** - P2 criteria without FULL coverage (nightly gap)
- [ ] **LOW** - P3 criteria without FULL coverage (acceptable)
- [ ] Specific test recommendations provided for each gap:
- [ ] Suggested test level (E2E, API, Component, Unit)
- [ ] Test description (Given-When-Then)
- [ ] Recommended test ID (e.g., 1.3-E2E-004)
- [ ] Explanation of why test is needed
---
## Coverage Metrics
- [ ] Overall coverage percentage calculated (FULL coverage / total criteria)
- [ ] P0 coverage percentage calculated
- [ ] P1 coverage percentage calculated
- [ ] P2 coverage percentage calculated (if applicable)
- [ ] Coverage by level calculated:
- [ ] E2E coverage %
- [ ] API coverage %
- [ ] Component coverage %
- [ ] Unit coverage %
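
The percentage arithmetic above can be sketched as follows, assuming each criterion record carries a priority (P0–P3) and a coverage status, and that only FULL counts toward the percentage (matching the classification rules earlier in this checklist). `coveragePct` is a hypothetical helper name:

```javascript
// Coverage % for all criteria, or for one priority band when given.
function coveragePct(criteria, priority) {
  const pool = priority
    ? criteria.filter((c) => c.priority === priority)
    : criteria;
  if (pool.length === 0) return 100; // nothing in scope, vacuously covered
  const full = pool.filter((c) => c.status === 'FULL').length;
  return Math.round((full / pool.length) * 100);
}
```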
---
## Test Quality Verification
For each mapped test, verify:
- [ ] Explicit assertions are present (not hidden in helpers)
- [ ] Test follows Given-When-Then structure
- [ ] No hard waits or sleeps (deterministic waiting only)
- [ ] Self-cleaning (test cleans up its data)
- [ ] File size < 300 lines
- [ ] Test duration < 90 seconds
Quality issues flagged:
- [ ] **BLOCKER** issues identified (missing assertions, hard waits, flaky patterns)
- [ ] **WARNING** issues identified (large files, slow tests, unclear structure)
- [ ] **INFO** issues identified (style inconsistencies, missing documentation)
Knowledge fragments referenced:
- [ ] `test-quality.md` for Definition of Done
- [ ] `fixture-architecture.md` for self-cleaning patterns
- [ ] `network-first.md` for Playwright best practices
- [ ] `data-factories.md` for test data patterns
---
## Phase 1 Deliverables Generated
### Traceability Matrix Markdown
- [ ] File created at `{output_folder}/traceability-matrix.md`
- [ ] Template from `trace-template.md` used
- [ ] Full mapping table included
- [ ] Coverage status section included
- [ ] Gap analysis section included
- [ ] Quality assessment section included
- [ ] Recommendations section included
### Coverage Badge/Metric (if enabled)
- [ ] Badge markdown generated
- [ ] Metrics exported to JSON for CI/CD integration
### Updated Story File (if enabled)
- [ ] "Traceability" section added to story markdown
- [ ] Link to traceability matrix included
- [ ] Coverage summary included
---
## Phase 1 Quality Assurance
### Accuracy Checks
- [ ] All acceptance criteria accounted for (none skipped)
- [ ] Test IDs correctly formatted (e.g., 1.3-E2E-001)
- [ ] File paths are correct and accessible
- [ ] Coverage percentages calculated correctly
- [ ] No false positives (tests incorrectly mapped to criteria)
- [ ] No false negatives (existing tests missed in mapping)
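
The test-ID format check above can be sketched as a regular expression, assuming IDs follow the `{story}.{sub}-{LEVEL}-{seq}` pattern seen in the examples (e.g. `1.3-E2E-001`); the exact set of accepted level names is an assumption:

```javascript
// Hypothetical format check for test IDs like 1.3-E2E-001:
// story.sub, an uppercase level, then a zero-padded 3-digit sequence.
const TEST_ID = /^\d+\.\d+-(E2E|API|COMPONENT|UNIT)-\d{3}$/;

function isValidTestId(id) {
  return TEST_ID.test(id);
}
```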
### Completeness Checks
- [ ] All test levels considered (E2E, API, Component, Unit)
- [ ] All priorities considered (P0, P1, P2, P3)
- [ ] All coverage statuses used appropriately (FULL, PARTIAL, NONE, UNIT-ONLY, INTEGRATION-ONLY)
- [ ] All gaps have recommendations
- [ ] All quality issues have severity and remediation guidance
### Actionability Checks
- [ ] Recommendations are specific (not generic)
- [ ] Test IDs suggested for new tests
- [ ] Given-When-Then provided for recommended tests
- [ ] Impact explained for each gap
- [ ] Priorities clear (CRITICAL, HIGH, MEDIUM, LOW)
---
## Phase 1 Documentation
- [ ] Traceability matrix is readable and well-formatted
- [ ] Tables render correctly in markdown
- [ ] Code blocks have proper syntax highlighting
- [ ] Links are valid and accessible
- [ ] Recommendations are clear and prioritized
---
# PHASE 2: QUALITY GATE DECISION
**Note**: Phase 2 executes only if `enable_gate_decision: true` in workflow.md
---
## Prerequisites
### Evidence Gathering
- [ ] Test execution results obtained (CI/CD pipeline, test framework reports)
- [ ] Story/epic/release file identified and read
- [ ] Test design document discovered or explicitly provided (if available)
- [ ] Traceability matrix discovered or explicitly provided (available from Phase 1)
- [ ] NFR assessment discovered or explicitly provided (if available)
- [ ] Code coverage report discovered or explicitly provided (if available)
- [ ] Burn-in results discovered or explicitly provided (if available)
### Evidence Validation
- [ ] Evidence freshness validated (warn if >7 days old, recommend re-running workflows)
- [ ] All required assessments available or user acknowledged gaps
- [ ] Test results are complete (not partial or interrupted runs)
- [ ] Test results match current codebase (not from outdated branch)
### Knowledge Base Loading
- [ ] `risk-governance.md` loaded successfully
- [ ] `probability-impact.md` loaded successfully
- [ ] `test-quality.md` loaded successfully
- [ ] `test-priorities.md` loaded successfully
- [ ] `ci-burn-in.md` loaded (if burn-in results available)
---
## Process Steps
### Step 1: Context Loading
- [ ] Gate type identified (story/epic/release/hotfix)
- [ ] Target ID extracted (story_id, epic_num, or release_version)
- [ ] Decision thresholds loaded from workflow variables
- [ ] Risk tolerance configuration loaded
- [ ] Waiver policy loaded
### Step 2: Evidence Parsing
**Test Results:**
- [ ] Total test count extracted
- [ ] Passed test count extracted
- [ ] Failed test count extracted
- [ ] Skipped test count extracted
- [ ] Test duration extracted
- [ ] P0 test pass rate calculated
- [ ] P1 test pass rate calculated
- [ ] Overall test pass rate calculated
**Quality Assessments:**
- [ ] P0/P1/P2/P3 scenarios extracted from test-design.md (if available)
- [ ] Risk scores extracted from test-design.md (if available)
- [ ] Coverage percentages extracted from traceability-matrix.md (available from Phase 1)
- [ ] Coverage gaps extracted from traceability-matrix.md (available from Phase 1)
- [ ] NFR status extracted from nfr-assessment.md (if available)
- [ ] Security issues count extracted from nfr-assessment.md (if available)
**Code Coverage:**
- [ ] Line coverage percentage extracted (if available)
- [ ] Branch coverage percentage extracted (if available)
- [ ] Function coverage percentage extracted (if available)
- [ ] Critical path coverage validated (if available)
**Burn-in Results:**
- [ ] Burn-in iterations count extracted (if available)
- [ ] Flaky tests count extracted (if available)
- [ ] Stability score calculated (if available)
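
The pass-rate calculations above can be sketched as a small helper. This is a minimal sketch assuming a parsed test-result summary keyed by priority; the field names (`passed`, `failed`) are illustrative, not the actual report schema:

```python
def pass_rates(results):
    """Compute P0, P1, and overall pass rates from parsed test results.

    `results` maps priority ("P0", "P1", ...) to dicts with "passed"
    and "failed" counts -- an assumed shape, not a fixed schema.
    """
    def rate(passed, failed):
        total = passed + failed
        # An empty bucket counts as passing: nothing ran, nothing failed.
        return 100.0 if total == 0 else 100.0 * passed / total

    p0 = results.get("P0", {"passed": 0, "failed": 0})
    p1 = results.get("P1", {"passed": 0, "failed": 0})
    all_passed = sum(r["passed"] for r in results.values())
    all_failed = sum(r["failed"] for r in results.values())
    return {
        "p0_pass_rate": rate(p0["passed"], p0["failed"]),
        "p1_pass_rate": rate(p1["passed"], p1["failed"]),
        "overall_pass_rate": rate(all_passed, all_failed),
    }
```
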
### Step 3: Decision Rules Application
**P0 Criteria Evaluation:**
- [ ] P0 test pass rate evaluated (must be 100%)
- [ ] P0 acceptance criteria coverage evaluated (must be 100%)
- [ ] Security issues count evaluated (must be 0)
- [ ] Critical NFR failures evaluated (must be 0)
- [ ] Flaky tests evaluated (must be 0 if burn-in enabled)
- [ ] P0 decision recorded: PASS or FAIL
**P1 Criteria Evaluation:**
- [ ] P1 test pass rate evaluated (threshold: min_p1_pass_rate)
- [ ] P1 acceptance criteria coverage evaluated (threshold: 95%)
- [ ] Overall test pass rate evaluated (threshold: min_overall_pass_rate)
- [ ] Code coverage evaluated (threshold: min_coverage)
- [ ] P1 decision recorded: PASS or CONCERNS
**P2/P3 Criteria Evaluation:**
- [ ] P2 failures tracked (informational, don't block if allow_p2_failures: true)
- [ ] P3 failures tracked (informational, don't block if allow_p3_failures: true)
- [ ] Residual risks documented
**Final Decision:**
- [ ] Decision determined: PASS / CONCERNS / FAIL / WAIVED
- [ ] Decision rationale documented
- [ ] Decision is deterministic (follows rules, not arbitrary)
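
"Deterministic" means the decision is a pure function of evidence and thresholds. A minimal sketch of the rule ordering above (field names and default thresholds are assumptions; WAIVED is granted by a human, never computed from evidence):

```python
def gate_decision(evidence, min_p1_pass_rate=95.0,
                  min_overall_pass_rate=90.0, min_coverage=80.0):
    """Apply P0 then P1 criteria: any P0 failure -> FAIL,
    any P1 shortfall -> CONCERNS, otherwise PASS."""
    # P0 criteria: any failure here blocks the gate outright.
    if (evidence["p0_pass_rate"] < 100.0
            or evidence["p0_ac_coverage"] < 100.0
            or evidence["security_issues"] > 0
            or evidence["critical_nfr_failures"] > 0
            or evidence.get("flaky_tests", 0) > 0):
        return "FAIL"
    # P1 criteria: shortfalls downgrade to CONCERNS but do not block.
    if (evidence["p1_pass_rate"] < min_p1_pass_rate
            or evidence["overall_pass_rate"] < min_overall_pass_rate
            or evidence["coverage"] < min_coverage):
        return "CONCERNS"
    return "PASS"
```

Because the function takes no other inputs, re-running it on the same evidence always reproduces the same decision, which is what makes the gate auditable.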
### Step 4: Documentation
**Gate Decision Document Created:**
- [ ] Story/epic/release info section complete (ID, title, description, links)
- [ ] Decision clearly stated (PASS / CONCERNS / FAIL / WAIVED)
- [ ] Decision date recorded
- [ ] Evaluator recorded (user or agent name)
**Evidence Summary Documented:**
- [ ] Test results summary complete (total, passed, failed, pass rates)
- [ ] Coverage summary complete (P0/P1 criteria, code coverage)
- [ ] NFR validation summary complete (security, performance, reliability, maintainability)
- [ ] Flakiness summary complete (burn-in iterations, flaky test count)
**Rationale Documented:**
- [ ] Decision rationale clearly explained
- [ ] Key evidence highlighted
- [ ] Assumptions and caveats noted (if any)
**Residual Risks Documented (if CONCERNS or WAIVED):**
- [ ] Unresolved P1/P2 issues listed
- [ ] Probability × impact estimated for each risk
- [ ] Mitigations or workarounds described
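
Residual risk scoring is the usual probability × impact product. A minimal sketch; the 1-3 scales and band boundaries are assumptions of this example, not mandated by the workflow:

```python
def residual_risk_score(probability, impact):
    """Score a residual risk on assumed 1-3 probability and impact scales.

    Returns (score, band) where score is the 1-9 product and band is a
    coarse label; band boundaries here are illustrative.
    """
    if not (1 <= probability <= 3 and 1 <= impact <= 3):
        raise ValueError("probability and impact must be 1-3")
    score = probability * impact
    if score >= 6:
        band = "high"
    elif score >= 3:
        band = "medium"
    else:
        band = "low"
    return score, band
```
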
**Waivers Documented (if WAIVED):**
- [ ] Waiver reason documented (business justification)
- [ ] Waiver approver documented (name, role)
- [ ] Waiver expiry date documented
- [ ] Remediation plan documented (fix in next release, due date)
- [ ] Monitoring plan documented
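
The waiver rules in this block (and the enforcement rules under "Waiver Scenarios" below) can be checked mechanically. A sketch assuming a simple waiver record; the field names are hypothetical:

```python
from datetime import date

def validate_waiver(waiver, today=None):
    """Return a list of problems with a waiver record; empty means OK.

    `waiver` is a dict with assumed keys: reason, approver, expiry
    (datetime.date), remediation_plan, covers_security.
    """
    today = today or date.today()
    problems = []
    if not waiver.get("reason"):
        problems.append("missing business justification")
    if not waiver.get("approver"):
        problems.append("missing named approver")
    expiry = waiver.get("expiry")
    if expiry is None or expiry <= today:
        problems.append("missing or past expiry date")
    if not waiver.get("remediation_plan"):
        problems.append("missing remediation plan")
    # Hard rule from this checklist: security findings are never waivable.
    if waiver.get("covers_security", False):
        problems.append("security vulnerabilities cannot be waived")
    return problems
```
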
**Critical Issues Documented (if FAIL or CONCERNS):**
- [ ] Top 5-10 critical issues listed
- [ ] Priority assigned to each issue (P0/P1/P2)
- [ ] Owner assigned to each issue
- [ ] Due date assigned to each issue
**Recommendations Documented:**
- [ ] Next steps clearly stated for decision type
- [ ] Deployment recommendation provided
- [ ] Monitoring recommendations provided (if applicable)
- [ ] Remediation recommendations provided (if applicable)
### Step 5: Status Updates and Notifications
**Gate YAML Created:**
- [ ] Gate YAML snippet generated with decision and criteria
- [ ] Evidence references included in YAML
- [ ] Next steps included in YAML
- [ ] YAML file saved to output folder
**Stakeholder Notification Generated:**
- [ ] Notification subject line created
- [ ] Notification body created with summary
- [ ] Recipients identified (PM, SM, DEV lead, stakeholders)
- [ ] Notification ready for delivery (if notify_stakeholders: true)
**Outputs Saved:**
- [ ] Gate decision document saved to `{output_file}`
- [ ] Gate YAML saved to `{output_folder}/gate-decision-{target}.yaml`
- [ ] All outputs are valid and readable
---
## Phase 2 Output Validation
### Gate Decision Document
**Completeness:**
- [ ] All required sections present (info, decision, evidence, rationale, next steps)
- [ ] No placeholder text or TODOs left in document
- [ ] All evidence references are accurate and complete
- [ ] All links to artifacts are valid
**Accuracy:**
- [ ] Decision matches applied criteria rules
- [ ] Test results match CI/CD pipeline output
- [ ] Coverage percentages match reports
- [ ] NFR status matches assessment document
- [ ] No contradictions or inconsistencies
**Clarity:**
- [ ] Decision rationale is clear and unambiguous
- [ ] Technical jargon is explained or avoided
- [ ] Stakeholders can understand next steps
- [ ] Recommendations are actionable
### Gate YAML
**Format:**
- [ ] YAML is valid (no syntax errors)
- [ ] All required fields present (target, decision, date, evaluator, criteria, evidence)
- [ ] Field values are correct data types (numbers, strings, dates)
**Content:**
- [ ] Criteria values match decision document
- [ ] Evidence references are accurate
- [ ] Next steps align with decision type
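
The format and content checks above can run automatically once the YAML is parsed (e.g. with `yaml.safe_load`). A sketch over an already-parsed mapping; the required field names and types are an assumed shape, not the workflow's fixed schema:

```python
REQUIRED_FIELDS = {"target": str, "decision": str, "date": str,
                   "evaluator": str, "criteria": dict, "evidence": dict}
VALID_DECISIONS = {"PASS", "CONCERNS", "FAIL", "WAIVED"}

def validate_gate(doc):
    """Return a list of validation errors for a parsed gate YAML mapping."""
    errors = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in doc:
            errors.append(f"missing field: {field}")
        elif not isinstance(doc[field], expected):
            errors.append(f"{field}: expected {expected.__name__}")
    if doc.get("decision") not in VALID_DECISIONS:
        errors.append("decision must be PASS/CONCERNS/FAIL/WAIVED")
    return errors
```
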
---
## Phase 2 Quality Checks
### Decision Integrity
- [ ] Decision is deterministic (follows rules, not arbitrary)
- [ ] P0 failures result in FAIL decision (unless waived)
- [ ] Security issues result in FAIL decision (unless waived, though security issues should never be waived)
- [ ] Waivers have business justification and approver (if WAIVED)
- [ ] Residual risks are documented (if CONCERNS or WAIVED)
### Evidence-Based
- [ ] Decision is based on actual test results (not guesses)
- [ ] All claims are supported by evidence
- [ ] No assumptions without documentation
- [ ] Evidence sources are cited (CI run IDs, report URLs)
### Transparency
- [ ] Decision rationale is transparent and auditable
- [ ] Criteria evaluation is documented step-by-step
- [ ] Any deviations from standard process are explained
- [ ] Waiver justifications are clear (if applicable)
### Consistency
- [ ] Decision aligns with risk-governance knowledge fragment
- [ ] Priority framework (P0/P1/P2/P3) applied consistently
- [ ] Terminology consistent with test-quality knowledge fragment
- [ ] Decision matrix followed correctly
---
## Phase 2 Integration Points
### CI/CD Pipeline
- [ ] Gate YAML is CI/CD-compatible
- [ ] YAML can be parsed by pipeline automation
- [ ] Decision can be used to block/allow deployments
- [ ] Evidence references are accessible to pipeline
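
In a pipeline, the saved gate YAML can drive a hard stop before deployment. A minimal sketch that scans for the top-level `decision:` line so it needs no YAML library; the blocking policy shown (FAIL and missing decisions block, everything else proceeds) is one reasonable choice, not the only one:

```python
import sys

def deployment_allowed(gate_text):
    """Return True if the gate decision in the YAML text permits deploy."""
    for line in gate_text.splitlines():
        if line.startswith("decision:"):
            decision = line.split(":", 1)[1].strip().strip("'\"")
            return decision in ("PASS", "CONCERNS", "WAIVED")
    return False  # no decision recorded: block by default

if __name__ == "__main__":
    # Usage in CI: python check_gate.py gate-decision-<target>.yaml
    with open(sys.argv[1]) as f:
        sys.exit(0 if deployment_allowed(f.read()) else 1)
```

The nonzero exit code is what lets any CI system (GitHub Actions, Jenkins, GitLab) fail the deploy stage without custom integration.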
### Stakeholders
- [ ] Notification message is clear and actionable
- [ ] Decision is explained in non-technical terms
- [ ] Next steps are specific and time-bound
- [ ] Recipients are appropriate for decision type
---
## Phase 2 Compliance and Audit
### Audit Trail
- [ ] Decision date and time recorded
- [ ] Evaluator identified (user or agent)
- [ ] All evidence sources cited
- [ ] Decision criteria documented
- [ ] Rationale clearly explained
### Traceability
- [ ] Gate decision traceable to story/epic/release
- [ ] Evidence traceable to specific test runs
- [ ] Assessments traceable to workflows that created them
- [ ] Waiver traceable to approver (if applicable)
### Compliance
- [ ] Security requirements validated (no unresolved vulnerabilities)
- [ ] Quality standards met or waived with justification
- [ ] Regulatory requirements addressed (if applicable)
- [ ] Documentation sufficient for external audit
---
## Phase 2 Edge Cases and Exceptions
### Missing Evidence
- [ ] If test-design.md missing, decision still possible with test results + trace
- [ ] If traceability-matrix.md missing, decision still possible with test results (but Phase 1 should provide it)
- [ ] If nfr-assessment.md missing, NFR validation marked as NOT ASSESSED
- [ ] If code coverage missing, coverage criterion marked as NOT ASSESSED
- [ ] User acknowledged gaps in evidence or provided alternative proof
### Stale Evidence
- [ ] Evidence freshness checked (if validate_evidence_freshness: true)
- [ ] Warnings issued for assessments >7 days old
- [ ] User acknowledged stale evidence or re-ran workflows
- [ ] Decision document notes any stale evidence used
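
The freshness check above can be sketched with file modification times. Using mtime as the freshness signal is an assumption of this example (a timestamp recorded inside each assessment would be more robust); the 7-day default mirrors the rule stated earlier:

```python
import os
import time

def stale_evidence(paths, max_age_days=7, now=None):
    """Return the subset of evidence files older than max_age_days,
    judged by filesystem mtime. Missing files are skipped."""
    now = now or time.time()
    cutoff = max_age_days * 86400
    stale = []
    for path in paths:
        if os.path.exists(path) and now - os.path.getmtime(path) > cutoff:
            stale.append(path)
    return stale
```
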
### Conflicting Evidence
- [ ] Conflicts between test results and assessments resolved
- [ ] Most recent/authoritative source identified
- [ ] Conflict resolution documented in decision rationale
- [ ] User consulted if conflict cannot be resolved
### Waiver Scenarios
- [ ] Waiver only used for FAIL decision (not PASS or CONCERNS)
- [ ] Waiver has business justification (not technical convenience)
- [ ] Waiver has named approver with authority (VP/CTO/PO)
- [ ] Waiver has expiry date (does NOT apply to future releases)
- [ ] Waiver has remediation plan with concrete due date
- [ ] Security vulnerabilities are NOT waived (enforced)
---
# FINAL VALIDATION (Both Phases)
## Non-Prescriptive Validation
- [ ] Traceability format adapted to team needs (not rigid template)
- [ ] Examples are minimal and focused on patterns
- [ ] Teams can extend with custom classifications
- [ ] Integration with external systems supported (JIRA, Azure DevOps)
- [ ] Compliance requirements considered (if applicable)
---
## Documentation and Communication
- [ ] All documents are readable and well-formatted
- [ ] Tables render correctly in markdown
- [ ] Code blocks have proper syntax highlighting
- [ ] Links are valid and accessible
- [ ] Recommendations are clear and prioritized
- [ ] Gate decision is prominent and unambiguous (Phase 2)
---
## Final Validation
**Phase 1 (Traceability):**
- [ ] All prerequisites met
- [ ] All acceptance criteria mapped or gaps documented
- [ ] P0 coverage is 100% OR documented as BLOCKER
- [ ] Gap analysis is complete and prioritized
- [ ] Test quality issues identified and flagged
- [ ] Deliverables generated and saved
**Phase 2 (Gate Decision):**
- [ ] All quality evidence gathered
- [ ] Decision criteria applied correctly
- [ ] Decision rationale documented
- [ ] Gate YAML ready for CI/CD integration
- [ ] Status file updated (if enabled)
- [ ] Stakeholders notified (if enabled)
**Workflow Complete:**
- [ ] Phase 1 completed successfully
- [ ] Phase 2 completed successfully (if enabled)
- [ ] All outputs validated and saved
- [ ] Ready to proceed based on gate decision
---
## Sign-Off
**Phase 1 - Traceability Status:**
- [ ] ✅ PASS - All quality gates met, no critical gaps
- [ ] ⚠️ WARN - P1 gaps exist, address before PR merge
- [ ] ❌ FAIL - P0 gaps exist, BLOCKER for release
**Phase 2 - Gate Decision Status (if enabled):**
- [ ] ✅ PASS - Deploy to production
- [ ] ⚠️ CONCERNS - Deploy with monitoring
- [ ] ❌ FAIL - Block deployment, fix issues
- [ ] 🔓 WAIVED - Deploy with business approval and remediation plan
**Next Actions:**
- If PASS (both phases): Proceed to deployment
- If WARN/CONCERNS: Address gaps/issues, proceed with monitoring
- If FAIL (either phase): Run `*atdd` for missing tests, fix issues, re-run `*trace`
- If WAIVED: Deploy with approved waiver, schedule remediation
---
## Notes
Record any issues, deviations, or important observations during workflow execution:
- **Phase 1 Issues**: [Note any traceability mapping challenges, missing tests, quality concerns]
- **Phase 2 Issues**: [Note any missing, stale, or conflicting evidence]
- **Decision Rationale**: [Document any nuanced reasoning or edge cases]
- **Waiver Details**: [Document waiver negotiations or approvals]
- **Follow-up Actions**: [List any actions required after gate decision]
---
<!-- Powered by BMAD-CORE™ -->