feat: Add Production QA Expansion Pack with comprehensive testing automation

- Added 4 specialized QA agents (Test Engineer, Performance, Security, Test Lead)
- Created enhanced story creation task with integrated QA requirements
- Implemented parallel Dev/QA workflow for faster delivery
- Added comprehensive Production QA Guide documentation
- Configured automated upstream sync with QA preservation
- Added validation script to ensure QA integration integrity
- Maintained 100% BMAD method adherence with tool-agnostic approach

The Production QA expansion works alongside traditional BMAD workflow,
providing enterprise-grade testing capabilities while preserving the
original BMAD philosophy and structure for easy upstream syncing.
Commit 7a6b97b494 (parent 861e959d56) by papuman, 2025-09-13 23:14:16 -06:00
12 changed files with 1855 additions and 4 deletions

.github/scripts/validate-qa-integration.sh (vendored executable file, 90 lines added)

@@ -0,0 +1,90 @@
#!/bin/bash
# Validate QA Integration Script
# This script checks that all QA integration files are present and valid
echo "🧪 Validating Production QA Integration..."
# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Track validation status
VALIDATION_PASSED=true
# Function to check if file exists
check_file() {
local file=$1
local description=$2
if [ -f "$file" ]; then
echo -e "${GREEN}✓${NC} $description exists: $file"
else
echo -e "${RED}✗${NC} $description missing: $file"
VALIDATION_PASSED=false
fi
}
# Function to check if string exists in file
check_content() {
local file=$1
local search_string=$2
local description=$3
if grep -q "$search_string" "$file" 2>/dev/null; then
echo -e "${GREEN}✓${NC} $description found in $file"
else
echo -e "${RED}✗${NC} $description not found in $file"
VALIDATION_PASSED=false
fi
}
echo ""
echo "📁 Checking QA Files..."
echo "------------------------"
# Check expansion pack files
check_file "expansion-packs/bmad-production-qa/config.yaml" "Production QA config"
check_file "expansion-packs/bmad-production-qa/agents/qa-test-engineer.md" "QA Test Engineer agent"
check_file "expansion-packs/bmad-production-qa/agents/qa-performance-engineer.md" "Performance Engineer agent"
check_file "expansion-packs/bmad-production-qa/agents/qa-security-engineer.md" "Security Engineer agent"
check_file "expansion-packs/bmad-production-qa/agents/qa-test-lead.md" "QA Test Lead agent"
check_file "expansion-packs/bmad-production-qa/README.md" "Production QA README"
# Check core modifications
check_file "bmad-core/tasks/create-next-story-with-qa.md" "Enhanced story creation task"
check_file "docs/production-qa-guide.md" "Production QA Guide"
echo ""
echo "🔍 Checking Integration Points..."
echo "----------------------------------"
# Check SM agent modifications
check_content "bmad-core/agents/sm.md" "create-next-story-with-qa" "SM agent QA integration"
# Check README enhancements
check_content "README.md" "Production QA" "README QA section"
check_content "README.md" "production-qa-guide.md" "README QA guide link"
# Check workflows
check_file "expansion-packs/bmad-production-qa/workflows/production-qa-cycle.yaml" "Production QA workflow"
# Check tasks
check_file "expansion-packs/bmad-production-qa/tasks/create-e2e-test-suite.md" "E2E test creation task"
check_file "expansion-packs/bmad-production-qa/tasks/setup-testing-framework.md" "Testing framework setup"
echo ""
echo "📊 Validation Summary"
echo "--------------------"
if [ "$VALIDATION_PASSED" = true ]; then
echo -e "${GREEN}✅ All QA integration checks passed!${NC}"
echo "Production QA is properly integrated with BMAD."
exit 0
else
echo -e "${RED}❌ Some QA integration checks failed!${NC}"
echo "Please review the missing components above."
exit 1
fi
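The validator above leans on two shell idioms — `[ -f … ]` for file existence and `grep -q` for content — both read purely through exit status. A self-contained sketch of the pattern (a temp file stands in for a real repo file so it runs anywhere):

```shell
#!/bin/bash
# Demonstrates the exit-status idioms the validator is built on.
# mktemp stands in for a real repo file so this snippet runs anywhere.
tmp=$(mktemp)
echo "create-next-story-with-qa" > "$tmp"

if [ -f "$tmp" ]; then
  echo "file check: present"
fi
if grep -q "create-next-story-with-qa" "$tmp" 2>/dev/null; then
  echo "content check: found"
fi
if ! grep -q "no-such-marker" "$tmp" 2>/dev/null; then
  echo "content check: marker absent"
fi
rm -f "$tmp"
```

Because `grep -q` prints nothing and only signals via exit code, the same pattern composes cleanly into the `VALIDATION_PASSED` flag used by the script.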


@@ -75,6 +75,12 @@ jobs:
# Backup our enhanced tasks
cp bmad-core/tasks/create-next-story-with-qa.md /tmp/qa-backup/ || true
# Backup our documentation
cp docs/production-qa-guide.md /tmp/qa-backup/ || true
# Backup modified core files
cp bmad-core/agents/sm.md /tmp/qa-backup/ || true
# Backup any modified core files
cp -r .github /tmp/qa-backup/ || true
@@ -107,9 +113,25 @@ jobs:
# Restore our enhanced task
cp /tmp/qa-backup/create-next-story-with-qa.md bmad-core/tasks/ || true
# Restore our documentation
cp /tmp/qa-backup/production-qa-guide.md docs/ || true
# Restore modified SM agent
cp /tmp/qa-backup/sm.md bmad-core/agents/ || true
# Restore workflows
cp -r /tmp/qa-backup/.github . || true
# Ensure our QA enhancements are in README
if ! grep -q "Production QA Enhancement" README.md; then
echo "Re-adding Production QA section to README..."
# Add our Production QA badge if missing
sed -i '7a[![Production QA](https://img.shields.io/badge/Enhanced-Production%20QA-success)](expansion-packs/bmad-production-qa/README.md)' README.md || true
# Add our enhancement notice if missing
sed -i '/Transform any domain/a\\n> 🧪 **This fork includes the Production QA Expansion Pack** - Enterprise-grade testing automation with specialized QA agents, automated quality gates, and comprehensive test coverage. Works alongside traditional BMAD workflow. [Learn more →](#-production-qa-enhancement)' README.md || true
fi
echo "QA integration restored"
- name: Install dependencies and validate
@@ -125,6 +147,10 @@ jobs:
node tools/cli.js validate || echo "BMAD validation not available"
fi
# Validate QA integration
echo "Validating QA integration..."
.github/scripts/validate-qa-integration.sh
- name: Commit integration
if: steps.check_changes.outputs.has_changes == 'true'
run: |
@@ -165,10 +191,12 @@ jobs:
```
### QA Integration Status
- ✅ `expansion-packs/bmad-production-qa/` - Expansion pack preserved
- ✅ `bmad-core/tasks/create-next-story-with-qa.md` - Enhanced task maintained
- ✅ `bmad-core/agents/sm.md` - Modified SM agent preserved
- ✅ `docs/production-qa-guide.md` - Documentation maintained
- ✅ `README.md` - QA enhancements preserved
- ✅ `.github/workflows/` - GitHub workflows intact
### Review Checklist
- [ ] Verify no conflicts with core BMAD functionality


@@ -0,0 +1,283 @@
# BMAD Production QA Expansion Pack
Transform your BMAD development workflow with comprehensive, production-ready QA and testing capabilities. This expansion pack integrates seamlessly with BMAD's natural language approach while providing enterprise-grade testing automation.
## 🎯 What This Expansion Pack Provides
### Complete QA Integration
- **Enhanced Story Creation** with comprehensive testing requirements
- **Specialized QA Agents** for different testing domains
- **Production QA Workflow** with quality gates and validation
- **Tool-Agnostic Approach** supporting any testing framework
- **Automated Quality Gates** with pass/fail criteria
- **Comprehensive Test Coverage** across all testing types
### Key Features
- ✅ **E2E Testing** - Complete user journey validation
- ✅ **API Testing** - Backend functionality verification
- ✅ **Performance Testing** - Load, stress, and capacity validation
- ✅ **Security Testing** - Vulnerability scanning and compliance
- ✅ **Visual Regression** - UI consistency across browsers
- ✅ **Accessibility Testing** - WCAG compliance validation
- ✅ **CI/CD Integration** - Automated testing in pipelines
## 🚀 Quick Start
### 1. Installation
This expansion pack is already integrated into your BMAD fork. No additional installation needed!
### 2. Initialize Testing Framework
```bash
# Activate QA Test Engineer
@qa-test-engineer
# Set up testing infrastructure
*setup-testing-framework
```
### 3. Enhanced Story Creation
```bash
# Stories now include comprehensive testing requirements
@sm *draft
# Creates story with E2E, API, Performance, and Security test scenarios
```
### 4. Parallel Development & Testing
```bash
# Development track
@dev *develop-story docs/stories/1.1.story.md
# Testing track (run in parallel)
@qa-test-engineer *create-e2e-tests docs/stories/1.1.story.md
@qa-test-engineer *create-api-tests docs/stories/1.1.story.md
```
## 🧪 Available QA Agents
### QA Test Engineer (Alex)
**Primary testing specialist for comprehensive test automation**
- Creates E2E, API, and integration test suites
- Sets up testing frameworks and CI/CD integration
- Generates test data and fixtures
- Validates test coverage and quality
**Key Commands:**
- `*create-e2e-tests {story}` - Generate E2E test suite
- `*create-api-tests {story}` - Generate API test collection
- `*setup-testing-framework` - Initialize testing infrastructure
- `*analyze-test-coverage` - Review test coverage metrics
### QA Performance Engineer (Morgan)
**Specialized in performance, load, and scalability testing**
- Creates load testing scenarios
- Performs stress and spike testing
- Establishes performance baselines
- Generates capacity planning analysis
**Key Commands:**
- `*create-load-test {story}` - Generate load test scenarios
- `*create-stress-test {story}` - Create stress test scenarios
- `*analyze-performance-baseline` - Establish performance baseline
- `*create-capacity-plan` - Generate capacity planning analysis
### QA Security Engineer (Riley)
**Expert in security testing and vulnerability assessment**
- Performs comprehensive security scans
- Creates penetration testing scenarios
- Validates OWASP compliance
- Conducts vulnerability assessments
**Key Commands:**
- `*security-scan {story}` - Perform comprehensive security scan
- `*vulnerability-assessment` - Conduct vulnerability assessment
- `*owasp-compliance-check` - Validate OWASP Top 10 compliance
- `*create-threat-model` - Generate threat modeling analysis
### QA Test Lead (Jordan)
**Strategic coordinator for all testing activities**
- Creates comprehensive test strategies
- Manages quality gates and criteria
- Coordinates testing across all specialties
- Generates quality reports and metrics
**Key Commands:**
- `*create-test-strategy` - Generate comprehensive test strategy
- `*create-quality-gates` - Define quality gates and criteria
- `*coordinate-testing` - Manage all testing activities
- `*create-test-reports` - Generate comprehensive test reports
## 🔄 Enhanced Development Workflow
### Traditional BMAD Flow
```
Planning → SM creates story → Dev implements → QA reviews → Done
```
### Enhanced Production QA Flow
```
Planning → Test Strategy → SM creates story with QA →
Dev implements ←→ QA creates test suites (parallel)
Execute tests → Quality gates → Production ready
```
## 📊 Quality Gates
Stories must pass all quality gates before production deployment:
### Automated Gates
- ✅ Unit test coverage ≥ 80%
- ✅ All E2E tests pass
- ✅ API tests pass with proper error handling
- ✅ Performance meets defined SLAs
- ✅ Security scans show no critical vulnerabilities
- ✅ Accessibility standards met (if applicable)
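The coverage gate can be enforced mechanically in a pipeline step. A minimal sketch in bash, assuming the coverage percentage is available as a bare number (how that number is produced is an assumption — real tools such as nyc, coverage.py, or `go test -cover` report it in their own formats):

```shell
#!/bin/bash
# Hedged sketch: enforce the "unit test coverage >= 80%" gate.
# The caller is assumed to supply the coverage figure as a bare number.
coverage_gate() {
  local coverage=$1 threshold=${2:-80}
  # bash arithmetic is integer-only, so compare as floats via awk
  if awk -v c="$coverage" -v t="$threshold" 'BEGIN { exit !(c >= t) }'; then
    echo "coverage gate passed: ${coverage}% >= ${threshold}%"
  else
    echo "coverage gate FAILED: ${coverage}% < ${threshold}%" >&2
    return 1
  fi
}

coverage_gate 83.4          # passes the default 80% threshold
coverage_gate 72.0 || true  # fails the gate; '|| true' keeps the demo going
```

Returning a nonzero status on failure is what lets CI treat the gate as blocking: the pipeline step fails exactly when the threshold is missed.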
### Manual Gates (Optional)
- 🔍 Security review for sensitive features
- 🔍 Performance validation for critical paths
- 🔍 UX review for user-facing changes
## 🛠️ Tool Support (Framework Agnostic)
### E2E Testing
- **Playwright** (recommended for modern web apps)
- **Cypress** (excellent developer experience)
- **Selenium** (maximum browser compatibility)
- **WebdriverIO** (enterprise flexibility)
### API Testing
- **Bruno** (Git-friendly, version controlled)
- **Postman + Newman** (industry standard)
- **REST Client** (VS Code integrated)
### Performance Testing
- **k6** (JavaScript-based, developer-friendly)
- **Artillery** (Node.js, excellent CI/CD integration)
- **Locust** (Python-based, scalable)
### Security Testing
- **OWASP ZAP** (comprehensive security scanning)
- **Snyk** (dependency vulnerability scanning)
- **Custom security test suites**
## 📁 Project Structure
```
your-project/
├── expansion-packs/
│ └── bmad-production-qa/ # This expansion pack
├── tests/ # Generated by QA agents
│ ├── e2e/ # End-to-end tests
│ ├── api/ # API tests
│ ├── performance/ # Performance tests
│ ├── security/ # Security tests
│ └── visual/ # Visual regression tests
├── test-reports/ # Test execution reports
├── docs/
│ ├── test-strategy.md # Overall testing strategy
│ └── stories/ # Stories with QA requirements
└── .github/workflows/
└── test.yml # CI/CD testing pipeline
```
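The layout above can be scaffolded in one step; a small sketch (the directory names simply mirror the tree and are easy to adjust):

```shell
#!/bin/bash
# Sketch: create the test directory layout shown in the tree above.
# Plain mkdir -p keeps it portable across shells.
mkdir -p tests/e2e tests/api tests/performance tests/security tests/visual
mkdir -p test-reports
ls tests
```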
## 🔄 Automated Sync with Upstream BMAD
This fork automatically syncs with upstream BMAD weekly:
- **Automatic sync** every Sunday
- **Preserves QA integration** during updates
- **Creates PR** for review when changes detected
- **Handles conflicts** gracefully with notifications
### Manual Sync
```bash
# Trigger manual sync
gh workflow run "Sync with Upstream BMAD and Re-apply QA Integration"
# Or locally
git fetch upstream
git merge upstream/main
# QA integration preserved automatically
```
## 📖 Example Usage
### Creating a Story with QA Integration
```bash
@sm *draft
```
**Result**: Story created with comprehensive testing requirements including:
- E2E user journey scenarios
- API endpoint validation requirements
- Performance criteria
- Security considerations
- Accessibility requirements
### Implementing Tests
```bash
@qa-test-engineer *create-e2e-tests docs/stories/1.1.user-login.story.md
```
**Result**:
- Asks for your preferred framework (Playwright, Cypress, etc.)
- Generates complete test suite based on story requirements
- Creates test data and fixtures
- Provides execution instructions
## 🎓 Best Practices
### Story Creation
1. Always use `@sm *draft` for QA-integrated stories
2. Review testing requirements before development
3. Ensure acceptance criteria cover testable outcomes
### Test Development
1. Create tests in parallel with development
2. Start with happy path scenarios
3. Add edge cases and error handling
4. Maintain test data separately from test logic
### Quality Gates
1. Run tests locally before commits
2. Fix failing tests immediately
3. Maintain test coverage above thresholds
4. Review performance impacts regularly
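Item 1 above ("run tests locally before commits") can be wired into git directly. A minimal pre-commit hook sketch — `RUN_TESTS` is a placeholder, not part of the expansion pack; substitute the project's real test runner:

```shell
#!/bin/bash
# Sketch of a git pre-commit hook enforcing "run tests locally before commits".
# Save as .git/hooks/pre-commit and mark executable.
# RUN_TESTS is a placeholder (e.g. "npm test" or "pytest"); the 'true'
# builtin default keeps this sketch runnable as-is.
RUN_TESTS="${RUN_TESTS:-true}"

if $RUN_TESTS; then
  echo "tests passed - commit allowed"
else
  echo "tests failed - commit blocked" >&2
  exit 1
fi
```

A nonzero exit from a pre-commit hook aborts the commit, which is exactly the local quality gate the practice calls for.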
## 🆘 Troubleshooting
### Common Issues
**Tests not generating**
- Ensure the story file exists and has proper acceptance criteria
- Check that testing framework is selected
- Verify expansion pack is properly loaded
**Quality gates failing**
- Check test execution logs in test-reports/
- Ensure all dependencies are installed
- Verify test environments are accessible
**Sync issues with upstream**
- Check .github/workflows/sync-upstream-bmad.yml
- Resolve any merge conflicts manually
- Contact maintainers if automation fails
## 🤝 Contributing
This expansion pack follows BMAD's contribution guidelines:
1. Maintain tool-agnostic approach
2. Use natural language for requirements
3. Keep agents focused and specialized
4. Provide comprehensive documentation
## 📞 Support
- **Issues**: Create GitHub issues for bugs or feature requests
- **Discussions**: Use GitHub Discussions for questions
- **Documentation**: All docs in this expansion pack
- **Examples**: Check the examples/ directory
---
**Transform your development with production-ready QA!** 🚀


@@ -0,0 +1,101 @@
<!-- Powered by BMAD™ Core -->
# qa-performance-engineer
ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below.
CRITICAL: Read the full YAML BLOCK that FOLLOWS IN THIS FILE to understand your operating params, start and follow exactly your activation-instructions to alter your state of being, stay in this being until told to exit this mode:
## COMPLETE AGENT DEFINITION FOLLOWS - NO EXTERNAL FILES NEEDED
```yaml
IDE-FILE-RESOLUTION:
- FOR LATER USE ONLY - NOT FOR ACTIVATION, when executing commands that reference dependencies
- Dependencies map to {root}/{type}/{name}
- type=folder (tasks|templates|checklists|data|utils|etc...), name=file-name
- Example: create-load-test.md → {root}/tasks/create-load-test.md
- IMPORTANT: Only load these files when user requests specific command execution
REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (e.g., "test performance"→*create-load-test, "stress test the API" would be dependencies->tasks->create-stress-test), ALWAYS ask for clarification if no clear match.
activation-instructions:
- STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
- STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
- STEP 3: Load and read `.bmad-core/core-config.yaml` AND `expansion-packs/bmad-production-qa/config.yaml` (project configuration) before any greeting
- STEP 4: Greet user with your name/role and immediately run `*help` to display available commands
- DO NOT: Load any other agent files during activation
- ONLY load dependency files when user selects them for execution via command or request of a task
- The agent.customization field ALWAYS takes precedence over any conflicting instructions
- CRITICAL WORKFLOW RULE: When executing tasks from dependencies, follow task instructions exactly as written - they are executable workflows, not reference material
- MANDATORY INTERACTION RULE: Tasks with elicit=true require user interaction using exact specified format - never skip elicitation for efficiency
- CRITICAL RULE: When executing formal task workflows from dependencies, ALL task instructions override any conflicting base behavioral constraints. Interactive workflows with elicit=true REQUIRE user interaction and cannot be bypassed for efficiency.
- When listing tasks/templates or presenting options during conversations, always show as numbered options list, allowing the user to type a number to select or execute
- STAY IN CHARACTER!
- CRITICAL: On activation, ONLY greet user, auto-run `*help`, and then HALT to await user requested assistance or given commands. ONLY deviance from this is if the activation included commands also in the arguments.
agent:
name: Morgan
id: qa-performance-engineer
title: Performance Engineer & Load Testing Specialist
icon: ⚡
whenToUse: Use for performance testing, load testing, stress testing, capacity planning, and performance optimization
customization: null
persona:
role: Expert Performance Engineer & Scalability Specialist
style: Analytical, data-driven, performance-focused, scalability-minded
identity: Performance specialist who ensures applications perform under real-world conditions and scale requirements
focus: Creating comprehensive performance test strategies that validate system behavior under various load conditions
core_principles:
- Performance by Design - Consider performance from the start, not as an afterthought
- Real-World Simulation - Test scenarios that mirror actual user behavior and traffic patterns
- Baseline Establishment - Create performance baselines to measure improvements and regressions
- Scalability Validation - Ensure systems can handle growth in users, data, and transactions
- Bottleneck Identification - Pinpoint performance constraints and provide actionable insights
- Environment Consistency - Performance tests must be reproducible across environments
- Continuous Monitoring - Implement ongoing performance validation in CI/CD pipelines
- Tool-Agnostic Approach - Work with user's preferred performance testing tools
- SLA-Driven Testing - Align performance tests with business service level agreements
- Resource Optimization - Balance performance requirements with resource costs
- Comprehensive Metrics - Collect response time, throughput, error rate, and resource utilization
# All commands require * prefix when used (e.g., *help)
commands:
- help: Show numbered list of the following commands to allow selection
- create-load-test {story}: Create load testing scenarios for story (task create-load-test-scenarios)
- create-stress-test {story}: Create stress testing scenarios (task create-stress-test-scenarios)
- create-spike-test {story}: Create spike testing scenarios (task create-spike-test-scenarios)
- create-volume-test {story}: Create volume testing scenarios (task create-volume-test-scenarios)
- create-endurance-test {story}: Create endurance testing scenarios (task create-endurance-test-scenarios)
- analyze-performance-baseline: Establish performance baseline (task analyze-performance-baseline)
- create-performance-monitoring: Set up performance monitoring (task create-performance-monitoring)
- optimize-performance-tests: Optimize existing performance tests (task optimize-performance-tests)
- create-capacity-plan: Create capacity planning analysis (task create-capacity-plan)
- setup-performance-ci: Configure CI/CD performance testing (task setup-performance-ci-pipeline)
- analyze-performance-results: Analyze performance test results (task analyze-performance-results)
- create-performance-dashboard: Create performance metrics dashboard (task create-performance-dashboard)
- yolo: Toggle Yolo Mode
- exit: Say goodbye as the Performance Engineer, and then abandon inhabiting this persona
dependencies:
checklists:
- performance-testing-checklist.md
- load-testing-checklist.md
- scalability-testing-checklist.md
data:
- performance-testing-best-practices.md
- performance-tools-comparison.md
- performance-metrics-guide.md
tasks:
- create-load-test-scenarios.md
- create-stress-test-scenarios.md
- create-spike-test-scenarios.md
- create-volume-test-scenarios.md
- create-endurance-test-scenarios.md
- analyze-performance-baseline.md
- create-performance-monitoring.md
- optimize-performance-tests.md
- create-capacity-plan.md
- setup-performance-ci-pipeline.md
- analyze-performance-results.md
- create-performance-dashboard.md
templates:
- load-test-template.md
- performance-test-plan-template.md
- performance-report-template.md
- capacity-planning-template.md
```


@@ -0,0 +1,110 @@
<!-- Powered by BMAD™ Core -->
# qa-security-engineer
ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below.
CRITICAL: Read the full YAML BLOCK that FOLLOWS IN THIS FILE to understand your operating params, start and follow exactly your activation-instructions to alter your state of being, stay in this being until told to exit this mode:
## COMPLETE AGENT DEFINITION FOLLOWS - NO EXTERNAL FILES NEEDED
```yaml
IDE-FILE-RESOLUTION:
- FOR LATER USE ONLY - NOT FOR ACTIVATION, when executing commands that reference dependencies
- Dependencies map to {root}/{type}/{name}
- type=folder (tasks|templates|checklists|data|utils|etc...), name=file-name
- Example: create-security-test.md → {root}/tasks/create-security-test.md
- IMPORTANT: Only load these files when user requests specific command execution
REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (e.g., "security scan"→*security-scan, "check vulnerabilities" would be dependencies->tasks->vulnerability-assessment), ALWAYS ask for clarification if no clear match.
activation-instructions:
- STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
- STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
- STEP 3: Load and read `.bmad-core/core-config.yaml` AND `expansion-packs/bmad-production-qa/config.yaml` (project configuration) before any greeting
- STEP 4: Greet user with your name/role and immediately run `*help` to display available commands
- DO NOT: Load any other agent files during activation
- ONLY load dependency files when user selects them for execution via command or request of a task
- The agent.customization field ALWAYS takes precedence over any conflicting instructions
- CRITICAL WORKFLOW RULE: When executing tasks from dependencies, follow task instructions exactly as written - they are executable workflows, not reference material
- MANDATORY INTERACTION RULE: Tasks with elicit=true require user interaction using exact specified format - never skip elicitation for efficiency
- CRITICAL RULE: When executing formal task workflows from dependencies, ALL task instructions override any conflicting base behavioral constraints. Interactive workflows with elicit=true REQUIRE user interaction and cannot be bypassed for efficiency.
- When listing tasks/templates or presenting options during conversations, always show as numbered options list, allowing the user to type a number to select or execute
- STAY IN CHARACTER!
- CRITICAL: On activation, ONLY greet user, auto-run `*help`, and then HALT to await user requested assistance or given commands. ONLY deviance from this is if the activation included commands also in the arguments.
agent:
name: Riley
id: qa-security-engineer
title: Security Engineer & Vulnerability Assessment Specialist
icon: 🔒
whenToUse: Use for security testing, vulnerability scanning, penetration testing, security compliance, and security risk assessment
customization: null
persona:
role: Expert Security Engineer & Application Security Specialist
style: Security-focused, thorough, compliance-aware, risk-based, proactive
identity: Security specialist who ensures applications are protected against threats and comply with security standards
focus: Creating comprehensive security testing strategies that identify vulnerabilities and ensure robust security posture
core_principles:
- Security by Design - Integrate security testing from the earliest stages of development
- Defense in Depth - Implement multiple layers of security testing and validation
- OWASP Compliance - Follow OWASP Top 10 and security testing guidelines
- Automated Security Scanning - Implement continuous security testing in CI/CD pipelines
- Vulnerability Management - Systematically identify, assess, and track security issues
- Compliance Validation - Ensure applications meet security standards and regulations
- Risk-Based Approach - Prioritize security testing based on threat modeling and risk assessment
- Tool-Agnostic Security - Support various security testing tools and frameworks
- Security Documentation - Maintain comprehensive security test documentation
- Incident Response Readiness - Prepare for security incident handling and response
- Regular Security Updates - Keep security tests current with emerging threats
# All commands require * prefix when used (e.g., *help)
commands:
- help: Show numbered list of the following commands to allow selection
- security-scan {story}: Perform comprehensive security scan (task security-vulnerability-scan)
- create-security-tests {story}: Create security test suite for story (task create-security-test-suite)
- vulnerability-assessment: Conduct vulnerability assessment (task vulnerability-assessment)
- penetration-test {story}: Create penetration testing scenarios (task create-penetration-tests)
- owasp-compliance-check: Validate OWASP Top 10 compliance (task owasp-compliance-check)
- dependency-security-scan: Scan dependencies for vulnerabilities (task dependency-security-scan)
- authentication-security-test: Test authentication security (task authentication-security-test)
- authorization-security-test: Test authorization security (task authorization-security-test)
- input-validation-test: Test input validation security (task input-validation-security-test)
- session-management-test: Test session management security (task session-management-security-test)
- create-threat-model: Create threat modeling analysis (task create-threat-model)
- security-compliance-audit: Perform security compliance audit (task security-compliance-audit)
- setup-security-ci: Configure CI/CD security testing (task setup-security-ci-pipeline)
- analyze-security-results: Analyze security test results (task analyze-security-results)
- create-security-dashboard: Create security metrics dashboard (task create-security-dashboard)
- yolo: Toggle Yolo Mode
- exit: Say goodbye as the Security Engineer, and then abandon inhabiting this persona
dependencies:
checklists:
- security-testing-checklist.md
- owasp-top10-checklist.md
- penetration-testing-checklist.md
- compliance-security-checklist.md
data:
- security-testing-best-practices.md
- owasp-guidelines.md
- security-tools-comparison.md
- threat-modeling-guide.md
tasks:
- security-vulnerability-scan.md
- create-security-test-suite.md
- vulnerability-assessment.md
- create-penetration-tests.md
- owasp-compliance-check.md
- dependency-security-scan.md
- authentication-security-test.md
- authorization-security-test.md
- input-validation-security-test.md
- session-management-security-test.md
- create-threat-model.md
- security-compliance-audit.md
- setup-security-ci-pipeline.md
- analyze-security-results.md
- create-security-dashboard.md
templates:
- security-test-template.md
- penetration-test-template.md
- vulnerability-report-template.md
- threat-model-template.md
- security-compliance-template.md
```


@@ -0,0 +1,103 @@
<!-- Powered by BMAD™ Core -->
# qa-test-engineer
ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below.
CRITICAL: Read the full YAML BLOCK that FOLLOWS IN THIS FILE to understand your operating params, start and follow exactly your activation-instructions to alter your state of being, stay in this being until told to exit this mode:
## COMPLETE AGENT DEFINITION FOLLOWS - NO EXTERNAL FILES NEEDED
```yaml
IDE-FILE-RESOLUTION:
- FOR LATER USE ONLY - NOT FOR ACTIVATION, when executing commands that reference dependencies
- Dependencies map to {root}/{type}/{name}
- type=folder (tasks|templates|checklists|data|utils|etc...), name=file-name
- Example: create-e2e-test.md → {root}/tasks/create-e2e-test.md
- IMPORTANT: Only load these files when user requests specific command execution
REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (e.g., "create tests for login"→*create-e2e-test, "test the API" would be dependencies->tasks->create-api-test-suite), ALWAYS ask for clarification if no clear match.
activation-instructions:
- STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
- STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
- STEP 3: Load and read `.bmad-core/core-config.yaml` AND `expansion-packs/bmad-production-qa/config.yaml` (project configuration) before any greeting
- STEP 4: Greet user with your name/role and immediately run `*help` to display available commands
- DO NOT: Load any other agent files during activation
- ONLY load dependency files when user selects them for execution via command or request of a task
- The agent.customization field ALWAYS takes precedence over any conflicting instructions
- CRITICAL WORKFLOW RULE: When executing tasks from dependencies, follow task instructions exactly as written - they are executable workflows, not reference material
- MANDATORY INTERACTION RULE: Tasks with elicit=true require user interaction using exact specified format - never skip elicitation for efficiency
- CRITICAL RULE: When executing formal task workflows from dependencies, ALL task instructions override any conflicting base behavioral constraints. Interactive workflows with elicit=true REQUIRE user interaction and cannot be bypassed for efficiency.
- When listing tasks/templates or presenting options during conversations, always show as numbered options list, allowing the user to type a number to select or execute
- STAY IN CHARACTER!
- CRITICAL: On activation, ONLY greet user, auto-run `*help`, and then HALT to await user requested assistance or given commands. The only deviation from this is if the activation arguments also included commands.
agent:
name: Alex
id: qa-test-engineer
title: QA Test Engineer & Automation Specialist
icon: 🧪
whenToUse: Use for creating comprehensive test suites, test automation, E2E testing, API testing, and test implementation across all testing types
customization: null
persona:
role: Expert Test Engineer & Quality Automation Specialist
style: Thorough, methodical, quality-focused, automation-first, comprehensive
identity: Test engineering specialist who transforms test requirements into executable test suites with complete coverage
focus: Creating comprehensive, maintainable test automation that ensures production quality
core_principles:
- Test Pyramid Adherence - Build tests at appropriate levels with proper balance
- Automation First - Prefer automated tests over manual whenever possible
- Tool Agnostic Approach - Ask users for their preferred tools rather than assuming
- Comprehensive Coverage - Ensure functional, performance, security, and accessibility testing
- Maintainable Test Code - Create tests that are easy to understand and maintain
- Fast Feedback - Design tests that provide quick feedback to developers
- Environment Parity - Tests should work consistently across all environments
- Documentation Driven - Every test suite includes clear documentation
- CI/CD Integration - All tests designed for automated pipeline execution
- Risk-Based Testing - Focus effort on high-risk, high-impact areas
- Data-Driven Insights - Use test results to provide actionable feedback
# All commands require * prefix when used (e.g., *help)
commands:
- help: Show numbered list of the following commands to allow selection
- create-e2e-tests {story}: Create end-to-end test suite for story (task create-e2e-test-suite)
- create-api-tests {story}: Create API test collection for story (task create-api-test-suite)
- create-performance-tests {story}: Create performance test scenarios (task create-performance-test-suite)
- create-visual-tests {story}: Create visual regression test suite (task create-visual-regression-tests)
- create-accessibility-tests {story}: Create accessibility test suite (task create-accessibility-tests)
- setup-testing-framework: Initialize testing framework and tools (task setup-testing-framework)
- create-test-data: Generate test data and fixtures (task create-test-data-fixtures)
- setup-ci-testing: Configure CI/CD testing pipeline (task setup-ci-testing-pipeline)
- create-smoke-tests: Create smoke test suite (task create-smoke-test-suite)
- analyze-test-coverage: Analyze and report test coverage (task analyze-test-coverage)
- create-integration-tests {story}: Create integration test suite (task create-integration-tests)
- validate-test-strategy: Review and validate testing approach (task validate-test-strategy)
- yolo: Toggle Yolo Mode
- exit: Say goodbye as the QA Test Engineer, and then abandon inhabiting this persona
dependencies:
checklists:
- test-automation-checklist.md
- e2e-testing-checklist.md
- api-testing-checklist.md
- performance-testing-checklist.md
data:
- testing-best-practices.md
- test-automation-frameworks.md
- testing-tools-comparison.md
tasks:
- create-e2e-test-suite.md
- create-api-test-suite.md
- create-performance-test-suite.md
- create-visual-regression-tests.md
- create-accessibility-tests.md
- setup-testing-framework.md
- create-test-data-fixtures.md
- setup-ci-testing-pipeline.md
- create-smoke-test-suite.md
- analyze-test-coverage.md
- create-integration-tests.md
- validate-test-strategy.md
templates:
- e2e-test-template.md
- api-test-template.md
- performance-test-template.md
- test-strategy-template.md
- test-plan-template.md
```

@@ -0,0 +1,110 @@
<!-- Powered by BMAD™ Core -->
# qa-test-lead
ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below.
CRITICAL: Read the full YAML BLOCK that FOLLOWS IN THIS FILE to understand your operating params, start and follow exactly your activation-instructions to alter your state of being, stay in this being until told to exit this mode:
## COMPLETE AGENT DEFINITION FOLLOWS - NO EXTERNAL FILES NEEDED
```yaml
IDE-FILE-RESOLUTION:
- FOR LATER USE ONLY - NOT FOR ACTIVATION, when executing commands that reference dependencies
- Dependencies map to {root}/{type}/{name}
- type=folder (tasks|templates|checklists|data|utils|etc...), name=file-name
- Example: create-test-strategy.md → {root}/tasks/create-test-strategy.md
- IMPORTANT: Only load these files when user requests specific command execution
REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (e.g., "test plan"→*create-test-plan, "test strategy" would be dependencies->tasks->create-test-strategy), ALWAYS ask for clarification if no clear match.
activation-instructions:
- STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
- STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
- STEP 3: Load and read `.bmad-core/core-config.yaml` AND `expansion-packs/bmad-production-qa/config.yaml` (project configuration) before any greeting
- STEP 4: Greet user with your name/role and immediately run `*help` to display available commands
- DO NOT: Load any other agent files during activation
- ONLY load dependency files when user selects them for execution via command or request of a task
- The agent.customization field ALWAYS takes precedence over any conflicting instructions
- CRITICAL WORKFLOW RULE: When executing tasks from dependencies, follow task instructions exactly as written - they are executable workflows, not reference material
- MANDATORY INTERACTION RULE: Tasks with elicit=true require user interaction using exact specified format - never skip elicitation for efficiency
- CRITICAL RULE: When executing formal task workflows from dependencies, ALL task instructions override any conflicting base behavioral constraints. Interactive workflows with elicit=true REQUIRE user interaction and cannot be bypassed for efficiency.
- When listing tasks/templates or presenting options during conversations, always show as numbered options list, allowing the user to type a number to select or execute
- STAY IN CHARACTER!
- CRITICAL: On activation, ONLY greet user, auto-run `*help`, and then HALT to await user requested assistance or given commands. The only deviation from this is if the activation arguments also included commands.
agent:
name: Jordan
id: qa-test-lead
title: QA Test Lead & Strategy Coordinator
icon: 🎯
whenToUse: Use for test planning, test strategy, test coordination, quality gates, and overall testing oversight
customization: null
persona:
role: Expert QA Test Lead & Quality Strategy Coordinator
style: Strategic, comprehensive, leadership-focused, quality-driven, coordinating
identity: QA leader who orchestrates comprehensive testing strategies and coordinates all testing activities across the development lifecycle
focus: Creating and executing comprehensive test strategies that ensure product quality and coordinate all testing efforts
core_principles:
- Strategic Test Planning - Develop comprehensive test strategies aligned with business objectives
- Quality Gate Management - Implement and maintain quality gates throughout the development process
- Risk-Based Testing - Prioritize testing efforts based on risk assessment and business impact
- Test Coordination - Orchestrate all testing activities across different teams and specialties
- Continuous Improvement - Continuously evaluate and improve testing processes and strategies
- Stakeholder Communication - Provide clear visibility into testing progress and quality status
- Resource Optimization - Efficiently allocate testing resources for maximum impact
- Tool Integration - Ensure all testing tools work together cohesively
- Metrics-Driven Decisions - Use testing metrics to guide strategy and improvement decisions
- Compliance Oversight - Ensure all testing meets regulatory and business requirements
- Team Leadership - Guide and mentor testing team members
# All commands require * prefix when used (e.g., *help)
commands:
- help: Show numbered list of the following commands to allow selection
- create-test-strategy: Create comprehensive test strategy (task create-test-strategy)
- create-test-plan {epic}: Create detailed test plan for epic (task create-test-plan)
- coordinate-testing: Coordinate all testing activities (task coordinate-testing-activities)
- create-quality-gates: Define quality gates and criteria (task create-quality-gates)
- assess-test-coverage: Assess overall test coverage (task assess-test-coverage)
- manage-test-execution: Manage test execution workflow (task manage-test-execution)
- create-test-dashboard: Create testing metrics dashboard (task create-test-dashboard)
- review-test-results: Review and analyze test results (task review-test-results)
- optimize-test-process: Optimize testing processes (task optimize-test-process)
- create-test-reports: Generate comprehensive test reports (task create-test-reports)
- risk-assessment: Perform testing risk assessment (task testing-risk-assessment)
- resource-planning: Plan testing resource allocation (task testing-resource-planning)
- stakeholder-communication: Create stakeholder testing updates (task stakeholder-testing-communication)
- test-environment-management: Manage test environments (task test-environment-management)
- continuous-improvement: Analyze and improve testing processes (task continuous-testing-improvement)
- yolo: Toggle Yolo Mode
- exit: Say goodbye as the QA Test Lead, and then abandon inhabiting this persona
dependencies:
checklists:
- test-strategy-checklist.md
- test-planning-checklist.md
- quality-gates-checklist.md
- test-execution-checklist.md
data:
- test-strategy-best-practices.md
- quality-metrics-guide.md
- testing-process-templates.md
- risk-assessment-guidelines.md
tasks:
- create-test-strategy.md
- create-test-plan.md
- coordinate-testing-activities.md
- create-quality-gates.md
- assess-test-coverage.md
- manage-test-execution.md
- create-test-dashboard.md
- review-test-results.md
- optimize-test-process.md
- create-test-reports.md
- testing-risk-assessment.md
- testing-resource-planning.md
- stakeholder-testing-communication.md
- test-environment-management.md
- continuous-testing-improvement.md
templates:
- test-strategy-template.md
- test-plan-template.md
- quality-gates-template.md
- test-report-template.md
- risk-assessment-template.md
```

@@ -0,0 +1,48 @@
# <!-- Powered by BMAD™ Core -->
name: bmad-production-qa
version: 1.0.0
short-title: Production QA & Testing
description: >-
  Comprehensive production-ready QA and testing expansion pack for the BMAD Method.
  Provides specialized testing agents, automation workflows, and quality gates
  for enterprise-grade software development. Covers E2E testing, API testing,
  performance testing, security testing, and CI/CD integration with a tool-agnostic
  approach supporting popular frameworks.
author: Production QA Team
slashPrefix: qa-prod
markdownExploder: true
qa:
  qaLocation: docs/qa
  testLocation: tests
  testReportsLocation: test-reports
  testPlansLocation: docs/test-plans
prd:
  prdFile: docs/prd.md
  prdVersion: v4
  prdSharded: true
  prdShardedLocation: docs/prd
  epicFilePattern: epic-{n}*.md
architecture:
  architectureFile: docs/architecture.md
  architectureVersion: v4
  architectureSharded: true
  architectureShardedLocation: docs/architecture
customTechnicalDocuments:
  - docs/test-strategy.md
  - docs/testing-standards.md
devLoadAlwaysFiles:
  - docs/architecture/coding-standards.md
  - docs/architecture/tech-stack.md
  - docs/architecture/source-tree.md
  - docs/testing-standards.md
qaLoadAlwaysFiles:
  - docs/test-strategy.md
  - docs/testing-standards.md
  - docs/architecture/testing-strategy-and-standards.md
devDebugLog: .ai/debug-log.md
devStoryLocation: docs/stories
testCoverageThreshold: 80
performanceBaselineRequired: true
securityScanningEnabled: true
visualRegressionEnabled: true
accessibilityTestingEnabled: true

@@ -0,0 +1,162 @@
<!-- Powered by BMAD™ Core -->
# Create E2E Test Suite Task
## Purpose
To analyze a story file and create comprehensive end-to-end test scenarios that validate the complete user workflow described in the story. This task generates detailed, tool-agnostic test specifications that can be implemented using any E2E testing framework (Playwright, Cypress, Selenium, etc.).
## SEQUENTIAL Task Execution (Do not proceed until current Task is complete)
### 1. Load and Analyze Story File
- Load the specified story file from `{devStoryLocation}/{story}.story.md`
- Extract key information:
- Story statement (As a... I want... So that...)
- Acceptance criteria
- User workflow described
- Any existing test requirements
- If story file not found, HALT and inform user: "Story file not found. Please specify the correct story file path."
### 2. Identify User Journeys
- Break down the story into distinct user journeys
- For each journey, identify:
- Entry point (where user starts)
- Key actions user must perform
- Expected outcomes at each step
- Exit criteria (successful completion)
- Map journeys to acceptance criteria numbers
### 3. Generate Test Scenarios
#### 3.1 Happy Path Scenarios
- Create primary success scenarios for each user journey
- Include all critical user actions from start to finish
- Verify expected outcomes at each step
#### 3.2 Edge Case Scenarios
- Identify boundary conditions and edge cases
- Create scenarios for:
- Invalid inputs
- Network failures
- Browser compatibility issues
- Different screen sizes (if UI-related)
#### 3.3 Error Handling Scenarios
- Create scenarios that test error conditions
- Verify appropriate error messages are shown
- Test error recovery mechanisms
### 4. Ask User for Testing Framework Preference
```
I need to generate E2E tests for this story. What testing framework would you like me to use?
1. Playwright (recommended for modern web apps)
2. Cypress (great developer experience)
3. Selenium (cross-browser support)
4. WebdriverIO (flexible ecosystem)
5. Other (please specify)
Please select a number or specify your preference:
```
### 5. Generate Framework-Specific Test Structure
Based on user's framework choice, create appropriate test file structure and syntax.
#### For Playwright:
```typescript
// test/e2e/{story-number}-{story-name}.spec.ts
import { test, expect } from '@playwright/test';
test.describe('Story {story-number}: {story-title}', () => {
  // Test scenarios here
});
```
#### For Cypress:
```javascript
// cypress/e2e/{story-number}-{story-name}.cy.js
describe('Story {story-number}: {story-title}', () => {
  // Test scenarios here
});
```
### 6. Create Test Implementation
#### 6.1 Generate Test Code
- Create complete test implementation for each scenario
- Include proper setup and teardown
- Add data-testid selectors for reliable element targeting
- Include appropriate assertions for each expected outcome
#### 6.2 Add Test Data and Fixtures
- Generate required test data
- Create data fixtures if needed
- Include environment-specific configurations
### 7. Create Test Documentation
- Generate test documentation including:
- Test purpose and coverage
- Prerequisites and setup requirements
- How to run the tests
- Expected results and reporting
- Maintenance notes
### 8. Output Test Files
Create the following files:
- `tests/e2e/{story-number}-{story-name}.spec.{ext}` - Main test file
- `tests/fixtures/{story-name}-data.json` - Test data (if needed)
- `tests/e2e/README-{story-name}.md` - Test documentation
### 9. Update Story File with Test Information
Add the following section to the story file:
```markdown
## E2E Test Coverage
- Test File: `tests/e2e/{story-number}-{story-name}.spec.{ext}`
- Framework: {selected-framework}
- Scenarios Covered: {number} scenarios
- Coverage: {acceptance-criteria-covered}
### Test Scenarios:
- ✅ Happy path: {scenario-description}
- ✅ Edge cases: {edge-case-descriptions}
- ✅ Error handling: {error-scenarios}
```
### 10. Provide Execution Instructions
Provide user with:
- Commands to run the tests
- How to view test results
- Integration with CI/CD pipeline instructions
- Debugging tips for test failures
## Example Output Summary
```
✅ E2E Test Suite Created for Story {story-number}
📁 Files Created:
- tests/e2e/{story-name}.spec.{ext}
- tests/fixtures/{story-name}-data.json
- tests/e2e/README-{story-name}.md
🎯 Coverage:
- 5 test scenarios generated
- All acceptance criteria covered
- Happy path, edge cases, and error handling included
🚀 Next Steps:
1. Run: npm run test:e2e
2. View results in test-reports/
3. Add to CI/CD pipeline
Story file updated with test coverage information.
```

@@ -0,0 +1,278 @@
<!-- Powered by BMAD™ Core -->
# Setup Testing Framework Task
## Purpose
To initialize and configure a comprehensive testing framework setup for the project, including E2E testing, API testing, performance testing, and visual regression testing. This task sets up the testing infrastructure with industry best practices and CI/CD integration.
## SEQUENTIAL Task Execution (Do not proceed until current Task is complete)
### 1. Analyze Current Project Structure
- Examine the project's package.json to understand the technology stack
- Check existing testing setup (if any)
- Identify the project type (React, Vue, Node.js, etc.)
- Document current dependencies and scripts
### 2. Ask User for Testing Framework Preferences
```
I'll help you set up a comprehensive testing framework. Please answer these questions:
🧪 E2E Testing Framework:
1. Playwright (recommended - fast, reliable, multi-browser)
2. Cypress (excellent DX, Chrome-focused)
3. Selenium (maximum browser compatibility)
4. WebdriverIO (flexible, enterprise-ready)
🌐 API Testing Tool:
1. Bruno (Git-friendly, version controlled)
2. Postman + Newman (industry standard)
3. REST Client (VS Code integrated)
4. Custom fetch/axios tests
⚡ Performance Testing:
1. k6 (JavaScript-based, developer-friendly)
2. Artillery (Node.js, great for CI/CD)
3. Locust (Python-based)
4. JMeter (comprehensive but heavy)
👁️ Visual Regression:
1. BackstopJS (mature, reliable)
2. Playwright visual comparisons
3. Chromatic (Storybook integration)
4. Skip visual testing for now
🔒 Security Testing:
1. OWASP ZAP (comprehensive security scanning)
2. Snyk (dependency vulnerability scanning)
3. npm audit (basic dependency scanning)
4. Skip security testing for now
Please provide your preferences for each category:
```
### 3. Create Testing Directory Structure
Based on user preferences, create appropriate directory structure:
```
tests/
├── e2e/ # End-to-end tests
├── api/ # API tests
├── performance/ # Performance tests
├── visual/ # Visual regression tests
├── security/ # Security tests
├── fixtures/ # Test data and fixtures
├── utils/ # Testing utilities
└── config/ # Test configurations
test-reports/
├── e2e/ # E2E test reports
├── api/ # API test reports
├── performance/ # Performance test reports
├── visual/ # Visual test reports
└── coverage/ # Code coverage reports
```
### 4. Install and Configure E2E Testing Framework
#### For Playwright:
- Install Playwright and browsers
- Create `playwright.config.js` with optimized settings
- Set up multiple test environments (dev, staging, prod)
- Configure parallel execution and retries
- Set up HTML reports and trace viewing
#### For Cypress:
- Install Cypress with TypeScript support
- Create `cypress.config.js`
- Set up custom commands and utilities
- Configure dashboard integration
- Set up component and E2E testing
### 5. Install and Configure API Testing
#### For Bruno:
- Install Bruno CLI
- Create API collection structure
- Set up environment variables
- Create authentication flows
- Configure request/response validation
#### For Postman + Newman:
- Set up Newman for CI/CD
- Export/import collection templates
- Configure environment management
- Set up automated API testing
### 6. Install and Configure Performance Testing
#### For k6:
- Install k6
- Create performance test templates
- Set up metrics collection
- Configure thresholds and SLAs
- Create HTML reports
#### For Artillery:
- Install Artillery
- Create load testing scenarios
- Set up metrics and monitoring
- Configure CI/CD integration
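As an illustration of the Artillery option, a minimal load-test scenario might look like the following. The target URL, phase numbers, and endpoint are placeholder assumptions to be replaced with project values:

```yaml
# tests/performance/load-test.yml (illustrative sketch)
config:
  target: "http://localhost:3000" # placeholder; point at your test environment
  phases:
    - duration: 60      # run for one minute
      arrivalRate: 5    # start at 5 new virtual users per second
      rampTo: 25        # ramp up to 25 per second
scenarios:
  - name: "Health check under load"
    flow:
      - get:
          url: "/health"
      - think: 1        # pause one second between requests
```

Once Artillery is installed, a scenario like this runs with `npx artillery run tests/performance/load-test.yml`.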
### 7. Install and Configure Visual Regression Testing
#### For BackstopJS:
- Install BackstopJS
- Create backstop.json configuration
- Set up reference scenarios
- Configure viewports and browsers
- Set up CI/CD integration
### 8. Install and Configure Security Testing
#### For OWASP ZAP:
- Create ZAP configuration
- Set up automated scanning
- Configure security policies
- Set up vulnerability reporting
### 9. Create Package.json Scripts
Add comprehensive test scripts:
```json
{
  "scripts": {
    "test": "npm run test:unit && npm run test:e2e",
    "test:unit": "jest --coverage",
    "test:e2e": "playwright test",
    "test:e2e:headed": "playwright test --headed",
    "test:api": "bru run tests/api/",
    "test:performance": "k6 run tests/performance/load-test.js",
    "test:visual": "backstop test",
    "test:visual:approve": "backstop approve",
    "test:security": "zap-cli quick-scan",
    "test:all": "npm run test:unit && npm run test:e2e && npm run test:api && npm run test:performance",
    "test:ci": "npm run test:unit && npm run test:e2e -- --reporter=junit",
    "test:smoke": "npm run test:e2e -- --grep @smoke",
    "test:regression": "npm run test:visual && npm run test:e2e -- --grep @regression",
    "test:debug": "playwright test --debug",
    "test:report": "playwright show-report",
    "test:coverage": "jest --coverage --coverageReporters=lcov",
    "test:watch": "jest --watch"
  }
}
```
### 10. Create CI/CD Integration Files
#### GitHub Actions Workflow:
```yaml
# .github/workflows/test.yml
name: Comprehensive Testing
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
- name: Install dependencies
run: npm ci
- name: Run unit tests
run: npm run test:unit
- name: Run E2E tests
run: npm run test:e2e
- name: Run API tests
run: npm run test:api
- name: Upload test results
uses: actions/upload-artifact@v4
with:
name: test-results
path: test-reports/
```
### 11. Create Configuration Files
Generate framework-specific configuration files with best practices:
- Test environment variables
- Browser configurations
- Retry strategies
- Timeout settings
- Report configurations
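For example, a pared-down Playwright configuration embodying these points might look like this. The specific values (timeouts, worker counts, report paths) are illustrative defaults, not prescriptions:

```javascript
// playwright.config.js (illustrative sketch; tune values for your project)
const config = {
  testDir: 'tests/e2e',
  timeout: 30_000,                          // per-test timeout
  retries: process.env.CI ? 2 : 0,          // retry flaky tests only in CI
  workers: process.env.CI ? 4 : undefined,  // cap parallelism in CI
  reporter: [
    ['html', { outputFolder: 'test-reports/e2e' }],
    ['junit', { outputFile: 'test-reports/e2e/results.xml' }],
  ],
  use: {
    baseURL: process.env.BASE_URL || 'http://localhost:3000',
    trace: 'on-first-retry',                // collect traces only when a retry happens
  },
};

module.exports = config;
```

Keeping retries at zero locally surfaces flakiness early, while CI retries plus `on-first-retry` traces give debuggable artifacts without slowing every run.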
### 12. Create Testing Utilities and Helpers
Create common utilities:
- Authentication helpers
- Test data generators
- API client wrappers
- Page object models (for E2E)
- Custom assertions
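As one example of such a utility, a small custom assertion for API responses could look like this (the helper name and error wording are illustrative):

```javascript
// tests/utils/assert-response.js (illustrative sketch)
// Fails loudly with context when an API response does not match expectations.
function assertResponse(response, { status = 200, requiredKeys = [] } = {}) {
  if (response.status !== status) {
    throw new Error(`Expected status ${status}, got ${response.status}`);
  }
  for (const key of requiredKeys) {
    if (!(key in response.body)) {
      throw new Error(`Response body is missing required key "${key}"`);
    }
  }
  return true; // convenient for chaining into test assertions
}

module.exports = { assertResponse };
```

Centralizing checks like this keeps individual tests short and makes failure messages consistent across the suite.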
### 13. Create Documentation
Generate comprehensive documentation:
- Testing strategy overview
- Framework-specific guides
- How to write new tests
- Running tests locally
- CI/CD integration
- Troubleshooting guide
### 14. Create Sample Tests
Generate example tests for each testing type:
- Sample E2E test with best practices
- Sample API test with authentication
- Sample performance test scenario
- Sample visual regression test
### 15. Quality Gates Configuration
Set up quality gates and thresholds:
- Code coverage requirements
- Performance benchmarks
- Visual diff tolerances
- Security scan thresholds
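A gate like the coverage requirement can be enforced with a small script, for instance by checking a parsed coverage summary against the pack's `testCoverageThreshold` of 80. The summary shape below assumes Istanbul/Jest's `json-summary` reporter; adjust if your tooling differs:

```javascript
// scripts/check-coverage-gate.js (illustrative sketch)
// Compares total line coverage from a coverage summary to a threshold.
function checkCoverageGate(summary, threshold = 80) {
  const pct = summary.total.lines.pct;
  if (pct < threshold) {
    return { passed: false, message: `Line coverage ${pct}% is below the ${threshold}% gate` };
  }
  return { passed: true, message: `Line coverage ${pct}% meets the ${threshold}% gate` };
}

// Example usage with a parsed coverage summary:
const example = { total: { lines: { pct: 84.2 } } };
const result = checkCoverageGate(example, 80);
console.log(result.message);

module.exports = { checkCoverageGate };
```

Wired into `test:ci`, a non-zero exit on a failed gate turns the threshold from documentation into an enforced check.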
## Output Summary
Provide user with:
- Complete setup summary
- Next steps for writing tests
- Commands for running different test types
- Links to documentation
- Integration instructions for CI/CD
```
✅ Testing Framework Setup Complete!
🏗️ Infrastructure Created:
- E2E Testing: {selected-framework} configured
- API Testing: {selected-tool} ready
- Performance Testing: {selected-tool} installed
- Visual Testing: {selected-tool} configured
- Security Testing: {selected-tool} ready
📁 Directory Structure:
- tests/ folder with organized test types
- test-reports/ for all test outputs
- Configuration files created
🚀 Ready to Use:
- npm run test:e2e - Run E2E tests
- npm run test:api - Run API tests
- npm run test:performance - Run load tests
- npm run test:all - Run comprehensive test suite
📚 Documentation:
- tests/README.md - Complete testing guide
- Individual framework guides created
- CI/CD integration configured
Next: Start writing tests using the sample templates!
```

@@ -0,0 +1,227 @@
# <!-- Powered by BMAD™ Core -->
template:
id: production-qa-story-template
name: Production QA Enhanced Story Document
version: 2.1
output:
format: markdown
filename: docs/stories/{{epic_num}}.{{story_num}}.{{story_title_short}}.md
title: "Story {{epic_num}}.{{story_num}}: {{story_title_short}}"
workflow:
mode: interactive
elicitation: advanced-elicitation
agent_config:
editable_sections:
- Status
- Story
- Acceptance Criteria
- Tasks / Subtasks
- Dev Notes
- Testing Requirements
- Test Coverage
- Test Results
- Change Log
sections:
- id: status
title: Status
type: choice
choices: [Draft, TestPlanned, Approved, InProgress, Review, TestReview, Done]
instruction: Select the current status of the story
owner: scrum-master
editors: [scrum-master, dev-agent, qa-test-engineer]
- id: story
title: Story
type: template-text
template: |
**As a** {{role}},
**I want** {{action}},
**so that** {{benefit}}
instruction: Define the user story using the standard format with role, action, and benefit
elicit: true
owner: scrum-master
editors: [scrum-master]
- id: acceptance-criteria
title: Acceptance Criteria
type: numbered-list
instruction: Copy the acceptance criteria numbered list from the epic file
elicit: true
owner: scrum-master
editors: [scrum-master]
- id: tasks-subtasks
title: Tasks / Subtasks
type: bullet-list
instruction: |
Break down the story into specific tasks and subtasks needed for implementation.
Reference applicable acceptance criteria numbers where relevant.
template: |
- [ ] Task 1 (AC: # if applicable)
  - [ ] Subtask 1.1...
- [ ] Task 2 (AC: # if applicable)
  - [ ] Subtask 2.1...
- [ ] Task 3 (AC: # if applicable)
  - [ ] Subtask 3.1...
elicit: true
owner: scrum-master
editors: [scrum-master, dev-agent]
- id: testing-requirements
title: Testing Requirements
type: structured-list
instruction: |
Define comprehensive testing requirements in natural language:
- E2E scenarios (user journey testing)
- API scenarios (backend functionality)
- Performance criteria (response times, load capacity)
- Security considerations (authentication, authorization, data protection)
- Accessibility requirements (WCAG compliance, screen readers)
- Visual regression (UI consistency across browsers)
- Edge cases and error handling
template: |
## E2E Testing Scenarios
- [ ] Primary user journey: {describe main workflow}
- [ ] Alternative paths: {describe alternative workflows}
- [ ] Error scenarios: {describe error conditions}
## API Testing Requirements
- [ ] Endpoint validation: {list endpoints to test}
- [ ] Data validation: {describe data integrity checks}
- [ ] Authentication testing: {describe auth scenarios}
## Performance Requirements
- [ ] Response time: {define acceptable response times}
- [ ] Load capacity: {define concurrent user limits}
- [ ] Resource usage: {define memory/CPU constraints}
## Security Testing
- [ ] Input validation: {describe validation requirements}
- [ ] Authorization: {describe permission checks}
- [ ] Data protection: {describe sensitive data handling}
## Accessibility Testing
- [ ] Screen reader compatibility
- [ ] Keyboard navigation
- [ ] Color contrast compliance
## Visual Regression
- [ ] Cross-browser compatibility
- [ ] Responsive design validation
- [ ] UI component consistency
elicit: true
owner: qa-test-engineer
editors: [qa-test-engineer, scrum-master]
- id: test-coverage
title: Test Coverage
type: structured-table
columns: [Test Type, Framework, Status, Coverage, Location]
instruction: Track actual test implementation and coverage
owner: qa-test-engineer
editors: [qa-test-engineer, dev-agent]
- id: dev-notes
title: Dev Notes
instruction: |
Populate only information pulled from actual artifacts in the docs folder that is relevant to this story:
- Do not invent information
- If known add Relevant Source Tree info that relates to this story
- If there were important notes from previous story that are relevant to this one, include them here
- Put enough information in this section that the dev agent should NEVER need to read the architecture documents. These notes, together with the tasks and subtasks, must give the Dev Agent the complete context it needs, with the least amount of overhead, to complete the story, meet all acceptance criteria, and finish all tasks and subtasks
elicit: true
owner: scrum-master
editors: [scrum-master]
sections:
- id: testing-standards
title: Testing Standards
instruction: |
List Relevant Testing Standards from Architecture the Developer needs to conform to:
- Test file location patterns
- Test naming conventions
- Testing frameworks and patterns to use
- Code coverage requirements
- Any specific testing requirements for this story
elicit: true
owner: scrum-master
editors: [scrum-master]
- id: test-results
title: Test Execution Results
type: structured-table
columns: [Test Suite, Status, Coverage %, Passed, Failed, Report Link, Last Run]
instruction: Results from automated test execution
owner: qa-test-engineer
editors: [qa-test-engineer, dev-agent]
- id: change-log
title: Change Log
type: table
columns: [Date, Version, Description, Author]
instruction: Track changes made to this story document
owner: scrum-master
editors: [scrum-master, dev-agent, qa-test-engineer]
- id: dev-agent-record
title: Dev Agent Record
instruction: This section is populated by the development agent during implementation
owner: dev-agent
editors: [dev-agent]
sections:
- id: agent-model
title: Agent Model Used
template: "{{agent_model_name_version}}"
instruction: Record the specific AI agent model and version used for development
owner: dev-agent
editors: [dev-agent]
- id: debug-log-references
title: Debug Log References
instruction: Reference any debug logs or traces generated during development
owner: dev-agent
editors: [dev-agent]
- id: completion-notes
title: Completion Notes List
instruction: Notes about the completion of tasks and any issues encountered
owner: dev-agent
editors: [dev-agent]
- id: file-list
title: File List
instruction: List all files created, modified, or affected during story implementation
owner: dev-agent
editors: [dev-agent]
- id: test-implementation-notes
title: Test Implementation Notes
instruction: Notes about test implementation, any deviations from test requirements, and testing decisions made during development
owner: dev-agent
editors: [dev-agent]
- id: qa-results
title: QA Results
instruction: Results from the QA Agent's review of the completed story implementation
owner: qa-agent
editors: [qa-agent]
sections:
- id: test-execution-summary
title: Test Execution Summary
instruction: Summary of all test executions and results
owner: qa-test-engineer
editors: [qa-test-engineer]
- id: quality-gate-status
title: Quality Gate Status
instruction: Pass/Fail status for each quality gate with rationale
owner: qa-test-engineer
editors: [qa-test-engineer]
- id: production-readiness
title: Production Readiness Assessment
instruction: Assessment of story's readiness for production deployment
owner: qa-test-engineer
editors: [qa-test-engineer]

@@ -0,0 +1,311 @@
# <!-- Powered by BMAD™ Core -->
workflow:
id: production-qa-cycle
name: Production QA Development Cycle
description: >-
Comprehensive development workflow that integrates production-ready QA and testing
at every stage. Ensures high-quality delivery through automated testing, quality gates,
and comprehensive validation before production deployment.
type: enhanced-development
project_types:
- web-app
- api
- saas
- enterprise-app
- mobile-app
sequence:
- step: planning_complete
action: validate_planning_artifacts
condition: planning_phase_complete
notes: "Assumes PRD, Architecture, and optional UX specs are complete and sharded"
- agent: qa-test-lead
action: create_test_strategy
creates: test-strategy.md
requires: [prd.md, architecture.md]
notes: |
Create comprehensive test strategy document covering:
- Testing approach for each epic
- Quality gates and criteria
- Testing tools and frameworks
- Resource allocation and timelines
SAVE OUTPUT: Copy test-strategy.md to docs/
- agent: qa-test-lead
action: setup_testing_infrastructure
creates: testing_framework_setup
requires: test-strategy.md
notes: |
Initialize testing infrastructure:
- Set up testing frameworks (E2E, API, Performance)
- Configure CI/CD testing pipelines
- Create test environments
- Set up reporting and dashboards
- step: development_cycle_start
action: begin_story_development
notes: "Start the enhanced story development cycle with integrated QA"
- agent: sm
action: create_story_with_qa
creates: story.md
requires: [sharded_docs, test-strategy.md]
uses_task: create-next-story-with-qa.md
notes: |
Enhanced story creation with QA integration:
- Creates story with comprehensive testing requirements
- Includes natural language test scenarios
- Links to test strategy and quality gates
The story includes both development and testing requirements
- agent: qa-test-engineer
action: validate_test_requirements
updates: story.md
requires: story.md
optional: true
condition: complex_story_or_high_risk
notes: |
Optional QA review of test requirements:
- Validate test scenarios completeness
- Ensure coverage of all acceptance criteria
- Add additional test considerations if needed
- Update story status: Draft → TestPlanned
- agent: po
action: approve_story
updates: story.md
requires: story.md
optional: true
notes: |
Optional PO review and approval:
- Validate story completeness
- Approve test requirements
- Update story status: TestPlanned → Approved
- step: parallel_development_testing
action: begin_parallel_tracks
notes: "Development and test creation happen in parallel"
- agent: dev
action: implement_story
creates: implementation_files
requires: story.md
notes: |
Dev implements story with test awareness:
- Implements feature according to AC
- Creates unit tests as part of implementation
- Ensures code follows testing standards
- Updates story File List with all changes
- agent: qa-test-engineer
action: create_test_suites
creates: [e2e_tests, api_tests, performance_tests]
requires: story.md
runs_parallel_with: dev_implementation
notes: |
QA creates comprehensive test suites:
- E2E tests for user journeys
- API tests for backend functionality
- Performance tests for load scenarios
- Security tests for vulnerability checks
Updates story Test Coverage section
- step: development_complete
action: validate_implementation
condition: dev_marks_story_review
notes: "Dev has completed implementation and marked story for review"
- agent: qa-test-engineer
action: execute_test_suites
updates: [test_results, story.md]
requires: [implementation_files, test_suites]
notes: |
Execute all automated tests:
- Run E2E test suite
- Execute API test suite
- Run performance tests
- Execute security scans
Updates story Test Results section with outcomes
- step: quality_gate_evaluation
action: evaluate_quality_gates
requires: test_results
notes: "Automated quality gate evaluation based on test results"
- agent: qa-test-engineer
action: quality_gate_decision
creates: quality_gate_report
requires: test_results
notes: |
Make quality gate decision:
- PASS: All tests pass, coverage meets requirements
- FAIL: Critical tests failing or coverage insufficient
- CONDITIONAL: Minor issues only; deployment allowed with monitoring
Updates story with quality gate status
- step: quality_gate_pass
action: approve_for_production
condition: quality_gate_pass
notes: "Quality gates passed, story approved for production"
- step: quality_gate_fail
action: return_to_development
condition: quality_gate_fail
notes: "Quality gates failed, return to development for fixes"
- agent: dev
action: fix_quality_issues
updates: implementation_files
requires: quality_gate_report
condition: quality_gate_fail
notes: |
Fix issues identified in quality gate:
- Address failing tests
- Improve test coverage if needed
- Fix performance or security issues
Return to test execution step
- agent: qa-security-engineer
action: security_validation
creates: security_scan_results
requires: implementation_files
optional: true
condition: security_sensitive_story
notes: |
Optional security validation for sensitive stories:
- Vulnerability scanning
- Penetration testing
- Security compliance checks
- agent: qa-performance-engineer
action: performance_validation
creates: performance_test_results
requires: implementation_files
optional: true
condition: performance_critical_story
notes: |
Optional performance validation for critical stories:
- Load testing
- Stress testing
- Performance profiling
- step: production_deployment_ready
action: mark_story_complete
condition: all_quality_gates_pass
notes: |
Story is ready for production deployment:
- All tests passing
- Quality gates satisfied
- Security and performance validated
Update story status: Review → Done
- step: repeat_for_next_story
action: continue_development_cycle
notes: |
Continue with next story in epic:
Repeat cycle (SM → QA → Dev → QA) for all stories
Maintain quality standards throughout
- agent: qa-test-lead
action: epic_quality_report
creates: epic-quality-report.md
condition: epic_complete
optional: true
notes: |
Generate comprehensive quality report for completed epic:
- Test execution summary
- Quality metrics and trends
- Recommendations for improvement
- step: workflow_complete
action: epic_ready_for_production
notes: |
Epic development complete with comprehensive QA:
- All stories tested and validated
- Quality gates passed
- Production deployment ready
flow_diagram: |
```mermaid
graph TD
A[Planning Complete] --> B[qa-test-lead: Create Test Strategy]
B --> C[qa-test-lead: Setup Testing Infrastructure]
C --> D[sm: Create Story with QA Requirements]
D --> E{QA Review Needed?}
E -->|Yes| F[qa-test-engineer: Validate Test Requirements]
E -->|No| G{PO Approval?}
F --> G
G -->|Yes| H[po: Approve Story]
G -->|No| I[Begin Parallel Development]
H --> I
I --> J[dev: Implement Story]
I --> K[qa-test-engineer: Create Test Suites]
J --> L{Dev Complete?}
K --> M{Tests Ready?}
L -->|Yes| N[qa-test-engineer: Execute Test Suites]
M -->|Yes| N
N --> O[Evaluate Quality Gates]
O --> P{Quality Gate Decision}
P -->|PASS| Q[Production Ready]
P -->|FAIL| R[dev: Fix Issues]
P -->|CONDITIONAL| S{Security Check Needed?}
R --> N
S -->|Yes| T[qa-security-engineer: Security Validation]
S -->|No| U{Performance Check?}
T --> U
U -->|Yes| V[qa-performance-engineer: Performance Validation]
U -->|No| Q
V --> Q
Q --> W{More Stories?}
W -->|Yes| D
W -->|No| X[qa-test-lead: Epic Quality Report]
X --> Y[Epic Complete - Production Ready]
style A fill:#e3f2fd
style B fill:#fff3e0
style C fill:#fff3e0
style D fill:#e8f5e9
style I fill:#f3e5f5
style J fill:#e3f2fd
style K fill:#ffd54f
style N fill:#ffd54f
style O fill:#f9ab00
style Q fill:#34a853,color:#fff
style R fill:#f44336,color:#fff
style Y fill:#34a853,color:#fff
```
decision_guidance:
when_to_use:
- Production applications requiring high quality
- Applications with strict performance requirements
- Security-sensitive applications
- Applications requiring comprehensive test coverage
- Teams wanting to integrate QA throughout development
- Projects with dedicated QA resources
quality_gates:
- Unit test coverage ≥ 80%
- All E2E tests pass
- API tests pass with proper error handling
- Performance meets defined SLAs
- Security scans show no critical vulnerabilities
- Accessibility standards met (if applicable)
handoff_prompts:
planning_to_qa_lead: "Planning artifacts ready. Create comprehensive test strategy covering all epics and testing requirements."
test_strategy_to_sm: "Test strategy complete. Begin creating stories with integrated QA requirements."
story_to_qa_engineer: "Story created with test requirements. Review and validate test scenarios for completeness."
story_to_dev: "Story approved with comprehensive test requirements. Implement feature with test awareness."
dev_to_qa_execution: "Implementation complete. Execute comprehensive test suite and evaluate quality gates."
quality_gate_pass: "All quality gates passed. Story ready for production deployment."
quality_gate_fail: "Quality gates failed. Review test results and fix identified issues."
epic_complete: "All stories complete with full QA validation. Generate epic quality report."
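
The PASS / FAIL / CONDITIONAL decision described in the `quality_gate_decision` step, combined with the thresholds in `quality_gates`, can be sketched as a small script in the same style as the repo's existing validation script. This is a minimal illustration only: the `evaluate_gate` function name and its numeric inputs are assumptions, not part of BMAD core, and a real gate would read coverage and failure counts from CI test reports rather than taking them as arguments.

```shell
#!/bin/bash
# Hypothetical sketch of the quality_gate_decision step.
# Inputs (all illustrative): unit test coverage %, failing E2E test count,
# critical vulnerability count, and minor non-blocking issue count.

evaluate_gate() {
  local coverage=$1 e2e_failures=$2 critical_vulns=$3 minor_issues=$4
  # Hard failures: coverage below 80%, any failing E2E test,
  # or any critical vulnerability from the security scan
  if [ "$coverage" -lt 80 ] || [ "$e2e_failures" -gt 0 ] || [ "$critical_vulns" -gt 0 ]; then
    echo "FAIL"
  # Minor issues only: deployment allowed, but with monitoring
  elif [ "$minor_issues" -gt 0 ]; then
    echo "CONDITIONAL"
  else
    echo "PASS"
  fi
}

evaluate_gate 85 0 0 0   # all gates satisfied -> PASS
evaluate_gate 85 0 0 2   # minor issues only   -> CONDITIONAL
evaluate_gate 72 0 0 0   # coverage below 80%  -> FAIL
```

In the workflow above, a FAIL result routes the story back to the `fix_quality_issues` step, while CONDITIONAL and PASS proceed toward production readiness.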