Production QA Enhancement Guide
Overview
The Production QA Enhancement transforms BMAD from a development framework into a complete enterprise software delivery platform with comprehensive testing automation. This guide explains how to use Production QA alongside traditional BMAD.
Table of Contents
- Quick Start
- Architecture Overview
- Production QA Agents
- Traditional vs Production QA
- Workflow Comparison
- Implementation Guide
- Testing Framework Support
- Quality Gates
- CI/CD Integration
- Best Practices
- Troubleshooting
Quick Start
Installation
Production QA is already integrated into this fork. No additional installation needed!
Basic Usage
# 1. Initialize testing infrastructure (one-time setup)
@qa-test-engineer
*setup-testing-framework
# 2. Create stories with integrated QA
@sm *draft # Stories now include test requirements automatically
# 3. Parallel development and testing
# Terminal 1:
@dev *implement docs/stories/1.1.story.md
# Terminal 2:
@qa-test-engineer *create-e2e-tests docs/stories/1.1.story.md
@qa-test-engineer *create-api-tests docs/stories/1.1.story.md
# 4. Execute tests and quality gates
@qa-test-engineer *execute-tests
@qa-test-lead *evaluate-quality-gates
Architecture Overview
How Production QA Integrates with BMAD
Traditional BMAD Core (Preserved)
├── Original Agents (SM, Dev, QA/Quinn, etc.)
├── Original Workflows
└── Original Tasks
Production QA Expansion Pack (Added)
├── Specialized QA Agents (4 new agents)
├── Enhanced Tasks (create-next-story-with-qa)
├── Test Automation Workflows
└── Quality Gate System
Key Integration Points
- Story Creation Enhanced - SM agent now uses the create-next-story-with-qa task
- Parallel Workflows - Dev and QA work simultaneously
- Quality Gates - Automated pass/fail before production
- Tool Agnostic - Works with any testing framework
Production QA Agents
🧪 QA Test Engineer (Alex)
Role: Hands-on test automation specialist
Capabilities:
- Creates E2E test suites (Playwright, Cypress, Selenium)
- Develops API test collections (Bruno, Postman, REST Client)
- Implements integration tests
- Sets up testing frameworks
- Generates test data and fixtures
Key Commands:
*create-e2e-tests {story} # Generate E2E test suite
*create-api-tests {story} # Generate API test collection
*setup-testing-framework # Initialize testing infrastructure
*analyze-test-coverage # Review coverage metrics
⚡ Performance Engineer (Morgan)
Role: Performance and scalability testing expert
Capabilities:
- Creates load testing scenarios (k6, Artillery, JMeter)
- Implements stress and spike tests
- Establishes performance baselines
- Performs capacity planning and analysis
Key Commands:
*create-load-test {story} # Generate load test scenarios
*create-stress-test {story} # Create stress test scenarios
*analyze-performance-baseline # Establish performance baseline
*create-capacity-plan # Generate capacity analysis
🔒 Security Engineer (Riley)
Role: Security testing and vulnerability assessment specialist
Capabilities:
- Comprehensive security scanning (OWASP ZAP)
- Vulnerability assessments
- OWASP Top 10 compliance validation
- Dependency security scanning (Snyk)
- Penetration testing scenarios
Key Commands:
*security-scan {story} # Perform security scan
*vulnerability-assessment # Conduct vulnerability assessment
*owasp-compliance-check # Validate OWASP compliance
*dependency-security-scan # Scan dependencies
🎯 QA Test Lead (Jordan)
Role: Strategic QA coordination and oversight
Capabilities:
- Creates comprehensive test strategies
- Manages quality gates and criteria
- Coordinates all testing activities
- Generates quality reports and metrics
Key Commands:
*create-test-strategy # Generate test strategy
*create-quality-gates # Define quality gates
*coordinate-testing # Manage testing activities
*create-test-reports # Generate quality reports
Traditional vs Production QA
Comparison Table
| Aspect | Traditional BMAD (Quinn) | Production QA Enhancement |
|---|---|---|
| Purpose | Advisory & Guidance | Implementation & Automation |
| Test Creation | Recommends what to test | Creates actual test code |
| Quality Gates | Advisory gates | Automated pass/fail gates |
| Workflow | Sequential (Dev → QA) | Parallel (Dev + QA) |
| Tools | Tool recommendations | Tool implementation |
| Coverage | Strategic coverage advice | Measurable coverage metrics |
| Best For | MVPs, Prototypes | Production, Enterprise |
When to Use Each
Use Traditional BMAD When:
- Rapid prototyping
- MVP development
- Small teams
- Learning projects
- Advisory guidance sufficient
Use Production QA When:
- Enterprise applications
- Regulated industries
- High quality requirements
- Team has dedicated QA
- Automated testing needed
Workflow Comparison
Traditional BMAD Workflow
graph LR
A[SM: Create Story] --> B[Dev: Implement]
B --> C[QA: Review & Advise]
C --> D[Dev: Address Feedback]
D --> E[Done]
Production QA Enhanced Workflow
graph TD
A[SM: Create Story with Test Requirements] --> B{Parallel Execution}
B --> C[Dev: Implement Feature]
B --> D[QA: Create Test Suites]
C --> E[Merge Point]
D --> E
E --> F[Execute All Tests]
F --> G{Quality Gates}
G -->|Pass| H[Production Ready]
G -->|Fail| I[Fix Issues]
I --> F
Implementation Guide
Step 1: Project Setup
# 1. Install BMAD with Production QA (already done in this fork)
git clone https://github.com/papuman/BMAD-METHOD.git
# 2. Initialize your project
cd your-project
npx bmad-method install
Step 2: Configure Testing Strategy
# Activate QA Test Lead
@qa-test-lead
# Create comprehensive test strategy
*create-test-strategy
# This generates:
# - docs/test-strategy.md
# - Quality gate definitions
# - Testing approach per epic
Step 3: Setup Testing Infrastructure
# Activate QA Test Engineer
@qa-test-engineer
# Initialize testing framework
*setup-testing-framework
# You'll be asked to choose:
# - E2E Framework (Playwright, Cypress, etc.)
# - API Testing Tool (Bruno, Postman, etc.)
# - Performance Tool (k6, Artillery, etc.)
# - Security Scanner (OWASP ZAP, Snyk, etc.)
Step 4: Enhanced Story Creation
# Stories now include test requirements automatically
@sm *draft
# Generated story includes:
# - Standard story elements
# - E2E test scenarios
# - API test requirements
# - Performance criteria
# - Security considerations
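For illustration only, the test-requirements portion of a generated story might look like the following. The exact field names come from the create-next-story-with-qa task's template and may differ in your installation:

```yaml
# Illustrative excerpt — the actual section is produced by the SM agent's template.
test_requirements:
  e2e:
    - "User can log in with valid credentials"
    - "Invalid credentials show an error message"
  api:
    - "POST /api/auth/login returns 200 and a token"
  performance:
    p95_response_time: "< 3s"
  security:
    - "Login endpoint is rate-limited"
```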
Step 5: Parallel Development
# Development track
@dev *implement docs/stories/1.1.story.md
# Testing track (simultaneously)
@qa-test-engineer *create-e2e-tests docs/stories/1.1.story.md
@qa-performance-engineer *create-load-test docs/stories/1.1.story.md
@qa-security-engineer *security-scan docs/stories/1.1.story.md
Testing Framework Support
E2E Testing Frameworks
Playwright (Recommended)
// Generated test example
import { test, expect } from '@playwright/test';

test('User Login Flow', async ({ page }) => {
  await page.goto('/login');
  await page.fill('[data-testid="email"]', 'user@example.com');
  await page.fill('[data-testid="password"]', 'password');
  await page.click('[data-testid="submit"]');
  await expect(page).toHaveURL('/dashboard');
});
Cypress
// Generated test example
describe('User Login Flow', () => {
  it('should login successfully', () => {
    cy.visit('/login');
    cy.get('[data-testid="email"]').type('user@example.com');
    cy.get('[data-testid="password"]').type('password');
    cy.get('[data-testid="submit"]').click();
    cy.url().should('include', '/dashboard');
  });
});
API Testing Tools
Bruno (Git-friendly)
# Generated collection
name: User API Tests
requests:
  - name: Login
    method: POST
    url: "{{baseUrl}}/api/auth/login"
    body:
      email: user@example.com
      password: password
    tests:
      - status: 200
      - body.token: exists
Performance Testing
k6 (JavaScript-based)
// Generated load test
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  stages: [
    { duration: '2m', target: 100 }, // ramp up to 100 virtual users
    { duration: '5m', target: 100 }, // hold steady load
    { duration: '2m', target: 0 },   // ramp down
  ],
};

export default function () {
  const res = http.get('https://api.example.com/');
  check(res, { 'status is 200': (r) => r.status === 200 });
}
Quality Gates
Automated Quality Gate Criteria
quality_gates:
  unit_tests:
    coverage: ">= 80%"
    passing: "100%"
  e2e_tests:
    passing: "100%"
    critical_paths: "100% coverage"
  api_tests:
    passing: "100%"
    response_time: "< 2s"
  performance:
    response_time_p95: "< 3s"
    error_rate: "< 1%"
    concurrent_users: ">= 100"
  security:
    critical_vulnerabilities: 0
    high_vulnerabilities: "< 3"
    owasp_compliance: "pass"
  accessibility:
    wcag_level: "AA"
    lighthouse_score: ">= 90"
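The criteria above can be enforced by a small evaluation script. Below is a minimal sketch in Node.js; the `evaluateGates` helper and its metric names are illustrative, not part of the expansion pack — the real gate runner is invoked via `*evaluate-quality-gates` or your CI pipeline.

```javascript
// evaluate-gates.js — illustrative sketch of a quality-gate check.
// Thresholds mirror the quality_gates YAML above; metric names are assumed.
const GATES = {
  unitCoverage: 80, // unit_tests.coverage: ">= 80%"
  errorRate: 1,     // performance.error_rate: "< 1%"
};

function evaluateGates({ coveragePct, errorRatePct }) {
  const results = [
    { gate: 'unit_tests.coverage', passed: coveragePct >= GATES.unitCoverage },
    { gate: 'performance.error_rate', passed: errorRatePct < GATES.errorRate },
  ];
  return { results, passed: results.every((r) => r.passed) };
}

// Sample run with metrics collected from test reports:
console.log(evaluateGates({ coveragePct: 85, errorRatePct: 0.4 }).passed); // true
```

In CI, a script like this would exit non-zero when any gate fails so the pipeline blocks the deploy.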
Quality Gate Workflow
graph TD
A[Tests Execute] --> B{Unit Tests}
B -->|Pass| C{E2E Tests}
B -->|Fail| X[Gate Failed]
C -->|Pass| D{API Tests}
C -->|Fail| X
D -->|Pass| E{Performance}
D -->|Fail| X
E -->|Pass| F{Security}
E -->|Fail| X
F -->|Pass| G[All Gates Passed]
F -->|Fail| X
G --> H[Deploy to Production]
X --> I[Fix Issues]
I --> A
CI/CD Integration
GitHub Actions Workflow
# .github/workflows/production-qa.yml
name: Production QA Pipeline
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Install dependencies
        run: npm ci
      - name: Run unit tests
        run: npm run test:unit
      - name: Run E2E tests
        run: npm run test:e2e
      - name: Run API tests
        run: npm run test:api
      - name: Run security scan
        run: npm run test:security
      - name: Evaluate quality gates
        run: npm run quality-gates
      - name: Upload test reports
        uses: actions/upload-artifact@v4
        with:
          name: test-reports
          path: test-reports/
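The workflow above assumes matching npm scripts in package.json. The script names must line up with the workflow steps, but the commands shown here are illustrative and depend on the frameworks chosen during setup:

```json
{
  "scripts": {
    "test:unit": "jest",
    "test:e2e": "playwright test",
    "test:api": "bru run api-tests --env ci",
    "test:security": "snyk test",
    "quality-gates": "node scripts/evaluate-gates.js"
  }
}
```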
Best Practices
1. Story Creation
- Always use @sm *draft for QA-integrated stories
- Review test requirements before development starts
- Ensure acceptance criteria are testable
2. Test Development
- Create tests in parallel with feature development
- Start with happy path scenarios
- Add edge cases and error handling
- Keep test data separate from test logic
3. Quality Management
- Run tests locally before committing
- Fix failing tests immediately
- Maintain coverage above thresholds
- Review performance impacts regularly
4. Tool Selection
- Choose tools your team knows
- Prioritize maintainability over features
- Use consistent patterns across test types
- Document tool decisions
Troubleshooting
Common Issues and Solutions
Issue: Tests not generating
# Solution 1: Ensure story file exists
ls docs/stories/
# Solution 2: Check expansion pack is loaded
ls expansion-packs/bmad-production-qa/
# Solution 3: Verify agent can access story
@qa-test-engineer
*create-e2e-tests docs/stories/1.1.story.md
Issue: Quality gates failing
# Check test results
cat test-reports/quality-gate-report.md
# Review specific failures
npm run test:e2e -- --verbose
# Check coverage
npm run test:coverage
Issue: Framework not recognized
# Reinitialize framework
@qa-test-engineer
*setup-testing-framework
# Manually install if needed
npm install --save-dev playwright
Advanced Topics
Custom Quality Gates
// custom-gates.js
module.exports = {
  customGates: {
    performance: {
      ttfb: '<500ms', // time to first byte
      fcp: '<1s',     // first contentful paint
      lcp: '<2.5s'    // largest contentful paint
    },
    bundle: {
      size: '<500kb',
      gzip: '<150kb'
    }
  }
};
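Threshold strings such as '<500ms' must be parsed before they can be compared against measured values. One way to do that is sketched below; the `parseThreshold` helper is hypothetical, not shipped with the expansion pack:

```javascript
// parse-threshold.js — turn gate strings like '<500ms' or '<2.5s' into a
// comparator over milliseconds (hypothetical helper, for illustration only).
function parseThreshold(spec) {
  const match = /^<\s*([\d.]+)\s*(ms|s)?$/i.exec(spec);
  if (!match) throw new Error(`Unsupported threshold: ${spec}`);
  const value = parseFloat(match[1]);
  const unit = (match[2] || 'ms').toLowerCase();
  const limitMs = unit === 's' ? value * 1000 : value; // normalize to ms
  return (measuredMs) => measuredMs < limitMs;
}

const underTtfb = parseThreshold('<500ms');
console.log(underTtfb(320)); // true
console.log(underTtfb(800)); // false

const underLcp = parseThreshold('<2.5s');
console.log(underLcp(2400)); // true — 2.4s is under the 2.5s limit
```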
Test Data Management
// test-data/users.js
export const testUsers = {
  admin: {
    email: 'admin@test.com',
    password: process.env.TEST_ADMIN_PASSWORD
  },
  user: {
    email: 'user@test.com',
    password: process.env.TEST_USER_PASSWORD
  }
};
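A sketch of how tests can resolve these fixtures from the environment and fail fast when a secret is missing in CI; the `buildTestUsers` helper and its fallback marker are illustrative only, not part of the expansion pack:

```javascript
// Sketch: resolve test users from environment variables so secrets
// never land in the repo. The 'missing-secret' marker is illustrative.
function buildTestUsers(env = process.env) {
  return {
    admin: {
      email: 'admin@test.com',
      password: env.TEST_ADMIN_PASSWORD ?? 'missing-secret',
    },
    user: {
      email: 'user@test.com',
      password: env.TEST_USER_PASSWORD ?? 'missing-secret',
    },
  };
}

// Fail fast before any test runs when required credentials are absent:
function assertSecrets(users) {
  const missing = Object.entries(users)
    .filter(([, u]) => u.password === 'missing-secret')
    .map(([name]) => name);
  if (missing.length) {
    throw new Error(`Missing credentials for: ${missing.join(', ')}`);
  }
}

const users = buildTestUsers({ TEST_ADMIN_PASSWORD: 'a', TEST_USER_PASSWORD: 'b' });
console.log(users.admin.email); // "admin@test.com"
```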
Parallel Test Execution
# Run tests in parallel
npm run test:e2e -- --workers=4
# Run specific test suites in parallel
npm run test:e2e:auth &
npm run test:e2e:dashboard &
npm run test:e2e:api &
wait
Migration Guide
From Traditional BMAD to Production QA
- Keep existing workflow - Traditional BMAD still works
- Add testing gradually - Start with one story
- Choose tools wisely - Pick what team knows
- Train team - Share this guide
- Monitor metrics - Track quality improvements
Support and Resources
- Expansion Pack Documentation: README
- BMAD Core Documentation: User Guide
- GitHub Issues: Report bugs or request features
- Community Discord: Get help from other users
Production QA Enhancement - Enterprise-grade testing for BMAD Method