test-engineer agent beta

VENGATESH S 2025-07-06 20:43:20 +05:30
parent 9e6940e8ee
commit 5487b14ab4
5 changed files with 1179 additions and 0 deletions

# test-engineer
CRITICAL: Read the full YAML below, adopt the persona it defines, follow the startup section instructions, and stay in this persona until told to exit this mode:
```yaml
root: .bmad-core
IDE-FILE-RESOLUTION: Dependencies map to files as {root}/{type}/{name}.md where root=".bmad-core", type=folder (tasks/templates/checklists/utils), name=dependency name.
REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (e.g., "test story"→*test-story task, "generate test scenarios"→*generate-scenarios), or ask for clarification if ambiguous.
activation-instructions:
- Follow all instructions in this file -> this defines you, your persona and more importantly what you can do. STAY IN CHARACTER!
- Only read the files/tasks listed here when user selects them for execution to minimize context usage
- The customization field ALWAYS takes precedence over any conflicting instructions
- When listing tasks/templates or presenting options during conversations, always show as numbered options list, allowing the user to type a number to select or execute
agent:
name: Alex
id: test-engineer
title: BMAD Test Engineer & Quality Assurance Specialist
icon: 🧪
whenToUse: Use for comprehensive testing of user stories using Playwright MCP, generating natural language test scenarios, E2E testing, API testing, integration testing, and security validation
customization: |
BMAD Testing Philosophy: Behavior-driven, Model-driven, AI-driven, Documentation-driven testing approach.
Uses Playwright MCP for comprehensive testing including API, authentication, authorization, and data security tests.
Generates natural language test scenarios from user stories that serve as clear instructions for Playwright MCP.
Focuses on comprehensive edge case coverage, security testing, and cross-browser compatibility.
persona:
role: BMAD Test Engineer & Quality Assurance Specialist
style: Methodical, comprehensive, security-focused, behavior-driven, detail-oriented
identity: Expert test engineer specializing in BMAD methodology with deep expertise in Playwright MCP automation
focus: Comprehensive testing through natural language scenarios that cover all aspects of user stories
core_principles:
- BMAD Methodology - Behavior-driven, Model-driven, AI-driven, Documentation-driven testing
- Natural Language Testing - Generate clear, executable test scenarios in natural language
- Comprehensive Coverage - Test all aspects: functionality, security, performance, accessibility
- Edge Case Focus - Identify and test all possible edge cases and failure scenarios
- Security First - Always include authentication, authorization, and data security tests
- Cross-Platform Testing - Ensure compatibility across browsers and devices
- API-First Testing - Validate all API endpoints with proper authentication
- Integration Testing - Test complete user journeys and system interactions
- Documentation-Driven - Generate tests that serve as living documentation
- Continuous Validation - Tests should be maintainable and provide ongoing value
startup:
- Greet the user as Alex, the BMAD Test Engineer, and inform them of the `*help` command.
- Explain that you specialize in generating comprehensive natural language test scenarios using Playwright MCP
- Mention that you can test entire stories or generate test scenarios for manual execution
- CRITICAL: Before executing any testing commands, automatically load all required dependencies:
* Read and load validate-scenarios.md task
* Read and load generate-test-scenarios.md task
* Read and load generate-test-files.md task
* Read and load bmad-test-scenarios-tmpl.md template
* Read and load test-file-generation-tmpl.md template
* Verify all dependencies are accessible before proceeding
- If any dependencies are missing, inform the user and provide guidance on resolution
- IMPORTANT: Enforce workflow dependencies:
* *validate-scenarios requires successful completion of *generate-scenarios first
* *generate-test-files requires successful completion of *validate-scenarios first
- Track execution status to enforce workflow: *generate-scenarios → *validate-scenarios → *generate-test-files
- **Pre-execution Validation**: Before executing `*generate-test-files`, the agent MUST check if `*validate-scenarios` was run in the current session. If not, prompt the user to execute `*validate-scenarios` first
commands: # All commands require * prefix when used (e.g., *help)
- help: Show numbered list of the following commands to allow selection
- generate-scenarios: Generate natural language test scenarios for a specific story
- validate-scenarios: Validate generated scenarios through interactive browser testing with Playwright MCP (requires *generate-scenarios first)
- generate-test-files: Convert validated scenarios to TypeScript Playwright test files (ONLY available after successful *validate-scenarios execution)
- security-audit: Perform comprehensive security testing including auth, authorization, and data protection
- api-test: Test all API endpoints with authentication and edge cases
- e2e-test: Execute end-to-end user journey testing across browsers
- integration-test: Test system integration points and data flow
- chat-mode: (Default) Testing consultation with advanced test scenario generation
- exit: Say goodbye as the BMAD Test Engineer, and then abandon inhabiting this persona
dependencies:
tasks:
- validate-scenarios
- generate-test-scenarios
- generate-test-files
templates:
- bmad-test-scenarios-tmpl
- test-file-generation-tmpl
data:
- technical-preferences
utils:
- template-format
stories_path: docs/stories
file_paths:
- .bmad-core/tasks/validate-scenarios.md
- .bmad-core/tasks/generate-test-scenarios.md
- .bmad-core/tasks/generate-test-files.md
- .bmad-core/templates/bmad-test-scenarios-tmpl.md
- .bmad-core/templates/test-file-generation-tmpl.md
```

# Generate Test Files
## Purpose
Convert natural language Playwright MCP test scenarios into executable TypeScript Playwright test files. This command is only available AFTER successful completion of the `*validate-scenarios` command.
## Prerequisites
- **CRITICAL**: User must have successfully executed `*validate-scenarios` command first
- All scenarios from the interactive validation phase must have PASSED
- Natural language test scenarios must be available from the previous validation session
- Target directory structure must exist: `packages/e2e-tests/`
## Dependency Validation
Before executing this task, verify:
1. **Scenario Validation Status**: Confirm `*validate-scenarios` was completed successfully
2. **Validation Results**: All interactive scenario validation passed during Playwright MCP execution
3. **Scenario Availability**: Validated natural language scenarios are available for conversion
4. **Directory Structure**: `packages/e2e-tests/` directory exists
5. **Template Access**: `test-file-generation-tmpl.md` template is loaded
## Inputs Required
1. **Story Identifier**: Which story's test scenarios to convert (e.g., "Story 2.1")
2. **Scenario Validation Results**: Results and fixes from the interactive `*validate-scenarios` session
3. **Target Directory**: Where to save generated test files (default: `packages/e2e-tests/`)
4. **Test Types**: Which types to generate (API, E2E, Integration, or All)
5. **Interactive Fixes**: Any fixes or adjustments discovered during interactive scenario validation
## Process
### Phase 1: Validation and Setup
1. **Verify Prerequisites**
```
Check that *validate-scenarios command was executed successfully
Verify all scenario validations passed in the interactive session
Confirm validated natural language scenarios are available
Validate target directory structure exists
```
2. **Load Required Dependencies**
```
Load test-file-generation-tmpl.md template
Load the story file for context
Load the validated natural language test scenarios from previous session
Load any fixes or adjustments from interactive scenario validation
```
3. **Analyze Test Scenarios**
```
Parse natural language Playwright MCP scenarios
Categorize scenarios by type (API, E2E, Integration)
Identify test data requirements
Extract authentication and setup requirements
Note any browser-specific or cross-platform requirements
```
### Phase 2: Test File Generation
1. **Generate API Test Files**
```
Convert Playwright_get, Playwright_post, Playwright_put, Playwright_delete commands
Transform to request-based API tests using Playwright's request context
Include authentication setup and token management
Add proper TypeScript interfaces for API responses
Include error handling and edge case testing
Generate test data fixtures
```
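As an illustration, a generated API test might look like the following sketch. The endpoint path, payload fields, and `TEST_TOKEN` environment variable are placeholder assumptions, not values from a real story:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical endpoint; real paths and fields come from the story's OpenAPI spec.
test.describe('Example API tests (sketch)', () => {
  test('rejects unauthenticated requests', async ({ request }) => {
    const response = await request.get('/api/items/');
    expect(response.status()).toBe(401);
  });

  test('creates an item with a valid token', async ({ request }) => {
    const response = await request.post('/api/items/', {
      headers: { Authorization: `Bearer ${process.env.TEST_TOKEN}` },
      data: { name: 'Sample item' },
    });
    expect(response.status()).toBe(201);
    const body = await response.json();
    expect(body).toHaveProperty('id');
  });
});
```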
2. **Generate E2E Test Files**
```
Convert Playwright_navigate, Playwright_click, Playwright_fill commands
Transform to page-based UI tests using Playwright's page context
Include API mocking for deterministic tests
Add proper selectors using data-testid attributes
Include accessibility and responsive design tests
Add screenshot capture for visual verification
```
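A converted E2E scenario might resemble this sketch; the route pattern, `data-testid` values, and URL are illustrative assumptions:

```typescript
import { test, expect } from '@playwright/test';

test('submits the example form (sketch)', async ({ page }) => {
  // Mock the backend so the UI test is deterministic.
  await page.route('**/api/items/', (route) =>
    route.fulfill({ status: 201, body: JSON.stringify({ id: 1, name: 'Sample item' }) })
  );
  await page.goto('http://localhost:3000/items/new');
  await page.getByTestId('item-name').fill('Sample item');
  await page.getByTestId('submit-item').click();
  await expect(page.getByTestId('toast')).toContainText('created');
  // Capture visual state for documentation.
  await page.screenshot({ path: 'item-created.png' });
});
```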
3. **Generate Integration Test Files**
```
Convert complex scenarios that combine UI and API interactions
Transform Playwright_expect_response and Playwright_assert_response commands
Include realistic API responses and error scenarios
Add concurrent operation testing
Include performance and load testing scenarios
```
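In generated code, `Playwright_expect_response`/`Playwright_assert_response` typically map to `page.waitForResponse`, roughly as in this sketch (URL and selector are placeholders):

```typescript
import { test, expect } from '@playwright/test';

test('UI action triggers the expected API call (sketch)', async ({ page }) => {
  await page.goto('http://localhost:3000/items');
  // Start waiting for the response before triggering the action to avoid a race.
  const responsePromise = page.waitForResponse(
    (res) => res.url().includes('/api/items/') && res.request().method() === 'POST'
  );
  await page.getByTestId('create-item').click();
  const response = await responsePromise;
  expect(response.status()).toBe(201);
});
```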
### Phase 3: Code Structure and Organization
1. **Apply Best Practices**
```
Use TypeScript interfaces for type safety
Implement proper test.describe grouping
Add test.beforeEach and test.afterAll setup/teardown
Include proper error handling and cleanup
Add descriptive test names and comments
Implement data-testid selector strategy
```
2. **Create Supporting Files**
```
Generate test data fixtures in fixtures/ directory
Create helper functions in utils/ directory
Generate mock data and API responses
Create authentication and setup utilities
Add configuration files if needed
```
3. **Organize File Structure**
```
packages/e2e-tests/
├── tests/
│ ├── api/story-{{number}}-{{name}}.spec.ts
│ ├── e2e/story-{{number}}-{{name}}.spec.ts
│ └── integration/story-{{number}}-{{name}}.spec.ts
├── fixtures/story-{{number}}-test-data.ts
└── utils/story-{{number}}-helpers.ts
```
### Phase 4: Conversion Rules Application
1. **Transform Playwright MCP Commands**
```
Playwright_navigate → await page.goto()
Playwright_fill → await page.locator().fill()
Playwright_click → await page.locator().click()
Playwright_get → await request.get()
Playwright_post → await request.post()
Playwright_screenshot → await page.screenshot()
Playwright_console_logs → page.on('console')
Playwright_get_visible_text → await expect(page.locator(selector)).toContainText(text)
```
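The table above can be expressed as a small lookup used during conversion. This is a sketch: the MCP command names are taken from the table, and the right-hand sides are illustrative code fragments rather than a published API:

```typescript
// Sketch of the MCP-to-Playwright mapping applied when converting scenarios.
const MCP_TO_PLAYWRIGHT: Record<string, string> = {
  Playwright_navigate: 'await page.goto(url)',
  Playwright_fill: 'await page.locator(selector).fill(value)',
  Playwright_click: 'await page.locator(selector).click()',
  Playwright_get: 'await request.get(url)',
  Playwright_post: 'await request.post(url, { data })',
  Playwright_screenshot: 'await page.screenshot()',
  Playwright_console_logs: "page.on('console', handler)",
  Playwright_get_visible_text: 'await expect(page.locator(selector)).toContainText(text)',
};

// Returns the Playwright equivalent, or undefined for unmapped commands.
function convertCommand(mcpCommand: string): string | undefined {
  return MCP_TO_PLAYWRIGHT[mcpCommand];
}
```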
2. **Add Proper Assertions**
```
Convert verification statements to expect() assertions
Add status code validation for API calls
Include response body validation
Add UI state verification
Include error message validation
```
3. **Include Interactive Fixes**
```
Apply any selector fixes discovered during interactive scenario validation
Include timing adjustments found during validation execution
Add error handling improvements identified during validation
Include performance optimizations discovered during validation
```
### Phase 5: Quality Assurance
1. **Validate Generated Code**
```
Ensure TypeScript syntax is correct
Verify all imports are included
Check that test structure follows best practices
Validate that all scenarios are converted
Ensure proper error handling is included
```
2. **Add Documentation**
```
Include comments explaining test purpose
Add setup and execution instructions
Document any special requirements
Include troubleshooting notes
Add links to related story documentation
```
## Output
### Generated Test Files
- **API Tests**: `packages/e2e-tests/tests/api/story-{{number}}-{{name}}.spec.ts`
- **E2E Tests**: `packages/e2e-tests/tests/e2e/story-{{number}}-{{name}}.spec.ts`
- **Integration Tests**: `packages/e2e-tests/tests/integration/story-{{number}}-{{name}}.spec.ts`
### Supporting Files
- **Test Data**: `packages/e2e-tests/fixtures/story-{{number}}-test-data.ts`
- **Helper Functions**: `packages/e2e-tests/utils/story-{{number}}-helpers.ts`
- **Configuration**: Updated playwright.config.ts if needed
### Documentation
- **Test Execution Guide**: Instructions for running the generated tests
- **Troubleshooting Guide**: Common issues and solutions
- **Maintenance Notes**: How to update tests when story changes
## Success Criteria
- All natural language scenarios are successfully converted to TypeScript
- Generated tests follow established best practices and patterns
- Tests include proper setup, teardown, and error handling
- Code is properly typed with TypeScript interfaces
- Tests are organized by type and follow naming conventions
- Supporting files (fixtures, helpers) are generated
- All interactive fixes are incorporated into the generated code
- Tests are ready for CI/CD integration
## Error Handling
- If `*validate-scenarios` was not executed first, provide clear error message and guidance
- If scenario validation failed during interactive session, require fixes before proceeding
- If target directory doesn't exist, create it or provide setup instructions
- If conversion fails, provide detailed error information and suggested fixes
## Notes
- This command builds upon the interactive validation results from `*validate-scenarios`
- Generated tests capture the knowledge and fixes from real browser scenario validation
- Tests are designed to be maintainable and follow project conventions
- The workflow ensures that only validated, working scenarios are converted to code
- Generated tests serve as regression tests for future development

# Generate Test Scenarios
## Purpose
Generate comprehensive natural language test scenarios for a user story using the BMAD methodology, without executing them. These scenarios can be used for manual testing or later automated execution with Playwright MCP.
## Prerequisites
- Access to the story file to be analyzed
- Understanding of the application architecture
- Knowledge of authentication and authorization requirements
- Access to API documentation (OpenAPI/Swagger)
## Inputs Required
1. **Story Identifier**: Which story to generate scenarios for (e.g., "Story 2.1", "docs/stories/2.1.story.md")
2. **Environment Details**:
- Frontend URL (default: http://localhost:3000)
- Backend URL (default: http://localhost:8000)
- OpenAPI URL (default: http://localhost:8000/swagger/)
3. **Test Focus Areas**: Which areas to emphasize (API, E2E, Security, Performance, etc.)
4. **Output Format**: Where to save the generated scenarios
## Process
### Phase 1: Story Analysis
1. **Read and parse the story file**
- Extract story title, description, and acceptance criteria
- Identify user roles involved (vendor, admin, etc.)
- Note technical requirements and constraints
- Identify dependencies on other stories
2. **Identify test scope**
- List all API endpoints mentioned or implied
- Identify frontend pages and components
- Note authentication and authorization requirements
- Identify data models and relationships
3. **Extract security requirements**
- Authentication mechanisms
- Authorization rules
- Data access controls
- Input validation requirements
### Phase 2: Test Scenario Generation
1. **Use bmad-test-scenarios-tmpl as base template**
- Fill in story-specific information
- Customize scenarios based on story requirements
- Add story-specific edge cases
2. **Generate Authentication & Authorization scenarios**
```
Based on story requirements, create scenarios for:
- User registration (if applicable)
- User login flows
- Role-based access control
- Session management
- Token handling
- Unauthorized access attempts
```
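For example, a generated login scenario might read as follows (the URLs, role, and credentials are illustrative):

```
Scenario: Vendor login with valid credentials
Prerequisites: A vendor account exists (vendor@example.com / SecurePass123!)
Steps:
1. Use Playwright_navigate to open {frontend_url}/login
2. Use Playwright_fill to enter the vendor email and password
3. Use Playwright_click to submit the login form
Expected: The vendor is redirected to the vendor dashboard and a JWT token is stored
Verification: Use Playwright_get_visible_text to confirm the dashboard heading; use Playwright_console_logs to confirm no errors
```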
3. **Generate API Testing scenarios**
```
For each API endpoint in the story:
- Authentication testing (401, 403 scenarios)
- CRUD operation testing
- Input validation testing
- Error handling testing
- Rate limiting testing
- Security testing (SQL injection, XSS)
```
4. **Generate Frontend E2E scenarios**
```
Based on acceptance criteria:
- Complete user journey scenarios
- Form interaction scenarios
- Navigation scenarios
- Responsive design scenarios
- Accessibility scenarios
```
5. **Generate Integration scenarios**
```
- Frontend-backend communication
- Database integration
- Third-party service integration (if applicable)
- Error propagation scenarios
```
6. **Generate Security scenarios**
```
- Authentication security
- Data isolation testing
- Input sanitization
- CSRF protection
- XSS prevention
- SQL injection prevention
```
7. **Generate Performance scenarios**
```
- Page load time testing
- API response time testing
- Concurrent user testing
- Database query optimization
```
8. **Generate Cross-browser scenarios**
```
- Browser compatibility testing
- JavaScript functionality across browsers
- Responsive design across browsers
```
### Phase 3: Edge Case Identification
1. **Identify boundary conditions**
- Maximum/minimum input values
- Empty/null data scenarios
- Large dataset scenarios
- Concurrent operation scenarios
2. **Identify failure scenarios**
- Network failures
- Server errors
- Database connection issues
- Invalid user inputs
- Expired sessions/tokens
3. **Identify security edge cases**
- Privilege escalation attempts
- Data leakage scenarios
- Session hijacking attempts
- Brute force attack scenarios
### Phase 4: Scenario Documentation
1. **Format scenarios in natural language**
- Use clear, actionable language
- Include expected outcomes
- Specify verification steps
- Add context and prerequisites
2. **Organize scenarios by category**
- Group related scenarios together
- Prioritize by importance/risk
- Add execution order dependencies
- Include setup and teardown steps
3. **Add execution metadata**
- Estimated execution time
- Required test data
- Browser requirements
- Environment prerequisites
## Output Format
### Generated Test Scenarios Document
The output will be a comprehensive test scenarios document based on the bmad-test-scenarios-tmpl template, customized for the specific story, including:
1. **Story Context Section**
- Story description and acceptance criteria
- Dependencies and prerequisites
- Environment setup instructions
2. **Authentication & Authorization Scenarios**
- Pre-authentication tests
- Registration and login flows
- Role-based access control tests
3. **API Testing Scenarios**
- Endpoint authentication tests
- CRUD operation tests
- Data validation tests
4. **Frontend E2E Scenarios**
- User journey tests
- Form interaction tests
- Responsive design tests
5. **Security Test Scenarios**
- Authentication security tests
- Data security tests
- Input validation tests
6. **Integration Test Scenarios**
- Frontend-backend integration
- Database integration tests
7. **Cross-Browser Compatibility Scenarios**
- Browser-specific tests
8. **Performance Test Scenarios**
- Load time tests
- API performance tests
9. **Error Handling Scenarios**
- Network error tests
- Server error tests
10. **Execution Checklist**
- Success criteria
- Manual verification steps
- Regression test scenarios
## Usage Instructions
### For Manual Testing
- Use scenarios as step-by-step testing instructions
- Execute scenarios in order of priority
- Document results and issues found
- Use checklist to track completion
### For Automated Testing with Playwright MCP
- Use scenarios as natural language prompts for Playwright MCP
- Execute scenarios through AI agent with Playwright MCP integration
- Combine multiple scenarios for comprehensive test runs
- Use for continuous integration testing
### For Documentation
- Scenarios serve as living documentation of expected behavior
- Use for onboarding new team members
- Reference for understanding system requirements
- Basis for future test automation
## Quality Criteria
### Comprehensive Coverage
- All acceptance criteria are covered by test scenarios
- Edge cases and failure scenarios are included
- Security considerations are thoroughly addressed
- Performance requirements are validated
### Clear and Actionable
- Scenarios are written in clear, unambiguous language
- Steps are specific and executable
- Expected outcomes are clearly defined
- Prerequisites and setup are documented
### Maintainable
- Scenarios are organized logically
- Dependencies are clearly marked
- Test data requirements are specified
- Scenarios can be easily updated as requirements change
## Notes
- Generated scenarios follow BMAD methodology principles
- Scenarios are designed to work with Playwright MCP's natural language interface
- Focus on behavior-driven testing approach
- Scenarios can be executed manually or automated
- Output serves as both test plan and documentation

# Validate Scenarios
## Purpose
Validate natural language test scenarios for a user story using Playwright MCP through interactive browser testing. This command validates scenarios generated by *generate-scenarios and discovers fixes through real browser interaction.
## Prerequisites
- **CRITICAL**: User must have successfully executed `*generate-scenarios` command first
- Natural language test scenarios must be available from the previous generation phase
- Playwright MCP server is installed and configured
- Frontend and backend applications are running locally
- Test database is set up and accessible
- OpenAPI documentation is available
## Dependency Validation
Before executing this task, verify:
1. **Scenario Generation Status**: Confirm `*generate-scenarios` was completed successfully
2. **Scenario Availability**: Natural language scenarios are available for validation
3. **Environment Setup**: All required services are running
4. **Playwright MCP**: Server is accessible and configured
## Inputs Required
1. **Story Identifier**: Which story's scenarios to validate (e.g., "Story 2.1", "docs/stories/2.1.story.md")
2. **Environment URLs**:
- Frontend URL (default: http://localhost:3000)
- Backend URL (default: http://localhost:8000)
- OpenAPI URL (default: http://localhost:8000/swagger/)
3. **Validation Scope**: Full comprehensive validation or specific areas (API, E2E, Security, etc.)
4. **Browser Coverage**: Which browsers to test (default: chromium, firefox, webkit)
## Process
### Phase 1: Scenario Loading & Environment Preparation
1. **Load generated scenarios from previous *generate-scenarios execution**
- Read natural language test scenarios
- Parse scenario structure and requirements
- Identify all test cases and edge cases
- Note security requirements and authentication needs
2. **Verify test environment is ready**
```
Use Playwright_navigate to navigate to {frontend_url} and verify application loads
Use Playwright_get to call {backend_url}/api/health/ and verify API is responding
Use Playwright_navigate to {openapi_url} and verify API documentation is accessible
```
3. **Set up test data if needed**
- Create test users with appropriate roles
- Set up any required test data in database
- Prepare authentication tokens for API testing
### Phase 2: Authentication & Authorization Validation
1. **Validate unauthenticated access scenarios**
```
Use Playwright_navigate to navigate to {frontend_url}
Attempt to access protected routes without authentication
Verify redirects to login page occur
Use Playwright_get to test API endpoints without authentication tokens
Verify 401 Unauthorized responses
Use Playwright_console_logs to check for any authorization errors
```
2. **Validate user registration and login flow scenarios**
```
Use Playwright_navigate to navigate to {frontend_url}/register
Use Playwright_fill to fill registration form with test data
Use Playwright_click to submit registration and verify success
Use Playwright_navigate to navigate to login page
Use Playwright_fill to fill login form with created credentials
Use Playwright_click to submit login form
Use Playwright_console_logs to verify JWT tokens are stored
Use Playwright_get_visible_text to verify redirect to appropriate dashboard
```
3. **Validate role-based access control scenarios**
```
Login as different user types (vendor, admin, etc.)
Use Playwright_navigate to verify each role can only access appropriate features
Use Playwright_get to test API endpoints with different user roles
Verify proper authorization responses (403 for unauthorized)
Use Playwright_get to test row-level security (users can only see their own data)
Use Playwright_console_logs to check for any authorization errors
```
### Phase 3: API Validation
1. **Validate all API endpoint scenarios**
```
For each endpoint in the OpenAPI documentation:
- Use Playwright_get to test without authentication (expect 401)
- Use Playwright_get to test with invalid token (expect 401)
- Use Playwright_get to test with valid token (expect appropriate response)
- Use Playwright_post/put/patch/delete for CRUD operations if applicable
- Use Playwright_post to test input validation (invalid data should return 400)
- Test edge cases and boundary conditions
```
2. **Validate API security scenarios**
```
Use Playwright_post to test for SQL injection vulnerabilities
Use Playwright_post to test for XSS vulnerabilities in API responses
Verify proper input sanitization
Use Playwright_get to test rate limiting on API endpoints
Verify CSRF protection on state-changing operations
Use Playwright_console_logs to monitor for security warnings
```
### Phase 4: Frontend E2E Validation
1. **Validate complete user journey scenarios**
```
Execute the primary user flow described in the story
Use Playwright_navigate through each step of the acceptance criteria
Use Playwright_click, Playwright_fill, Playwright_select for interactions
Use Playwright_screenshot to take screenshots at key steps for documentation
Use Playwright_get_visible_text to verify each interaction works as expected
Use Playwright_console_logs to check for JavaScript errors
Verify final state matches expected outcome
```
2. **Validate form interaction scenarios**
```
For each form in the story:
- Use Playwright_fill to test form validation (empty fields, invalid data)
- Use Playwright_click to test successful form submission
- Use Playwright_get_visible_text to verify error messages are user-friendly
- Use Playwright_press_key to test keyboard navigation and accessibility
```
3. **Validate responsive design scenarios**
```
Use Playwright_navigate with width=1920 and height=1080 to test desktop viewport
Use Playwright_screenshot to capture desktop view
Use Playwright_navigate with width=768 and height=1024 to test tablet viewport
Use Playwright_screenshot to capture tablet view
Use Playwright_navigate with width=375 and height=667 to test mobile viewport
Use Playwright_screenshot to capture mobile view
Verify layout adapts correctly by comparing screenshots
```
### Phase 5: Integration Validation
1. **Validate frontend-backend integration scenarios**
```
Use Playwright_expect_response to set up monitoring for API calls
Use Playwright_click to trigger actions that make API calls
Use Playwright_assert_response to verify API calls are made correctly
Use Playwright_console_logs to verify no network errors occurred
Test error handling for API failures
Test loading states during API calls
```
2. **Validate database integration scenarios**
```
Use Playwright_navigate to create data via frontend
Use Playwright_get to verify data exists in database via API
Use Playwright_navigate to modify data via frontend
Use Playwright_get to verify changes persist in database
Use Playwright_navigate to delete data via frontend
Use Playwright_get to verify removal from database
```
### Phase 6: Cross-Browser Validation
1. **Validate critical flows in multiple browsers**
```
For each browser (chromium, firefox, webkit):
- Use Playwright_navigate with browserType to execute primary user journey
- Test authentication flows
- Verify JavaScript functionality
- Test form submissions
- Use Playwright_screenshot to verify responsive design renders correctly
- Use Playwright_console_logs to check for browser-specific issues
```
### Phase 7: Security Audit Validation
1. **Validate comprehensive security scenarios**
```
Use Playwright_post to test password security requirements
Verify secure token storage (httpOnly cookies)
Use Playwright_post to test CSRF protection
Use Playwright_get to test rate limiting
Use Playwright_get to verify data isolation between users
Use Playwright_post to test input sanitization and output escaping
Use Playwright_console_logs to monitor for security warnings
```
### Phase 8: Performance Validation
1. **Validate page load time scenarios**
```
Use Playwright_navigate to navigate to key pages and measure load times
Use Playwright_console_logs to check for performance warnings
Use Playwright_screenshot to document page states
Test with simulated slow network conditions
```
2. **Validate API performance scenarios**
```
Use Playwright_get to measure API response times
Use Playwright_evaluate to test with concurrent requests
Use Playwright_console_logs to monitor for performance warnings
Verify database query optimization
```
### Phase 9: Error Handling Validation
1. **Validate network error scenarios**
```
Use Playwright_evaluate to simulate network failures during operations
Use Playwright_get_visible_text to verify graceful error handling
Test retry mechanisms
Use Playwright_console_logs to verify user feedback for errors
```
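When these scenarios are later converted to test files, network-failure simulation is typically sketched with route interception; the URL pattern and `data-testid` value below are illustrative assumptions:

```typescript
import { test, expect } from '@playwright/test';

test('shows a friendly error when the API is unreachable (sketch)', async ({ page }) => {
  // Abort all API requests to simulate a network failure.
  await page.route('**/api/**', (route) => route.abort('failed'));
  await page.goto('http://localhost:3000/items');
  await expect(page.getByTestId('error-banner')).toContainText('try again');
});
```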
2. **Validate server error scenarios**
```
Use Playwright_evaluate to simulate 500 Internal Server Error responses
Use Playwright_get_visible_text to verify user-friendly error messages
Use Playwright_console_logs to verify application doesn't crash
```
## Output
### Validation Results Summary
- **Total Scenarios Validated**: {count}
- **Scenarios Passed**: {count}
- **Scenarios Failed**: {count}
- **Issues Found and Fixed**: {count}
- **Security Issues Found**: {count}
- **Performance Issues**: {count}
- **Cross-Browser Issues**: {count}
### Detailed Validation Report
- **Authentication & Authorization**: {status and details}
- **API Validation**: {status and details}
- **Frontend E2E**: {status and details}
- **Integration Validation**: {status and details}
- **Security Audit**: {status and details}
- **Performance Validation**: {status and details}
- **Cross-Browser Compatibility**: {status and details}
### Issues Found and Fixes Applied
- List any bugs, security vulnerabilities, or performance issues discovered
- Include screenshots and console logs where relevant
- Document fixes and adjustments made during validation
- Provide recommendations for improvements
### Validated Scenarios Ready for Code Generation
- All scenarios that passed validation
- Fixes and adjustments incorporated
- Context for *generate-test-files command
## Success Criteria
- All acceptance criteria from the story are validated through browser interaction
- No critical security vulnerabilities found
- All API endpoints function correctly with proper authentication
- User journeys work across all supported browsers
- Performance meets established benchmarks
- Error scenarios are handled gracefully
- Data integrity is maintained throughout all operations
- All scenarios are ready for conversion to TypeScript test files
## Notes
- This task validates scenarios generated by *generate-scenarios
- All validation is done through interactive Playwright MCP browser testing
- Issues discovered are fixed in real-time during validation
- Results provide context and fixes for *generate-test-files command
- Validated scenarios serve as the foundation for production test code generation

# Test File Generation Template
## Story Context
**Story**: {{Story Number}} - {{Story Title}}
**Test Execution Status**: {{Must be PASSED - verified through *validate-scenarios command}}
**Interactive Testing Fixes**: {{Capture any fixes discovered during Playwright MCP testing}}
## Test File Structure
### API Tests Template
```typescript
import { test, expect, APIRequestContext } from '@playwright/test';

// Define TypeScript interfaces for API responses
interface {{ResourceName}}Response {
  id: number;
  {{additional_fields_based_on_story}};
}

interface AuthResponse {
  access: string;
  refresh: string;
  user: {
    id: number;
    email: string;
    role: string;
  };
}

interface ErrorResponse {
  error: string;
  details?: string[];
}

test.describe('{{Story Title}} - API Tests', () => {
  let apiContext: APIRequestContext;
  let authToken: string;
  let testUserId: number;

  test.beforeAll(async ({ playwright }) => {
    // The `request` fixture is test-scoped and not available in beforeAll,
    // so create a shared APIRequestContext for setup and teardown
    apiContext = await playwright.request.newContext();

    // Setup: Create test user and get authentication token
    const registerResponse = await apiContext.post('/api/auth/register/', {
      data: {
        email: 'test-{{timestamp}}@example.com',
        password: 'SecurePass123!',
        first_name: 'Test',
        last_name: 'User',
        organization_name: 'Test Org'
      }
    });
    expect(registerResponse.status()).toBe(201);

    const loginResponse = await apiContext.post('/api/auth/login/', {
      data: {
        email: 'test-{{timestamp}}@example.com',
        password: 'SecurePass123!'
      }
    });
    expect(loginResponse.status()).toBe(200);

    const authData: AuthResponse = await loginResponse.json();
    authToken = authData.access;
    testUserId = authData.user.id;
  });

  test.afterAll(async () => {
    // Cleanup: Remove test data and dispose of the shared context
    if (testUserId) {
      await apiContext.delete(`/api/users/${testUserId}/`, {
        headers: { Authorization: `Bearer ${authToken}` }
      });
    }
    await apiContext.dispose();
  });

  {{API_TEST_SCENARIOS}}

  test('should handle authentication errors correctly', async ({ request }) => {
    const response = await request.get('/api/{{protected_endpoint}}/', {
      headers: { Authorization: 'Bearer invalid-token' }
    });
    expect(response.status()).toBe(401);
    const errorData: ErrorResponse = await response.json();
    expect(errorData).toHaveProperty('error');
  });

  test('should handle authorization errors correctly', async ({ request }) => {
    // Test with a valid token but insufficient permissions
    const response = await request.get('/api/admin/{{admin_endpoint}}/', {
      headers: { Authorization: `Bearer ${authToken}` }
    });
    expect(response.status()).toBe(403);
  });
});
```
### E2E Tests Template
```typescript
import { test, expect } from '@playwright/test';
test.describe('{{Story Title}} - E2E Tests', () => {
  test.beforeEach(async ({ page }) => {
    // Mock API responses for consistent testing
    await page.route('**/api/auth/login/', async route => {
      const body = route.request().postDataJSON();
      if (body.email === 'test@example.com' && body.password === 'SecurePass123!') {
        await route.fulfill({
          status: 200,
          contentType: 'application/json',
          body: JSON.stringify({
            access: 'mock-jwt-token',
            refresh: 'mock-refresh-token',
            user: { id: 1, email: 'test@example.com', role: 'vendor' }
          })
        });
      } else {
        await route.fulfill({
          status: 401,
          contentType: 'application/json',
          body: JSON.stringify({ error: 'Invalid credentials' })
        });
      }
    });

    {{ADDITIONAL_API_MOCKS}}

    await page.goto('/');
  });

  {{E2E_TEST_SCENARIOS}}

  test('should handle network errors gracefully', async ({ page }) => {
    // Simulate network failure
    await page.route('**/api/**', route => route.abort());
    await page.goto('/{{test_page}}');
    // Verify error handling
    await expect(page.locator('[data-testid="error-message"]')).toBeVisible();
    await expect(page.locator('[data-testid="error-message"]')).toContainText('Network error');
  });

  test('should maintain accessibility standards', async ({ page }) => {
    await page.goto('/{{test_page}}');
    // Test keyboard navigation
    await page.keyboard.press('Tab');
    await expect(page.locator(':focus')).toBeVisible();
    // Test ARIA labels
    const form = page.locator('form');
    await expect(form).toHaveAttribute('aria-label');
  });
});
```
### Integration Tests Template
```typescript
import { test, expect } from '@playwright/test';
test.describe('{{Story Title}} - Integration Tests', () => {
  test.beforeEach(async ({ page }) => {
    // Set up realistic API responses
    await page.route('**/api/{{resource}}/**', async route => {
      const method = route.request().method();
      const url = route.request().url();
      if (method === 'GET' && url.includes('/api/{{resource}}/')) {
        await route.fulfill({
          status: 200,
          contentType: 'application/json',
          body: JSON.stringify({{mock_data}})
        });
      } else if (method === 'POST') {
        const body = route.request().postDataJSON();
        await route.fulfill({
          status: 201,
          contentType: 'application/json',
          body: JSON.stringify({ id: Date.now(), ...body })
        });
      }
    });

    await page.goto('/{{test_page}}');
  });

  {{INTEGRATION_TEST_SCENARIOS}}

  test('should handle concurrent operations correctly', async ({ page }) => {
    // Trigger multiple actions and wait for all of them to settle
    const promises = [
      page.click('[data-testid="action-1"]'),
      page.click('[data-testid="action-2"]'),
      page.click('[data-testid="action-3"]')
    ];
    await Promise.all(promises);
    // Verify final state is consistent
    await expect(page.locator('[data-testid="status"]')).toContainText('All operations completed');
  });
});
```
## Test Scenario Conversion Rules
### Natural Language → TypeScript Conversion
**Playwright MCP**: `Use Playwright_navigate to navigate to {{URL}}`
**TypeScript**: `await page.goto('{{URL}}');`
**Playwright MCP**: `Use Playwright_fill to fill {{field}} with selector "{{selector}}" and value "{{value}}"`
**TypeScript**: `await page.locator('{{selector}}').fill('{{value}}');`
**Playwright MCP**: `Use Playwright_click to click {{element}} with selector "{{selector}}"`
**TypeScript**: `await page.locator('{{selector}}').click();`
**Playwright MCP**: `Use Playwright_get to call {{API_URL}} with Authorization header`
**TypeScript**:
```typescript
const response = await request.get('{{API_URL}}', {
  headers: { Authorization: `Bearer ${authToken}` }
});
expect(response.status()).toBe(200);
```
**Playwright MCP**: `Use Playwright_post to call {{API_URL}} with JSON body`
**TypeScript**:
```typescript
const response = await request.post('{{API_URL}}', {
  data: {{JSON_BODY}},
  headers: { Authorization: `Bearer ${authToken}` }
});
expect(response.status()).toBe(201);
```
**Playwright MCP**: `Use Playwright_screenshot to take screenshot named "{{name}}"`
**TypeScript**: `await page.screenshot({ path: 'screenshots/{{name}}.png' });`
**Playwright MCP**: `Use Playwright_console_logs to check for errors`
**TypeScript**:
```typescript
page.on('console', msg => {
  if (msg.type() === 'error') {
    console.error('Console error:', msg.text());
  }
});
```
**Playwright MCP**: `Use Playwright_get_visible_text to verify {{content}}`
**TypeScript**: `await expect(page.locator('{{selector}}')).toContainText('{{content}}');`
**Playwright MCP**: `Verify response status code is {{status}}`
**TypeScript**: `expect(response.status()).toBe({{status}});`
**Playwright MCP**: `Use Playwright_select to select {{option}} with selector "{{selector}}"`
**TypeScript**: `await page.locator('{{selector}}').selectOption('{{option}}');`
**Playwright MCP**: `Use Playwright_hover to hover over {{element}} with selector "{{selector}}"`
**TypeScript**: `await page.locator('{{selector}}').hover();`
**Playwright MCP**: `Use Playwright_press_key to press "{{key}}"`
**TypeScript**: `await page.keyboard.press('{{key}}');`
**Playwright MCP**: `Use Playwright_expect_response to monitor API calls with id "{{id}}"`
**TypeScript**:
```typescript
const responsePromise = page.waitForResponse(response =>
  response.url().includes('{{endpoint}}') && response.status() === 200
);
```
**Playwright MCP**: `Use Playwright_assert_response to validate response with id "{{id}}"`
**TypeScript**:
```typescript
const response = await responsePromise;
const data = await response.json();
expect(data).toHaveProperty('{{expected_property}}');
```
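Applied end to end, these mappings turn a scenario's step list into the body of a spec. As a hypothetical illustration only (the converter function, its regexes, and the sample steps are ours, not part of Playwright MCP or the BMAD toolchain), a minimal sketch of the conversion might look like:

```typescript
// Hypothetical sketch: convert a few natural-language MCP steps into
// Playwright TypeScript lines, following the mapping table above.
function stepToTypeScript(step: string): string {
  let m = step.match(/Playwright_navigate to navigate to (\S+)/);
  if (m) return `await page.goto('${m[1]}');`;
  m = step.match(/Playwright_fill .* selector "([^"]+)" and value "([^"]+)"/);
  if (m) return `await page.locator('${m[1]}').fill('${m[2]}');`;
  m = step.match(/Playwright_click .* selector "([^"]+)"/);
  if (m) return `await page.locator('${m[1]}').click();`;
  m = step.match(/Verify response status code is (\d+)/);
  if (m) return `expect(response.status()).toBe(${m[1]});`;
  throw new Error(`No mapping for step: ${step}`);
}

const scenario = [
  'Use Playwright_navigate to navigate to /login',
  'Use Playwright_fill to fill email with selector "#email" and value "a@b.com"',
  'Use Playwright_click to click submit with selector "#submit"'
];
console.log(scenario.map(stepToTypeScript).join('\n'));
// → await page.goto('/login');
//   await page.locator('#email').fill('a@b.com');
//   await page.locator('#submit').click();
```

In practice the agent performs this conversion itself; the sketch only makes the rule table concrete.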
## File Organization Structure
```
packages/e2e-tests/
├── tests/
│ ├── api/
│ │ └── story-{{story_number}}-{{story_name}}.spec.ts
│ ├── e2e/
│ │ └── story-{{story_number}}-{{story_name}}.spec.ts
│ └── integration/
│ └── story-{{story_number}}-{{story_name}}.spec.ts
├── fixtures/
│ └── story-{{story_number}}-test-data.ts
└── utils/
└── story-{{story_number}}-helpers.ts
```
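One way to wire this layout into the test runner is with per-directory projects. This is a sketch, not part of the template: the project names, browser choice, and `baseURL` are assumptions to be adapted per repository.

```typescript
// packages/e2e-tests/playwright.config.ts — hypothetical sketch for the layout above
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './tests',
  projects: [
    // API tests need no browser context
    { name: 'api', testDir: './tests/api' },
    { name: 'e2e', testDir: './tests/e2e', use: { ...devices['Desktop Chrome'] } },
    { name: 'integration', testDir: './tests/integration', use: { ...devices['Desktop Chrome'] } }
  ],
  use: {
    baseURL: 'http://localhost:3000' // assumed dev server URL
  }
});
```

Splitting by project lets `npx playwright test --project=api` run only the API suite.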
## Test Data and Fixtures
```typescript
// fixtures/story-{{story_number}}-test-data.ts
export const testData = {
validUser: {
email: 'test-user@example.com',
password: 'SecurePass123!',
firstName: 'Test',
lastName: 'User',
organization: 'Test Organization'
},
invalidUser: {
email: 'invalid-email',
password: '123',
firstName: '',
lastName: ''
},
apiEndpoints: {
register: '/api/auth/register/',
login: '/api/auth/login/',
profile: '/api/users/profile/',
{{additional_endpoints}}
}
};
export const mockResponses = {
successfulLogin: {
access: 'mock-jwt-token',
refresh: 'mock-refresh-token',
user: { id: 1, email: 'test@example.com', role: 'vendor' }
},
registrationSuccess: {
message: 'Registration successful',
user: { id: 1, email: 'test@example.com' }
},
{{additional_mocks}}
};
```
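The `validUser`/`invalidUser` pair exists to exercise both sides of form validation. As a hypothetical sketch of the kind of client-side check `invalidUser` is designed to trip (the email regex and 8-character password minimum are assumptions, not rules from the story):

```typescript
// Hypothetical sketch: a validator the fixture pair above would exercise.
interface UserInput {
  email: string;
  password: string;
  firstName: string;
  lastName: string;
}

function registrationErrors(user: UserInput): string[] {
  const errors: string[] = [];
  // Assumed rules: basic email shape, 8-char password, non-blank names
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(user.email)) errors.push('email');
  if (user.password.length < 8) errors.push('password');
  if (!user.firstName.trim()) errors.push('firstName');
  if (!user.lastName.trim()) errors.push('lastName');
  return errors;
}

const invalidUser = { email: 'invalid-email', password: '123', firstName: '', lastName: '' };
console.log(registrationErrors(invalidUser));
// → [ 'email', 'password', 'firstName', 'lastName' ]
```

A good invalid fixture should fail every rule at once, so a single test can assert that each field-level error message is rendered.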
## Helper Functions
```typescript
// utils/story-{{story_number}}-helpers.ts
import { Page, APIRequestContext } from '@playwright/test';

export class TestHelpers {
  static async loginUser(page: Page, email: string, password: string) {
    await page.goto('/login');
    await page.locator('[data-testid="email"]').fill(email);
    await page.locator('[data-testid="password"]').fill(password);
    await page.locator('[data-testid="login-button"]').click();
  }

  // Note: takes an APIRequestContext (the `request` fixture), not Playwright's
  // network Request class, which has no post/delete methods
  static async createTestUser(request: APIRequestContext) {
    const response = await request.post('/api/auth/register/', {
      data: {
        email: `test-${Date.now()}@example.com`,
        password: 'SecurePass123!',
        first_name: 'Test',
        last_name: 'User'
      }
    });
    return response.json();
  }

  static async cleanupTestData(request: APIRequestContext, userId: number, authToken: string) {
    await request.delete(`/api/users/${userId}/`, {
      headers: { Authorization: `Bearer ${authToken}` }
    });
  }

  {{additional_helpers}}
}
```