feat: Add TDD infrastructure and scaffold files

- Create TDD tasks: write-failing-tests.md, tdd-implement.md, tdd-refactor.md
- Add TDD story template with frontmatter metadata tracking
- Implement TDD-specific Definition of Done checklist
- Create TDD workflow orchestration (Red-Green-Refactor cycle)
- Add TDD prompts for QA (Red), Dev (Green), and Refactor phases
- Configure test runner auto-detection for JS/TS, Python, Go, Java, .NET, Ruby
- Update core-config.yaml with TDD settings (disabled by default)
- Add story frontmatter schema with TDD metadata fields

All files pass linting and formatting checks.
Backward compatible - TDD disabled by default (tdd.enabled: false)
Author: vforvaick
Date: 2025-09-01 21:04:40 +07:00
Commit: 36ca3c1bfa (parent: fbd8f1fd73)
12 changed files with 3231 additions and 0 deletions


@@ -0,0 +1,188 @@
<!-- Powered by BMAD™ Core -->
# TDD Story Definition of Done Checklist
## Instructions for Agents
This checklist ensures TDD stories meet quality standards across all Red-Green-Refactor cycles. Both QA and Dev agents should validate completion before marking a story as Done.
[[LLM: TDD DOD VALIDATION INSTRUCTIONS
This is a specialized DoD checklist for Test-Driven Development stories. It extends the standard DoD with TDD-specific quality gates.
EXECUTION APPROACH:
1. Verify TDD cycle progression (Red → Green → Refactor → Done)
2. Validate test-first approach was followed
3. Ensure proper test isolation and determinism
4. Check code quality improvements from refactoring
5. Confirm coverage targets are met
CRITICAL: Never mark a TDD story as Done without completing all TDD phases.]]
## TDD Cycle Validation
### Red Phase Completion
[[LLM: Verify tests were written BEFORE implementation]]
- [ ] **Tests written first:** All tests were created before any implementation code
- [ ] **Failing correctly:** Tests fail for the right reasons (missing functionality, not bugs)
- [ ] **Proper test structure:** Tests follow Given-When-Then or Arrange-Act-Assert patterns (see the example after this checklist)
- [ ] **Deterministic tests:** No random values, network calls, or time dependencies
- [ ] **External dependencies mocked:** All external services, databases, APIs properly mocked
- [ ] **Test naming:** Clear, descriptive test names that express intent
- [ ] **Story metadata updated:** TDD status set to 'red' and test list populated
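As a point of reference, a Red-phase test that satisfies the structure, determinism, and mocking items above might look like the following Jest sketch; `UserService`, `createUser`, and the mocked repository are illustrative names, not requirements of this checklist.

```javascript
// Illustrative Red-phase test: Arrange-Act-Assert, deterministic data, mocked dependency
describe('UserService.createUser', () => {
  it('should return the created user with an id when data is valid', async () => {
    // Given: fixed test data and a mocked repository - no real DB, network, or clock
    const fixedUser = { email: 'test@example.com', name: 'Test User' };
    const mockRepository = { save: jest.fn().mockResolvedValue({ id: 'user-123', ...fixedUser }) };
    const service = new UserService(mockRepository); // fails for the right reason: UserService does not exist yet

    // When
    const result = await service.createUser(fixedUser);

    // Then
    expect(result).toEqual({ id: 'user-123', ...fixedUser });
    expect(mockRepository.save).toHaveBeenCalledWith(fixedUser);
  });
});
```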
### Green Phase Completion
[[LLM: Ensure minimal implementation that makes tests pass]]
- [ ] **All tests passing:** 100% of tests pass consistently
- [ ] **Minimal implementation:** Only code necessary to make tests pass was written
- [ ] **No feature creep:** No functionality added without corresponding failing tests
- [ ] **Test-code traceability:** Implementation clearly addresses specific test requirements
- [ ] **Regression protection:** All previously passing tests remain green
- [ ] **Story metadata updated:** TDD status set to 'green' and test results documented
### Refactor Phase Completion
[[LLM: Verify code quality improvements while maintaining green tests]]
- [ ] **Tests remain green:** All tests continue to pass after refactoring
- [ ] **Code quality improved:** Duplication eliminated, naming improved, structure clarified
- [ ] **Design enhanced:** Better separation of concerns, cleaner interfaces
- [ ] **Technical debt addressed:** Known code smells identified and resolved
- [ ] **Commit discipline:** Small, incremental commits with green tests after each
- [ ] **Story metadata updated:** Refactoring notes and improvements documented
## Test Quality Standards
### Test Implementation Quality
[[LLM: Ensure tests are maintainable and reliable]]
- [ ] **Fast execution:** Unit tests complete in <100ms each
- [ ] **Isolated tests:** Each test can run independently in any order
- [ ] **Single responsibility:** Each test validates one specific behavior
- [ ] **Clear assertions:** Test failures provide meaningful error messages
- [ ] **Appropriate test types:** Right mix of unit/integration/e2e tests
- [ ] **Mock strategy:** Appropriate use of mocks vs fakes vs stubs (illustrated in the sketch below)
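The mock/fake/stub distinction referenced above can be summarized in a small Jest-style sketch; the `InMemoryUserRepo` name is illustrative.

```javascript
// Stub: returns canned data; the test only cares about the value it feeds in
const userStub = { findById: () => ({ id: 'user-123', name: 'Test User' }) };

// Mock: records interactions; the test asserts on how it was called
const emailMock = { send: jest.fn() };
// ...exercise the system under test, then:
// expect(emailMock.send).toHaveBeenCalledWith('test@example.com');

// Fake: a lightweight working implementation, e.g. an in-memory repository
class InMemoryUserRepo {
  constructor() { this.users = new Map(); }
  save(user) { this.users.set(user.id, user); return user; }
  findById(id) { return this.users.get(id) ?? null; }
}
```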
### Coverage and Completeness
[[LLM: Validate comprehensive test coverage]]
- [ ] **Coverage target met:** Code coverage meets story's target percentage (see the example configuration after this list)
- [ ] **Acceptance criteria covered:** All ACs have corresponding tests
- [ ] **Edge cases tested:** Boundary conditions and error scenarios included
- [ ] **Happy path validated:** Primary success scenarios thoroughly tested
- [ ] **Error handling tested:** Exception paths and error recovery validated
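For projects using Jest, a coverage target like the one referenced above can be enforced by the test runner itself; the thresholds below are placeholders for the story's target, not values mandated by this checklist.

```javascript
// jest.config.js - fail the run when coverage drops below the story's target (example numbers)
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: { lines: 80, branches: 70, functions: 80, statements: 80 },
  },
};
```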
## Implementation Quality
### Code Standards Compliance
[[LLM: Ensure production-ready code quality]]
- [ ] **Coding standards followed:** Code adheres to project style guidelines
- [ ] **Architecture alignment:** Implementation follows established patterns
- [ ] **Security practices:** Input validation, error handling, no hardcoded secrets
- [ ] **Performance considerations:** No obvious performance bottlenecks introduced
- [ ] **Documentation updated:** Code comments and documentation reflect changes
### File Organization and Management
[[LLM: Verify proper project structure]]
- [ ] **Test file organization:** Tests follow project's testing folder structure
- [ ] **Naming conventions:** Files and functions follow established patterns
- [ ] **Dependencies managed:** New dependencies properly declared and justified
- [ ] **Import/export clarity:** Clear module interfaces and dependencies
- [ ] **File list accuracy:** All created/modified files documented in story
## TDD Process Adherence
### Methodology Compliance
[[LLM: Confirm true TDD practice was followed]]
- [ ] **Test-first discipline:** No implementation code written before tests
- [ ] **Minimal cycles:** Small Red-Green-Refactor iterations maintained
- [ ] **Refactoring safety:** Only refactored with green test coverage
- [ ] **Requirements traceability:** Clear mapping from tests to acceptance criteria
- [ ] **Collaboration evidence:** QA and Dev agent coordination documented
### Documentation and Traceability
[[LLM: Ensure proper tracking and communication]]
- [ ] **TDD progress tracked:** Story shows progression through all TDD phases
- [ ] **Test execution logged:** Evidence of test runs and results captured
- [ ] **Refactoring documented:** Changes made during refactor phase explained
- [ ] **Agent collaboration:** Clear handoffs between QA (Red) and Dev (Green/Refactor)
- [ ] **Story metadata complete:** All TDD fields properly populated
## Integration and Deployment Readiness
### Build and Deployment
[[LLM: Ensure story integrates properly with project]]
- [ ] **Project builds successfully:** Code compiles without errors or warnings
- [ ] **All tests pass in CI:** Automated test suite runs successfully
- [ ] **No breaking changes:** Existing functionality remains intact
- [ ] **Environment compatibility:** Code works across development environments
- [ ] **Configuration managed:** Any new config values properly documented
### Review Readiness
[[LLM: Story is ready for peer review]]
- [ ] **Complete implementation:** All acceptance criteria fully implemented
- [ ] **Clean commit history:** Clear, logical progression of changes
- [ ] **Review artifacts:** All necessary files and documentation available
- [ ] **No temporary code:** Debug code, TODOs, and temporary hacks removed
- [ ] **Quality gates passed:** All automated quality checks successful
## Final TDD Validation
### Holistic Assessment
[[LLM: Overall TDD process and outcome validation]]
- [ ] **TDD value delivered:** Process improved code design and quality
- [ ] **Test suite value:** Tests provide reliable safety net for changes
- [ ] **Knowledge captured:** Future developers can understand and maintain code
- [ ] **Standards elevated:** Code quality meets or exceeds project standards
- [ ] **Learning documented:** Any insights or patterns discovered are captured
### Story Completion Criteria
[[LLM: Final checklist before marking Done]]
- [ ] **Business value delivered:** Story provides promised user value
- [ ] **Technical debt managed:** Any remaining debt is documented and acceptable
- [ ] **Future maintainability:** Code can be easily modified and extended
- [ ] **Production readiness:** Code is ready for production deployment
- [ ] **TDD story complete:** All TDD-specific requirements fulfilled
## Completion Declaration
**Agent Validation:**
- [ ] **QA Agent confirms:** Test strategy executed successfully, coverage adequate
- [ ] **Dev Agent confirms:** Implementation complete, code quality satisfactory
**Final Status:**
- [ ] **Story marked Done:** All DoD criteria met and verified
- [ ] **TDD status complete:** Story TDD metadata shows 'done' status
- [ ] **Ready for review:** Story package complete for stakeholder review
---
**Validation Date:** {date}
**Validating Agents:** {qa_agent} & {dev_agent}
**TDD Cycles Completed:** {cycle_count}
**Final Test Status:** {passing_count} passing, {failing_count} failing


@@ -0,0 +1,296 @@
# <!-- Powered by BMAD™ Core -->
# Test Runner Auto-Detection Configuration
# Used by BMAD TDD framework to detect and configure test runners
detection_rules:
  # JavaScript/TypeScript ecosystem
  javascript:
    priority: 1
    detection_files:
      - "package.json"
    detection_logic:
      - check_dependencies: ["jest", "vitest", "mocha", "cypress", "@testing-library"]
      - check_scripts: ["test", "test:unit", "test:integration"]
    runners:
      jest:
        detection_patterns:
          - dependency: "jest"
          - config_file: ["jest.config.js", "jest.config.json"]
        commands:
          test: "npm test"
          test_single_file: "npm test -- {file_path}"
          test_watch: "npm test -- --watch"
          test_coverage: "npm test -- --coverage"
        file_patterns:
          unit: ["**/*.test.js", "**/*.spec.js", "**/*.test.ts", "**/*.spec.ts"]
          integration: ["**/*.integration.test.js", "**/*.int.test.js"]
        report_paths:
          coverage: "coverage/lcov-report/index.html"
          junit: "coverage/junit.xml"
      vitest:
        detection_patterns:
          - dependency: "vitest"
          - config_file: ["vitest.config.js", "vitest.config.ts"]
        commands:
          test: "npm run test"
          test_single_file: "npx vitest run {file_path}"
          test_watch: "npx vitest"
          test_coverage: "npx vitest run --coverage"
        file_patterns:
          unit: ["**/*.test.js", "**/*.spec.js", "**/*.test.ts", "**/*.spec.ts"]
          integration: ["**/*.integration.test.js", "**/*.int.test.js"]
        report_paths:
          coverage: "coverage/index.html"
      mocha:
        detection_patterns:
          - dependency: "mocha"
          - config_file: [".mocharc.json", ".mocharc.yml"]
        commands:
          test: "npx mocha"
          test_single_file: "npx mocha {file_path}"
          test_watch: "npx mocha --watch"
          test_coverage: "npx nyc mocha"
        file_patterns:
          unit: ["test/**/*.js", "test/**/*.ts"]
          integration: ["test/integration/**/*.js"]
        report_paths:
          coverage: "coverage/index.html"
  # Python ecosystem
  python:
    priority: 2
    detection_files:
      - "requirements.txt"
      - "requirements-dev.txt"
      - "pyproject.toml"
      - "setup.py"
      - "pytest.ini"
      - "tox.ini"
    detection_logic:
      - check_requirements: ["pytest", "unittest2", "nose2"]
      - check_pyproject: ["pytest", "unittest"]
    runners:
      pytest:
        detection_patterns:
          - requirement: "pytest"
          - config_file: ["pytest.ini", "pyproject.toml", "setup.cfg"]
        commands:
          test: "pytest"
          test_single_file: "pytest {file_path}"
          test_watch: "pytest-watch"
          test_coverage: "pytest --cov=."
        file_patterns:
          unit: ["test_*.py", "*_test.py", "tests/unit/**/*.py"]
          integration: ["tests/integration/**/*.py", "tests/int/**/*.py"]
        report_paths:
          coverage: "htmlcov/index.html"
          junit: "pytest-report.xml"
      unittest:
        detection_patterns:
          - python_version: ">=2.7"
          - fallback: true
        commands:
          test: "python -m unittest discover"
          test_single_file: "python -m unittest {module_path}"
          test_coverage: "coverage run -m unittest discover && coverage html"
        file_patterns:
          unit: ["test_*.py", "*_test.py"]
          integration: ["integration_test_*.py"]
        report_paths:
          coverage: "htmlcov/index.html"
  # Go ecosystem
  go:
    priority: 3
    detection_files:
      - "go.mod"
      - "go.sum"
    detection_logic:
      - check_go_files: ["*_test.go"]
    runners:
      go_test:
        detection_patterns:
          - files_exist: ["*.go", "*_test.go"]
        commands:
          test: "go test ./..."
          test_single_package: "go test {package_path}"
          test_single_file: "go test -run {test_function}"
          test_coverage: "go test -coverprofile=coverage.out ./... && go tool cover -html=coverage.out"
          test_watch: "gotestsum --watch"
        file_patterns:
          unit: ["*_test.go"]
          integration: ["*_integration_test.go", "*_int_test.go"]
        report_paths:
          coverage: "coverage.html"
  # Java ecosystem
  java:
    priority: 4
    detection_files:
      - "pom.xml"
      - "build.gradle"
      - "build.gradle.kts"
    detection_logic:
      - check_maven_dependencies: ["junit", "testng", "junit-jupiter"]
      - check_gradle_dependencies: ["junit", "testng", "junit-platform"]
    runners:
      maven:
        detection_patterns:
          - file: "pom.xml"
        commands:
          test: "mvn test"
          test_single_class: "mvn test -Dtest={class_name}"
          test_coverage: "mvn clean jacoco:prepare-agent test jacoco:report"
        file_patterns:
          unit: ["src/test/java/**/*Test.java", "src/test/java/**/*Tests.java"]
          integration: ["src/test/java/**/*IT.java", "src/integration-test/java/**/*.java"]
        report_paths:
          coverage: "target/site/jacoco/index.html"
          surefire: "target/surefire-reports"
      gradle:
        detection_patterns:
          - file: ["build.gradle", "build.gradle.kts"]
        commands:
          test: "gradle test"
          test_single_class: "gradle test --tests {class_name}"
          test_coverage: "gradle test jacocoTestReport"
        file_patterns:
          unit: ["src/test/java/**/*Test.java", "src/test/java/**/*Tests.java"]
          integration: ["src/integrationTest/java/**/*.java"]
        report_paths:
          coverage: "build/reports/jacoco/test/html/index.html"
          junit: "build/test-results/test"
  # .NET ecosystem
  dotnet:
    priority: 5
    detection_files:
      - "*.csproj"
      - "*.sln"
      - "global.json"
    detection_logic:
      - check_project_references: ["Microsoft.NET.Test.Sdk", "xunit", "NUnit", "MSTest"]
    runners:
      dotnet_test:
        detection_patterns:
          - files_exist: ["*.csproj"]
          - test_project_reference: ["Microsoft.NET.Test.Sdk"]
        commands:
          test: "dotnet test"
          test_single_project: "dotnet test {project_path}"
          test_coverage: 'dotnet test --collect:"XPlat Code Coverage"'
          test_watch: "dotnet watch test"
        file_patterns:
          unit: ["**/*Tests.cs", "**/*Test.cs"]
          integration: ["**/*IntegrationTests.cs", "**/*.Integration.Tests.cs"]
        report_paths:
          coverage: "TestResults/*/coverage.cobertura.xml"
          trx: "TestResults/*.trx"
  # Ruby ecosystem
  ruby:
    priority: 6
    detection_files:
      - "Gemfile"
      - "*.gemspec"
    detection_logic:
      - check_gems: ["rspec", "minitest", "test-unit"]
    runners:
      rspec:
        detection_patterns:
          - gem: "rspec"
          - config_file: [".rspec", "spec/spec_helper.rb"]
        commands:
          test: "rspec"
          test_single_file: "rspec {file_path}"
          test_coverage: "rspec --coverage"
        file_patterns:
          unit: ["spec/**/*_spec.rb"]
          integration: ["spec/integration/**/*_spec.rb"]
        report_paths:
          coverage: "coverage/index.html"
      minitest:
        detection_patterns:
          - gem: "minitest"
        commands:
          test: "ruby -Itest test/test_*.rb"
          test_single_file: "ruby -Itest {file_path}"
        file_patterns:
          unit: ["test/test_*.rb", "test/*_test.rb"]
        report_paths:
          coverage: "coverage/index.html"
# Auto-detection algorithm
detection_algorithm:
  steps:
    1. scan_project_root: "Look for detection files in project root"
    2. check_subdirectories: "Scan up to 2 levels deep for test indicators"
    3. apply_priority_rules: "Higher priority languages checked first"
    4. validate_runner: "Ensure detected runner actually works"
    5. fallback_to_custom: "Use custom command if no runner detected"
  validation_commands:
    - run_help_command: "Check if runner responds to --help"
    - run_version_command: "Verify runner version"
    - check_sample_test: "Try to run a simple test if available"
# Fallback configuration
fallback:
  enabled: true
  custom_command: null # Will be prompted from user or config
  prompt_user:
    - "No test runner detected. Please specify test command:"
    - "Example: 'npm test' or 'pytest' or 'go test ./...'"
    - "Leave blank to skip test execution"
# TDD-specific settings
tdd_configuration:
  preferred_test_types:
    - unit # Fastest, most isolated
    - integration # Component interactions
    - e2e # Full user journeys
  test_execution_timeout: 300 # 5 minutes max per test run
  coverage_thresholds:
    minimum: 0.0 # No minimum by default
    warning: 70.0 # Warn below 70%
    target: 80.0 # Target 80%
    excellent: 90.0 # Excellent above 90%
  watch_mode:
    enabled: true
    file_patterns: ["src/**/*", "test/**/*", "tests/**/*"]
    ignore_patterns: ["node_modules/**", "coverage/**", "dist/**"]
# Integration with BMAD agents
agent_integration:
  qa_agent:
    commands_available:
      - "run_failing_tests"
      - "verify_test_isolation"
      - "check_mocking_strategy"
  dev_agent:
    commands_available:
      - "run_tests_for_implementation"
      - "check_coverage_improvement"
      - "validate_no_feature_creep"
  both_agents:
    commands_available:
      - "run_full_regression_suite"
      - "generate_coverage_report"
      - "validate_test_performance"


@@ -21,3 +21,14 @@ devLoadAlwaysFiles:
devDebugLog: .ai/debug-log.md
devStoryLocation: docs/stories
slashPrefix: BMad
tdd:
  enabled: false # Default: false for backward compatibility
  require_for_new_stories: true
  allow_red_phase_ci_failures: true
  default_test_type: unit
  test_runner:
    auto_detect: true
    custom_command: null
  coverage:
    min_threshold: 0.0
    report_format: lcov


@@ -0,0 +1,381 @@
<!-- Powered by BMAD™ Core -->
# TDD Green Phase Prompts
Instructions for Dev agents when implementing minimal code to make tests pass in Test-Driven Development.
## Core Green Phase Mindset
**You are a Dev Agent in TDD GREEN PHASE. Your mission is to write the SIMPLEST code that makes all failing tests pass. Resist the urge to be clever - be minimal.**
### Primary Objectives
1. **Make it work first** - Focus on making tests pass, not perfect design
2. **Minimal implementation** - Write only what's needed for green tests
3. **No feature creep** - Don't add functionality without failing tests
4. **Fast feedback** - Run tests frequently during implementation
5. **Traceability** - Link implementation directly to test requirements
## Implementation Strategy
### The Three Rules of TDD (Uncle Bob)
1. **Don't write production code** unless it makes a failing test pass
2. **Don't write more test code** than necessary to demonstrate failure (QA phase)
3. **Don't write more production code** than necessary to make failing tests pass
### Green Phase Workflow
```yaml
workflow:
  1. read_failing_test: 'Understand what the test expects'
  2. write_minimal_code: 'Simplest implementation to pass'
  3. run_test: 'Verify this specific test passes'
  4. run_all_tests: 'Ensure no regressions'
  5. repeat: 'Move to next failing test'
never_skip:
  - running_tests_after_each_change
  - checking_for_regressions
  - committing_when_green
```
### Minimal Implementation Examples
**Example 1: Start with Hardcoded Values**
```javascript
// Test expects:
it('should return user with ID when creating user', () => {
const result = userService.createUser({ name: 'Test' });
expect(result).toEqual({ id: 1, name: 'Test' });
});
// Minimal implementation (hardcode first):
function createUser(userData) {
return { id: 1, name: userData.name };
}
// Test expects different ID:
it('should return different ID for second user', () => {
userService.createUser({ name: 'First' });
const result = userService.createUser({ name: 'Second' });
expect(result.id).toBe(2);
});
// Now make it dynamic:
let nextId = 1;
function createUser(userData) {
return { id: nextId++, name: userData.name };
}
```
**Example 2: Validation Implementation**
```javascript
// Test expects validation error:
it('should throw error when email is invalid', () => {
expect(() => createUser({ email: 'invalid' })).toThrow('Invalid email format');
});
// Minimal validation:
function createUser(userData) {
if (!userData.email.includes('@')) {
throw new Error('Invalid email format');
}
return { id: nextId++, ...userData };
}
```
## Avoiding Feature Creep
### What NOT to Add (Yet)
```javascript
// Don't add these without failing tests:
// ❌ Comprehensive validation
function createUser(data) {
if (!data.email || !data.email.includes('@')) throw new Error('Invalid email');
if (!data.name || data.name.trim().length === 0) throw new Error('Name required');
if (data.age && (data.age < 0 || data.age > 150)) throw new Error('Invalid age');
// ... only add validation that has failing tests
}
// ❌ Performance optimizations
function createUser(data) {
// Don't add caching, connection pooling, etc. without tests
}
// ❌ Future features
function createUser(data) {
// Don't add roles, permissions, etc. unless tests require it
}
```
### What TO Add
```javascript
// ✅ Only what tests require:
function createUser(data) {
// Only validate what failing tests specify
if (!data.email.includes('@')) {
throw new Error('Invalid email format');
}
// Only return what tests expect
return { id: generateId(), ...data };
}
```
## Test-Code Traceability
### Linking Implementation to Tests
```javascript
// Test ID: UC-001
it('should create user with valid email', () => {
const result = createUser({ email: 'test@example.com', name: 'Test' });
expect(result).toHaveProperty('id');
});
// Implementation comment linking to test:
function createUser(data) {
// UC-001: Return user with generated ID
return {
id: generateId(),
...data,
};
}
```
### Commit Messages with Test References
```bash
# Good commit messages:
git commit -m "GREEN: Implement user creation [UC-001, UC-002]"
git commit -m "GREEN: Add email validation for createUser [UC-003]"
git commit -m "GREEN: Handle edge case for empty name [UC-004]"
# Avoid vague messages:
git commit -m "Fixed user service"
git commit -m "Added validation"
```
## Handling Different Test Types
### Unit Tests - Pure Logic
```javascript
// Test: Calculate tax for purchase
it('should calculate 10% tax on purchase amount', () => {
expect(calculateTax(100)).toBe(10);
});
// Minimal implementation:
function calculateTax(amount) {
return amount * 0.1;
}
```
### Integration Tests - Component Interaction
```javascript
// Test: Service uses injected database
it('should save user to database when created', async () => {
const mockDb = { save: jest.fn().mockResolvedValue({ id: 1 }) };
const service = new UserService(mockDb);
await service.createUser({ name: 'Test' });
expect(mockDb.save).toHaveBeenCalledWith({ name: 'Test' });
});
// Minimal implementation:
class UserService {
constructor(database) {
this.db = database;
}
async createUser(userData) {
return await this.db.save(userData);
}
}
```
### Error Handling Tests
```javascript
// Test: Handle database connection failure
it('should throw service error when database is unavailable', async () => {
const mockDb = { save: jest.fn().mockRejectedValue(new Error('DB down')) };
const service = new UserService(mockDb);
await expect(service.createUser({ name: 'Test' }))
.rejects.toThrow('Service temporarily unavailable');
});
// Minimal error handling:
async createUser(userData) {
try {
return await this.db.save(userData);
} catch (error) {
throw new Error('Service temporarily unavailable');
}
}
```
## Fast Feedback Loop
### Test Execution Strategy
```bash
# Run single test file while implementing:
npm test -- user-service.test.js --watch
pytest tests/unit/test_user_service.py -v
go test ./services -run TestUserService
# Run full suite after each feature:
npm test
pytest
go test ./...
```
### IDE Integration
```yaml
recommended_setup:
  - test_runner_integration: 'Tests run on save'
  - live_feedback: 'Immediate pass/fail indicators'
  - coverage_display: 'Show which lines are tested'
  - failure_details: 'Quick access to error messages'
```
## Common Green Phase Mistakes
### Mistake: Over-Implementation
```javascript
// Wrong: Adding features without tests
function createUser(data) {
// No test requires password hashing yet
const hashedPassword = hashPassword(data.password);
// No test requires audit logging yet
auditLog.record('user_created', data);
// Only implement what tests require
return { id: generateId(), ...data };
}
```
### Mistake: Premature Abstraction
```javascript
// Wrong: Creating abstractions too early
class UserValidatorFactory {
static createValidator(type) {
// Complex factory pattern without tests requiring it
}
}
// Right: Keep it simple until tests demand complexity
function createUser(data) {
if (!data.email.includes('@')) {
throw new Error('Invalid email');
}
return { id: generateId(), ...data };
}
```
### Mistake: Not Running Tests Frequently
```javascript
// Wrong: Writing lots of code before testing
function createUser(data) {
// 20 lines of code without running tests
// Many assumptions about what tests expect
}
// Right: Small changes, frequent test runs
function createUser(data) {
return { id: 1, ...data }; // Run test - passes
}
// Then add next failing test's requirement:
function createUser(data) {
if (!data.email.includes('@')) throw new Error('Invalid email');
return { id: 1, ...data }; // Run test - passes
}
```
## Quality Standards in Green Phase
### Acceptable Technical Debt
```javascript
// OK during Green phase (will fix in Refactor):
function createUser(data) {
// Hardcoded values
const id = 1;
// Duplicated validation logic
if (!data.email.includes('@')) throw new Error('Invalid email');
if (!data.name || data.name.trim() === '') throw new Error('Name required');
// Simple algorithm even if inefficient
return { id: Math.floor(Math.random() * 1000000), ...data };
}
```
### Minimum Standards (Even in Green)
```javascript
// Always maintain:
function createUser(data) {
// Clear variable names
const userData = { ...data };
const userId = generateId();
// Proper error messages
if (!userData.email.includes('@')) {
throw new Error('Invalid email format');
}
// Return expected structure
return { id: userId, ...userData };
}
```
## Green Phase Checklist
Before moving to Refactor phase, ensure:
- [ ] **All tests passing** - No failing tests remain
- [ ] **No regressions** - Previously passing tests still pass
- [ ] **Minimal implementation** - Only code needed for tests
- [ ] **Clear test traceability** - Implementation addresses specific tests
- [ ] **No feature creep** - No functionality without tests
- [ ] **Basic quality standards** - Code is readable and correct
- [ ] **Frequent commits** - Changes committed with test references
- [ ] **Story metadata updated** - TDD status set to 'green'
## Success Indicators
**You know you're succeeding in Green phase when:**
1. **All tests consistently pass**
2. **Implementation is obviously minimal**
3. **Each code block addresses specific test requirements**
4. **No functionality exists without corresponding tests**
5. **Tests run quickly and reliably**
6. **Code changes are small and focused**
**Green phase is complete when:**
- Zero failing tests
- Implementation covers all test scenarios
- Code is minimal but correct
- Ready for refactoring improvements
Remember: Green phase is about making it work, not making it perfect. Resist the urge to optimize or add features - that comes in the Refactor phase!


@@ -0,0 +1,320 @@
<!-- Powered by BMAD™ Core -->
# TDD Red Phase Prompts
Instructions for QA agents when writing failing tests first in Test-Driven Development.
## Core Red Phase Mindset
**You are a QA Agent in TDD RED PHASE. Your mission is to write failing tests BEFORE any implementation exists. These tests define what success looks like.**
### Primary Objectives
1. **Test First, Always:** Write tests before any production code
2. **Describe Behavior:** Tests should express user/system expectations
3. **Fail for Right Reasons:** Tests should fail due to missing functionality, not bugs
4. **Minimal Scope:** Start with the smallest possible feature slice
5. **External Isolation:** Mock all external dependencies
## Test Writing Guidelines
### Test Structure Template
```javascript
describe('{ComponentName}', () => {
describe('{specific_behavior}', () => {
it('should {expected_behavior} when {condition}', () => {
// Given (Arrange) - Set up test conditions
const input = createTestInput();
const mockDependency = createMock();
// When (Act) - Perform the action
const result = systemUnderTest.performAction(input);
// Then (Assert) - Verify expectations
expect(result).toEqual(expectedOutput);
expect(mockDependency).toHaveBeenCalledWith(expectedArgs);
});
});
});
```
### Test Naming Conventions
**Pattern:** `should {expected_behavior} when {condition}`
**Good Examples:**
- `should return user profile when valid ID provided`
- `should throw validation error when email is invalid`
- `should create empty cart when user first visits`
**Avoid:**
- `testUserCreation` (not descriptive)
- `should work correctly` (too vague)
- `test_valid_input` (focuses on input, not behavior)
## Mocking Strategy
### When to Mock
```yaml
always_mock:
  - External APIs and web services
  - Database connections and queries
  - File system operations
  - Network requests
  - Current time/date functions
  - Random number generators
  - Third-party libraries
never_mock:
  - Pure functions without side effects
  - Simple data structures
  - Language built-ins (unless time/random)
  - Domain objects under test
```
### Mock Implementation Examples
```javascript
// Mock external API
const mockApiClient = {
getUserById: jest.fn().mockResolvedValue({ id: 1, name: 'Test User' }),
createUser: jest.fn().mockResolvedValue({ id: 2, name: 'New User' }),
};
// Mock time for deterministic tests
const mockDate = new Date('2025-01-01T10:00:00Z');
jest.useFakeTimers().setSystemTime(mockDate);
// Mock database
const mockDb = {
users: {
findById: jest.fn(),
create: jest.fn(),
update: jest.fn(),
},
};
```
## Test Data Management
### Deterministic Test Data
```javascript
// Good: Predictable, meaningful test data
const testUser = {
id: 'user-123',
email: 'test@example.com',
name: 'Test User',
createdAt: '2025-01-01T10:00:00Z',
};
// Avoid: Random or meaningless data
const testUser = {
id: Math.random(),
email: 'a@b.com',
name: 'x',
};
```
### Test Data Builders
```javascript
class UserBuilder {
constructor() {
this.user = {
id: 'default-id',
email: 'default@example.com',
name: 'Default User',
};
}
withEmail(email) {
this.user.email = email;
return this;
}
withId(id) {
this.user.id = id;
return this;
}
build() {
return { ...this.user };
}
}
// Usage
const validUser = new UserBuilder().withEmail('valid@email.com').build();
const invalidUser = new UserBuilder().withEmail('invalid-email').build();
```
## Edge Cases and Error Scenarios
### Prioritize Error Conditions
```javascript
// Test error conditions first - they're often forgotten
describe('UserService.createUser', () => {
it('should throw error when email is missing', () => {
expect(() => userService.createUser({ name: 'Test' })).toThrow('Email is required');
});
it('should throw error when email format is invalid', () => {
expect(() => userService.createUser({ email: 'invalid' })).toThrow('Invalid email format');
});
// Happy path comes after error conditions
it('should create user when all data is valid', () => {
const userData = { email: 'test@example.com', name: 'Test' };
const result = userService.createUser(userData);
expect(result).toEqual(expect.objectContaining(userData));
});
});
```
### Boundary Value Testing
```javascript
describe('validateAge', () => {
it('should reject age below minimum (17)', () => {
expect(() => validateAge(17)).toThrow('Age must be 18 or older');
});
it('should accept minimum valid age (18)', () => {
expect(validateAge(18)).toBe(true);
});
it('should accept maximum reasonable age (120)', () => {
expect(validateAge(120)).toBe(true);
});
it('should reject unreasonable age (121)', () => {
expect(() => validateAge(121)).toThrow('Invalid age');
});
});
```
## Test Organization
### File Structure
```
tests/
├── unit/
│   ├── services/
│   │   ├── user-service.test.js
│   │   └── order-service.test.js
│   ├── utils/
│   │   └── validation.test.js
├── integration/
│   ├── api/
│   │   └── user-api.integration.test.js
└── fixtures/
    ├── users.js
    └── orders.js
```
### Test Suite Organization
```javascript
describe('UserService', () => {
// Setup once per test suite
beforeAll(() => {
// Expensive setup that can be shared
});
// Setup before each test
beforeEach(() => {
// Fresh state for each test
mockDb.reset();
});
describe('createUser', () => {
// Group related tests
});
describe('updateUser', () => {
// Another behavior group
});
});
```
## Red Phase Checklist
Before handing off to Dev Agent, ensure:
- [ ] **Tests written first** - No implementation code exists yet
- [ ] **Tests are failing** - Confirmed by running test suite
- [ ] **Fail for right reasons** - Missing functionality, not syntax errors
- [ ] **External dependencies mocked** - No network/DB/file system calls
- [ ] **Deterministic data** - No random values or current time
- [ ] **Clear test names** - Behavior is obvious from test name
- [ ] **Proper assertions** - Tests verify expected outcomes
- [ ] **Error scenarios included** - Edge cases and validation errors
- [ ] **Minimal scope** - Tests cover smallest useful feature
- [ ] **Story metadata updated** - TDD status set to 'red', test list populated
## Common Red Phase Mistakes
### Mistake: Writing Tests After Code
```javascript
// Wrong: Implementation already exists
function createUser(data) {
return { id: 1, ...data }; // Code exists
}
it('should create user', () => {
// Writing test after implementation
});
```
### Mistake: Testing Implementation Details
```javascript
// Wrong: Testing how it works
it('should call database.insert with user data', () => {
// Testing internal implementation
});
// Right: Testing what it does
it('should return created user with ID', () => {
// Testing observable behavior
});
```
### Mistake: Non-Deterministic Tests
```javascript
// Wrong: Random data
const userId = Math.random();
const createdAt = new Date(); // Current time
// Right: Fixed data
const userId = 'test-user-123';
const createdAt = '2025-01-01T10:00:00Z';
```
## Success Indicators
**You know you're succeeding in Red phase when:**
1. **Tests clearly describe expected behavior**
2. **All tests fail with meaningful error messages**
3. **No external dependencies cause test failures**
4. **Tests can be understood without seeing implementation**
5. **Error conditions are tested first**
6. **Test names tell a story of what the system should do**
**Red phase is complete when:**
- All planned tests are written and failing
- Failure messages clearly indicate missing functionality
- Dev Agent can understand exactly what to implement
- Story metadata reflects current TDD state
Remember: Your tests are the specification. Make them clear, complete, and compelling!


@@ -0,0 +1,562 @@
<!-- Powered by BMAD™ Core -->
# TDD Refactor Phase Prompts
Instructions for Dev and QA agents when refactoring code while maintaining green tests in Test-Driven Development.
## Core Refactor Phase Mindset
**You are in TDD REFACTOR PHASE. Your mission is to improve code quality while keeping ALL tests green. Every change must preserve existing behavior.**
### Primary Objectives
1. **Preserve behavior** - External behavior must remain exactly the same
2. **Improve design** - Make code more readable, maintainable, and extensible
3. **Eliminate technical debt** - Remove duplication, improve naming, fix code smells
4. **Maintain test coverage** - All tests must stay green throughout
5. **Small steps** - Make incremental improvements with frequent test runs
## Refactoring Safety Rules
### The Golden Rule
**NEVER proceed with a refactoring step if tests are red.** Always revert and try smaller changes.
### Safe Refactoring Workflow
```yaml
refactoring_cycle:
  1. identify_smell: 'Find specific code smell to address'
  2. plan_change: 'Decide on minimal improvement step'
  3. run_tests: 'Ensure all tests are green before starting'
  4. make_change: 'Apply single, small refactoring'
  5. run_tests: 'Verify tests are still green'
  6. commit: 'Save progress if tests pass'
  7. repeat: 'Move to next improvement'
abort_conditions:
  - tests_turn_red: 'Immediately revert and try smaller step'
  - behavior_changes: 'Revert if external interface changes'
  - complexity_increases: 'Revert if code becomes harder to understand'
```
## Code Smells and Refactoring Techniques
### Duplication Elimination
**Before: Repeated validation logic**
```javascript
function createUser(data) {
if (!data.email.includes('@')) {
throw new Error('Invalid email format');
}
return { id: generateId(), ...data };
}
function updateUser(id, data) {
if (!data.email.includes('@')) {
throw new Error('Invalid email format');
}
return { id, ...data };
}
```
**After: Extract validation function**
```javascript
function validateEmail(email) {
if (!email.includes('@')) {
throw new Error('Invalid email format');
}
}
function createUser(data) {
validateEmail(data.email);
return { id: generateId(), ...data };
}
function updateUser(id, data) {
validateEmail(data.email);
return { id, ...data };
}
```
### Long Method Refactoring
**Before: Method doing too much**
```javascript
function processUserRegistration(userData) {
// Validation (5 lines)
if (!userData.email.includes('@')) throw new Error('Invalid email');
if (!userData.name || userData.name.trim().length === 0) throw new Error('Name required');
if (userData.age < 18) throw new Error('Must be 18 or older');
// Data transformation (4 lines)
const user = {
id: generateId(),
email: userData.email.toLowerCase(),
name: userData.name.trim(),
age: userData.age,
};
// Business logic (3 lines)
if (userData.age >= 65) {
user.discountEligible = true;
}
return user;
}
```
**After: Extract methods**
```javascript
function validateUserData(userData) {
if (!userData.email.includes('@')) throw new Error('Invalid email');
if (!userData.name || userData.name.trim().length === 0) throw new Error('Name required');
if (userData.age < 18) throw new Error('Must be 18 or older');
}
function normalizeUserData(userData) {
return {
id: generateId(),
email: userData.email.toLowerCase(),
name: userData.name.trim(),
age: userData.age,
};
}
function applyBusinessRules(user) {
if (user.age >= 65) {
user.discountEligible = true;
}
return user;
}
function processUserRegistration(userData) {
validateUserData(userData);
const user = normalizeUserData(userData);
return applyBusinessRules(user);
}
```
### Magic Numbers and Constants
**Before: Magic numbers scattered**
```javascript
function calculateShipping(weight) {
if (weight < 5) {
return 4.99;
} else if (weight < 20) {
return 9.99;
} else {
return 19.99;
}
}
```
**After: Named constants**
```javascript
const SHIPPING_RATES = {
LIGHT_WEIGHT_THRESHOLD: 5,
MEDIUM_WEIGHT_THRESHOLD: 20,
LIGHT_SHIPPING_COST: 4.99,
MEDIUM_SHIPPING_COST: 9.99,
HEAVY_SHIPPING_COST: 19.99,
};
function calculateShipping(weight) {
if (weight < SHIPPING_RATES.LIGHT_WEIGHT_THRESHOLD) {
return SHIPPING_RATES.LIGHT_SHIPPING_COST;
} else if (weight < SHIPPING_RATES.MEDIUM_WEIGHT_THRESHOLD) {
return SHIPPING_RATES.MEDIUM_SHIPPING_COST;
} else {
return SHIPPING_RATES.HEAVY_SHIPPING_COST;
}
}
```
### Variable Naming Improvements
**Before: Unclear names**
```javascript
function calc(u, p) {
const t = u * p;
const d = t * 0.1;
return t - d;
}
```
**After: Intention-revealing names**
```javascript
function calculateNetPrice(unitPrice, quantity) {
const totalPrice = unitPrice * quantity;
const discount = totalPrice * 0.1;
return totalPrice - discount;
}
```
## Refactoring Strategies by Code Smell
### Complex Conditionals
**Before: Nested conditions**
```javascript
function determineUserType(user) {
if (user.age >= 18) {
if (user.hasAccount) {
if (user.isPremium) {
return 'premium-member';
} else {
return 'basic-member';
}
} else {
return 'guest-adult';
}
} else {
return 'minor';
}
}
```
**After: Guard clauses and early returns**
```javascript
function determineUserType(user) {
if (user.age < 18) {
return 'minor';
}
if (!user.hasAccount) {
return 'guest-adult';
}
return user.isPremium ? 'premium-member' : 'basic-member';
}
```
### Large Classes (God Object)
**Before: Class doing too much**
```javascript
class UserManager {
validateUser(data) {
/* validation logic */
}
createUser(data) {
/* creation logic */
}
sendWelcomeEmail(user) {
/* email logic */
}
logUserActivity(user, action) {
/* logging logic */
}
calculateUserStats(user) {
/* analytics logic */
}
}
```
**After: Single responsibility classes**
```javascript
class UserValidator {
validate(data) {
/* validation logic */
}
}
class UserService {
create(data) {
/* creation logic */
}
}
class EmailService {
sendWelcome(user) {
/* email logic */
}
}
class ActivityLogger {
log(user, action) {
/* logging logic */
}
}
class UserAnalytics {
calculateStats(user) {
/* analytics logic */
}
}
```
## Collaborative Refactoring (Dev + QA)
### When to Involve QA Agent
**QA Agent should participate when:**
```yaml
qa_involvement_triggers:
  test_modification_needed:
    - 'Test expectations need updating'
    - 'New test cases discovered during refactoring'
    - 'Mock strategies need adjustment'
  coverage_assessment:
    - 'Refactoring exposes untested code paths'
    - 'New methods need test coverage'
    - 'Test organization needs improvement'
  design_validation:
    - 'Interface changes affect test structure'
    - 'Mocking strategy becomes complex'
    - 'Test maintainability concerns'
```
### Dev-QA Collaboration Workflow
```yaml
collaborative_steps:
  1. dev_identifies_refactoring: 'Dev spots code smell'
  2. assess_test_impact: 'Both agents review test implications'
  3. plan_refactoring: 'Agree on approach and steps'
  4. dev_refactors: 'Dev makes incremental changes'
  5. qa_validates_tests: 'QA ensures tests remain valid'
  6. both_review: 'Joint review of improved code and tests'
```
## Advanced Refactoring Patterns
### Extract Interface for Testability
**Before: Hard to test due to dependencies**
```javascript
class OrderService {
constructor() {
this.emailSender = new EmailSender();
this.paymentProcessor = new PaymentProcessor();
}
processOrder(order) {
const result = this.paymentProcessor.charge(order.total);
this.emailSender.sendConfirmation(order.customerEmail);
return result;
}
}
```
**After: Dependency injection for testability**
```javascript
class OrderService {
constructor(emailSender, paymentProcessor) {
this.emailSender = emailSender;
this.paymentProcessor = paymentProcessor;
}
processOrder(order) {
const result = this.paymentProcessor.charge(order.total);
this.emailSender.sendConfirmation(order.customerEmail);
return result;
}
}
// Usage in production:
const orderService = new OrderService(new EmailSender(), new PaymentProcessor());
// Usage in tests:
const mockEmail = { sendConfirmation: jest.fn() };
const mockPayment = { charge: jest.fn().mockReturnValue('success') };
const orderService = new OrderService(mockEmail, mockPayment);
```
### Replace Conditional with Polymorphism
**Before: Switch statement**
```javascript
function calculateArea(shape) {
switch (shape.type) {
case 'circle':
return Math.PI * shape.radius * shape.radius;
case 'rectangle':
return shape.width * shape.height;
case 'triangle':
return 0.5 * shape.base * shape.height;
default:
throw new Error('Unknown shape type');
}
}
```
**After: Polymorphic classes**
```javascript
class Circle {
constructor(radius) {
this.radius = radius;
}
calculateArea() {
return Math.PI * this.radius * this.radius;
}
}
class Rectangle {
constructor(width, height) {
this.width = width;
this.height = height;
}
calculateArea() {
return this.width * this.height;
}
}
class Triangle {
constructor(base, height) {
this.base = base;
this.height = height;
}
calculateArea() {
return 0.5 * this.base * this.height;
}
}
```
## Refactoring Safety Checks
### Before Each Refactoring Step
```bash
# 1. Ensure all tests are green
npm test
pytest
go test ./...
# 2. Consider impact
# - Will this change external interfaces?
# - Are there hidden dependencies?
# - Could this affect performance significantly?
# 3. Plan the smallest possible step
# - What's the minimal change that improves code?
# - Can this be broken into smaller steps?
```
### After Each Refactoring Step
```bash
# 1. Run tests immediately
npm test
# 2. If tests fail:
git checkout -- . # Revert changes
# Plan smaller refactoring step
# 3. If tests pass:
git add .
git commit -m "REFACTOR: Extract validateEmail function [maintains UC-001, UC-002]"
```
## Refactoring Anti-Patterns
### Don't Change Behavior
```javascript
// Wrong: Changing logic during refactoring
function calculateDiscount(amount) {
// Original: 10% discount
return amount * 0.1;
// Refactored: DON'T change the discount rate
return amount * 0.15; // This changes behavior!
}
// Right: Only improve structure
const DISCOUNT_RATE = 0.1; // Extract constant
function calculateDiscount(amount) {
return amount * DISCOUNT_RATE; // Same behavior
}
```
### Don't Add Features
```javascript
// Wrong: Adding features during refactoring
function validateUser(userData) {
validateEmail(userData.email); // Existing
validateName(userData.name); // Existing
validateAge(userData.age); // DON'T add new validation
}
// Right: Only improve existing code
function validateUser(userData) {
validateEmail(userData.email);
validateName(userData.name);
// Age validation needs its own failing test first
}
```
### Don't Make Large Changes
```javascript
// Wrong: Massive refactoring in one step
class UserService {
// Completely rewrite entire class structure
}
// Right: Small, incremental improvements
class UserService {
// Extract one method at a time
// Rename one variable at a time
// Improve one code smell at a time
}
```
## Refactor Phase Checklist
Before considering refactoring complete:
- [ ] **All tests remain green** - No test failures introduced
- [ ] **Code quality improved** - Measurable improvement in readability/maintainability
- [ ] **No behavior changes** - External behavior is identical
- [ ] **Technical debt reduced** - Specific code smells addressed
- [ ] **Small commits made** - Each improvement committed separately
- [ ] **Documentation updated** - Comments and docs reflect changes
- [ ] **Performance maintained** - No significant performance degradation
- [ ] **Story metadata updated** - Refactoring notes and improvements documented
## Success Indicators
**Refactoring is successful when:**
1. **All tests consistently pass** throughout the process
2. **Code is noticeably easier to read** and understand
3. **Duplication has been eliminated** or significantly reduced
4. **Method/class sizes are more reasonable** (functions < 15 lines)
5. **Variable and function names clearly express intent**
6. **Code complexity has decreased** (fewer nested conditions)
7. **Future changes will be easier** due to better structure
**Refactoring is complete when:**
- No obvious code smells remain in the story scope
- Code quality metrics show improvement
- Tests provide comprehensive safety net
- Ready for next TDD cycle or story completion
Remember: Refactoring is about improving design, not adding features. Keep tests green, make small changes, and focus on making the code better for the next developer!


@@ -0,0 +1,89 @@
# <!-- Powered by BMAD™ Core -->
# Story Frontmatter Schema - Defines optional metadata fields for story files
# Core story metadata (existing)
story:
  epic:
    type: string
    required: true
    description: "Epic number (e.g., '1')"
  number:
    type: string
    required: true
    description: "Story number within epic (e.g., '3')"
  title:
    type: string
    required: true
    description: "Human-readable story title"
  status:
    type: enum
    values: [draft, ready, inprogress, review, done]
    default: draft
    description: "Current story status"
  priority:
    type: enum
    values: [low, medium, high, critical]
    default: medium
    description: "Story priority level"
# TDD-specific metadata (optional - only when tdd.enabled=true)
tdd:
  status:
    type: enum
    values: [red, green, refactor, done]
    required: false
    description: "Current TDD cycle phase"
  cycle:
    type: integer
    default: 1
    description: "Current red-green-refactor cycle number"
  tests:
    type: array
    items:
      id:
        type: string
        description: "Unique test identifier"
      name:
        type: string
        description: "Human-readable test name"
      type:
        type: enum
        values: [unit, integration, e2e]
        default: unit
      status:
        type: enum
        values: [planned, failing, passing, skipped]
        default: planned
      file_path:
        type: string
        description: "Test file path relative to project root"
    description: "Array of planned/implemented test cases"
  coverage_target:
    type: float
    minimum: 0.0
    maximum: 100.0
    description: "Target code coverage percentage for this story"
  coverage_actual:
    type: float
    minimum: 0.0
    maximum: 100.0
    description: "Actual code coverage achieved"
# Tracking metadata
tracking:
  created_date:
    type: string
    format: iso8601
    description: "When story was created"
  updated_date:
    type: string
    format: iso8601
    description: "When story was last updated"
  estimated_hours:
    type: float
    minimum: 0.0
    description: "Estimated development hours"
  actual_hours:
    type: float
    minimum: 0.0
    description: "Actual development hours spent"


@@ -0,0 +1,323 @@
<!-- Powered by BMAD™ Core -->
# tdd-implement
Implement minimal code to make failing tests pass - the "Green" phase of TDD.
## Purpose
Write the simplest possible implementation that makes all failing tests pass. This is the "Green" phase of TDD where we focus on making tests pass with minimal, clean code.
## Prerequisites
- Story has failing tests (tdd.status: red)
- All tests fail for correct reasons (missing implementation, not bugs)
- Test runner is configured and working
- Dev agent has reviewed failing tests and acceptance criteria
## Inputs
```yaml
required:
  - story_id: '{epic}.{story}' # e.g., "1.3"
  - story_path: '{devStoryLocation}/{epic}.{story}.*.md' # Path from core-config.yaml
  - failing_tests: # List from story TDD metadata
      - id: test identifier
      - file_path: path to test file
      - status: failing
```
## Process
### 1. Review Failing Tests
Before writing any code (an annotated example follows this list):
- Read each failing test to understand expected behavior
- Identify the interfaces/classes/functions that need to be created
- Note expected inputs, outputs, and error conditions
- Understand the test's mocking strategy
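For instance, a hypothetical failing test like the one below already tells the Dev agent what to create: the module, the function signature, the expected error contract, and the fact that validation is synchronous.

```javascript
// Hypothetical failing test handed over from the Red phase
it('should reject registration when email is invalid', () => {
  // Implies: a user-service module exporting createUser(data),
  // synchronous validation, and this exact error message
  expect(() => createUser({ email: 'invalid', name: 'Test' })).toThrow('Invalid email format');
});
```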
### 2. Design Minimal Implementation
**TDD Green Phase Principles:**
- **Make it work first, then make it right**
- **Simplest thing that could possibly work**
- **No feature without a failing test**
- **Avoid premature abstraction**
- **Prefer duplication over wrong abstraction**
### 3. Implement Code
**Implementation Strategy:**
```yaml
approach: 1. Start with simplest happy path test
  2. Write minimal code to pass that test
  3. Run tests frequently (after each small change)
  4. Move to next failing test
  5. Repeat until all tests pass
avoid:
  - Adding features not covered by tests
  - Complex algorithms when simple ones suffice
  - Premature optimization
  - Over-engineering the solution
```
**Example Implementation Progression:**
```javascript
// First test: should return user with id
// Minimal implementation:
function createUser(userData) {
return { id: 1, ...userData };
}
// Second test: should validate email format
// Expand implementation:
function createUser(userData) {
if (!userData.email.includes('@')) {
throw new Error('Invalid email format');
}
return { id: 1, ...userData };
}
```
### 4. Run Tests Continuously
**Test-Driven Workflow:**
1. Run specific failing test
2. Write minimal code to make it pass
3. Run that test again to confirm green
4. Run full test suite to ensure no regressions
5. Move to next failing test
**Test Execution Commands:**
```bash
# Run specific test file
npm test -- user-service.test.js
pytest tests/unit/test_user_service.py
go test ./services/user_test.go
# Run full test suite
npm test
pytest
go test ./...
```
### 5. Handle Edge Cases
Implement only edge cases that have corresponding tests (see the sketch after this list):
- Input validation as tested
- Error conditions as specified in tests
- Boundary conditions covered by tests
- Nothing more, nothing less
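A sketch of what this looks like in practice, assuming the only failing edge-case test covers an empty name:

```javascript
// Only the empty-name edge case has a failing test, so only that check is added
function createUser(data) {
  if (!data.name || data.name.trim() === '') {
    throw new Error('Name is required');
  }
  // No age, length, or duplicate-email checks yet - no tests demand them
  return { id: generateId(), ...data };
}
```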
### 6. Maintain Test-Code Traceability
**Commit Strategy:**
```bash
git add tests/ src/
git commit -m "GREEN: Implement user creation [UC-001, UC-002]"
```
Link implementation to specific test IDs in commits for traceability.
### 7. Update Story Metadata
Update TDD status to green:
```yaml
tdd:
  status: green
  cycle: 1
  tests:
    - id: 'UC-001'
      name: 'should create user with valid email'
      type: unit
      status: passing
      file_path: 'tests/unit/user-service.test.js'
    - id: 'UC-002'
      name: 'should reject user with invalid email'
      type: unit
      status: passing
      file_path: 'tests/unit/user-service.test.js'
```
## Output Requirements
### 1. Working Implementation
Create source files that:
- Make all failing tests pass
- Follow project coding standards
- Are minimal and focused
- Have clear, intention-revealing names
### 2. Test Execution Report
```bash
Running tests...
✅ UserService > should create user with valid email
✅ UserService > should reject user with invalid email
2 passing, 0 failing
```
### 3. Story File Updates
Append to TDD section:
```markdown
## TDD Progress
### Green Phase - Cycle 1
**Date:** {current_date}
**Agent:** James (Dev Agent)
**Implementation Summary:**
- Created UserService class with create() method
- Added email validation for @ symbol
- All tests now passing ✅
**Files Modified:**
- src/services/user-service.js (created)
**Test Results:**
- UC-001: should create user with valid email (PASSING ✅)
- UC-002: should reject user with invalid email (PASSING ✅)
**Next Step:** Review implementation for refactoring opportunities
```
## Implementation Guidelines
### Code Quality Standards
**During Green Phase:**
- **Readable:** Clear variable and function names
- **Simple:** Avoid complex logic when simple works
- **Testable:** Code structure supports the tests
- **Focused:** Each function has single responsibility
**Acceptable Technical Debt (to be addressed in Refactor phase):**
- Code duplication if it keeps tests green
- Hardcoded values if they make tests pass
- Simple algorithms even if inefficient
- Minimal error handling beyond what tests require
### Common Patterns
**Factory Functions:**
```javascript
function createUser(data) {
// Minimal validation
return { id: generateId(), ...data };
}
```
**Error Handling:**
```javascript
function validateEmail(email) {
if (!email.includes('@')) {
throw new Error('Invalid email');
}
}
```
**State Management:**
```javascript
class UserService {
constructor(database) {
this.db = database; // Accept injected dependency
}
}
```
## Error Handling
**If tests still fail after implementation:**
- Review test expectations vs actual implementation
- Check for typos in function/method names
- Verify correct imports/exports
- Ensure proper handling of async operations
**If tests pass unexpectedly without changes:**
- Implementation might already exist
- Test might be incorrect
- Review git status for unexpected changes
**If new tests start failing:**
- Implementation may have broken existing functionality
- Review change impact
- Fix regressions before continuing
## Anti-Patterns to Avoid
**Feature Creep:**
- Don't implement features without failing tests
- Don't add "obviously needed" functionality
**Premature Optimization:**
- Don't optimize for performance in green phase
- Focus on correctness first
**Over-Engineering:**
- Don't add abstraction layers without tests requiring them
- Avoid complex design patterns in initial implementation
## Completion Criteria
- [ ] All previously failing tests now pass
- [ ] No existing tests broken (regression check)
- [ ] Implementation is minimal and focused
- [ ] Code follows project standards
- [ ] Story TDD status updated to 'green'
- [ ] Files properly committed with test traceability
- [ ] Ready for refactor phase assessment
## Validation Commands
```bash
# Verify all tests pass
npm test
pytest
go test ./...
mvn test
dotnet test
# Check code quality (basic)
npm run lint
flake8 .
golint ./...
```
## Key Principles
- **Make it work:** Green tests are the only measure of success
- **Keep it simple:** Resist the urge to make it elegant just yet
- **One test at a time:** Focus on single failing test
- **Fast feedback:** Run tests frequently during development
- **No speculation:** Only implement what tests require


@@ -0,0 +1,371 @@
<!-- Powered by BMAD™ Core -->
# tdd-refactor
Safely refactor code while keeping all tests green - the "Refactor" phase of TDD.
## Purpose
Improve code quality, eliminate duplication, and enhance design while maintaining all existing functionality. This is the "Refactor" phase of TDD where we make the code clean and maintainable.
## Prerequisites
- All tests are passing (tdd.status: green)
- Implementation is complete and functional
- Test suite provides safety net for refactoring
- Code follows basic project standards
## Inputs
```yaml
required:
  - story_id: '{epic}.{story}' # e.g., "1.3"
  - story_path: '{devStoryLocation}/{epic}.{story}.*.md' # Path from core-config.yaml
  - passing_tests: # All tests should be green
      - id: test identifier
      - status: passing
  - implementation_files: # Source files to potentially refactor
      - path: file path
      - purpose: what it does
```
## Process
### 1. Identify Refactoring Opportunities
**Code Smells to Look For:**
```yaml
common_smells:
duplication:
- Repeated code blocks
- Similar logic in different places
- Copy-paste patterns
complexity:
- Long methods/functions (>10-15 lines)
- Too many parameters (>3-4)
- Nested conditions (>2-3 levels)
- Complex boolean expressions
naming:
- Unclear variable names
- Non-descriptive function names
- Inconsistent naming conventions
structure:
- God objects/classes doing too much
- Primitive obsession
    - Feature envy (a method that relies more on another class's data than its own)
- Long parameter lists
```
### 2. Plan Refactoring Steps
**Refactoring Strategy:**
- **One change at a time:** Make small, atomic improvements
- **Run tests after each change:** Ensure no functionality breaks
- **Commit frequently:** Create checkpoints for easy rollback
- **Improve design:** Move toward better architecture
**Common Refactoring Techniques:**
```yaml
extract_methods:
when: 'Function is too long or doing multiple things'
technique: 'Extract complex logic into named methods'
rename_variables:
when: "Names don't clearly express intent"
technique: 'Use intention-revealing names'
eliminate_duplication:
when: 'Same code appears in multiple places'
technique: 'Extract to shared function/method'
simplify_conditionals:
when: 'Complex boolean logic is hard to understand'
technique: 'Extract to well-named boolean methods'
introduce_constants:
when: 'Magic numbers or strings appear repeatedly'
technique: 'Create named constants'
```
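As a small sketch of the `simplify_conditionals` and `introduce_constants` techniques above (the rules and names are invented for illustration, not taken from any project):
```javascript
// Before: magic number and a compound condition inline
function canCheckout(cart, user) {
  return cart.items.length > 0 && cart.total >= 5 && user.verified && !user.suspended;
}

// After: named constant plus intention-revealing predicates
const MINIMUM_ORDER_TOTAL = 5;

function hasBillableItems(cart) {
  return cart.items.length > 0 && cart.total >= MINIMUM_ORDER_TOTAL;
}

function isEligibleCustomer(user) {
  return user.verified && !user.suspended;
}

function canCheckout(cart, user) {
  return hasBillableItems(cart) && isEligibleCustomer(user);
}
```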
### 3. Execute Refactoring
**Step-by-Step Process:**
1. **Choose smallest improvement**
2. **Make the change**
3. **Run all tests**
4. **Commit if green**
5. **Repeat**
**Example Refactoring Sequence:**
```javascript
// Before refactoring
function createUser(data) {
if (!data.email.includes('@') || data.email.length < 5) {
throw new Error('Invalid email format');
}
if (!data.name || data.name.trim().length === 0) {
throw new Error('Name is required');
}
return {
id: Math.floor(Math.random() * 1000000),
...data,
createdAt: new Date().toISOString(),
};
}
// After refactoring - Step 1: Extract validation
function validateEmail(email) {
return email.includes('@') && email.length >= 5;
}
function validateName(name) {
return name && name.trim().length > 0;
}
function createUser(data) {
if (!validateEmail(data.email)) {
throw new Error('Invalid email format');
}
if (!validateName(data.name)) {
throw new Error('Name is required');
}
return {
id: Math.floor(Math.random() * 1000000),
...data,
createdAt: new Date().toISOString(),
};
}
// After refactoring - Step 2: Extract ID generation
function generateUserId() {
return Math.floor(Math.random() * 1000000);
}
function createUser(data) {
if (!validateEmail(data.email)) {
throw new Error('Invalid email format');
}
if (!validateName(data.name)) {
throw new Error('Name is required');
}
return {
id: generateUserId(),
...data,
createdAt: new Date().toISOString(),
};
}
```
### 4. Test After Each Change
**Critical Rule:** Never proceed without green tests
```bash
# Run tests after each refactoring step
npm test
pytest
go test ./...
# If tests fail:
# 1. Undo the change
# 2. Understand what broke
# 3. Try smaller refactoring
# 4. Fix tests if they need updating (rare)
```
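To make the "commit if green" rule hard to break, chain the test run and the commit so a red suite never gets committed. A minimal sketch, assuming an npm-based project (substitute the detected test command for other stacks):
```bash
# Commit the refactoring step only if the suite stays green
npm test && git commit -am "refactor: extract validateEmail helper"

# If the suite went red, nothing was committed; discard the step and retry with a smaller change
git checkout -- src/
```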
### 5. Collaborate with QA Agent
**When to involve QA:**
- Tests need updating due to interface changes
- New test cases identified during refactoring
- Questions about test coverage adequacy
- Validation of refactoring safety
### 6. Update Story Documentation
Track refactoring progress:
```yaml
tdd:
status: refactor # or done if complete
cycle: 1
refactoring_notes:
- extracted_methods: ['validateEmail', 'validateName', 'generateUserId']
- eliminated_duplication: 'Email validation logic'
- improved_readability: 'Function names now express intent'
```
## Output Requirements
### 1. Improved Code Quality
**Measurable Improvements:**
- Reduced code duplication
- Clearer naming and structure
- Smaller, focused functions
- Better separation of concerns
### 2. Maintained Test Coverage
```bash
# All tests still passing
✅ UserService > should create user with valid email
✅ UserService > should reject user with invalid email
✅ UserService > should require valid name
3 passing, 0 failing
```
### 3. Story File Updates
Append to TDD section:
```markdown
## TDD Progress
### Refactor Phase - Cycle 1
**Date:** {current_date}
**Agents:** James (Dev) & Quinn (QA)
**Refactoring Completed:**
- ✅ Extracted validation functions for better readability
- ✅ Eliminated duplicate email validation logic
- ✅ Introduced generateUserId() for testability
- ✅ Simplified createUser() main logic
**Code Quality Improvements:**
- Function length reduced from 12 to 6 lines
- Three reusable validation functions created
- Magic numbers eliminated
- Test coverage maintained at 100%
**Files Modified:**
- src/services/user-service.js (refactored)
**All Tests Passing:** ✅
**Next Step:** Story ready for review or next TDD cycle
```
## Refactoring Guidelines
### Safe Refactoring Practices
**Always Safe:**
- Rename variables/functions
- Extract methods
- Inline temporary variables
- Replace magic numbers with constants
**Potentially Risky:**
- Changing method signatures
- Modifying class hierarchies
- Altering error handling
- Changing async/sync behavior
**Never Do During Refactor:**
- Add new features
- Change external behavior
- Remove existing functionality
- Skip running tests
### Code Quality Metrics
**Before/After Comparison:**
```yaml
metrics_to_track:
cyclomatic_complexity: 'Lower is better'
function_length: 'Shorter is generally better'
duplication_percentage: 'Should decrease'
test_coverage: 'Should maintain 100%'
acceptable_ranges:
function_length: '5-15 lines for most functions'
parameters: '0-4 parameters per function'
nesting_depth: 'Maximum 3 levels'
```
## Advanced Refactoring Techniques
### Design Pattern Introduction
**When appropriate:**
- Template Method for algorithmic variations
- Strategy Pattern for behavior selection
- Factory Pattern for object creation
- Observer Pattern for event handling
**Caution:** Only introduce patterns if they simplify the code
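For example, a Strategy-style refactor usually only pays off once the tests already pin down several behavioral variants. A hedged sketch (the discount rules are invented for illustration):
```javascript
// Before: the branching grows with every new customer type
function calculateDiscount(customerType, total) {
  if (customerType === 'vip') return total * 0.2;
  if (customerType === 'employee') return total * 0.3;
  return 0;
}

// After: each strategy is a small, independently testable function
const discountStrategies = {
  vip: (total) => total * 0.2,
  employee: (total) => total * 0.3,
  default: () => 0,
};

function calculateDiscount(customerType, total) {
  const strategy = discountStrategies[customerType] || discountStrategies.default;
  return strategy(total);
}
```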
### Architecture Improvements
```yaml
layering:
- Separate business logic from presentation
- Extract data access concerns
- Isolate external dependencies
dependency_injection:
- Make dependencies explicit
- Enable easier testing
- Improve modularity
error_handling:
- Consistent error types
- Meaningful error messages
- Proper error propagation
```
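The `error_handling` items above can start as a single domain error class reused across the service layer, so callers catch one type and messages stay consistent. A minimal sketch (the `ValidationError` name is illustrative, not a project convention):
```javascript
// One error type for all input-validation failures; callers can catch it uniformly
class ValidationError extends Error {
  constructor(field, message) {
    super(`${field}: ${message}`);
    this.name = 'ValidationError';
    this.field = field;
  }
}

function validateEmail(email) {
  if (!email || !email.includes('@')) {
    throw new ValidationError('email', 'Invalid email format');
  }
}
```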
## Error Handling
**If tests fail during refactoring:**
1. **Undo immediately** - Use git to revert
2. **Analyze the failure** - What assumption was wrong?
3. **Try smaller steps** - More atomic refactoring
4. **Consider test updates** - Only if interface must change
**If code becomes more complex:**
- Refactoring went wrong direction
- Revert and try different approach
- Consider if change is actually needed
## Completion Criteria
- [ ] All identified code smells addressed or documented
- [ ] All tests remain green throughout process
- [ ] Code is more readable and maintainable
- [ ] No new functionality added during refactoring
- [ ] Story TDD status updated appropriately
- [ ] Refactoring changes committed with clear messages
- [ ] Code quality metrics improved or maintained
- [ ] Ready for story completion or next TDD cycle
## Key Principles
- **Green Bar:** Never proceed with failing tests
- **Small Steps:** Make incremental improvements
- **Behavior Preservation:** External behavior must remain identical
- **Frequent Commits:** Create rollback points
- **Test First:** Let tests guide refactoring safety
- **Collaborative:** Work with QA when test updates needed

View File

@ -0,0 +1,258 @@
<!-- Powered by BMAD™ Core -->
# write-failing-tests
Write failing tests first to drive development using Test-Driven Development (TDD) Red phase.
## Purpose
Generate failing unit tests that describe expected behavior before implementation. This is the "Red" phase of TDD where we define what success looks like through tests that initially fail.
## Prerequisites
- Story status must be "InProgress" or "Ready"
- TDD must be enabled in core-config.yaml (`tdd.enabled: true`)
- Acceptance criteria are clearly defined
- Test runner is configured or auto-detected
## Inputs
```yaml
required:
- story_id: '{epic}.{story}' # e.g., "1.3"
- story_path: '{devStoryLocation}/{epic}.{story}.*.md' # Path from core-config.yaml
- story_title: '{title}' # If missing, derive from story file H1
- story_slug: '{slug}' # If missing, derive from title (lowercase, hyphenated)
```
## Process
### 1. Analyze Story Requirements
Read the story file and extract:
- Acceptance criteria (AC) that define success
- Business rules and constraints
- Edge cases and error conditions
- Data inputs and expected outputs
### 2. Design Test Strategy
For each acceptance criterion:
- Identify the smallest testable unit
- Choose appropriate test type (unit/integration/e2e)
- Plan test data and scenarios
- Consider mocking strategy for external dependencies
### 3. Detect/Configure Test Runner
```yaml
detection_order:
- Check project files for known patterns
- JavaScript: package.json dependencies (jest, vitest, mocha)
- Python: requirements files (pytest, unittest)
- Java: pom.xml, build.gradle (junit, testng)
- Go: go.mod (built-in testing)
- .NET: *.csproj (xunit, nunit, mstest)
- Fallback: tdd.test_runner.custom_command from config
```
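A rough sketch of that detection order in Node (file names and the fallback key mirror the list above; the helper itself is illustrative and not part of the toolkit):
```javascript
const fs = require('fs');

// Return a test command based on which project files are present; null if nothing matches
function detectTestCommand(config = {}) {
  if (fs.existsSync('package.json')) {
    const pkg = JSON.parse(fs.readFileSync('package.json', 'utf8'));
    const deps = { ...pkg.dependencies, ...pkg.devDependencies };
    if (deps.jest || deps.vitest || deps.mocha) return 'npm test';
  }
  if (fs.existsSync('requirements.txt') || fs.existsSync('pyproject.toml')) return 'pytest';
  if (fs.existsSync('go.mod')) return 'go test ./...';
  if (fs.existsSync('pom.xml') || fs.existsSync('build.gradle')) return 'mvn test';
  // .NET (*.csproj) detection would need a directory scan; omitted here for brevity
  // Fallback: tdd.test_runner.custom_command from core-config.yaml
  return config?.tdd?.test_runner?.custom_command || null;
}
```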
### 4. Write Failing Tests
**Test Quality Guidelines:**
- **Deterministic**: No random values, dates, or network calls
- **Isolated**: Each test is independent and can run alone
- **Fast**: Unit tests should run in milliseconds
- **Readable**: Test names describe the behavior being tested
- **Focused**: One assertion per test when possible
**Mocking Strategy:**
```yaml
mock_vs_fake_vs_stub:
mock: 'Verify interactions (calls, parameters)'
fake: 'Simplified working implementation'
stub: 'Predefined responses to calls'
use_mocks_for:
- External APIs and web services
- Database connections
- File system operations
- Time-dependent operations
- Random number generation
```
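Time and randomness are the two dependencies most likely to make tests non-deterministic, and most runners can pin both down. A short sketch, assuming Jest is the detected runner:
```javascript
// Freeze the clock and Math.random so every run sees identical values
beforeEach(() => {
  jest.useFakeTimers();
  jest.setSystemTime(new Date('2025-01-01T00:00:00Z'));
  jest.spyOn(Math, 'random').mockReturnValue(0.42);
});

afterEach(() => {
  jest.useRealTimers();
  jest.restoreAllMocks();
});

test('createdAt uses the frozen clock', () => {
  expect(new Date().toISOString()).toBe('2025-01-01T00:00:00.000Z');
});
```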
**Test Structure (Given-When-Then):**
```typescript
// Example structure (UserService is the unit under test; its db dependency is injected)
describe('UserService', () => {
  it('should create user with valid email', async () => {
    // Given (Arrange)
    const userData = { email: 'test@example.com', name: 'Test User' };
    const mockDb = { save: jest.fn().mockResolvedValue({ id: 1, ...userData }) };
    const userService = new UserService(mockDb); // assumes UserService persists via db.save
    // When (Act)
    const result = await userService.create(userData);
    // Then (Assert)
    expect(result).toEqual({ id: 1, ...userData });
    expect(mockDb.save).toHaveBeenCalledWith(userData);
  });
});
```
### 5. Create Test Files
**Naming Conventions:**
```yaml
patterns:
javascript: '{module}.test.js' or '{module}.spec.js'
python: 'test_{module}.py' or '{module}_test.py'
java: '{Module}Test.java'
go: '{module}_test.go'
csharp: '{Module}Tests.cs'
```
**File Organization:**
```
tests/
├── unit/ # Fast, isolated tests
├── integration/ # Component interaction tests
└── e2e/ # End-to-end user journey tests
```
### 6. Verify Tests Fail
**Critical Step:** Run tests to ensure they fail for the RIGHT reason:
- ✅ Fail because functionality is not implemented
- ❌ Fail because of syntax errors, import issues, or test bugs
**Test Run Command:** Use the auto-detected or configured test runner
### 7. Update Story Metadata
Update story file frontmatter:
```yaml
tdd:
status: red
cycle: 1
tests:
- id: 'UC-001'
name: 'should create user with valid email'
type: unit
status: failing
file_path: 'tests/unit/user-service.test.js'
- id: 'UC-002'
name: 'should reject user with invalid email'
type: unit
status: failing
file_path: 'tests/unit/user-service.test.js'
```
## Output Requirements
### 1. Test Files Created
Generate test files with:
- Clear, descriptive test names
- Proper setup/teardown
- Mock configurations
- Expected assertions
### 2. Test Execution Report
```bash
Running tests...
❌ UserService > should create user with valid email
❌ UserService > should reject user with invalid email
2 failing, 0 passing
```
### 3. Story File Updates
Append to TDD section:
```markdown
## TDD Progress
### Red Phase - Cycle 1
**Date:** {current_date}
**Agent:** Quinn (QA Agent)
**Tests Written:**
- UC-001: should create user with valid email (FAILING ✅)
- UC-002: should reject user with invalid email (FAILING ✅)
**Test Files:**
- tests/unit/user-service.test.js
**Next Step:** Dev Agent to implement minimal code to make tests pass
```
## Constraints & Best Practices
### Constraints
- **Minimal Scope:** Write tests for the smallest possible feature slice
- **No Implementation:** Do not implement the actual functionality
- **External Dependencies:** Always mock external services, databases, APIs
- **Deterministic Data:** Use fixed test data, mock time/random functions
- **Fast Execution:** Unit tests must complete quickly (< 100ms each)
### Anti-Patterns to Avoid
- Testing implementation details instead of behavior
- Writing tests after the code is written
- Complex test setup that obscures intent
- Tests that depend on external systems
- Overly broad tests covering multiple behaviors
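The first anti-pattern is the easiest to slip into: asserting on how the code works instead of what it promises. A hedged illustration (the `userService` instance and its private helper are invented for contrast):
```javascript
// Implementation-detail test: breaks as soon as the private helper is renamed
test('normalizes email (brittle)', async () => {
  const spy = jest.spyOn(userService, '_normalizeEmail');
  await userService.create({ email: 'Test@Example.com', name: 'Test User' });
  expect(spy).toHaveBeenCalled();
});

// Behavior test: survives refactoring because it checks the observable outcome
test('stores emails in lowercase', async () => {
  const user = await userService.create({ email: 'Test@Example.com', name: 'Test User' });
  expect(user.email).toBe('test@example.com');
});
```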
## Error Handling
**If tests pass unexpectedly:**
- Implementation may already exist
- Test may be testing wrong behavior
- HALT and clarify requirements
**If tests fail for wrong reasons:**
- Fix syntax/import errors
- Verify mocks are properly configured
- Check test runner configuration
**If no test runner detected:**
- Fallback to tdd.test_runner.custom_command
- If not configured, prompt user for test command
- Document setup in story notes
## Completion Criteria
- [ ] All planned tests are written and failing
- [ ] Tests fail for correct reasons (missing implementation)
- [ ] Story TDD metadata updated with test list
- [ ] Test files follow project conventions
- [ ] All external dependencies are properly mocked
- [ ] Tests run deterministically and quickly
- [ ] Ready to hand off to Dev Agent for implementation
## Key Principles
- **Fail First:** Tests must fail before any implementation
- **Describe Behavior:** Tests define what "done" looks like
- **Start Small:** Begin with simplest happy path scenario
- **Isolate Dependencies:** External systems should be mocked
- **Fast Feedback:** Tests should run quickly to enable rapid iteration

View File

@ -0,0 +1,171 @@
<!-- Powered by BMAD™ Core -->
# Story {epic}.{story}: {title}
## Story Metadata
```yaml
story:
epic: '{epic}'
number: '{story}'
title: '{title}'
status: 'draft'
priority: 'medium'
# TDD Configuration (only when tdd.enabled=true)
tdd:
status: 'red' # red|green|refactor|done
cycle: 1
coverage_target: 80.0
tests: [] # Will be populated by QA agent during Red phase
```
## Story Description
**As a** {user_type}
**I want** {capability}
**So that** {business_value}
### Context
{Provide context about why this story is needed, what problem it solves, and how it fits into the larger epic/project}
## Acceptance Criteria
```gherkin
Feature: {Feature name}
Scenario: {Primary happy path}
Given {initial conditions}
When {action performed}
Then {expected outcome}
Scenario: {Error condition 1}
Given {error setup}
When {action that causes error}
Then {expected error handling}
Scenario: {Edge case}
Given {edge case setup}
When {edge case action}
Then {edge case outcome}
```
## Technical Requirements
### Functional Requirements
- {Requirement 1}
- {Requirement 2}
- {Requirement 3}
### Non-Functional Requirements
- **Performance:** {Response time, throughput requirements}
- **Security:** {Authentication, authorization, data protection}
- **Reliability:** {Error handling, recovery requirements}
- **Maintainability:** {Code quality, documentation standards}
## TDD Test Plan (QA Agent Responsibility)
### Test Strategy
- **Primary Test Type:** {unit|integration|e2e}
- **Mocking Approach:** {mock external services, databases, etc.}
- **Test Data:** {how test data will be managed}
### Planned Test Scenarios
| ID | Scenario | Type | Priority | AC Reference |
| ------ | ------------------ | ----------- | -------- | ------------ |
| TC-001 | {test description} | unit | P0 | AC1 |
| TC-002 | {test description} | unit | P0 | AC2 |
| TC-003 | {test description} | integration | P1 | AC3 |
_(This section will be populated by QA agent during test planning)_
## TDD Progress
### Current Phase: {RED|GREEN|REFACTOR|DONE}
**Cycle:** {cycle_number}
**Last Updated:** {date}
_(TDD progress will be tracked here through Red-Green-Refactor cycles)_
---
## Implementation Tasks (Dev Agent)
### Primary Tasks
- [ ] {Main implementation task 1}
- [ ] {Main implementation task 2}
- [ ] {Main implementation task 3}
### Subtasks
- [ ] {Detailed subtask}
- [ ] {Another subtask}
## Definition of Done
### TDD-Specific DoD
- [ ] Tests written first (Red phase completed)
- [ ] All tests passing (Green phase completed)
- [ ] Code refactored for quality (Refactor phase completed)
- [ ] Test coverage meets target ({coverage_target}%)
- [ ] All external dependencies properly mocked
- [ ] No features implemented without corresponding tests
### General DoD
- [ ] All acceptance criteria met
- [ ] Code follows project standards
- [ ] Documentation updated
- [ ] Ready for review
## Dev Agent Record
### Implementation Notes
_(Dev agent will document implementation decisions here)_
### TDD Cycle Log
_(Automatic tracking of Red-Green-Refactor progression)_
**Cycle 1:**
- Red Phase: {date} - {test count} failing tests written
- Green Phase: {date} - Implementation completed, all tests pass
- Refactor Phase: {date} - {refactoring summary}
### File List
_(Dev agent will list all files created/modified)_
- {file1}
- {file2}
### Test Execution Log
```bash
# Test runs will be logged here during development
```
## QA Results
_(QA agent will populate this during review)_
## Change Log
- **{date}**: Story created from TDD template
- **{date}**: {change description}
---
**TDD Status:** 🔴 RED | ⚫ Not Started
**Agent Assigned:** {agent_name}
**Estimated Effort:** {hours} hours

View File

@ -0,0 +1,261 @@
# <!-- Powered by BMAD™ Core -->
name: TDD Story Development Workflow
description: Test-Driven Development workflow for story implementation
version: "1.0"
type: story_workflow
# TDD-specific workflow that orchestrates Red-Green-Refactor cycles
workflow:
prerequisites:
- tdd.enabled: true
- story.status: ["ready", "inprogress"]
- story.acceptance_criteria: "defined"
phases:
# Phase 1: RED - Write failing tests first
red_phase:
description: "Write failing tests that describe expected behavior"
agent: qa
status_check: "tdd.status != 'red'"
tasks:
- name: test-design
description: "Design comprehensive test strategy"
inputs:
- story_id
- acceptance_criteria
outputs:
- test_design_document
- test_scenarios
- name: write-failing-tests
description: "Implement failing tests for story scope"
inputs:
- story_id
- test_scenarios
- codebase_context
outputs:
- test_files
- failing_test_report
completion_criteria:
- "At least one test is failing"
- "Tests fail for correct reasons (missing implementation)"
- "All external dependencies mocked"
- "Story tdd.status = 'red'"
gates:
pass_conditions:
- tests_created: true
- tests_failing_correctly: true
- mocking_strategy_applied: true
- story_metadata_updated: true
fail_conditions:
- tests_passing_unexpectedly: true
- syntax_errors_in_tests: true
- missing_test_runner: true
# Phase 2: GREEN - Make tests pass with minimal code
green_phase:
description: "Implement minimal code to make all tests pass"
agent: dev
status_check: "tdd.status != 'green'"
prerequisites:
- "tdd.status == 'red'"
- "failing_tests.count > 0"
tasks:
- name: tdd-implement
description: "Write simplest code to make tests pass"
inputs:
- story_id
- failing_tests
- codebase_context
outputs:
- implementation_files
- passing_test_report
completion_criteria:
- "All tests are passing"
- "No feature creep beyond test requirements"
- "Code follows basic standards"
- "Story tdd.status = 'green'"
gates:
pass_conditions:
- all_tests_passing: true
- implementation_minimal: true
- no_breaking_changes: true
- story_metadata_updated: true
fail_conditions:
- tests_still_failing: true
- feature_creep_detected: true
- regression_introduced: true
# Phase 3: REFACTOR - Improve code quality while keeping tests green
refactor_phase:
description: "Improve code quality while maintaining green tests"
agents: [dev, qa] # Collaborative phase
status_check: "tdd.status != 'refactor'"
prerequisites:
- "tdd.status == 'green'"
- "all_tests_passing == true"
tasks:
- name: tdd-refactor
description: "Safely refactor code with test coverage"
inputs:
- story_id
- passing_tests
- implementation_files
- code_quality_metrics
outputs:
- refactored_files
- quality_improvements
- maintained_test_coverage
completion_criteria:
- "All tests remain green throughout"
- "Code quality improved"
- "Technical debt addressed"
- "Story tdd.status = 'done' or ready for next cycle"
gates:
pass_conditions:
- tests_remain_green: true
- quality_metrics_improved: true
- refactoring_documented: true
- commits_atomic: true
fail_conditions:
- tests_broken_by_refactoring: true
- code_quality_degraded: true
- feature_changes_during_refactor: true
# Cycle management - can repeat Red-Green-Refactor for complex stories
cycle_management:
max_cycles: 5 # Reasonable limit to prevent infinite cycles
next_cycle_conditions:
- "More acceptance criteria remain unimplemented"
- "Story scope requires additional functionality"
- "Technical complexity requires iterative approach"
cycle_completion_check:
- "All acceptance criteria have tests and implementation"
- "Code quality meets project standards"
- "No remaining technical debt from TDD cycles"
# Quality gates for phase transitions
transition_gates:
red_to_green:
required:
- failing_tests_exist: true
- tests_fail_for_right_reasons: true
- external_dependencies_mocked: true
blocked_by:
- no_failing_tests: true
- syntax_errors: true
- missing_test_infrastructure: true
green_to_refactor:
required:
- all_tests_passing: true
- implementation_complete: true
- basic_quality_standards_met: true
blocked_by:
- failing_tests: true
- incomplete_implementation: true
- major_quality_violations: true
refactor_to_done:
required:
- tests_remain_green: true
- code_quality_improved: true
- all_acceptance_criteria_met: true
blocked_by:
- broken_tests: true
- degraded_code_quality: true
- incomplete_acceptance_criteria: true
# Error handling and recovery
error_handling:
phase_failures:
red_phase_failure:
- "Review acceptance criteria clarity"
- "Check test runner configuration"
- "Verify mocking strategy"
- "Consult with SM for requirements clarification"
green_phase_failure:
- "Review test expectations vs implementation"
- "Check for missing dependencies"
- "Verify implementation approach"
- "Consider breaking down into smaller cycles"
refactor_phase_failure:
- "Immediately revert breaking changes"
- "Use smaller refactoring steps"
- "Review test coverage adequacy"
- "Consider technical debt acceptance"
# Agent coordination
agent_handoffs:
qa_to_dev:
trigger: "tdd.status == 'red'"
handoff_artifacts:
- failing_test_suite
- test_execution_report
- story_with_updated_metadata
- mocking_strategy_documentation
dev_back_to_qa:
trigger: "questions about test expectations or refactoring safety"
collaboration_points:
- test_clarification_needed
- refactoring_impact_assessment
- additional_test_coverage_discussion
both_agents:
trigger: "tdd.status == 'refactor'"
joint_activities:
- code_quality_assessment
- refactoring_safety_validation
- test_maintenance_discussion
# Integration with existing BMAD workflows
bmad_integration:
extends: "story_workflow_base"
modified_sections:
story_creation:
- "Use story-tdd-template.md when tdd.enabled=true"
- "Initialize TDD metadata in story frontmatter"
quality_gates:
- "Apply tdd-dod-checklist.md instead of standard DoD"
- "Include TDD-specific review criteria"
agent_selection:
- "Route to QA agent first for Red phase"
- "Enforce phase-based agent assignment"
# Configuration and customization
configuration:
tdd_settings:
cycle_timeout: "2 days" # Maximum time per TDD cycle
required_coverage_minimum: 0.8 # 80% default
max_failing_tests_per_cycle: 10 # Prevent scope creep
quality_thresholds:
complexity_increase_limit: 10 # Max complexity increase per cycle
duplication_tolerance: 5 # Max acceptable code duplication %
automation_hooks:
test_execution: "Run tests automatically on file changes"
coverage_reporting: "Generate coverage reports per cycle"
quality_metrics: "Track metrics before/after refactoring"