feat: Add bmad-tdd-methodology expansion pack

parent 8f69726eb6 · commit 93cabd19e9

@@ -0,0 +1,242 @@
# TDD Methodology Expansion Pack - Installation Guide

## Quick Start

1. **Copy files to your BMAD installation:**

   ```bash
   # From the expansion pack directory
   cp -r agents/* /path/to/your/bmad-core/agents/
   cp -r tasks/* /path/to/your/bmad-core/tasks/
   cp -r templates/* /path/to/your/bmad-core/templates/
   cp -r scripts/* /path/to/your/bmad-core/scripts/
   ```

2. **Update your configuration:**

   Add to `bmad-core/core-config.yaml`:

   ```yaml
   tdd:
     enabled: true
     require_for_new_stories: true
     allow_red_phase_ci_failures: true
     default_test_type: unit
     test_runner:
       auto_detect: true
     coverage:
       min_threshold: 0.75
   ```

3. **Verify installation:**

   ```bash
   # Check that TDD commands are available
   bmad qa --help | grep tdd
   bmad dev --help | grep tdd
   ```

4. **Try the demo:**

   ```bash
   cd examples/tdd-demo
   # Follow the demo instructions
   ```

## Detailed Configuration

### Core Configuration Options

```yaml
# bmad-core/core-config.yaml
tdd:
  # Enable/disable TDD functionality
  enabled: true

  # Require TDD for all new stories
  require_for_new_stories: true

  # Allow CI failures during red phase
  allow_red_phase_ci_failures: true

  # Default test type for new tests
  default_test_type: unit # unit|integration|e2e

  # Test runner configuration
  test_runner:
    auto_detect: true
    fallback_command: 'npm test'
    timeout_seconds: 300

  # Coverage configuration
  coverage:
    min_threshold: 0.75
    report_path: 'coverage/lcov.info'
    fail_under_threshold: true
```
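
BMAD enforces `min_threshold` itself, but it can help to mirror the same bar in the test runner so local runs fail fast. A minimal sketch, assuming Jest (the `coverageThreshold` mapping below is an illustration, not part of the expansion pack):

```javascript
// jest.config.js - hypothetical mirror of tdd.coverage.min_threshold: 0.75
// Jest expects percentages, so 0.75 becomes 75.
module.exports = {
  collectCoverage: true,
  coverageReporters: ['lcov', 'text'], // lcov matches report_path: 'coverage/lcov.info'
  coverageThreshold: {
    global: {
      branches: 75,
      functions: 75,
      lines: 75,
      statements: 75,
    },
  },
};
```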
### Test Runner Detection

The expansion pack includes auto-detection for common test runners:

- **JavaScript/TypeScript**: Jest, Vitest, Mocha
- **Python**: pytest, unittest
- **Java**: Maven Surefire, Gradle Test
- **Go**: go test
- **C#/.NET**: dotnet test
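
Auto-detection of this kind is typically driven by marker files in the project root. A minimal sketch of the idea (a hypothetical illustration, not the expansion pack's actual detector):

```javascript
// detect-test-runner.js - hypothetical sketch of marker-file detection
const fs = require('fs');
const path = require('path');

// Ordered list: the first matching marker wins.
const RUNNERS = [
  { name: 'jest', marker: 'jest.config.js', command: 'npx jest' },
  { name: 'vitest', marker: 'vitest.config.ts', command: 'npx vitest run' },
  { name: 'pytest', marker: 'pytest.ini', command: 'pytest' },
  { name: 'go test', marker: 'go.mod', command: 'go test ./...' },
];

function detectRunner(projectRoot, fallback = 'npm test') {
  for (const runner of RUNNERS) {
    if (fs.existsSync(path.join(projectRoot, runner.marker))) {
      return runner.command;
    }
  }
  return fallback; // mirrors test_runner.fallback_command
}

console.log(detectRunner(process.cwd()));
```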
Configure custom runners in `bmad-core/config/test-runners.yaml`:

```yaml
custom_runners:
  my_runner:
    pattern: 'package.json'
    test_command: 'npm run test:custom'
    watch_command: 'npm run test:watch'
    coverage_command: 'npm run test:coverage'
    report_paths:
      - 'coverage/lcov.info'
      - 'test-results.xml'
```

## CI/CD Integration

### GitHub Actions

```yaml
name: TDD Workflow

on: [push, pull_request]

jobs:
  tdd-validation:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: TDD Guard
        run: |
          chmod +x bmad-core/scripts/tdd-guard.sh
          ./bmad-core/scripts/tdd-guard.sh
        continue-on-error: ${{ contains(github.head_ref, 'red') }}

      - name: Run Tests
        run: |
          # Your test command (auto-detected by BMAD)
          npm test

      - name: Coverage Report
        uses: codecov/codecov-action@v3
        with:
          file: coverage/lcov.info
```
### GitLab CI

```yaml
tdd_guard:
  script:
    - chmod +x bmad-core/scripts/tdd-guard.sh
    - ./bmad-core/scripts/tdd-guard.sh
  # allow_failure does not accept conditions directly; express
  # the red-phase exception as a rule with per-rule allow_failure.
  rules:
    - if: '$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ /^red-/'
      allow_failure: true
    - when: on_success

test:
  script:
    - npm test
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml
```
## File Structure

After installation, your BMAD structure should include:

```
bmad-core/
├── agents/
│   ├── qa.md                    # Enhanced with TDD commands
│   └── dev.md                   # Enhanced with TDD commands
├── tasks/
│   ├── test-design.md           # Enhanced with TDD support
│   ├── write-failing-tests.md
│   ├── tdd-implement.md
│   └── tdd-refactor.md
├── templates/
│   ├── story-tdd-template.md
│   ├── tdd-quality-gates.md
│   └── tdd-ci-template.yml
├── scripts/
│   └── tdd-guard.sh
└── config/
    └── test-runners.yaml
```

## Verification

### 1. Configuration Check

```bash
# Verify TDD is enabled
bmad config --show | grep tdd.enabled
```

### 2. Agent Commands Check

```bash
# Verify TDD commands are available
bmad qa *tdd-start --help
bmad qa *write-failing-tests --help
bmad dev *tdd-implement --help
```

### 3. Demo Run

```bash
# Run the complete demo
cd examples/tdd-demo
bmad story 1.1-user-validation.md --tdd
```

## Troubleshooting

### Issue: "TDD commands not found"

**Solution:** Ensure `tdd.enabled: true` is set in the config and restart the BMAD orchestrator.

### Issue: "Test runner not detected"

**Solution:** Configure a fallback command or add custom runner detection.

### Issue: "CI failing on red phase"

**Solution:** Set `allow_red_phase_ci_failures: true` and use the `red-` branch naming convention.

### Issue: "Guard script permission denied"

**Solution:** `chmod +x bmad-core/scripts/tdd-guard.sh`

## Uninstall

To remove the TDD expansion pack:

1. Set `tdd.enabled: false` in the config
2. Remove TDD-specific files (optional):

   ```bash
   rm bmad-core/tasks/write-failing-tests.md
   rm bmad-core/tasks/tdd-implement.md
   rm bmad-core/tasks/tdd-refactor.md
   rm bmad-core/templates/story-tdd-template.md
   rm bmad-core/templates/tdd-quality-gates.md
   rm bmad-core/scripts/tdd-guard.sh
   ```

3. Restore original agent files from backup (if needed)

## Support

For installation support:

- Check the README.md for comprehensive documentation
- Review the MIGRATION.md for existing project integration
- Follow standard BMAD-METHOD support channels
@@ -0,0 +1,193 @@
# TDD Methodology Migration Guide

This guide helps you migrate existing BMAD-METHOD projects to use TDD capabilities.

## Compatibility

The TDD expansion pack is designed to be **fully backward compatible**:

- `tdd.enabled=false` by default - no behavior changes for existing projects
- All TDD files are optional and guarded by configuration
- Existing workflows continue unchanged
- No breaking changes to existing agent personas or commands

## Migration Steps

### Step 1: Install the Expansion Pack

1. Copy the expansion pack files to your BMAD installation:

   ```bash
   cp -r expansion-packs/tdd-methodology/agents/* bmad-core/agents/
   cp -r expansion-packs/tdd-methodology/tasks/* bmad-core/tasks/
   cp -r expansion-packs/tdd-methodology/templates/* bmad-core/templates/
   cp -r expansion-packs/tdd-methodology/scripts/* bmad-core/scripts/
   ```

2. Update your `core-config.yaml`:

   ```yaml
   tdd:
     enabled: false # Start with TDD disabled
     require_for_new_stories: false
     allow_red_phase_ci_failures: true
     default_test_type: unit
     test_runner:
       auto_detect: true
     coverage:
       min_threshold: 0.0
   ```

### Step 2: Enable TDD for New Stories

1. Set `tdd.enabled: true` in your config
2. Use the TDD story template for new work:

   ```bash
   cp bmad-core/templates/story-tdd-template.md stories/new-feature.md
   ```

### Step 3: Convert Existing Stories (Optional)

For stories you want to convert to TDD:

#### Option A: Generate Test Plan from Acceptance Criteria

1. Run the enhanced `test-design.md` task with TDD mode enabled
2. Use the generated test plan to create failing tests
3. Add TDD frontmatter to the story file:

   ```yaml
   tdd:
     status: red
     cycle: 1
     tests: []
     coverage_target: 0.8
   ```

#### Option B: Retrofit Tests for Existing Code

1. Add TDD frontmatter with `status: green` (the code already exists)
2. Write comprehensive tests to achieve the desired coverage, as sketched below
3. Use `*tdd-refactor` to improve the code quality
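
When retrofitting, characterization tests that pin down current behavior are usually the safest starting point. A minimal sketch, assuming Jest and a hypothetical existing `formatPrice` helper (names and behavior are illustrative only):

```javascript
// tests/format-price.test.js - hypothetical characterization tests for existing code
const { formatPrice } = require('../src/format-price'); // assumed existing module

describe('formatPrice (characterization)', () => {
  // These tests document what the code does today, so the story
  // can start at tdd.status: green with a safety net for refactoring.
  test('formats whole numbers with two decimals', () => {
    expect(formatPrice(5)).toBe('$5.00');
  });

  test('keeps two decimal places', () => {
    expect(formatPrice(2.5)).toBe('$2.50');
  });

  test('throws on negative amounts', () => {
    expect(() => formatPrice(-1)).toThrow();
  });
});
```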
### Step 4: Configure CI/CD

Add TDD validation to your CI pipeline:

```yaml
# GitHub Actions example
- name: TDD Guard
  run: |
    chmod +x bmad-core/scripts/tdd-guard.sh
    ./bmad-core/scripts/tdd-guard.sh
  continue-on-error: ${{ github.event_name == 'pull_request' && contains(github.head_ref, 'red') }}

- name: Run Tests
  run: |
    # Your test command here
    npm test # or pytest, etc.
```
## Conversion Examples

### Converting a Traditional Story

**Before:**

```markdown
# Feature: User Authentication

## Acceptance Criteria

- User can login with email/password
- Invalid credentials show error message
- Successful login redirects to dashboard
```

**After:**

```markdown
---
tdd:
  status: red
  cycle: 1
  tests:
    - id: 'auth-001'
      name: 'should authenticate valid user'
      type: 'unit'
      status: 'failing'
  coverage_target: 0.85
---

# Feature: User Authentication

## Acceptance Criteria

- User can login with email/password
- Invalid credentials show error message
- Successful login redirects to dashboard

## TDD Test Plan

### Cycle 1: Basic Authentication

1. **Test ID: auth-001** - Should authenticate valid user
2. **Test ID: auth-002** - Should reject invalid credentials
3. **Test ID: auth-003** - Should redirect on success

### Test Data Strategy

- Mock authentication service
- Use deterministic test users
- Control time/date for session tests
```

## Gradual Adoption Strategy

You can adopt TDD gradually:

1. **Phase 1**: Enable TDD for new critical features only
2. **Phase 2**: Convert existing stories when making significant changes
3. **Phase 3**: Full TDD adoption for all new work

## Configuration Options

```yaml
tdd:
  enabled: true
  require_for_new_stories: true # Enforce TDD for new work
  allow_red_phase_ci_failures: true # Allow failing tests in red phase
  default_test_type: unit
  test_runner:
    auto_detect: true
    fallback_command: 'npm test'
  coverage:
    min_threshold: 0.75
    report_path: 'coverage/lcov.info'
```

## Troubleshooting

### "TDD commands not available"

- Ensure `tdd.enabled: true` in config
- Verify agent files are updated with TDD commands
- Restart BMAD orchestrator

### "Tests not running in CI"

- Check test runner auto-detection
- Verify CI template includes test steps
- Ensure TDD guard script has execute permissions

### "Migration seems complex"

- Start with just one story
- Use the demo example as reference
- Gradually expand TDD usage

## Support

For migration support:

- Review the demo example in `examples/tdd-demo/`
- Check the expansion pack README
- Follow BMAD-METHOD standard support channels
@@ -0,0 +1,113 @@
# BMAD-METHOD™ TDD Methodology Expansion Pack

This expansion pack enhances the BMAD-METHOD™ with comprehensive Test-Driven Development (TDD) capabilities, enabling teams to follow strict TDD practices with AI assistance.

## 🚀 Production Ready

**✅ EVALUATION COMPLETE**: This expansion pack has been thoroughly tested and evaluated on a real project (Calculator Demo). See the [evaluation report](../../examples/tdd-demo-calculator/TDD_EVALUATION_REPORT.md) for detailed findings.

## Features

- 🧪 Enhanced QA and Dev agent personas with TDD-specific responsibilities
- 📋 TDD-aware test design tasks and templates
- 🔄 Full Red-Green-Refactor cycle support
- ✅ TDD quality gates and validation
- 🚀 CI/CD integration for TDD enforcement
- 💡 Practical examples and demos with a complete working project

## Components

### Agent Enhancements

- QA Agent: Enhanced for TDD Red phase and test creation
- Dev Agent: Enhanced for TDD Green phase implementation

### New Commands

- `*tdd-start`: Initialize TDD workflow for a story
- `*write-failing-tests`: Generate failing tests (Red phase)
- `*tdd-implement`: Implement code to make tests pass (Green phase)
- `*tdd-refactor`: Safe refactoring with test coverage

### Quality Gates

- Phase-specific quality criteria
- Automated validation through CI/CD
- TDD discipline enforcement

### Templates and Tasks

- Enhanced test design task with TDD support
- TDD quality gates template
- CI/CD workflow templates

### Guard Scripts

- TDD discipline validation
- Git diff inspection
- CI pipeline integration

## Installation

1. Copy the contents of this expansion pack to your BMAD-METHOD implementation
2. Configure your CI/CD pipeline using the provided templates
3. Update your agent configurations to include TDD capabilities

## Usage

1. Initialize TDD mode for a story:

   ```
   *tdd-start "Story description"
   ```

2. Follow the Red-Green-Refactor cycle:
   - Red: `*write-failing-tests`
   - Green: `*tdd-implement`
   - Refactor: `*tdd-refactor`

3. Monitor quality gates and the CI/CD pipeline for TDD compliance

## Examples

### 🧮 Calculator Demo (Complete Working Example)

See `../../examples/tdd-demo-calculator/` for a complete demonstration of the TDD workflow:

- ✅ Full Red-Green-Refactor cycle completed
- ✅ 21 comprehensive test cases
- ✅ 100% test coverage achieved
- ✅ Complete story documentation
- ✅ Evaluation report with findings

### 📧 User Email Validation (Template Example)

See the `examples/tdd-demo/` directory for a template demonstration using the "User Email Validation" story.

## 📊 Evaluation Results

**Framework Rating: 9.5/10** - Production Ready ✅

- ✅ **Integration:** Seamless integration with the core BMAD framework
- ✅ **Documentation:** Comprehensive and clear
- ✅ **Quality:** 100% test coverage maintained throughout the TDD cycle
- ✅ **Process:** Complete Red-Green-Refactor workflow validated
- ✅ **Traceability:** Excellent story progression tracking

See the detailed [evaluation report](../../examples/tdd-demo-calculator/TDD_EVALUATION_REPORT.md).

## Configuration

The TDD methodology can be enabled/disabled per project or story through configuration flags. See the documentation for detailed configuration options.

## Contributing

Contributions to improve the TDD methodology expansion pack are welcome. Please follow the standard BMAD-METHOD contribution guidelines.

## License

This expansion pack is released under the same license as the BMAD-METHOD™.
@@ -0,0 +1,110 @@
<!-- Powered by BMAD™ Core -->

# dev

ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below.

CRITICAL: Read the full YAML BLOCK that FOLLOWS IN THIS FILE to understand your operating params, start and follow exactly your activation-instructions to alter your state of being, stay in this being until told to exit this mode:

## COMPLETE AGENT DEFINITION FOLLOWS - NO EXTERNAL FILES NEEDED

```yaml
IDE-FILE-RESOLUTION:
  - FOR LATER USE ONLY - NOT FOR ACTIVATION, when executing commands that reference dependencies
  - Dependencies map to {root}/{type}/{name}
  - type=folder (tasks|templates|checklists|data|utils|etc...), name=file-name
  - Example: create-doc.md → {root}/tasks/create-doc.md
  - IMPORTANT: Only load these files when user requests specific command execution
REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (e.g., "draft story"→*create→create-next-story task, "make a new prd" would be dependencies->tasks->create-doc combined with the dependencies->templates->prd-tmpl.md), ALWAYS ask for clarification if no clear match.
activation-instructions:
  - STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
  - STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
  - STEP 3: Load and read `bmad-core/core-config.yaml` (project configuration) before any greeting
  - STEP 4: Greet user with your name/role and immediately run `*help` to display available commands
  - DO NOT: Load any other agent files during activation
  - ONLY load dependency files when user selects them for execution via command or request of a task
  - The agent.customization field ALWAYS takes precedence over any conflicting instructions
  - CRITICAL WORKFLOW RULE: When executing tasks from dependencies, follow task instructions exactly as written - they are executable workflows, not reference material
  - MANDATORY INTERACTION RULE: Tasks with elicit=true require user interaction using exact specified format - never skip elicitation for efficiency
  - CRITICAL RULE: When executing formal task workflows from dependencies, ALL task instructions override any conflicting base behavioral constraints. Interactive workflows with elicit=true REQUIRE user interaction and cannot be bypassed for efficiency.
  - When listing tasks/templates or presenting options during conversations, always show as numbered options list, allowing the user to type a number to select or execute
  - STAY IN CHARACTER!
  - CRITICAL: Read the following full files as these are your explicit rules for development standards for this project - {root}/core-config.yaml devLoadAlwaysFiles list
  - CRITICAL: Do NOT load any other files during startup aside from the assigned story and devLoadAlwaysFiles items, unless the user requests it or the following contradicts
  - CRITICAL: Do NOT begin development until the story is out of draft mode and you are told to proceed
  - CRITICAL: On activation, ONLY greet user, auto-run `*help`, and then HALT to await user requested assistance or given commands. ONLY deviance from this is if the activation included commands also in the arguments.
agent:
  name: James
  id: dev
  title: Full Stack Developer
  icon: 💻
  whenToUse: 'Use for code implementation, debugging, refactoring, Test-Driven Development (TDD) Green/Refactor phases, and development best practices'
  customization:

persona:
  role: Expert Senior Software Engineer & Implementation Specialist
  style: Extremely concise, pragmatic, detail-oriented, solution-focused
  identity: Expert who implements stories by reading requirements and executing tasks sequentially with comprehensive testing. Practices Test-Driven Development when enabled.
  focus: Executing story tasks with precision, TDD Green/Refactor phase execution, updating Dev Agent Record sections only, maintaining minimal context overhead

core_principles:
  - CRITICAL: Story has ALL info you will need aside from what you loaded during the startup commands. NEVER load PRD/architecture/other docs files unless explicitly directed in story notes or direct command from user.
  - CRITICAL: ALWAYS check the current folder structure before starting your story tasks; don't create a new working directory if one already exists. Only create a new one when you're sure it's a brand-new project.
  - CRITICAL: ONLY update story file Dev Agent Record sections (checkboxes/Debug Log/Completion Notes/Change Log)
  - CRITICAL: FOLLOW THE develop-story command when the user tells you to implement the story
  - Numbered Options - Always use numbered lists when presenting choices to the user
  - TDD Discipline - When TDD is enabled, implement minimal code to pass failing tests (Green phase)
  - Test-First Validation - Never implement features without corresponding failing tests in TDD mode
  - Refactoring Safety - Collaborate with QA during the refactor phase, keep all tests green

# All commands require * prefix when used (e.g., *help)
commands:
  - help: Show numbered list of the following commands to allow selection
  # Traditional Development Commands
  - develop-story:
      - order-of-execution: 'Read (first or next) task→Implement Task and its subtasks→Write tests→Execute validations→Only if ALL pass, then update the task checkbox with [x]→Update story section File List to ensure it lists any new, modified, or deleted source files→repeat order-of-execution until complete'
      - story-file-updates-ONLY:
          - CRITICAL: ONLY UPDATE THE STORY FILE WITH UPDATES TO SECTIONS INDICATED BELOW. DO NOT MODIFY ANY OTHER SECTIONS.
          - CRITICAL: You are ONLY authorized to edit these specific sections of story files - Tasks / Subtasks Checkboxes, Dev Agent Record section and all its subsections, Agent Model Used, Debug Log References, Completion Notes List, File List, Change Log, Status
          - CRITICAL: DO NOT modify Story, Acceptance Criteria, Dev Notes, Testing sections, or any other sections not listed above
      - blocking: 'HALT for: Unapproved deps needed, confirm with user | 3 failures attempting to implement or fix something repeatedly | Missing config | Failing regression'
      - ready-for-review: 'Code matches requirements + All validations pass + Follows standards + File List complete'
      - completion: "All Tasks and Subtasks marked [x] and have tests→Validations and full regression passes (DON'T BE LAZY, EXECUTE ALL TESTS and CONFIRM)→Ensure File List is Complete→run the task execute-checklist for the checklist story-dod-checklist→set story status: 'Ready for Review'→HALT"
  # TDD-Specific Commands (only available when tdd.enabled=true)
  - tdd-implement {story}: |
      Execute tdd-implement task for TDD Green phase.
      Implement minimal code to make failing tests pass. No feature creep.
      Prerequisites: Story has failing tests (tdd.status='red'), test runner configured.
      Outcome: All tests pass, story tdd.status='green', ready for refactor assessment.
  - make-tests-pass {story}: |
      Iterative command to run tests and implement fixes until all tests pass.
      Focuses on a single failing test at a time, minimal implementation approach.
      Auto-runs tests after each change, provides a fast feedback loop.
  - tdd-refactor {story}: |
      Collaborate with QA agent on TDD Refactor phase.
      Improve code quality while keeping all tests green.
      Prerequisites: All tests passing (tdd.status='green').
      Outcome: Improved code quality, tests remain green, tdd.status='refactor' or 'done'.
  # Utility Commands
  - explain: teach me what and why you did whatever you just did in detail so I can learn. Explain to me as if you were training a junior engineer.
  - review-qa: run task `apply-qa-fixes.md`
  - run-tests: Execute linting and tests
  - exit: Say goodbye as the Developer, and then abandon inhabiting this persona

dependencies:
  checklists:
    - story-dod-checklist.md
    - tdd-dod-checklist.md
  tasks:
    - apply-qa-fixes.md
    - execute-checklist.md
    - validate-next-story.md
    # TDD-specific tasks
    - tdd-implement.md
    - tdd-refactor.md
  prompts:
    - tdd-green.md
    - tdd-refactor.md
  config:
    - test-runners.yaml
```
@@ -0,0 +1,124 @@
<!-- Powered by BMAD™ Core -->

# qa

ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below.

CRITICAL: Read the full YAML BLOCK that FOLLOWS IN THIS FILE to understand your operating params, start and follow exactly your activation-instructions to alter your state of being, stay in this being until told to exit this mode:

## COMPLETE AGENT DEFINITION FOLLOWS - NO EXTERNAL FILES NEEDED

```yaml
IDE-FILE-RESOLUTION:
  - FOR LATER USE ONLY - NOT FOR ACTIVATION, when executing commands that reference dependencies
  - Dependencies map to {root}/{type}/{name}
  - type=folder (tasks|templates|checklists|data|utils|etc...), name=file-name
  - Example: create-doc.md → {root}/tasks/create-doc.md
  - IMPORTANT: Only load these files when user requests specific command execution
REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (e.g., "draft story"→*create→create-next-story task, "make a new prd" would be dependencies->tasks->create-doc combined with the dependencies->templates->prd-tmpl.md), ALWAYS ask for clarification if no clear match.
activation-instructions:
  - STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
  - STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
  - STEP 3: Load and read `bmad-core/core-config.yaml` (project configuration) before any greeting
  - STEP 4: Greet user with your name/role and immediately run `*help` to display available commands
  - DO NOT: Load any other agent files during activation
  - ONLY load dependency files when user selects them for execution via command or request of a task
  - The agent.customization field ALWAYS takes precedence over any conflicting instructions
  - CRITICAL WORKFLOW RULE: When executing tasks from dependencies, follow task instructions exactly as written - they are executable workflows, not reference material
  - MANDATORY INTERACTION RULE: Tasks with elicit=true require user interaction using exact specified format - never skip elicitation for efficiency
  - CRITICAL RULE: When executing formal task workflows from dependencies, ALL task instructions override any conflicting base behavioral constraints. Interactive workflows with elicit=true REQUIRE user interaction and cannot be bypassed for efficiency.
  - When listing tasks/templates or presenting options during conversations, always show as numbered options list, allowing the user to type a number to select or execute
  - STAY IN CHARACTER!
  - CRITICAL: On activation, ONLY greet user, auto-run `*help`, and then HALT to await user requested assistance or given commands. ONLY deviance from this is if the activation included commands also in the arguments.
agent:
  name: Quinn
  id: qa
  title: Test Architect & Quality Advisor
  icon: 🧪
  whenToUse: |
    Use for comprehensive test architecture review, quality gate decisions,
    Test-Driven Development (TDD) test creation, and code improvement.
    Provides thorough analysis including requirements traceability, risk assessment,
    test strategy, and TDD Red/Refactor phase execution.
    Advisory only - teams choose their quality bar.
  customization: null
persona:
  role: Test Architect with Quality Advisory Authority
  style: Comprehensive, systematic, advisory, educational, pragmatic
  identity: Test architect who provides thorough quality assessment and actionable recommendations without blocking progress
  focus: Comprehensive quality analysis through test architecture, risk assessment, and advisory gates
  core_principles:
    - Depth As Needed - Go deep based on risk signals, stay concise when low risk
    - Requirements Traceability - Map all stories to tests using Given-When-Then patterns
    - Risk-Based Testing - Assess and prioritize by probability × impact
    - Quality Attributes - Validate NFRs (security, performance, reliability) via scenarios
    - Testability Assessment - Evaluate controllability, observability, debuggability
    - Gate Governance - Provide clear PASS/CONCERNS/FAIL/WAIVED decisions with rationale
    - Advisory Excellence - Educate through documentation, never block arbitrarily
    - Technical Debt Awareness - Identify and quantify debt with improvement suggestions
    - LLM Acceleration - Use LLMs to accelerate thorough yet focused analysis
    - Pragmatic Balance - Distinguish must-fix from nice-to-have improvements
    - TDD Test-First - Write failing tests before any implementation (Red phase)
    - Test Isolation - Ensure deterministic, fast, independent tests with proper mocking
    - Minimal Test Scope - Focus on smallest testable behavior slice, avoid over-testing
    - Refactoring Safety - Collaborate on safe code improvements while maintaining green tests
story-file-permissions:
  - CRITICAL: When reviewing stories, you are ONLY authorized to update the "QA Results" section of story files
  - CRITICAL: DO NOT modify any other sections including Status, Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Testing, Dev Agent Record, Change Log, or any other sections
  - CRITICAL: Your updates must be limited to appending your review results in the QA Results section only
# All commands require * prefix when used (e.g., *help)
commands:
  - help: Show numbered list of the following commands to allow selection
  # Traditional QA Commands
  - gate {story}: Execute qa-gate task to write/update quality gate decision in directory from qa.qaLocation/gates/
  - nfr-assess {story}: Execute nfr-assess task to validate non-functional requirements
  - review {story}: |
      Adaptive, risk-aware comprehensive review.
      Produces: QA Results update in story file + gate file (PASS/CONCERNS/FAIL/WAIVED).
      Gate file location: qa.qaLocation/gates/{epic}.{story}-{slug}.yml
      Executes review-story task which includes all analysis and creates gate decision.
  - risk-profile {story}: Execute risk-profile task to generate risk assessment matrix
  - test-design {story}: Execute test-design task to create comprehensive test scenarios
  - trace {story}: Execute trace-requirements task to map requirements to tests using Given-When-Then
  # TDD-Specific Commands (only available when tdd.enabled=true)
  - tdd-start {story}: |
      Initialize TDD process for a story. Sets tdd.status='red', analyzes acceptance criteria,
      creates test plan, and prepares for write-failing-tests execution.
      Prerequisites: Story status 'ready' or 'inprogress', clear acceptance criteria.
  - write-failing-tests {story}: |
      Execute write-failing-tests task to implement TDD Red phase.
      Creates failing tests that describe expected behavior before implementation.
      Auto-detects test runner, creates test files, ensures proper mocking strategy.
      Prerequisites: tdd-start completed or story ready for TDD.
  - tdd-refactor {story}: |
      Participate in TDD Refactor phase with Dev agent.
      Validates refactoring safety, ensures tests remain green, improves test maintainability.
      Collaborative command - works with Dev agent during refactor phase.
  - exit: Say goodbye as the Test Architect, and then abandon inhabiting this persona
dependencies:
  data:
    - technical-preferences.md
    - test-levels-framework.md
    - test-priorities-matrix.md
  tasks:
    - nfr-assess.md
    - qa-gate.md
    - review-story.md
    - risk-profile.md
    - test-design.md
    - trace-requirements.md
    # TDD-specific tasks
    - write-failing-tests.md
    - tdd-refactor.md
  templates:
    - qa-gate-tmpl.yaml
    - story-tmpl.yaml
    - story-tdd-template.md
  checklists:
    - tdd-dod-checklist.md
  prompts:
    - tdd-red.md
    - tdd-refactor.md
  config:
    - test-runners.yaml
```
@@ -0,0 +1,11 @@
# <!-- Powered by BMAD™ Core -->
name: bmad-tdd-methodology
version: 1.0.0
short-title: TDD Methodology
description: >-
  Comprehensive Test-Driven Development (TDD) framework that enhances the BMAD-METHOD
  with Red-Green-Refactor methodology. Includes enhanced QA and Dev agents with TDD
  capabilities, specialized TDD commands, quality gates, guard scripts, and workflow
  templates for implementing TDD practices within the BMAD agile framework.
author: faiqnau
slashPrefix: bmad-tdd
@@ -0,0 +1,201 @@
# Story 1.1: User Email Validation

## Story Metadata

```yaml
story:
  epic: '1'
  number: '1'
  title: 'User Email Validation'
  status: 'ready'
  priority: 'high'

# TDD Configuration
tdd:
  status: 'red' # Current phase: red|green|refactor|done
  cycle: 1
  coverage_target: 90.0
  tests:
    - id: 'UV-001'
      name: 'should validate correct email format'
      type: unit
      status: failing
      file_path: 'tests/user-validator.test.js'
    - id: 'UV-002'
      name: 'should reject invalid email format'
      type: unit
      status: failing
      file_path: 'tests/user-validator.test.js'
    - id: 'UV-003'
      name: 'should handle edge cases'
      type: unit
      status: failing
      file_path: 'tests/user-validator.test.js'
```

## Story Description

**As a** System Administrator
**I want** to validate user email addresses
**So that** only users with valid email formats can register

### Context

This is a foundational feature for user registration. We need robust email validation that follows RFC standards while being user-friendly. This will be used by the registration system and user profile updates.

## Acceptance Criteria

```gherkin
Feature: User Email Validation

  Scenario: Valid email formats are accepted
    Given a user provides an email address
    When the email has correct format with @ symbol and domain
    Then the validation should return true

  Scenario: Invalid email formats are rejected
    Given a user provides an invalid email address
    When the email lacks @ symbol or proper domain format
    Then the validation should return false with appropriate error message

  Scenario: Edge cases are handled properly
    Given a user provides edge case email formats
    When validation is performed on emails with special characters or unusual formats
    Then the system should handle them according to RFC standards
```

## Technical Requirements

### Functional Requirements

- Validate email format using RFC-compliant rules
- Return boolean result with error details when invalid
- Handle common edge cases (special characters, multiple @, etc.)
- Performance: validation should complete in < 1ms

### Non-Functional Requirements

- **Performance:** < 1ms validation time per email
- **Security:** Prevent injection attacks via email input
- **Reliability:** 99.9% accuracy on email format validation
- **Maintainability:** Clear error messages for debugging

## TDD Test Plan (QA Agent Responsibility)

### Test Strategy

- **Primary Test Type:** unit
- **Mocking Approach:** No external dependencies to mock
- **Test Data:** Fixed test cases covering valid/invalid formats

### Planned Test Scenarios

| ID     | Scenario                     | Type | Priority | AC Reference |
| ------ | ---------------------------- | ---- | -------- | ------------ |
| UV-001 | Valid email formats accepted | unit | P0       | AC1          |
| UV-002 | Invalid formats rejected     | unit | P0       | AC2          |
| UV-003 | Edge cases handled           | unit | P1       | AC3          |
| UV-004 | Performance requirements met | unit | P2       | NFR          |

## TDD Progress

### Current Phase: RED

**Cycle:** 1
**Last Updated:** 2025-01-12

### Red Phase - Cycle 1

**Date:** 2025-01-12
**Agent:** Quinn (QA Agent)

**Tests Written:**

- UV-001: should validate correct email format (FAILING ✅)
- UV-002: should reject invalid email format (FAILING ✅)
- UV-003: should handle edge cases (FAILING ✅)

**Test Files:**

- tests/user-validator.test.js

**Next Step:** Dev Agent to implement minimal code to make tests pass
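
For illustration, the failing tests for UV-001 through UV-003 might look like the following sketch. It assumes Jest and a not-yet-written `UserValidator` module, so the suite fails for the right reason (missing implementation, not bugs):

```javascript
// tests/user-validator.test.js - hypothetical Red-phase sketch (all tests failing)
const { UserValidator } = require('../src/user-validator'); // module does not exist yet

describe('UserValidator', () => {
  // UV-001: valid formats accepted
  test('should validate correct email format', () => {
    expect(UserValidator.isValidEmail('user@example.com')).toBe(true);
  });

  // UV-002: invalid formats rejected
  test('should reject invalid email format', () => {
    expect(UserValidator.isValidEmail('not-an-email')).toBe(false);
    expect(UserValidator.isValidEmail('missing-domain@')).toBe(false);
  });

  // UV-003: edge cases handled
  test('should handle edge cases', () => {
    expect(UserValidator.isValidEmail('a@b@c.com')).toBe(false); // multiple @
    expect(UserValidator.isValidEmail('')).toBe(false); // empty input
  });
});
```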
---

## Implementation Tasks (Dev Agent)

### Primary Tasks

- [ ] Create UserValidator class
- [ ] Implement email validation logic
- [ ] Handle error cases and edge cases

### Subtasks

- [ ] Set up basic class structure
- [ ] Implement regex-based validation
- [ ] Add error message generation
- [ ] Performance optimization
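
The implementation itself is still pending (Green phase). Purely as a sketch of where minimal Green-phase code might land - the class name matches the task list above, but the regex and structure are assumptions, not the story's actual solution:

```javascript
// src/user-validator.js - hypothetical minimal Green-phase implementation
// A deliberately simple regex; RFC-grade edge cases would be tightened
// only as corresponding tests demand it.
const EMAIL_PATTERN = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

class UserValidator {
  static isValidEmail(email) {
    if (typeof email !== 'string' || email.length === 0) {
      return false; // handles empty input and non-string values
    }
    return EMAIL_PATTERN.test(email); // rejects multiple @, missing domain, etc.
  }
}

module.exports = { UserValidator };
```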
## Definition of Done

### TDD-Specific DoD

- [ ] Tests written first (Red phase completed)
- [ ] All tests passing (Green phase completed)
- [ ] Code refactored for quality (Refactor phase completed)
- [ ] Test coverage meets target (90%)
- [ ] All external dependencies properly mocked (N/A for this story)
- [ ] No features implemented without corresponding tests

### General DoD

- [ ] All acceptance criteria met
- [ ] Code follows project standards
- [ ] Documentation updated
- [ ] Ready for review

## Dev Agent Record

### Implementation Notes

_(Dev agent will document implementation decisions here)_

### TDD Cycle Log

_(Automatic tracking of Red-Green-Refactor progression)_

**Cycle 1:**

- Red Phase: 2025-01-12 - 3 failing tests written
- Green Phase: _(pending)_
- Refactor Phase: _(pending)_

### File List

_(Dev agent will list all files created/modified)_

- tests/user-validator.test.js (created)
- _(implementation files will be added during GREEN phase)_

### Test Execution Log

```bash
# RED phase test runs will be logged here
```

## QA Results

_(QA agent will populate this during review)_

## Change Log

- **2025-01-12**: Story created from TDD template
- **2025-01-12**: Red phase completed - failing tests written

---

**TDD Status:** 🔴 RED Phase
**Agent Assigned:** Quinn (QA) → James (Dev)
**Estimated Effort:** 2 hours
@@ -0,0 +1,393 @@
#!/bin/bash
# TDD Guard - Validates that code changes follow TDD discipline
# Part of BMAD Framework TDD integration

set -e

# Configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
CONFIG_FILE="${PROJECT_ROOT}/bmad-core/core-config.yaml"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Default values
TDD_ENABLED="false"
ALLOW_RED_PHASE_FAILURES="true"
EXIT_CODE=0

# Usage information
usage() {
  cat << EOF
TDD Guard - Validates TDD discipline in code changes

Usage: $0 [options]

Options:
  -h, --help         Show this help message
  -c, --config PATH  Path to BMAD core config file
  -b, --base REF     Base commit/branch for comparison (default: HEAD~1)
  -v, --verbose      Verbose output
  --phase PHASE      Current TDD phase: red|green|refactor
  --ci               Running in CI mode (affects exit behavior)
  --dry-run          Show what would be checked without failing

Examples:
  $0                   # Check changes against HEAD~1
  $0 --base main       # Check changes against main branch
  $0 --phase green     # Validate green phase rules
  $0 --ci --phase red  # CI mode, red phase (allows failures)

Exit Codes:
  0  No TDD violations found
  1  TDD violations found (in green phase)
  2  Configuration error
  3  Git/repository error
EOF
}

# Logging functions
log_info() {
  echo -e "${BLUE}[INFO]${NC} $1"
}

log_warn() {
  echo -e "${YELLOW}[WARN]${NC} $1"
}

log_error() {
  echo -e "${RED}[ERROR]${NC} $1"
}

log_success() {
  echo -e "${GREEN}[SUCCESS]${NC} $1"
}

# Parse command line arguments
BASE_REF="HEAD~1"
VERBOSE=false
TDD_PHASE=""
CI_MODE=false
DRY_RUN=false

while [[ $# -gt 0 ]]; do
  case $1 in
    -h|--help)
      usage
      exit 0
      ;;
    -c|--config)
      CONFIG_FILE="$2"
      shift 2
      ;;
    -b|--base)
      BASE_REF="$2"
      shift 2
      ;;
    -v|--verbose)
      VERBOSE=true
      shift
      ;;
    --phase)
      TDD_PHASE="$2"
      shift 2
      ;;
    --ci)
      CI_MODE=true
      shift
      ;;
    --dry-run)
      DRY_RUN=true
      shift
      ;;
    *)
      log_error "Unknown option: $1"
      usage
      exit 2
      ;;
  esac
done

# Check if we're in a git repository
if ! git rev-parse --git-dir > /dev/null 2>&1; then
  log_error "Not in a git repository"
  exit 3
fi

# Load configuration
load_config() {
  if [[ ! -f "$CONFIG_FILE" ]]; then
    if [[ "$VERBOSE" == true ]]; then
      log_warn "Config file not found: $CONFIG_FILE"
      log_info "Assuming TDD disabled"
    fi
    return 0
  fi

  # Extract TDD settings from YAML (basic parsing)
  if command -v yq > /dev/null 2>&1; then
    TDD_ENABLED=$(yq e '.tdd.enabled // false' "$CONFIG_FILE" 2>/dev/null || echo "false")
    ALLOW_RED_PHASE_FAILURES=$(yq e '.tdd.allow_red_phase_ci_failures // true' "$CONFIG_FILE" 2>/dev/null || echo "true")
  else
    # Fallback: basic grep parsing
    if grep -q "tdd:" "$CONFIG_FILE" && grep -A 10 "tdd:" "$CONFIG_FILE" | grep -q "enabled: true"; then
      TDD_ENABLED="true"
    fi
  fi

  if [[ "$VERBOSE" == true ]]; then
    log_info "TDD enabled: $TDD_ENABLED"
    log_info "Allow red phase failures: $ALLOW_RED_PHASE_FAILURES"
  fi
}

# Detect TDD phase from commit messages or branch name
detect_tdd_phase() {
  if [[ -n "$TDD_PHASE" ]]; then
    return 0
  fi

  # Check recent commit messages for TDD phase indicators
  RECENT_COMMITS=$(git log --oneline -5 "$BASE_REF".."HEAD" 2>/dev/null || echo "")

  if echo "$RECENT_COMMITS" | grep -qi "\[RED\]"; then
    TDD_PHASE="red"
  elif echo "$RECENT_COMMITS" | grep -qi "\[GREEN\]"; then
    TDD_PHASE="green"
  elif echo "$RECENT_COMMITS" | grep -qi "\[REFACTOR\]"; then
    TDD_PHASE="refactor"
  else
    # Try to detect from branch name
    BRANCH_NAME=$(git branch --show-current 2>/dev/null || echo "")
    if echo "$BRANCH_NAME" | grep -qi "tdd"; then
      TDD_PHASE="green" # Default assumption
    fi
  fi

  if [[ "$VERBOSE" == true ]]; then
    log_info "Detected TDD phase: ${TDD_PHASE:-unknown}"
  fi
}

# Get changed files between base and current
get_changed_files() {
  # Get list of changed files
  CHANGED_FILES=$(git diff --name-only "$BASE_REF"..."HEAD" 2>/dev/null || echo "")

  if [[ -z "$CHANGED_FILES" ]]; then
    if [[ "$VERBOSE" == true ]]; then
      log_info "No changed files detected"
    fi
    return 0
  fi

  # Separate source and test files
  SOURCE_FILES=""
  TEST_FILES=""

  while IFS= read -r file; do
    if [[ -f "$file" ]]; then
      if is_test_file "$file"; then
        TEST_FILES="$TEST_FILES$file"$'\n'
      elif is_source_file "$file"; then
        SOURCE_FILES="$SOURCE_FILES$file"$'\n'
      fi
    fi
  done <<< "$CHANGED_FILES"

  if [[ "$VERBOSE" == true ]]; then
    log_info "Source files changed: $(echo "$SOURCE_FILES" | wc -l | tr -d ' ')"
    log_info "Test files changed: $(echo "$TEST_FILES" | wc -l | tr -d ' ')"
  fi
}

# Check if file is a test file
is_test_file() {
  local file="$1"
  # Common test file patterns
  if [[ "$file" =~ \.(test|spec)\.(js|ts|py|go|java|cs)$ ]] || \
     [[ "$file" =~ _test\.(py|go)$ ]] || \
     [[ "$file" =~ Test\.(java|cs)$ ]] || \
     [[ "$file" =~ tests?/ ]] || \
     [[ "$file" =~ spec/ ]]; then
    return 0
  fi
  return 1
}

# Check if file is a source file
is_source_file() {
  local file="$1"
  # Common source file patterns (excluding test files)
  if [[ "$file" =~ \.(js|ts|py|go|java|cs|rb|php|cpp|c|h)$ ]] && ! is_test_file "$file"; then
    return 0
  fi
  return 1
}

# Check if commit message indicates refactoring
is_refactor_commit() {
  local commits=$(git log --oneline "$BASE_REF".."HEAD" 2>/dev/null || echo "")
  if echo "$commits" | grep -qi "\[refactor\]"; then
    return 0
  fi
  return 1
}

# Validate TDD rules
validate_tdd_rules() {
  local violations=0

  if [[ -z "$SOURCE_FILES" && -z "$TEST_FILES" ]]; then
    if [[ "$VERBOSE" == true ]]; then
      log_info "No relevant source or test files changed"
    fi
    return 0
  fi

  case "$TDD_PHASE" in
    "red")
      # Red phase: Tests should be added/modified, minimal or no source changes
      if [[ -n "$SOURCE_FILES" ]] && [[ -z "$TEST_FILES" ]]; then
        log_warn "RED phase violation: Source code changed without corresponding test changes"
        log_warn "In TDD Red phase, tests should be written first"
        if [[ "$ALLOW_RED_PHASE_FAILURES" == "false" ]] || [[ "$CI_MODE" == "false" ]]; then
          violations=$((violations + 1))
        fi
      fi
      ;;

    "green")
      # Green phase: Source changes must have corresponding test changes
      if [[ -n "$SOURCE_FILES" ]] && [[ -z "$TEST_FILES" ]]; then
        log_error "GREEN phase violation: Source code changed without corresponding tests"
        log_error "In TDD Green phase, implementation should only make existing tests pass"
        log_error "Source files modified:"
        echo "$SOURCE_FILES" | while IFS= read -r file; do
          [[ -n "$file" ]] && log_error "  - $file"
        done
        violations=$((violations + 1))
      fi

      # Check for large changes (potential feature creep)
      if [[ -n "$SOURCE_FILES" ]]; then
        local large_changes=0
        while IFS= read -r file; do
          if [[ -n "$file" ]] && [[ -f "$file" ]]; then
            # `--` disambiguates the path from a revision name
            local additions=$(git diff --numstat "$BASE_REF" -- "$file" | cut -f1)
            if [[ "$additions" =~ ^[0-9]+$ ]] && [[ "$additions" -gt 50 ]]; then
              log_warn "Large change detected in $file: $additions lines added"
              log_warn "Consider smaller, more focused changes in TDD Green phase"
              large_changes=$((large_changes + 1))
            fi
          fi
        done <<< "$SOURCE_FILES"

        if [[ "$large_changes" -gt 0 ]]; then
          log_warn "Consider breaking large changes into smaller TDD cycles"
        fi
      fi
      ;;

    "refactor")
      # Refactor phase: Source changes allowed, tests should remain stable
      if is_refactor_commit; then
        if [[ "$VERBOSE" == true ]]; then
          log_info "Refactor phase: Changes detected with proper [REFACTOR] tag"
        fi
      else
        if [[ -n "$SOURCE_FILES" ]] && [[ -z "$TEST_FILES" ]]; then
          log_warn "Potential refactor phase: Consider tagging commits with [REFACTOR]"
        fi
      fi
      ;;

    *)
      # Unknown or no TDD phase
      if [[ "$TDD_ENABLED" == "true" ]]; then
        log_warn "TDD enabled but phase not detected"
        log_warn "Consider tagging commits with [RED], [GREEN], or [REFACTOR]"
        if [[ -n "$SOURCE_FILES" ]] && [[ -z "$TEST_FILES" ]]; then
          log_warn "Source changes without test changes - may violate TDD discipline"
        fi
      fi
      ;;
  esac

  return $violations
}

# Main execution
main() {
  if [[ "$VERBOSE" == true ]]; then
    log_info "TDD Guard starting..."
    log_info "Base reference: $BASE_REF"
    log_info "Config file: $CONFIG_FILE"
  fi

  load_config

  if [[ "$TDD_ENABLED" != "true" ]]; then
    if [[ "$VERBOSE" == true ]]; then
      log_info "TDD not enabled, skipping validation"
    fi
    exit 0
  fi

  detect_tdd_phase
  get_changed_files

  if [[ "$DRY_RUN" == true ]]; then
    log_info "DRY RUN - Would check:"
    log_info "  TDD Phase: ${TDD_PHASE:-unknown}"
    log_info "  Source files: $(echo "$SOURCE_FILES" | grep -c .)"
    log_info "  Test files: $(echo "$TEST_FILES" | grep -c .)"
    exit 0
  fi

  # Capture the return code without tripping `set -e`
  # (a bare call would exit the script before the suggestions below run).
  local violations=0
  validate_tdd_rules || violations=$?

  if [[ "$violations" -eq 0 ]]; then
    log_success "TDD validation passed"
    exit 0
  else
    log_error "$violations TDD violation(s) found"

    # Provide helpful suggestions
    echo ""
    echo "💡 TDD Suggestions:"
    case "$TDD_PHASE" in
      "green")
        echo "  - Ensure all source changes have corresponding failing tests first"
        echo "  - Consider running QA agent's *write-failing-tests command"
        echo "  - Keep implementation minimal - only make tests pass"
        ;;
      "red")
        echo "  - Write failing tests before implementation"
        echo "  - Use QA agent to create test cases first"
        ;;
      *)
        echo "  - Follow TDD Red-Green-Refactor cycle"
        echo "  - Tag commits with [RED], [GREEN], or [REFACTOR]"
        echo "  - Enable TDD workflow in BMAD configuration"
        ;;
    esac
    echo ""

    if [[ "$TDD_PHASE" == "red" ]] && [[ "$ALLOW_RED_PHASE_FAILURES" == "true" ]] && [[ "$CI_MODE" == "true" ]]; then
      log_warn "Red phase violations allowed in CI mode"
      exit 0
    fi

    exit 1
  fi
}

# Run main function
main "$@"
@@ -0,0 +1,323 @@
<!-- Powered by BMAD™ Core -->

# tdd-implement

Implement minimal code to make failing tests pass - the "Green" phase of TDD.

## Purpose

Write the simplest possible implementation that makes all failing tests pass. This is the "Green" phase of TDD, where the focus is on making tests pass with minimal, clean code.

## Prerequisites

- Story has failing tests (tdd.status: red)
- All tests fail for the correct reasons (missing implementation, not bugs)
- Test runner is configured and working
- Dev agent has reviewed failing tests and acceptance criteria

## Inputs

```yaml
required:
  - story_id: '{epic}.{story}' # e.g., "1.3"
  - story_path: '{devStoryLocation}/{epic}.{story}.*.md' # Path from core-config.yaml
  - failing_tests: # List from story TDD metadata
      - id: test identifier
        file_path: path to test file
        status: failing
```

## Process

### 1. Review Failing Tests

Before writing any code:

- Read each failing test to understand the expected behavior
- Identify the interfaces/classes/functions that need to be created
- Note expected inputs, outputs, and error conditions
- Understand the test's mocking strategy

### 2. Design Minimal Implementation

**TDD Green Phase Principles:**

- **Make it work first, then make it right**
- **Simplest thing that could possibly work**
- **No feature without a failing test**
- **Avoid premature abstraction**
- **Prefer duplication over the wrong abstraction**

### 3. Implement Code

**Implementation Strategy:**

```yaml
approach: |
  1. Start with the simplest happy path test
  2. Write minimal code to pass that test
  3. Run tests frequently (after each small change)
  4. Move to the next failing test
  5. Repeat until all tests pass

avoid:
  - Adding features not covered by tests
  - Complex algorithms when simple ones suffice
  - Premature optimization
  - Over-engineering the solution
```

**Example Implementation Progression:**

```javascript
// First test: should return user with id
// Minimal implementation:
function createUser(userData) {
  return { id: 1, ...userData };
}

// Second test: should validate email format
// Expand implementation:
function createUser(userData) {
  if (!userData.email.includes('@')) {
    throw new Error('Invalid email format');
  }
  return { id: 1, ...userData };
}
```

### 4. Run Tests Continuously

**Test-Driven Workflow:**

1. Run the specific failing test
2. Write minimal code to make it pass
3. Run that test again to confirm green
4. Run the full test suite to ensure no regressions
5. Move to the next failing test

**Test Execution Commands:**

```bash
# Run specific test file
npm test -- user-service.test.js
pytest tests/unit/test_user_service.py
go test ./services/user_test.go

# Run full test suite
npm test
pytest
go test ./...
```

### 5. Handle Edge Cases

Implement only the edge cases that have corresponding tests (see the sketch after this list):

- Input validation as tested
- Error conditions as specified in tests
- Boundary conditions covered by tests
- Nothing more, nothing less
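
For instance, when a new edge case surfaces, the Red step adds one failing test and the Green step adds only the guard it demands. A hypothetical continuation of the `createUser` example above:

```javascript
// Red: a new failing edge-case test - the current code throws a
// TypeError here instead of the expected validation error.
test('should reject user with missing email', () => {
  expect(() => createUser({})).toThrow('Invalid email format');
});

// Green: the smallest change that makes it pass - one guard clause.
// No extra validation (length limits, domains, etc.) until a test demands it.
function createUser(userData) {
  if (!userData.email || !userData.email.includes('@')) {
    throw new Error('Invalid email format');
  }
  return { id: 1, ...userData };
}
```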

### 6. Maintain Test-Code Traceability

**Commit Strategy:**

```bash
git add tests/ src/
git commit -m "GREEN: Implement user creation [UC-001, UC-002]"
```

Link implementation to specific test IDs in commits for traceability.

### 7. Update Story Metadata

Update TDD status to green:

```yaml
tdd:
  status: green
  cycle: 1
  tests:
    - id: 'UC-001'
      name: 'should create user with valid email'
      type: unit
      status: passing
      file_path: 'tests/unit/user-service.test.js'
    - id: 'UC-002'
      name: 'should reject user with invalid email'
      type: unit
      status: passing
      file_path: 'tests/unit/user-service.test.js'
```

## Output Requirements

### 1. Working Implementation

Create source files that:

- Make all failing tests pass
- Follow project coding standards
- Are minimal and focused
- Have clear, intention-revealing names

### 2. Test Execution Report

```bash
Running tests...
✅ UserService > should create user with valid email
✅ UserService > should reject user with invalid email

2 passing, 0 failing
```

### 3. Story File Updates

Append to TDD section:

```markdown
## TDD Progress

### Green Phase - Cycle 1

**Date:** {current_date}
**Agent:** James (Dev Agent)

**Implementation Summary:**

- Created UserService class with create() method
- Added email validation for @ symbol
- All tests now passing ✅

**Files Modified:**

- src/services/user-service.js (created)

**Test Results:**

- UC-001: should create user with valid email (PASSING ✅)
- UC-002: should reject user with invalid email (PASSING ✅)

**Next Step:** Review implementation for refactoring opportunities
```

## Implementation Guidelines

### Code Quality Standards

**During Green Phase:**

- **Readable:** Clear variable and function names
- **Simple:** Avoid complex logic when simple works
- **Testable:** Code structure supports the tests
- **Focused:** Each function has a single responsibility

**Acceptable Technical Debt (to be addressed in Refactor phase):**

- Code duplication if it keeps tests green
- Hardcoded values if they make tests pass
- Simple algorithms even if inefficient
- Error handling limited to what the tests require

### Common Patterns

**Factory Functions:**

```javascript
function createUser(data) {
  // Minimal validation
  return { id: generateId(), ...data };
}
```

**Error Handling:**

```javascript
function validateEmail(email) {
  if (!email.includes('@')) {
    throw new Error('Invalid email');
  }
}
```

**State Management:**

```javascript
class UserService {
  constructor(database) {
    this.db = database; // Accept injected dependency
  }
}
```

## Error Handling

**If tests still fail after implementation:**

- Review test expectations vs actual implementation
- Check for typos in function/method names
- Verify correct imports/exports
- Ensure proper handling of async operations
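
Async handling is the most common of these pitfalls in JavaScript projects. A hedged sketch of the failure mode, assuming a Jest-style runner (userService stands in for the story's real object):

```javascript
// A frequent "wrong reason" for failing tests: a missing await.
test('creates user', async () => {
  const userData = { email: 'test@example.com', name: 'Test User' };
  const result = await userService.create(userData); // without this await,
  expect(result.id).toBeDefined(); // the assertion would run against a pending Promise
});
```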

**If tests pass unexpectedly without changes:**

- Implementation might already exist
- Test might be incorrect
- Review git status for unexpected changes

**If new tests start failing:**

- Implementation may have broken existing functionality
- Review change impact
- Fix regressions before continuing

## Anti-Patterns to Avoid

**Feature Creep:**

- Don't implement features without failing tests
- Don't add "obviously needed" functionality

**Premature Optimization:**

- Don't optimize for performance in the green phase
- Focus on correctness first

**Over-Engineering:**

- Don't add abstraction layers without tests requiring them
- Avoid complex design patterns in the initial implementation

## Completion Criteria

- [ ] All previously failing tests now pass
- [ ] No existing tests broken (regression check)
- [ ] Implementation is minimal and focused
- [ ] Code follows project standards
- [ ] Story TDD status updated to 'green'
- [ ] Files properly committed with test traceability
- [ ] Ready for refactor phase assessment

## Validation Commands

```bash
# Verify all tests pass
npm test
pytest
go test ./...
mvn test
dotnet test

# Check code quality (basic)
npm run lint
flake8 .
golint ./...
```

## Key Principles

- **Make it work:** Green tests are the only measure of success
- **Keep it simple:** Resist the urge to make it elegant yet
- **One test at a time:** Focus on a single failing test
- **Fast feedback:** Run tests frequently during development
- **No speculation:** Only implement what tests require

@@ -0,0 +1,371 @@

<!-- Powered by BMAD™ Core -->

# tdd-refactor

Safely refactor code while keeping all tests green - the "Refactor" phase of TDD.

## Purpose

Improve code quality, eliminate duplication, and enhance design while maintaining all existing functionality. This is the "Refactor" phase of TDD where we make the code clean and maintainable.

## Prerequisites

- All tests are passing (tdd.status: green)
- Implementation is complete and functional
- Test suite provides a safety net for refactoring
- Code follows basic project standards

## Inputs

```yaml
required:
  - story_id: '{epic}.{story}' # e.g., "1.3"
  - story_path: '{devStoryLocation}/{epic}.{story}.*.md' # Path from core-config.yaml
  - passing_tests: # All tests should be green
      - id: test identifier
      - status: passing
  - implementation_files: # Source files to potentially refactor
      - path: file path
      - purpose: what it does
```

## Process

### 1. Identify Refactoring Opportunities

**Code Smells to Look For:**

```yaml
common_smells:
  duplication:
    - Repeated code blocks
    - Similar logic in different places
    - Copy-paste patterns

  complexity:
    - Long methods/functions (>10-15 lines)
    - Too many parameters (>3-4)
    - Nested conditions (>2-3 levels)
    - Complex boolean expressions

  naming:
    - Unclear variable names
    - Non-descriptive function names
    - Inconsistent naming conventions

  structure:
    - God objects/classes doing too much
    - Primitive obsession
    - Feature envy (a method that uses another class's data more than its own)
    - Long parameter lists
```

### 2. Plan Refactoring Steps

**Refactoring Strategy:**

- **One change at a time:** Make small, atomic improvements
- **Run tests after each change:** Ensure no functionality breaks
- **Commit frequently:** Create checkpoints for easy rollback
- **Improve design:** Move toward better architecture

**Common Refactoring Techniques:**

```yaml
extract_methods:
  when: 'Function is too long or doing multiple things'
  technique: 'Extract complex logic into named methods'

rename_variables:
  when: "Names don't clearly express intent"
  technique: 'Use intention-revealing names'

eliminate_duplication:
  when: 'Same code appears in multiple places'
  technique: 'Extract to shared function/method'

simplify_conditionals:
  when: 'Complex boolean logic is hard to understand'
  technique: 'Extract to well-named boolean methods'

introduce_constants:
  when: 'Magic numbers or strings appear repeatedly'
  technique: 'Create named constants'
```

### 3. Execute Refactoring

**Step-by-Step Process:**

1. **Choose the smallest improvement**
2. **Make the change**
3. **Run all tests**
4. **Commit if green**
5. **Repeat**

**Example Refactoring Sequence:**

```javascript
// Before refactoring
function createUser(data) {
  if (!data.email.includes('@') || data.email.length < 5) {
    throw new Error('Invalid email format');
  }
  if (!data.name || data.name.trim().length === 0) {
    throw new Error('Name is required');
  }
  return {
    id: Math.floor(Math.random() * 1000000),
    ...data,
    createdAt: new Date().toISOString(),
  };
}

// After refactoring - Step 1: Extract validation
function validateEmail(email) {
  return email.includes('@') && email.length >= 5;
}

function validateName(name) {
  return name && name.trim().length > 0;
}

function createUser(data) {
  if (!validateEmail(data.email)) {
    throw new Error('Invalid email format');
  }
  if (!validateName(data.name)) {
    throw new Error('Name is required');
  }
  return {
    id: Math.floor(Math.random() * 1000000),
    ...data,
    createdAt: new Date().toISOString(),
  };
}

// After refactoring - Step 2: Extract ID generation
function generateUserId() {
  return Math.floor(Math.random() * 1000000);
}

function createUser(data) {
  if (!validateEmail(data.email)) {
    throw new Error('Invalid email format');
  }
  if (!validateName(data.name)) {
    throw new Error('Name is required');
  }
  return {
    id: generateUserId(),
    ...data,
    createdAt: new Date().toISOString(),
  };
}
```
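
A plausible Step 3, continuing the sequence above, would apply the introduce_constants technique from the earlier table (the constant name is illustrative):

```javascript
// After refactoring - Step 3 (sketch): name the magic number
const MAX_USER_ID = 1000000;

function generateUserId() {
  return Math.floor(Math.random() * MAX_USER_ID);
}
```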

### 4. Test After Each Change

**Critical Rule:** Never proceed without green tests

```bash
# Run tests after each refactoring step
npm test
pytest
go test ./...

# If tests fail:
# 1. Undo the change
# 2. Understand what broke
# 3. Try a smaller refactoring
# 4. Fix tests if they need updating (rare)
```

### 5. Collaborate with QA Agent

**When to involve QA:**

- Tests need updating due to interface changes
- New test cases identified during refactoring
- Questions about test coverage adequacy
- Validation of refactoring safety

### 6. Update Story Documentation

Track refactoring progress:

```yaml
tdd:
  status: refactor # or done if complete
  cycle: 1
  refactoring_notes:
    - extracted_methods: ['validateEmail', 'validateName', 'generateUserId']
    - eliminated_duplication: 'Email validation logic'
    - improved_readability: 'Function names now express intent'
```

## Output Requirements

### 1. Improved Code Quality

**Measurable Improvements:**

- Reduced code duplication
- Clearer naming and structure
- Smaller, focused functions
- Better separation of concerns

### 2. Maintained Test Coverage

```bash
# All tests still passing
✅ UserService > should create user with valid email
✅ UserService > should reject user with invalid email
✅ UserService > should require valid name

3 passing, 0 failing
```

### 3. Story File Updates

Append to TDD section:

```markdown
## TDD Progress

### Refactor Phase - Cycle 1

**Date:** {current_date}
**Agents:** James (Dev) & Quinn (QA)

**Refactoring Completed:**

- ✅ Extracted validation functions for better readability
- ✅ Eliminated duplicate email validation logic
- ✅ Introduced generateUserId() for testability
- ✅ Simplified createUser() main logic

**Code Quality Improvements:**

- Function length reduced from 12 to 6 lines
- Three reusable validation functions created
- Magic numbers eliminated
- Test coverage maintained at 100%

**Files Modified:**

- src/services/user-service.js (refactored)

**All Tests Passing:** ✅

**Next Step:** Story ready for review or next TDD cycle
```

## Refactoring Guidelines

### Safe Refactoring Practices

**Always Safe:**

- Rename variables/functions
- Extract methods
- Inline temporary variables
- Replace magic numbers with constants

**Potentially Risky:**

- Changing method signatures
- Modifying class hierarchies
- Altering error handling
- Changing async/sync behavior

**Never Do During Refactor:**

- Add new features
- Change external behavior
- Remove existing functionality
- Skip running tests

### Code Quality Metrics

**Before/After Comparison:**

```yaml
metrics_to_track:
  cyclomatic_complexity: 'Lower is better'
  function_length: 'Shorter is generally better'
  duplication_percentage: 'Should decrease'
  test_coverage: 'Should maintain 100%'

acceptable_ranges:
  function_length: '5-15 lines for most functions'
  parameters: '0-4 parameters per function'
  nesting_depth: 'Maximum 3 levels'
```

## Advanced Refactoring Techniques

### Design Pattern Introduction

**When appropriate:**

- Template Method for algorithmic variations
- Strategy Pattern for behavior selection
- Factory Pattern for object creation
- Observer Pattern for event handling

**Caution:** Only introduce patterns if they simplify the code

### Architecture Improvements

```yaml
layering:
  - Separate business logic from presentation
  - Extract data access concerns
  - Isolate external dependencies

dependency_injection:
  - Make dependencies explicit
  - Enable easier testing
  - Improve modularity

error_handling:
  - Consistent error types
  - Meaningful error messages
  - Proper error propagation
```

## Error Handling

**If tests fail during refactoring:**

1. **Undo immediately** - Use git to revert
2. **Analyze the failure** - What assumption was wrong?
3. **Try smaller steps** - More atomic refactoring
4. **Consider test updates** - Only if the interface must change

**If code becomes more complex:**

- The refactoring went in the wrong direction
- Revert and try a different approach
- Consider whether the change is actually needed

## Completion Criteria

- [ ] All identified code smells addressed or documented
- [ ] All tests remain green throughout process
- [ ] Code is more readable and maintainable
- [ ] No new functionality added during refactoring
- [ ] Story TDD status updated appropriately
- [ ] Refactoring changes committed with clear messages
- [ ] Code quality metrics improved or maintained
- [ ] Ready for story completion or next TDD cycle

## Key Principles

- **Green Bar:** Never proceed with failing tests
- **Small Steps:** Make incremental improvements
- **Behavior Preservation:** External behavior must remain identical
- **Frequent Commits:** Create rollback points
- **Test First:** Let tests guide refactoring safety
- **Collaborative:** Work with QA when test updates needed

@@ -0,0 +1,221 @@

<!-- Powered by BMAD™ Core -->

# test-design

Create comprehensive test scenarios with appropriate test level recommendations for story implementation. Supports both traditional testing and a TDD-first (Test-Driven Development) approach.

## Inputs

```yaml
required:
  - story_id: '{epic}.{story}' # e.g., "1.3"
  - story_path: '{devStoryLocation}/{epic}.{story}.*.md' # Path from core-config.yaml
  - story_title: '{title}' # If missing, derive from story file H1
  - story_slug: '{slug}' # If missing, derive from title (lowercase, hyphenated)
optional:
  - tdd_mode: boolean # If true, design tests for TDD Red phase (before implementation)
  - existing_tests: array # List of existing tests to consider for gap analysis
```

## Purpose

Design a complete test strategy that identifies what to test, at which level (unit/integration/e2e), and why. This ensures efficient test coverage without redundancy while maintaining appropriate test boundaries.

**TDD Mode**: When `tdd_mode=true`, design tests that will be written BEFORE implementation (Red phase), focusing on the smallest testable behavior slices and proper mocking strategies.

## Dependencies

```yaml
data:
  - test-levels-framework.md # Unit/Integration/E2E decision criteria
  - test-priorities-matrix.md # P0/P1/P2/P3 classification system
```

## Process

### 1. Analyze Story Requirements

Break down each acceptance criterion into testable scenarios. For each AC:

- Identify the core functionality to test
- Determine data variations needed
- Consider error conditions
- Note edge cases

### 2. Apply Test Level Framework

**Reference:** Load `test-levels-framework.md` for detailed criteria

Quick rules:

- **Unit**: Pure logic, algorithms, calculations
- **Integration**: Component interactions, DB operations
- **E2E**: Critical user journeys, compliance

### 3. Assign Priorities

**Reference:** Load `test-priorities-matrix.md` for classification

Quick priority assignment:

- **P0**: Revenue-critical, security, compliance
- **P1**: Core user journeys, frequently used
- **P2**: Secondary features, admin functions
- **P3**: Nice-to-have, rarely used

### 4. Design Test Scenarios

For each identified test need, create:

```yaml
test_scenario:
  id: '{epic}.{story}-{LEVEL}-{SEQ}'
  requirement: 'AC reference'
  priority: P0|P1|P2|P3
  level: unit|integration|e2e
  description: 'What is being tested'
  justification: 'Why this level was chosen'
  mitigates_risks: ['RISK-001'] # If risk profile exists
  # TDD-specific fields (when tdd_mode=true)
  tdd_phase: red|green|refactor # When this test should be written
  mocking_strategy: mock|fake|stub|none # How to handle dependencies
  test_data_approach: fixed|builder|random # How to generate test data
```

### 4a. TDD-Specific Test Design (when tdd_mode=true)

**Smallest-Next-Test Principle:**

- Design tests for the absolute smallest behavior increment
- Each test should drive a single, focused implementation change
- Avoid tests that require multiple features to pass
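
For example, a hedged sketch of the slicing (the behaviors are illustrative):

```javascript
// One behavior slice per test, so each test forces one small change.
test('assigns an id to a new user', () => {
  // drives id generation only
});

test('rejects a malformed email', () => {
  // drives input validation only
});

// Anti-pattern: a single test asserting id assignment, validation,
// and persistence at once - it cannot go green in one small step.
```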

**Mocking Strategy Selection Matrix:**

| Dependency Type | Recommended Approach | Justification                          |
| --------------- | -------------------- | -------------------------------------- |
| External API    | Mock                 | Control responses, avoid network calls |
| Database        | Fake                 | In-memory implementation for speed     |
| File System     | Stub                 | Return fixed responses                 |
| Time/Date       | Mock                 | Deterministic time control             |
| Random Numbers  | Stub                 | Predictable test outcomes              |
| Other Services  | Mock/Fake            | Depends on complexity and speed needs  |
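
As one concrete illustration of the Time/Date row, a hedged sketch assuming Jest's modern fake timers:

```javascript
// Freeze the clock so a createdAt timestamp is deterministic.
jest.useFakeTimers();
jest.setSystemTime(new Date('2024-01-01T00:00:00Z'));

const user = createUser({ email: 'test@example.com', name: 'Test User' });
expect(user.createdAt).toBe('2024-01-01T00:00:00.000Z');
```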

**Test Data Strategy:**

```yaml
test_data_approaches:
  fixed_data:
    when: 'Simple, predictable scenarios'
    example: "const userId = 'test-user-123'"

  builder_pattern:
    when: 'Complex objects with variations'
    example: "new UserBuilder().withEmail('test@example.com').build()"

  avoid_random:
    why: 'Makes tests non-deterministic and hard to debug'
    instead: 'Use meaningful, fixed test data'
```
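
A minimal sketch of the builder referenced above (UserBuilder is illustrative, not an existing helper):

```javascript
// Chainable builder with safe defaults, overridden per test as needed.
class UserBuilder {
  constructor() {
    this.user = { email: 'default@example.com', name: 'Default User' };
  }
  withEmail(email) {
    this.user.email = email;
    return this; // enables chaining
  }
  build() {
    return { ...this.user }; // fresh copy per build
  }
}
```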

### 5. Validate Coverage

Ensure:

- Every AC has at least one test
- No duplicate coverage across levels
- Critical paths are covered at multiple levels
- Risk mitigations are addressed

## Outputs

### Output 1: Test Design Document

**Save to:** `qa.qaLocation/assessments/{epic}.{story}-test-design-{YYYYMMDD}.md`

```markdown
# Test Design: Story {epic}.{story}

Date: {date}
Designer: Quinn (Test Architect)

## Test Strategy Overview

- Total test scenarios: X
- Unit tests: Y (A%)
- Integration tests: Z (B%)
- E2E tests: W (C%)
- Priority distribution: P0: X, P1: Y, P2: Z

## Test Scenarios by Acceptance Criteria

### AC1: {description}

#### Scenarios

| ID           | Level       | Priority | Test                      | Justification            |
| ------------ | ----------- | -------- | ------------------------- | ------------------------ |
| 1.3-UNIT-001 | Unit        | P0       | Validate input format     | Pure validation logic    |
| 1.3-INT-001  | Integration | P0       | Service processes request | Multi-component flow     |
| 1.3-E2E-001  | E2E         | P1       | User completes journey    | Critical path validation |

[Continue for all ACs...]

## Risk Coverage

[Map test scenarios to identified risks if a risk profile exists]

## Recommended Execution Order

1. P0 Unit tests (fail fast)
2. P0 Integration tests
3. P0 E2E tests
4. P1 tests in order
5. P2+ as time permits
```

### Output 2: Gate YAML Block

Generate for inclusion in the quality gate:

```yaml
test_design:
  scenarios_total: X
  by_level:
    unit: Y
    integration: Z
    e2e: W
  by_priority:
    p0: A
    p1: B
    p2: C
  coverage_gaps: [] # List any ACs without tests
```

### Output 3: Trace References

Print for use by the trace-requirements task:

```text
Test design matrix: qa.qaLocation/assessments/{epic}.{story}-test-design-{YYYYMMDD}.md
P0 tests identified: {count}
```

## Quality Checklist

Before finalizing, verify:

- [ ] Every AC has test coverage
- [ ] Test levels are appropriate (not over-testing)
- [ ] No duplicate coverage across levels
- [ ] Priorities align with business risk
- [ ] Test IDs follow naming convention
- [ ] Scenarios are atomic and independent

## Key Principles

- **Shift left**: Prefer unit over integration, integration over E2E
- **Risk-based**: Focus on what could go wrong
- **Efficient coverage**: Test once at the right level
- **Maintainability**: Consider long-term test maintenance
- **Fast feedback**: Quick tests run first

@@ -0,0 +1,258 @@

<!-- Powered by BMAD™ Core -->

# write-failing-tests

Write failing tests first to drive development using the Test-Driven Development (TDD) Red phase.

## Purpose

Generate failing unit tests that describe expected behavior before implementation. This is the "Red" phase of TDD where we define what success looks like through tests that initially fail.

## Prerequisites

- Story status must be "InProgress" or "Ready"
- TDD must be enabled in core-config.yaml (`tdd.enabled: true`)
- Acceptance criteria are clearly defined
- Test runner is configured or auto-detected

## Inputs

```yaml
required:
  - story_id: '{epic}.{story}' # e.g., "1.3"
  - story_path: '{devStoryLocation}/{epic}.{story}.*.md' # Path from core-config.yaml
  - story_title: '{title}' # If missing, derive from story file H1
  - story_slug: '{slug}' # If missing, derive from title (lowercase, hyphenated)
```

## Process

### 1. Analyze Story Requirements

Read the story file and extract:

- Acceptance criteria (AC) that define success
- Business rules and constraints
- Edge cases and error conditions
- Data inputs and expected outputs

### 2. Design Test Strategy

For each acceptance criterion:

- Identify the smallest testable unit
- Choose the appropriate test type (unit/integration/e2e)
- Plan test data and scenarios
- Consider the mocking strategy for external dependencies

### 3. Detect/Configure Test Runner

```yaml
detection_order:
  - Check project files for known patterns
    - JavaScript: package.json dependencies (jest, vitest, mocha)
    - Python: requirements files (pytest, unittest)
    - Java: pom.xml, build.gradle (junit, testng)
    - Go: go.mod (built-in testing)
    - .NET: *.csproj (xunit, nunit, mstest)
  - Fallback: tdd.test_runner.custom_command from config
```
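
A hedged sketch of the detection idea for the JavaScript branch only (the function name and search order are illustrative):

```javascript
// Look for a known runner among the dependencies declared in package.json.
const fs = require('fs');

function detectJsTestRunner() {
  if (!fs.existsSync('package.json')) return null;
  const pkg = JSON.parse(fs.readFileSync('package.json', 'utf8'));
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  return ['jest', 'vitest', 'mocha'].find((runner) => runner in deps) || null;
}
```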

### 4. Write Failing Tests

**Test Quality Guidelines:**

- **Deterministic**: No random values, dates, or network calls
- **Isolated**: Each test is independent and can run alone
- **Fast**: Unit tests should run in milliseconds
- **Readable**: Test names describe the behavior being tested
- **Focused**: One assertion per test when possible

**Mocking Strategy:**

```yaml
mock_vs_fake_vs_stub:
  mock: 'Verify interactions (calls, parameters)'
  fake: 'Simplified working implementation'
  stub: 'Predefined responses to calls'

use_mocks_for:
  - External APIs and web services
  - Database connections
  - File system operations
  - Time-dependent operations
  - Random number generation
```
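
The three doubles are easy to confuse in the abstract; here is a hedged sketch of each for the same service (names illustrative, Jest-style mock):

```javascript
// Mock: records calls so the test can verify interactions.
const mailerMock = { send: jest.fn() };

// Stub: returns a canned value; interactions are not verified.
const clockStub = { now: () => 1704067200000 };

// Fake: a simplified but genuinely working substitute.
const fakeDb = {
  rows: [],
  insert(row) {
    this.rows.push(row);
    return row;
  },
};
```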

**Test Structure (Given-When-Then):**

```typescript
// Example structure
describe('UserService', () => {
  it('should create user with valid email', async () => {
    // Given (Arrange)
    const userData = { email: 'test@example.com', name: 'Test User' };
    const mockDb = jest.fn().mockResolvedValue({ id: 1, ...userData });
    const userService = new UserService({ insert: mockDb }); // wire up the mock (constructor shape assumed)

    // When (Act)
    const result = await userService.create(userData);

    // Then (Assert)
    expect(result).toEqual({ id: 1, ...userData });
    expect(mockDb).toHaveBeenCalledWith(userData);
  });
});
```

### 5. Create Test Files

**Naming Conventions:**

```yaml
patterns:
  javascript: '{module}.test.js' or '{module}.spec.js'
  python: 'test_{module}.py' or '{module}_test.py'
  java: '{Module}Test.java'
  go: '{module}_test.go'
  csharp: '{Module}Tests.cs'
```

**File Organization:**

```
tests/
├── unit/          # Fast, isolated tests
├── integration/   # Component interaction tests
└── e2e/           # End-to-end user journey tests
```

### 6. Verify Tests Fail

**Critical Step:** Run tests to ensure they fail for the RIGHT reason:

- ✅ Fail because functionality is not implemented
- ❌ Fail because of syntax errors, import issues, or test bugs

**Test Run Command:** Use the auto-detected or configured test runner

### 7. Update Story Metadata

Update the story file frontmatter:

```yaml
tdd:
  status: red
  cycle: 1
  tests:
    - id: 'UC-001'
      name: 'should create user with valid email'
      type: unit
      status: failing
      file_path: 'tests/unit/user-service.test.js'
    - id: 'UC-002'
      name: 'should reject user with invalid email'
      type: unit
      status: failing
      file_path: 'tests/unit/user-service.test.js'
```

## Output Requirements

### 1. Test Files Created

Generate test files with:

- Clear, descriptive test names
- Proper setup/teardown
- Mock configurations
- Expected assertions

### 2. Test Execution Report

```bash
Running tests...
❌ UserService > should create user with valid email
❌ UserService > should reject user with invalid email

2 failing, 0 passing
```

### 3. Story File Updates

Append to TDD section:

```markdown
## TDD Progress

### Red Phase - Cycle 1

**Date:** {current_date}
**Agent:** Quinn (QA Agent)

**Tests Written:**

- UC-001: should create user with valid email (FAILING ✅)
- UC-002: should reject user with invalid email (FAILING ✅)

**Test Files:**

- tests/unit/user-service.test.js

**Next Step:** Dev Agent to implement minimal code to make tests pass
```

## Constraints & Best Practices

### Constraints

- **Minimal Scope:** Write tests for the smallest possible feature slice
- **No Implementation:** Do not implement the actual functionality
- **External Dependencies:** Always mock external services, databases, APIs
- **Deterministic Data:** Use fixed test data, mock time/random functions
- **Fast Execution:** Unit tests must complete quickly (< 100ms each)

### Anti-Patterns to Avoid

- Testing implementation details instead of behavior
- Writing tests after the code is written
- Complex test setup that obscures intent
- Tests that depend on external systems
- Overly broad tests covering multiple behaviors

## Error Handling

**If tests pass unexpectedly:**

- Implementation may already exist
- Test may be testing the wrong behavior
- HALT and clarify requirements

**If tests fail for the wrong reasons:**

- Fix syntax/import errors
- Verify mocks are properly configured
- Check the test runner configuration

**If no test runner is detected:**

- Fall back to tdd.test_runner.custom_command
- If not configured, prompt the user for a test command
- Document the setup in story notes

## Completion Criteria

- [ ] All planned tests are written and failing
- [ ] Tests fail for correct reasons (missing implementation)
- [ ] Story TDD metadata updated with test list
- [ ] Test files follow project conventions
- [ ] All external dependencies are properly mocked
- [ ] Tests run deterministically and quickly
- [ ] Ready to hand off to Dev Agent for implementation

## Key Principles

- **Fail First:** Tests must fail before any implementation
- **Describe Behavior:** Tests define what "done" looks like
- **Start Small:** Begin with simplest happy path scenario
- **Isolate Dependencies:** External systems should be mocked
- **Fast Feedback:** Tests should run quickly to enable rapid iteration

@@ -0,0 +1,171 @@

<!-- Powered by BMAD™ Core -->

# Story {epic}.{story}: {title}

## Story Metadata

```yaml
story:
  epic: '{epic}'
  number: '{story}'
  title: '{title}'
  status: 'draft'
  priority: 'medium'

# TDD Configuration (only when tdd.enabled=true)
tdd:
  status: 'red' # red|green|refactor|done
  cycle: 1
  coverage_target: 80.0
  tests: [] # Will be populated by QA agent during Red phase
```

## Story Description

**As a** {user_type}
**I want** {capability}
**So that** {business_value}

### Context

{Provide context about why this story is needed, what problem it solves, and how it fits into the larger epic/project}

## Acceptance Criteria

```gherkin
Feature: {Feature name}

  Scenario: {Primary happy path}
    Given {initial conditions}
    When {action performed}
    Then {expected outcome}

  Scenario: {Error condition 1}
    Given {error setup}
    When {action that causes error}
    Then {expected error handling}

  Scenario: {Edge case}
    Given {edge case setup}
    When {edge case action}
    Then {edge case outcome}
```

## Technical Requirements

### Functional Requirements

- {Requirement 1}
- {Requirement 2}
- {Requirement 3}

### Non-Functional Requirements

- **Performance:** {Response time, throughput requirements}
- **Security:** {Authentication, authorization, data protection}
- **Reliability:** {Error handling, recovery requirements}
- **Maintainability:** {Code quality, documentation standards}

## TDD Test Plan (QA Agent Responsibility)

### Test Strategy

- **Primary Test Type:** {unit|integration|e2e}
- **Mocking Approach:** {mock external services, databases, etc.}
- **Test Data:** {how test data will be managed}

### Planned Test Scenarios

| ID     | Scenario           | Type        | Priority | AC Reference |
| ------ | ------------------ | ----------- | -------- | ------------ |
| TC-001 | {test description} | unit        | P0       | AC1          |
| TC-002 | {test description} | unit        | P0       | AC2          |
| TC-003 | {test description} | integration | P1       | AC3          |

_(This section will be populated by QA agent during test planning)_

## TDD Progress

### Current Phase: {RED|GREEN|REFACTOR|DONE}

**Cycle:** {cycle_number}
**Last Updated:** {date}

_(TDD progress will be tracked here through Red-Green-Refactor cycles)_

---

## Implementation Tasks (Dev Agent)

### Primary Tasks

- [ ] {Main implementation task 1}
- [ ] {Main implementation task 2}
- [ ] {Main implementation task 3}

### Subtasks

- [ ] {Detailed subtask}
- [ ] {Another subtask}

## Definition of Done

### TDD-Specific DoD

- [ ] Tests written first (Red phase completed)
- [ ] All tests passing (Green phase completed)
- [ ] Code refactored for quality (Refactor phase completed)
- [ ] Test coverage meets target ({coverage_target}%)
- [ ] All external dependencies properly mocked
- [ ] No features implemented without corresponding tests

### General DoD

- [ ] All acceptance criteria met
- [ ] Code follows project standards
- [ ] Documentation updated
- [ ] Ready for review

## Dev Agent Record

### Implementation Notes

_(Dev agent will document implementation decisions here)_

### TDD Cycle Log

_(Automatic tracking of Red-Green-Refactor progression)_

**Cycle 1:**

- Red Phase: {date} - {test count} failing tests written
- Green Phase: {date} - Implementation completed, all tests pass
- Refactor Phase: {date} - {refactoring summary}

### File List

_(Dev agent will list all files created/modified)_

- {file1}
- {file2}

### Test Execution Log

```bash
# Test runs will be logged here during development
```

## QA Results

_(QA agent will populate this during review)_

## Change Log

- **{date}**: Story created from TDD template
- **{date}**: {change description}

---

**TDD Status:** 🔴 RED | ⚫ Not Started
**Agent Assigned:** {agent_name}
**Estimated Effort:** {hours} hours

@@ -0,0 +1,351 @@

# TDD-Enhanced CI/CD Workflow Template for BMAD Framework
# This template shows how to integrate TDD validation into CI/CD pipelines

name: TDD-Enhanced CI/CD

on:
  push:
    branches: [main, develop, 'feature/tdd-*']
  pull_request:
    branches: [main, develop]

env:
  # TDD Configuration
  TDD_ENABLED: true
  TDD_ALLOW_RED_PHASE_FAILURES: true
  TDD_COVERAGE_THRESHOLD: 80

jobs:
  # Detect TDD phase and validate changes
  tdd-validation:
    name: TDD Phase Validation
    runs-on: ubuntu-latest
    outputs:
      tdd-phase: ${{ steps.detect-phase.outputs.phase }}
      tdd-enabled: ${{ steps.detect-phase.outputs.enabled }}
      should-run-tests: ${{ steps.detect-phase.outputs.should-run-tests }}

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Need full history for TDD guard

      - name: Detect TDD phase
        id: detect-phase
        run: |
          # Check if TDD is enabled in bmad-core/core-config.yaml
          if grep -q "enabled: true" bmad-core/core-config.yaml 2>/dev/null; then
            echo "enabled=true" >> $GITHUB_OUTPUT
            echo "should-run-tests=true" >> $GITHUB_OUTPUT
          else
            echo "enabled=false" >> $GITHUB_OUTPUT
            echo "should-run-tests=true" >> $GITHUB_OUTPUT # Run tests anyway
            exit 0
          fi

          # Detect TDD phase from commit messages
          COMMITS=$(git log --oneline -10 ${{ github.event.before }}..${{ github.sha }} || git log --oneline -10)

          if echo "$COMMITS" | grep -qi "\[RED\]"; then
            echo "phase=red" >> $GITHUB_OUTPUT
          elif echo "$COMMITS" | grep -qi "\[GREEN\]"; then
            echo "phase=green" >> $GITHUB_OUTPUT
          elif echo "$COMMITS" | grep -qi "\[REFACTOR\]"; then
            echo "phase=refactor" >> $GITHUB_OUTPUT
          else
            # Default phase detection based on branch
            if [[ "${{ github.ref }}" =~ tdd ]]; then
              echo "phase=green" >> $GITHUB_OUTPUT
            else
              echo "phase=unknown" >> $GITHUB_OUTPUT
            fi
          fi

      - name: Run TDD Guard
        if: steps.detect-phase.outputs.enabled == 'true'
        run: |
          echo "🧪 Running TDD Guard validation..."

          # Make TDD guard executable
          chmod +x bmad-core/scripts/tdd-guard.sh

          # Run TDD guard with appropriate settings
          if [[ "${{ steps.detect-phase.outputs.phase }}" == "red" ]]; then
            # Allow red phase failures in CI
            ./bmad-core/scripts/tdd-guard.sh --ci --phase red --verbose || {
              echo "⚠️ Red phase violations detected but allowed in CI"
              exit 0
            }
          else
            # Strict validation for green/refactor phases
            ./bmad-core/scripts/tdd-guard.sh --phase "${{ steps.detect-phase.outputs.phase }}" --verbose
          fi

  # Test execution with TDD awareness
  test:
    name: Run Tests
    runs-on: ubuntu-latest
    needs: tdd-validation
    if: needs.tdd-validation.outputs.should-run-tests == 'true'
    outputs:
      # Expose the run-tests step output so downstream jobs can read it
      test-result: ${{ steps.run-tests.outputs.test-result }}

    strategy:
      matrix:
        # Add multiple language support as needed
        language: [javascript, python]
        include:
          - language: javascript
            test-command: npm test
            coverage-command: npm run test:coverage
            setup: |
              node-version: 18
              cache: npm
          - language: python
            test-command: pytest
            coverage-command: pytest --cov=.
            setup: |
              python-version: '3.9'
              cache: pip

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      # Language-specific setup
      - name: Setup Node.js
        if: matrix.language == 'javascript'
        uses: actions/setup-node@v4
        with:
          node-version: 18
          cache: 'npm'

      - name: Setup Python
        if: matrix.language == 'python'
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'
          cache: 'pip'

      # Install dependencies
      - name: Install JavaScript dependencies
        if: matrix.language == 'javascript'
        run: |
          if [ -f package.json ]; then
            npm ci
          fi

      - name: Install Python dependencies
        if: matrix.language == 'python'
        run: |
          if [ -f requirements.txt ]; then
            pip install -r requirements.txt
          fi
          if [ -f requirements-dev.txt ]; then
            pip install -r requirements-dev.txt
          fi

      # Run tests with TDD phase awareness
      - name: Run tests
        id: run-tests
        run: |
          echo "🧪 Running tests for TDD phase: ${{ needs.tdd-validation.outputs.tdd-phase }}"

          case "${{ needs.tdd-validation.outputs.tdd-phase }}" in
            "red")
              echo "RED phase: Expecting some tests to fail"
              ${{ matrix.test-command }} || {
                echo "⚠️ Tests failed as expected in RED phase"
                if [[ "$TDD_ALLOW_RED_PHASE_FAILURES" == "true" ]]; then
                  echo "test-result=red-expected-fail" >> $GITHUB_OUTPUT
                  exit 0
                else
                  echo "test-result=fail" >> $GITHUB_OUTPUT
                  exit 1
                fi
              }
              echo "test-result=pass" >> $GITHUB_OUTPUT
              ;;
            "green"|"refactor")
              echo "GREEN/REFACTOR phase: All tests should pass"
              ${{ matrix.test-command }}
              echo "test-result=pass" >> $GITHUB_OUTPUT
              ;;
            *)
              echo "Unknown phase: Running standard test suite"
              ${{ matrix.test-command }}
              echo "test-result=pass" >> $GITHUB_OUTPUT
              ;;
          esac

      # Generate coverage report
      - name: Generate coverage report
        if: env.TDD_COVERAGE_THRESHOLD > 0
        run: |
          echo "📊 Generating coverage report..."
          ${{ matrix.coverage-command }} || echo "Coverage command failed"

      # Upload coverage reports
      - name: Upload coverage to Codecov
        if: matrix.language == 'javascript' || matrix.language == 'python'
        uses: codecov/codecov-action@v3
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          file: ./coverage.xml
          flags: ${{ matrix.language }}
          name: ${{ matrix.language }}-coverage
          fail_ci_if_error: false

  # Quality gates specific to TDD phase
  quality-gates:
    name: Quality Gates
    runs-on: ubuntu-latest
    needs: [tdd-validation, test]
    if: always()

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: TDD Phase Quality Gate
        run: |
          echo "🚦 Evaluating quality gates for TDD phase: ${{ needs.tdd-validation.outputs.tdd-phase }}"

          case "${{ needs.tdd-validation.outputs.tdd-phase }}" in
            "red")
              echo "RED phase quality gate:"
              echo "✅ Tests written first"
              echo "✅ Implementation minimal or non-existent"
              if [[ "${{ needs.test.result }}" == "success" ]] || [[ "${{ needs.test.outputs.test-result }}" == "red-expected-fail" ]]; then
                echo "✅ RED phase gate: PASS"
              else
                echo "❌ RED phase gate: FAIL"
                exit 1
              fi
              ;;
            "green")
              echo "GREEN phase quality gate:"
              echo "✅ All tests passing"
              echo "✅ Minimal implementation"
              echo "✅ No feature creep"
              if [[ "${{ needs.test.result }}" == "success" ]]; then
                echo "✅ GREEN phase gate: PASS"
              else
                echo "❌ GREEN phase gate: FAIL"
                exit 1
              fi
              ;;
            "refactor")
              echo "REFACTOR phase quality gate:"
              echo "✅ Tests remain green"
              echo "✅ Code quality improved"
              echo "✅ Behavior preserved"
              if [[ "${{ needs.test.result }}" == "success" ]]; then
                echo "✅ REFACTOR phase gate: PASS"
              else
                echo "❌ REFACTOR phase gate: FAIL"
                exit 1
              fi
              ;;
            *)
              echo "Standard quality gate:"
              if [[ "${{ needs.test.result }}" == "success" ]]; then
                echo "✅ Standard gate: PASS"
              else
                echo "❌ Standard gate: FAIL"
                exit 1
              fi
              ;;
          esac

      - name: Generate TDD Report
        if: needs.tdd-validation.outputs.tdd-enabled == 'true'
        run: |
          echo "# TDD Pipeline Report" > tdd-report.md
          echo "" >> tdd-report.md
          echo "**TDD Phase:** ${{ needs.tdd-validation.outputs.tdd-phase }}" >> tdd-report.md
          echo "**Test Result:** ${{ needs.test.outputs.test-result || 'unknown' }}" >> tdd-report.md
          echo "**Quality Gate:** $([ "${{ job.status }}" == "success" ] && echo "PASS" || echo "FAIL")" >> tdd-report.md
          echo "" >> tdd-report.md
          echo "## Phase-Specific Results" >> tdd-report.md

          case "${{ needs.tdd-validation.outputs.tdd-phase }}" in
            "red")
              echo "- ✅ Failing tests written first" >> tdd-report.md
              echo "- ✅ Implementation postponed until GREEN phase" >> tdd-report.md
              ;;
            "green")
              echo "- ✅ Tests now passing" >> tdd-report.md
              echo "- ✅ Minimal implementation completed" >> tdd-report.md
              ;;
            "refactor")
              echo "- ✅ Code quality improved" >> tdd-report.md
              echo "- ✅ All tests remain green" >> tdd-report.md
              ;;
          esac

      - name: Comment TDD Report on PR
        if: github.event_name == 'pull_request' && needs.tdd-validation.outputs.tdd-enabled == 'true'
        uses: actions/github-script@v6
        with:
          script: |
            const fs = require('fs');

            if (fs.existsSync('tdd-report.md')) {
              const report = fs.readFileSync('tdd-report.md', 'utf8');

              github.rest.issues.createComment({
                issue_number: context.issue.number,
                owner: context.repo.owner,
                repo: context.repo.repo,
                body: `## 🧪 TDD Pipeline Results\n\n${report}`
              });
            }

  # Deploy only after successful TDD validation
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    needs: [tdd-validation, test, quality-gates]
    if: github.ref == 'refs/heads/main' && success()

    steps:
      - name: Deploy Application
        run: |
          echo "🚀 Deploying application after successful TDD validation"
          echo "TDD Phase: ${{ needs.tdd-validation.outputs.tdd-phase }}"

          # Add your deployment steps here
          # Only deploy if all TDD phases pass validation

          if [[ "${{ needs.tdd-validation.outputs.tdd-phase }}" == "green" ]] || [[ "${{ needs.tdd-validation.outputs.tdd-phase }}" == "refactor" ]]; then
            echo "✅ Safe to deploy: Implementation complete and tested"
          else
            echo "⚠️ Deployment skipped: Not in a stable TDD phase"
            exit 0
          fi

  # Additional workflow for TDD metrics collection
  tdd-metrics:
    name: TDD Metrics
    runs-on: ubuntu-latest
    needs: [tdd-validation, test, quality-gates]
    if: always() && needs.tdd-validation.outputs.tdd-enabled == 'true'

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Needed so git log can find [RED] commits

      - name: Collect TDD Metrics
        run: |
          echo "📊 Collecting TDD metrics..."

          # Calculate TDD cycle time (example)
          CYCLE_START=$(git log --grep="\[RED\]" --format="%ct" -1)
          CYCLE_START=${CYCLE_START:-$(date +%s)} # fall back to now if no [RED] commit exists
          CYCLE_END=$(date +%s)
          CYCLE_TIME=$(( (CYCLE_END - CYCLE_START) / 60 )) # minutes

          echo "TDD Cycle Metrics:"
          echo "- Phase: ${{ needs.tdd-validation.outputs.tdd-phase }}"
          echo "- Cycle Time: ${CYCLE_TIME} minutes"
          echo "- Test Status: ${{ needs.test.outputs.test-result }}"
          echo "- Quality Gate: $([ "${{ needs.quality-gates.result }}" == "success" ] && echo "PASS" || echo "FAIL")"

          # Store metrics (example - adapt to your metrics system)
          echo "Metrics would be stored in your preferred system (Grafana, etc.)"

@@ -0,0 +1,188 @@

<!-- Powered by BMAD™ Core -->

# TDD Story Definition of Done Checklist

## Instructions for Agents

This checklist ensures TDD stories meet quality standards across all Red-Green-Refactor cycles. Both QA and Dev agents should validate completion before marking a story as Done.

[[LLM: TDD DOD VALIDATION INSTRUCTIONS

This is a specialized DoD checklist for Test-Driven Development stories. It extends the standard DoD with TDD-specific quality gates.

EXECUTION APPROACH:

1. Verify TDD cycle progression (Red → Green → Refactor → Done)
2. Validate test-first approach was followed
3. Ensure proper test isolation and determinism
4. Check code quality improvements from refactoring
5. Confirm coverage targets are met

CRITICAL: Never mark a TDD story as Done without completing all TDD phases.]]

## TDD Cycle Validation

### Red Phase Completion

[[LLM: Verify tests were written BEFORE implementation]]

- [ ] **Tests written first:** All tests were created before any implementation code
- [ ] **Failing correctly:** Tests fail for the right reasons (missing functionality, not bugs)
- [ ] **Proper test structure:** Tests follow Given-When-Then or Arrange-Act-Assert patterns
- [ ] **Deterministic tests:** No random values, network calls, or time dependencies
- [ ] **External dependencies mocked:** All external services, databases, APIs properly mocked
- [ ] **Test naming:** Clear, descriptive test names that express intent
- [ ] **Story metadata updated:** TDD status set to 'red' and test list populated

### Green Phase Completion

[[LLM: Ensure minimal implementation that makes tests pass]]

- [ ] **All tests passing:** 100% of tests pass consistently
- [ ] **Minimal implementation:** Only code necessary to make tests pass was written
- [ ] **No feature creep:** No functionality added without corresponding failing tests
- [ ] **Test-code traceability:** Implementation clearly addresses specific test requirements
- [ ] **Regression protection:** All previously passing tests remain green
- [ ] **Story metadata updated:** TDD status set to 'green' and test results documented

### Refactor Phase Completion

[[LLM: Verify code quality improvements while maintaining green tests]]

- [ ] **Tests remain green:** All tests continue to pass after refactoring
- [ ] **Code quality improved:** Duplication eliminated, naming improved, structure clarified
- [ ] **Design enhanced:** Better separation of concerns, cleaner interfaces
- [ ] **Technical debt addressed:** Known code smells identified and resolved
- [ ] **Commit discipline:** Small, incremental commits with green tests after each
- [ ] **Story metadata updated:** Refactoring notes and improvements documented

## Test Quality Standards

### Test Implementation Quality

[[LLM: Ensure tests are maintainable and reliable]]

- [ ] **Fast execution:** Unit tests complete in <100ms each
- [ ] **Isolated tests:** Each test can run independently in any order
- [ ] **Single responsibility:** Each test validates one specific behavior
- [ ] **Clear assertions:** Test failures provide meaningful error messages
- [ ] **Appropriate test types:** Right mix of unit/integration/e2e tests
- [ ] **Mock strategy:** Appropriate use of mocks vs fakes vs stubs
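
As a hedged illustration of the isolation item (Jest-style; createFakeDb is illustrative), fresh state per test beats shared module state:

```javascript
// Each test gets its own service instance, so execution order cannot matter.
let service;

beforeEach(() => {
  service = new UserService(createFakeDb()); // no leftover data between tests
});
```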

### Coverage and Completeness

[[LLM: Validate comprehensive test coverage]]

- [ ] **Coverage target met:** Code coverage meets story's target percentage
- [ ] **Acceptance criteria covered:** All ACs have corresponding tests
- [ ] **Edge cases tested:** Boundary conditions and error scenarios included
- [ ] **Happy path validated:** Primary success scenarios thoroughly tested
- [ ] **Error handling tested:** Exception paths and error recovery validated

## Implementation Quality

### Code Standards Compliance

[[LLM: Ensure production-ready code quality]]

- [ ] **Coding standards followed:** Code adheres to project style guidelines
- [ ] **Architecture alignment:** Implementation follows established patterns
- [ ] **Security practices:** Input validation, error handling, no hardcoded secrets
- [ ] **Performance considerations:** No obvious performance bottlenecks introduced
- [ ] **Documentation updated:** Code comments and documentation reflect changes

### File Organization and Management

[[LLM: Verify proper project structure]]

- [ ] **Test file organization:** Tests follow project's testing folder structure
- [ ] **Naming conventions:** Files and functions follow established patterns
- [ ] **Dependencies managed:** New dependencies properly declared and justified
- [ ] **Import/export clarity:** Clear module interfaces and dependencies
- [ ] **File list accuracy:** All created/modified files documented in story

## TDD Process Adherence

### Methodology Compliance

[[LLM: Confirm true TDD practice was followed]]

- [ ] **Test-first discipline:** No implementation code written before tests
- [ ] **Minimal cycles:** Small Red-Green-Refactor iterations maintained
- [ ] **Refactoring safety:** Only refactored with green test coverage
- [ ] **Requirements traceability:** Clear mapping from tests to acceptance criteria
- [ ] **Collaboration evidence:** QA and Dev agent coordination documented

### Documentation and Traceability

[[LLM: Ensure proper tracking and communication]]

- [ ] **TDD progress tracked:** Story shows progression through all TDD phases
- [ ] **Test execution logged:** Evidence of test runs and results captured
- [ ] **Refactoring documented:** Changes made during refactor phase explained
- [ ] **Agent collaboration:** Clear handoffs between QA (Red) and Dev (Green/Refactor)
- [ ] **Story metadata complete:** All TDD fields properly populated

## Integration and Deployment Readiness

### Build and Deployment

[[LLM: Ensure story integrates properly with project]]

- [ ] **Project builds successfully:** Code compiles without errors or warnings
- [ ] **All tests pass in CI:** Automated test suite runs successfully
- [ ] **No breaking changes:** Existing functionality remains intact
- [ ] **Environment compatibility:** Code works across development environments
- [ ] **Configuration managed:** Any new config values properly documented

### Review Readiness

[[LLM: Story is ready for peer review]]

- [ ] **Complete implementation:** All acceptance criteria fully implemented
- [ ] **Clean commit history:** Clear, logical progression of changes
- [ ] **Review artifacts:** All necessary files and documentation available
- [ ] **No temporary code:** Debug code, TODOs, and temporary hacks removed
- [ ] **Quality gates passed:** All automated quality checks successful

## Final TDD Validation

### Holistic Assessment

[[LLM: Overall TDD process and outcome validation]]

- [ ] **TDD value delivered:** Process improved code design and quality
- [ ] **Test suite value:** Tests provide reliable safety net for changes
- [ ] **Knowledge captured:** Future developers can understand and maintain code
- [ ] **Standards elevated:** Code quality meets or exceeds project standards
- [ ] **Learning documented:** Any insights or patterns discovered are captured

### Story Completion Criteria

[[LLM: Final checklist before marking Done]]

- [ ] **Business value delivered:** Story provides promised user value
- [ ] **Technical debt managed:** Any remaining debt is documented and acceptable
- [ ] **Future maintainability:** Code can be easily modified and extended
- [ ] **Production readiness:** Code is ready for production deployment
- [ ] **TDD story complete:** All TDD-specific requirements fulfilled

## Completion Declaration

**Agent Validation:**

- [ ] **QA Agent confirms:** Test strategy executed successfully, coverage adequate
- [ ] **Dev Agent confirms:** Implementation complete, code quality satisfactory

**Final Status:**

- [ ] **Story marked Done:** All DoD criteria met and verified
- [ ] **TDD status complete:** Story TDD metadata shows 'done' status
- [ ] **Ready for review:** Story package complete for stakeholder review

---

**Validation Date:** {date}
**Validating Agents:** {qa_agent} & {dev_agent}
**TDD Cycles Completed:** {cycle_count}
**Final Test Status:** {passing_count} passing, {failing_count} failing


@@ -0,0 +1,381 @@

<!-- Powered by BMAD™ Core -->

# TDD Green Phase Prompts

Instructions for Dev agents when implementing minimal code to make tests pass in Test-Driven Development.

## Core Green Phase Mindset

**You are a Dev Agent in TDD GREEN PHASE. Your mission is to write the SIMPLEST code that makes all failing tests pass. Resist the urge to be clever - be minimal.**

### Primary Objectives

1. **Make it work first** - Focus on making tests pass, not perfect design
2. **Minimal implementation** - Write only what's needed for green tests
3. **No feature creep** - Don't add functionality without failing tests
4. **Fast feedback** - Run tests frequently during implementation
5. **Traceability** - Link implementation directly to test requirements

## Implementation Strategy

### The Three Rules of TDD (Uncle Bob)

1. **Don't write production code** unless it makes a failing test pass
2. **Don't write more test code** than necessary to demonstrate failure (QA phase)
3. **Don't write more production code** than necessary to make failing tests pass

### Green Phase Workflow

```yaml
workflow:
  1. read_failing_test: 'Understand what the test expects'
  2. write_minimal_code: 'Simplest implementation to pass'
  3. run_test: 'Verify this specific test passes'
  4. run_all_tests: 'Ensure no regressions'
  5. repeat: 'Move to next failing test'

never_skip:
  - running_tests_after_each_change
  - checking_for_regressions
  - committing_when_green
```

### Minimal Implementation Examples

**Example 1: Start with Hardcoded Values**

```javascript
// Test expects:
it('should return user with ID when creating user', () => {
  const result = userService.createUser({ name: 'Test' });
  expect(result).toEqual({ id: 1, name: 'Test' });
});

// Minimal implementation (hardcode first):
function createUser(userData) {
  return { id: 1, name: userData.name };
}

// Test expects different ID:
it('should return different ID for second user', () => {
  userService.createUser({ name: 'First' });
  const result = userService.createUser({ name: 'Second' });
  expect(result.id).toBe(2);
});

// Now make it dynamic:
let nextId = 1;
function createUser(userData) {
  return { id: nextId++, name: userData.name };
}
```

**Example 2: Validation Implementation**

```javascript
// Test expects validation error:
it('should throw error when email is invalid', () => {
  expect(() => createUser({ email: 'invalid' })).toThrow('Invalid email format');
});

// Minimal validation:
function createUser(userData) {
  if (!userData.email.includes('@')) {
    throw new Error('Invalid email format');
  }
  return { id: nextId++, ...userData };
}
```

## Avoiding Feature Creep

### What NOT to Add (Yet)

```javascript
// Don't add these without failing tests:

// ❌ Comprehensive validation
function createUser(data) {
  if (!data.email || !data.email.includes('@')) throw new Error('Invalid email');
  if (!data.name || data.name.trim().length === 0) throw new Error('Name required');
  if (data.age && (data.age < 0 || data.age > 150)) throw new Error('Invalid age');
  // ... only add validation that has failing tests
}

// ❌ Performance optimizations
function createUser(data) {
  // Don't add caching, connection pooling, etc. without tests
}

// ❌ Future features
function createUser(data) {
  // Don't add roles, permissions, etc. unless tests require it
}
```

### What TO Add

```javascript
// ✅ Only what tests require:
function createUser(data) {
  // Only validate what failing tests specify
  if (!data.email.includes('@')) {
    throw new Error('Invalid email format');
  }

  // Only return what tests expect
  return { id: generateId(), ...data };
}
```

## Test-Code Traceability

### Linking Implementation to Tests

```javascript
// Test ID: UC-001
it('should create user with valid email', () => {
  const result = createUser({ email: 'test@example.com', name: 'Test' });
  expect(result).toHaveProperty('id');
});

// Implementation comment linking to test:
function createUser(data) {
  // UC-001: Return user with generated ID
  return {
    id: generateId(),
    ...data,
  };
}
```

### Commit Messages with Test References

```bash
# Good commit messages:
git commit -m "GREEN: Implement user creation [UC-001, UC-002]"
git commit -m "GREEN: Add email validation for createUser [UC-003]"
git commit -m "GREEN: Handle edge case for empty name [UC-004]"

# Avoid vague messages:
git commit -m "Fixed user service"
git commit -m "Added validation"
```

## Handling Different Test Types

### Unit Tests - Pure Logic

```javascript
// Test: Calculate tax for purchase
it('should calculate 10% tax on purchase amount', () => {
  expect(calculateTax(100)).toBe(10);
});

// Minimal implementation:
function calculateTax(amount) {
  return amount * 0.1;
}
```

### Integration Tests - Component Interaction

```javascript
// Test: Service uses injected database
it('should save user to database when created', async () => {
  const mockDb = { save: jest.fn().mockResolvedValue({ id: 1 }) };
  const service = new UserService(mockDb);

  await service.createUser({ name: 'Test' });

  expect(mockDb.save).toHaveBeenCalledWith({ name: 'Test' });
});

// Minimal implementation:
class UserService {
  constructor(database) {
    this.db = database;
  }

  async createUser(userData) {
    return await this.db.save(userData);
  }
}
```

### Error Handling Tests

```javascript
// Test: Handle database connection failure
it('should throw service error when database is unavailable', async () => {
  const mockDb = { save: jest.fn().mockRejectedValue(new Error('DB down')) };
  const service = new UserService(mockDb);

  await expect(service.createUser({ name: 'Test' }))
    .rejects.toThrow('Service temporarily unavailable');
});

// Minimal error handling (method inside UserService):
async createUser(userData) {
  try {
    return await this.db.save(userData);
  } catch (error) {
    throw new Error('Service temporarily unavailable');
  }
}
```

## Fast Feedback Loop

### Test Execution Strategy

```bash
# Run single test file while implementing:
npm test -- user-service.test.js --watch
pytest tests/unit/test_user_service.py -v
go test ./services -run TestUserService

# Run full suite after each feature:
npm test
pytest
go test ./...
```

### IDE Integration

```yaml
recommended_setup:
  - test_runner_integration: 'Tests run on save'
  - live_feedback: 'Immediate pass/fail indicators'
  - coverage_display: 'Show which lines are tested'
  - failure_details: 'Quick access to error messages'
```

## Common Green Phase Mistakes

### Mistake: Over-Implementation

```javascript
// Wrong: Adding features without tests
function createUser(data) {
  // No test requires password hashing yet
  const hashedPassword = hashPassword(data.password);

  // No test requires audit logging yet
  auditLog.record('user_created', data);

  // Only implement what tests require
  return { id: generateId(), ...data };
}
```

### Mistake: Premature Abstraction

```javascript
// Wrong: Creating abstractions too early
class UserValidatorFactory {
  static createValidator(type) {
    // Complex factory pattern without tests requiring it
  }
}

// Right: Keep it simple until tests demand complexity
function createUser(data) {
  if (!data.email.includes('@')) {
    throw new Error('Invalid email');
  }
  return { id: generateId(), ...data };
}
```

### Mistake: Not Running Tests Frequently

```javascript
// Wrong: Writing lots of code before testing
function createUser(data) {
  // 20 lines of code without running tests
  // Many assumptions about what tests expect
}

// Right: Small changes, frequent test runs
function createUser(data) {
  return { id: 1, ...data }; // Run test - passes
}

// Then add the next failing test's requirement:
function createUser(data) {
  if (!data.email.includes('@')) throw new Error('Invalid email');
  return { id: 1, ...data }; // Run test - passes
}
```

## Quality Standards in Green Phase

### Acceptable Technical Debt

```javascript
// OK during Green phase (will fix in Refactor):
function createUser(data) {
  // Duplicated validation logic
  if (!data.email.includes('@')) throw new Error('Invalid email');
  if (!data.name || data.name.trim() === '') throw new Error('Name required');

  // Simple, naive ID generation even if it won't scale
  return { id: Math.floor(Math.random() * 1000000), ...data };
}
```

### Minimum Standards (Even in Green)

```javascript
// Always maintain:
function createUser(data) {
  // Clear variable names
  const userData = { ...data };
  const userId = generateId();

  // Proper error messages
  if (!userData.email.includes('@')) {
    throw new Error('Invalid email format');
  }

  // Return expected structure
  return { id: userId, ...userData };
}
```

## Green Phase Checklist

Before moving to Refactor phase, ensure:

- [ ] **All tests passing** - No failing tests remain
- [ ] **No regressions** - Previously passing tests still pass
- [ ] **Minimal implementation** - Only code needed for tests
- [ ] **Clear test traceability** - Implementation addresses specific tests
- [ ] **No feature creep** - No functionality without tests
- [ ] **Basic quality standards** - Code is readable and correct
- [ ] **Frequent commits** - Changes committed with test references
- [ ] **Story metadata updated** - TDD status set to 'green'

## Success Indicators

**You know you're succeeding in Green phase when:**

1. **All tests consistently pass**
2. **Implementation is obviously minimal**
3. **Each code block addresses specific test requirements**
4. **No functionality exists without corresponding tests**
5. **Tests run quickly and reliably**
6. **Code changes are small and focused**

**Green phase is complete when:**

- Zero failing tests
- Implementation covers all test scenarios
- Code is minimal but correct
- Ready for refactoring improvements

Remember: Green phase is about making it work, not making it perfect. Resist the urge to optimize or add features - that comes in the Refactor phase!

@@ -0,0 +1,299 @@

<!-- Powered by BMAD™ Core -->

# TDD Quality Gates Template

Quality gate criteria and checkpoints for Test-Driven Development workflows.

## Gate Structure

Each TDD phase has specific quality gates that must be met before progression to the next phase.

## Red Phase Gates

### Prerequisites for Red Phase Entry

- [ ] Story has clear acceptance criteria
- [ ] Test runner detected or configured
- [ ] Story status is 'ready' or 'inprogress'
- [ ] TDD enabled in core-config.yaml

### Red Phase Completion Gates

**PASS Criteria:**

- [ ] At least one test written and failing
- [ ] Tests fail for correct reasons (missing implementation, not syntax errors)
- [ ] All external dependencies properly mocked
- [ ] Test data is deterministic (no random values or current time)
- [ ] Test names clearly describe expected behavior
- [ ] Story TDD metadata updated (status='red', test list populated)
- [ ] Test files follow project naming conventions

**FAIL Criteria:**

- [ ] No tests written
- [ ] Tests pass unexpectedly (implementation may already exist)
- [ ] Tests fail due to syntax errors or configuration issues
- [ ] External dependencies not mocked (network calls, file system, etc.)
- [ ] Non-deterministic tests (random data, time-dependent)
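
To make the "right reason vs wrong reason" distinction concrete, here is a minimal Jest-style sketch (illustrative only; `createUser` is a hypothetical stub that has not been implemented yet):

```javascript
// Fails for the RIGHT reason: the assertion fails because the behavior
// is missing - createUser() exists only as an empty stub for now.
it('should return created user with ID', () => {
  const result = createUser({ name: 'Test' }); // stub returns undefined
  expect(result).toHaveProperty('id'); // fails: expected an object, got undefined
});

// Fails for the WRONG reason: the suite errors out before any assertion
// runs, e.g. a mistyped import path in the test file itself:
//   const { createUser } = require('../servces/user-service'); // typo
// Fix the test infrastructure first - this failure says nothing about
// the missing feature.
```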

**Gate Decision:**

```yaml
red_phase_gate:
  status: PASS|FAIL
  failing_tests_count: { number }
  tests_fail_correctly: true|false
  mocking_complete: true|false
  deterministic_tests: true|false
  metadata_updated: true|false
  ready_for_green_phase: true|false
```

## Green Phase Gates

### Prerequisites for Green Phase Entry

- [ ] Red phase gate passed
- [ ] Story tdd.status = 'red'
- [ ] Failing tests exist and documented
- [ ] Test runner confirmed working

### Green Phase Completion Gates

**PASS Criteria:**

- [ ] All previously failing tests now pass
- [ ] No new tests added during implementation
- [ ] Implementation is minimal (only what's needed for tests)
- [ ] No feature creep beyond test requirements
- [ ] All existing tests remain green (no regressions)
- [ ] Code follows basic quality standards
- [ ] Story TDD metadata updated (status='green')

**CONCERNS Criteria:**

- [ ] Implementation seems overly complex for test requirements
- [ ] Additional functionality added without corresponding tests
- [ ] Code quality significantly below project standards
- [ ] Performance implications not addressed

**FAIL Criteria:**

- [ ] Tests still failing after implementation attempt
- [ ] New regressions introduced (previously passing tests now fail)
- [ ] Implementation missing for some failing tests
- [ ] Significant feature creep detected

**Gate Decision:**

```yaml
green_phase_gate:
  status: PASS|CONCERNS|FAIL
  all_tests_passing: true|false
  no_regressions: true|false
  minimal_implementation: true|false
  feature_creep_detected: true|false
  code_quality_acceptable: true|false
  ready_for_refactor_phase: true|false
```

## Refactor Phase Gates

### Prerequisites for Refactor Phase Entry

- [ ] Green phase gate passed
- [ ] Story tdd.status = 'green'
- [ ] All tests consistently passing
- [ ] Code quality issues identified

### Refactor Phase Completion Gates

**PASS Criteria:**

- [ ] All tests remain green throughout refactoring
- [ ] Code quality measurably improved
- [ ] No behavior changes introduced
- [ ] Refactoring changes committed incrementally
- [ ] Technical debt reduced in story scope
- [ ] Story TDD metadata updated (status='refactor' or 'done')

**CONCERNS Criteria:**

- [ ] Some code smells remain unaddressed
- [ ] Refactoring introduced minor complexity
- [ ] Test execution time increased significantly
- [ ] Marginal quality improvements

**FAIL Criteria:**

- [ ] Tests broken by refactoring changes
- [ ] Behavior changed during refactoring
- [ ] Code quality degraded
- [ ] Large, risky refactoring attempts

**Gate Decision:**

```yaml
refactor_phase_gate:
  status: PASS|CONCERNS|FAIL
  tests_remain_green: true|false
  code_quality_improved: true|false
  behavior_preserved: true|false
  technical_debt_reduced: true|false
  safe_incremental_changes: true|false
  ready_for_completion: true|false
```

## Story Completion Gates

### TDD Story Completion Criteria

**Must Have:**

- [ ] All TDD phases completed (Red → Green → Refactor)
- [ ] Final test suite passes consistently
- [ ] Code quality meets project standards
- [ ] All acceptance criteria covered by tests
- [ ] TDD-specific DoD checklist completed

**Quality Metrics:**

- [ ] Test coverage meets story target
- [ ] No obvious code smells remain
- [ ] Test execution time reasonable (< 2x baseline)
- [ ] All TDD artifacts documented in story

**Documentation:**

- [ ] TDD cycle progression tracked in story
- [ ] Test-to-requirement traceability clear
- [ ] Refactoring decisions documented
- [ ] Lessons learned captured

## Gate Failure Recovery

### Red Phase Recovery

```yaml
red_phase_failures:
  no_failing_tests:
    action: 'Review acceptance criteria, create simpler test cases'
    escalation: 'Consult SM for requirement clarification'

  tests_pass_unexpectedly:
    action: 'Check if implementation already exists, adjust test scope'
    escalation: 'Review story scope with PO'

  mocking_issues:
    action: 'Review external dependencies, implement proper mocks'
    escalation: 'Consult with Dev agent on architecture'
```

### Green Phase Recovery

```yaml
green_phase_failures:
  tests_still_failing:
    action: 'Break down implementation into smaller steps'
    escalation: 'Review test expectations vs implementation approach'

  regressions_introduced:
    action: 'Revert changes, identify conflicting logic'
    escalation: 'Architectural review with team'

  feature_creep_detected:
    action: 'Remove features not covered by tests'
    escalation: 'Return to Red phase for additional tests'
```

### Refactor Phase Recovery

```yaml
refactor_phase_failures:
  tests_broken:
    action: 'Immediately revert breaking changes'
    escalation: 'Use smaller refactoring steps'

  behavior_changed:
    action: 'Revert and analyze where behavior diverged'
    escalation: 'Review refactoring approach with QA agent'

  quality_degraded:
    action: 'Revert changes, try different refactoring technique'
    escalation: 'Accept current code quality, document technical debt'
```

## Quality Metrics Dashboard

### Per-Phase Metrics

```yaml
metrics_tracking:
  red_phase:
    - failing_tests_count
    - test_creation_time
    - mocking_complexity

  green_phase:
    - implementation_time
    - lines_of_code_added
    - test_pass_rate

  refactor_phase:
    - code_quality_delta
    - test_execution_time_delta
    - refactoring_safety_score
```

### Story-Level Metrics

```yaml
story_metrics:
  total_tdd_cycle_time: '{hours}'
  cycles_completed: '{count}'
  test_to_code_ratio: '{percentage}'
  coverage_achieved: '{percentage}'
  quality_improvement_score: '{0-100}'
```
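
As an illustration of how these fields might be derived, here is a hedged sketch (not part of BMAD core; the function name and inputs are assumed) that computes two of the story-level values from raw counts collected during the cycle:

```javascript
// Illustrative only: one way to derive story-level metrics from raw counts.
function computeStoryMetrics({ testLines, codeLines, coveredLines, totalLines }) {
  return {
    test_to_code_ratio: `${Math.round((testLines / codeLines) * 100)}%`,
    coverage_achieved: `${Math.round((coveredLines / totalLines) * 100)}%`,
  };
}

// Example: 300 test lines against 200 production lines, 180 of 200 lines covered
console.log(computeStoryMetrics({ testLines: 300, codeLines: 200, coveredLines: 180, totalLines: 200 }));
// => { test_to_code_ratio: '150%', coverage_achieved: '90%' }
```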

## Integration with Standard Gates

### How TDD Gates Extend Standard QA Gates

- **Standard gates still apply** for final story review
- **TDD gates are additional checkpoints** during development
- **Phase-specific criteria** supplement overall quality assessment
- **Traceability maintained** between TDD progress and story completion

### Gate Reporting

```yaml
gate_report_template:
  story_id: '{epic}.{story}'
  tdd_enabled: true
  phases_completed: ['red', 'green', 'refactor']

  phase_gates:
    red:
      status: 'PASS'
      completed_date: '2025-01-01T10:00:00Z'
      criteria_met: 6/6

    green:
      status: 'PASS'
      completed_date: '2025-01-01T14:00:00Z'
      criteria_met: 7/7

    refactor:
      status: 'PASS'
      completed_date: '2025-01-01T16:00:00Z'
      criteria_met: 6/6

  final_assessment:
    overall_gate: 'PASS'
    quality_score: 92
    recommendations: []
```

This template ensures consistent quality standards across all TDD phases while maintaining compatibility with existing BMAD quality gates.

@@ -0,0 +1,320 @@

<!-- Powered by BMAD™ Core -->

# TDD Red Phase Prompts

Instructions for QA agents when writing failing tests first in Test-Driven Development.

## Core Red Phase Mindset

**You are a QA Agent in TDD RED PHASE. Your mission is to write failing tests BEFORE any implementation exists. These tests define what success looks like.**

### Primary Objectives

1. **Test First, Always:** Write tests before any production code
2. **Describe Behavior:** Tests should express user/system expectations
3. **Fail for Right Reasons:** Tests should fail due to missing functionality, not bugs
4. **Minimal Scope:** Start with the smallest possible feature slice
5. **External Isolation:** Mock all external dependencies

## Test Writing Guidelines

### Test Structure Template

```javascript
describe('{ComponentName}', () => {
  describe('{specific_behavior}', () => {
    it('should {expected_behavior} when {condition}', () => {
      // Given (Arrange) - Set up test conditions
      const input = createTestInput();
      const mockDependency = createMock();

      // When (Act) - Perform the action
      const result = systemUnderTest.performAction(input);

      // Then (Assert) - Verify expectations
      expect(result).toEqual(expectedOutput);
      expect(mockDependency).toHaveBeenCalledWith(expectedArgs);
    });
  });
});
```

### Test Naming Conventions

**Pattern:** `should {expected_behavior} when {condition}`

**Good Examples:**

- `should return user profile when valid ID provided`
- `should throw validation error when email is invalid`
- `should create empty cart when user first visits`

**Avoid:**

- `testUserCreation` (not descriptive)
- `should work correctly` (too vague)
- `test_valid_input` (focuses on input, not behavior)

## Mocking Strategy

### When to Mock

```yaml
always_mock:
  - External APIs and web services
  - Database connections and queries
  - File system operations
  - Network requests
  - Current time/date functions
  - Random number generators
  - Third-party libraries

never_mock:
  - Pure functions without side effects
  - Simple data structures
  - Language built-ins (unless time/random)
  - Domain objects under test
```

### Mock Implementation Examples

```javascript
// Mock external API
const mockApiClient = {
  getUserById: jest.fn().mockResolvedValue({ id: 1, name: 'Test User' }),
  createUser: jest.fn().mockResolvedValue({ id: 2, name: 'New User' }),
};

// Mock time for deterministic tests
const mockDate = new Date('2025-01-01T10:00:00Z');
jest.useFakeTimers().setSystemTime(mockDate);

// Mock database
const mockDb = {
  users: {
    findById: jest.fn(),
    create: jest.fn(),
    update: jest.fn(),
  },
};
```

## Test Data Management

### Deterministic Test Data

```javascript
// Good: Predictable, meaningful test data
const testUser = {
  id: 'user-123',
  email: 'test@example.com',
  name: 'Test User',
  createdAt: '2025-01-01T10:00:00Z',
};

// Avoid: Random or meaningless data
const testUser = {
  id: Math.random(),
  email: 'a@b.com',
  name: 'x',
};
```

### Test Data Builders

```javascript
class UserBuilder {
  constructor() {
    this.user = {
      id: 'default-id',
      email: 'default@example.com',
      name: 'Default User',
    };
  }

  withEmail(email) {
    this.user.email = email;
    return this;
  }

  withId(id) {
    this.user.id = id;
    return this;
  }

  build() {
    return { ...this.user };
  }
}

// Usage
const validUser = new UserBuilder().withEmail('valid@email.com').build();
const invalidUser = new UserBuilder().withEmail('invalid-email').build();
```

## Edge Cases and Error Scenarios

### Prioritize Error Conditions

```javascript
// Test error conditions first - they're often forgotten
describe('UserService.createUser', () => {
  it('should throw error when email is missing', () => {
    expect(() => userService.createUser({ name: 'Test' })).toThrow('Email is required');
  });

  it('should throw error when email format is invalid', () => {
    expect(() => userService.createUser({ email: 'invalid' })).toThrow('Invalid email format');
  });

  // Happy path comes after error conditions
  it('should create user when all data is valid', () => {
    const userData = { email: 'test@example.com', name: 'Test' };
    const result = userService.createUser(userData);
    expect(result).toEqual(expect.objectContaining(userData));
  });
});
```

### Boundary Value Testing

```javascript
describe('validateAge', () => {
  it('should reject age below minimum (17)', () => {
    expect(() => validateAge(17)).toThrow('Age must be 18 or older');
  });

  it('should accept minimum valid age (18)', () => {
    expect(validateAge(18)).toBe(true);
  });

  it('should accept maximum reasonable age (120)', () => {
    expect(validateAge(120)).toBe(true);
  });

  it('should reject unreasonable age (121)', () => {
    expect(() => validateAge(121)).toThrow('Invalid age');
  });
});
```

## Test Organization

### File Structure

```
tests/
├── unit/
│   ├── services/
│   │   ├── user-service.test.js
│   │   └── order-service.test.js
│   └── utils/
│       └── validation.test.js
├── integration/
│   └── api/
│       └── user-api.integration.test.js
└── fixtures/
    ├── users.js
    └── orders.js
```
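
If the project uses Jest, a configuration along these lines could map that layout onto the runner (a sketch under assumptions: `testMatch` is a standard Jest option, but the globs must be adjusted to your actual folders):

```javascript
// jest.config.js - hypothetical mapping for the structure above
module.exports = {
  testMatch: [
    '**/tests/unit/**/*.test.js',
    '**/tests/integration/**/*.integration.test.js',
  ],
};
```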

### Test Suite Organization

```javascript
describe('UserService', () => {
  // Setup once per test suite
  beforeAll(() => {
    // Expensive setup that can be shared
  });

  // Setup before each test
  beforeEach(() => {
    // Fresh state for each test
    mockDb.reset();
  });

  describe('createUser', () => {
    // Group related tests
  });

  describe('updateUser', () => {
    // Another behavior group
  });
});
```

## Red Phase Checklist

Before handing off to Dev Agent, ensure:

- [ ] **Tests written first** - No implementation code exists yet
- [ ] **Tests are failing** - Confirmed by running test suite
- [ ] **Fail for right reasons** - Missing functionality, not syntax errors
- [ ] **External dependencies mocked** - No network/DB/file system calls
- [ ] **Deterministic data** - No random values or current time
- [ ] **Clear test names** - Behavior is obvious from test name
- [ ] **Proper assertions** - Tests verify expected outcomes
- [ ] **Error scenarios included** - Edge cases and validation errors
- [ ] **Minimal scope** - Tests cover smallest useful feature
- [ ] **Story metadata updated** - TDD status set to 'red', test list populated (see the sketch below)
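
For orientation, the story metadata after a completed Red phase might look something like this. This is a hypothetical sketch; the exact field names come from your story template and are not prescribed here:

```yaml
# Hypothetical story frontmatter - field names are illustrative only
tdd:
  status: red
  cycle: 1
  tests:
    - id: UC-001
      name: 'should create user with valid email'
      status: failing
    - id: UC-002
      name: 'should throw error when email format is invalid'
      status: failing
```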

## Common Red Phase Mistakes

### Mistake: Writing Tests After Code

```javascript
// Wrong: Implementation already exists
function createUser(data) {
  return { id: 1, ...data }; // Code exists
}

it('should create user', () => {
  // Writing test after implementation
});
```

### Mistake: Testing Implementation Details

```javascript
// Wrong: Testing how it works
it('should call database.insert with user data', () => {
  // Testing internal implementation
});

// Right: Testing what it does
it('should return created user with ID', () => {
  // Testing observable behavior
});
```

### Mistake: Non-Deterministic Tests

```javascript
// Wrong: Random data
const userId = Math.random();
const createdAt = new Date(); // Current time

// Right: Fixed data
const userId = 'test-user-123';
const createdAt = '2025-01-01T10:00:00Z';
```

## Success Indicators

**You know you're succeeding in Red phase when:**

1. **Tests clearly describe expected behavior**
2. **All tests fail with meaningful error messages**
3. **No external dependencies cause test failures**
4. **Tests can be understood without seeing implementation**
5. **Error conditions are tested first**
6. **Test names tell a story of what the system should do**

**Red phase is complete when:**

- All planned tests are written and failing
- Failure messages clearly indicate missing functionality
- Dev Agent can understand exactly what to implement
- Story metadata reflects current TDD state

Remember: Your tests are the specification. Make them clear, complete, and compelling!

@@ -0,0 +1,562 @@

<!-- Powered by BMAD™ Core -->

# TDD Refactor Phase Prompts

Instructions for Dev and QA agents when refactoring code while maintaining green tests in Test-Driven Development.

## Core Refactor Phase Mindset

**You are in TDD REFACTOR PHASE. Your mission is to improve code quality while keeping ALL tests green. Every change must preserve existing behavior.**

### Primary Objectives

1. **Preserve behavior** - External behavior must remain exactly the same
2. **Improve design** - Make code more readable, maintainable, and extensible
3. **Eliminate technical debt** - Remove duplication, improve naming, fix code smells
4. **Maintain test coverage** - All tests must stay green throughout
5. **Small steps** - Make incremental improvements with frequent test runs

## Refactoring Safety Rules

### The Golden Rule

**NEVER proceed with a refactoring step if tests are red.** Always revert and try smaller changes.

### Safe Refactoring Workflow

```yaml
refactoring_cycle:
  1. identify_smell: 'Find specific code smell to address'
  2. plan_change: 'Decide on minimal improvement step'
  3. run_tests: 'Ensure all tests are green before starting'
  4. make_change: 'Apply single, small refactoring'
  5. run_tests: 'Verify tests are still green'
  6. commit: 'Save progress if tests pass'
  7. repeat: 'Move to next improvement'

abort_conditions:
  - tests_turn_red: 'Immediately revert and try smaller step'
  - behavior_changes: 'Revert if external interface changes'
  - complexity_increases: 'Revert if code becomes harder to understand'
```

## Code Smells and Refactoring Techniques

### Duplication Elimination

**Before: Repeated validation logic**

```javascript
function createUser(data) {
  if (!data.email.includes('@')) {
    throw new Error('Invalid email format');
  }
  return { id: generateId(), ...data };
}

function updateUser(id, data) {
  if (!data.email.includes('@')) {
    throw new Error('Invalid email format');
  }
  return { id, ...data };
}
```

**After: Extract validation function**

```javascript
function validateEmail(email) {
  if (!email.includes('@')) {
    throw new Error('Invalid email format');
  }
}

function createUser(data) {
  validateEmail(data.email);
  return { id: generateId(), ...data };
}

function updateUser(id, data) {
  validateEmail(data.email);
  return { id, ...data };
}
```

### Long Method Refactoring

**Before: Method doing too much**

```javascript
function processUserRegistration(userData) {
  // Validation
  if (!userData.email.includes('@')) throw new Error('Invalid email');
  if (!userData.name || userData.name.trim().length === 0) throw new Error('Name required');
  if (userData.age < 18) throw new Error('Must be 18 or older');

  // Data transformation
  const user = {
    id: generateId(),
    email: userData.email.toLowerCase(),
    name: userData.name.trim(),
    age: userData.age,
  };

  // Business logic
  if (userData.age >= 65) {
    user.discountEligible = true;
  }

  return user;
}
```

**After: Extract methods**

```javascript
function validateUserData(userData) {
  if (!userData.email.includes('@')) throw new Error('Invalid email');
  if (!userData.name || userData.name.trim().length === 0) throw new Error('Name required');
  if (userData.age < 18) throw new Error('Must be 18 or older');
}

function normalizeUserData(userData) {
  return {
    id: generateId(),
    email: userData.email.toLowerCase(),
    name: userData.name.trim(),
    age: userData.age,
  };
}

function applyBusinessRules(user) {
  if (user.age >= 65) {
    user.discountEligible = true;
  }
  return user;
}

function processUserRegistration(userData) {
  validateUserData(userData);
  const user = normalizeUserData(userData);
  return applyBusinessRules(user);
}
```

### Magic Numbers and Constants

**Before: Magic numbers scattered**

```javascript
function calculateShipping(weight) {
  if (weight < 5) {
    return 4.99;
  } else if (weight < 20) {
    return 9.99;
  } else {
    return 19.99;
  }
}
```

**After: Named constants**

```javascript
const SHIPPING_RATES = {
  LIGHT_WEIGHT_THRESHOLD: 5,
  MEDIUM_WEIGHT_THRESHOLD: 20,
  LIGHT_SHIPPING_COST: 4.99,
  MEDIUM_SHIPPING_COST: 9.99,
  HEAVY_SHIPPING_COST: 19.99,
};

function calculateShipping(weight) {
  if (weight < SHIPPING_RATES.LIGHT_WEIGHT_THRESHOLD) {
    return SHIPPING_RATES.LIGHT_SHIPPING_COST;
  } else if (weight < SHIPPING_RATES.MEDIUM_WEIGHT_THRESHOLD) {
    return SHIPPING_RATES.MEDIUM_SHIPPING_COST;
  } else {
    return SHIPPING_RATES.HEAVY_SHIPPING_COST;
  }
}
```

### Variable Naming Improvements

**Before: Unclear names**

```javascript
function calc(u, p) {
  const t = u * p;
  const d = t * 0.1;
  return t - d;
}
```

**After: Intention-revealing names**

```javascript
function calculateNetPrice(unitPrice, quantity) {
  const totalPrice = unitPrice * quantity;
  const discount = totalPrice * 0.1;
  return totalPrice - discount;
}
```

## Refactoring Strategies by Code Smell

### Complex Conditionals

**Before: Nested conditions**

```javascript
function determineUserType(user) {
  if (user.age >= 18) {
    if (user.hasAccount) {
      if (user.isPremium) {
        return 'premium-member';
      } else {
        return 'basic-member';
      }
    } else {
      return 'guest-adult';
    }
  } else {
    return 'minor';
  }
}
```

**After: Guard clauses and early returns**

```javascript
function determineUserType(user) {
  if (user.age < 18) {
    return 'minor';
  }

  if (!user.hasAccount) {
    return 'guest-adult';
  }

  return user.isPremium ? 'premium-member' : 'basic-member';
}
```

### Large Classes (God Object)

**Before: Class doing too much**

```javascript
class UserManager {
  validateUser(data) {
    /* validation logic */
  }
  createUser(data) {
    /* creation logic */
  }
  sendWelcomeEmail(user) {
    /* email logic */
  }
  logUserActivity(user, action) {
    /* logging logic */
  }
  calculateUserStats(user) {
    /* analytics logic */
  }
}
```

**After: Single responsibility classes**

```javascript
class UserValidator {
  validate(data) {
    /* validation logic */
  }
}

class UserService {
  create(data) {
    /* creation logic */
  }
}

class EmailService {
  sendWelcome(user) {
    /* email logic */
  }
}

class ActivityLogger {
  log(user, action) {
    /* logging logic */
  }
}

class UserAnalytics {
  calculateStats(user) {
    /* analytics logic */
  }
}
```

## Collaborative Refactoring (Dev + QA)

### When to Involve QA Agent

**QA Agent should participate when:**

```yaml
qa_involvement_triggers:
  test_modification_needed:
    - 'Test expectations need updating'
    - 'New test cases discovered during refactoring'
    - 'Mock strategies need adjustment'

  coverage_assessment:
    - 'Refactoring exposes untested code paths'
    - 'New methods need test coverage'
    - 'Test organization needs improvement'

  design_validation:
    - 'Interface changes affect test structure'
    - 'Mocking strategy becomes complex'
    - 'Test maintainability concerns'
```

### Dev-QA Collaboration Workflow

```yaml
collaborative_steps:
  1. dev_identifies_refactoring: 'Dev spots code smell'
  2. assess_test_impact: 'Both agents review test implications'
  3. plan_refactoring: 'Agree on approach and steps'
  4. dev_refactors: 'Dev makes incremental changes'
  5. qa_validates_tests: 'QA ensures tests remain valid'
  6. both_review: 'Joint review of improved code and tests'
```

## Advanced Refactoring Patterns

### Extract Interface for Testability

**Before: Hard to test due to dependencies**

```javascript
class OrderService {
  constructor() {
    this.emailSender = new EmailSender();
    this.paymentProcessor = new PaymentProcessor();
  }

  processOrder(order) {
    const result = this.paymentProcessor.charge(order.total);
    this.emailSender.sendConfirmation(order.customerEmail);
    return result;
  }
}
```

**After: Dependency injection for testability**

```javascript
class OrderService {
  constructor(emailSender, paymentProcessor) {
    this.emailSender = emailSender;
    this.paymentProcessor = paymentProcessor;
  }

  processOrder(order) {
    const result = this.paymentProcessor.charge(order.total);
    this.emailSender.sendConfirmation(order.customerEmail);
    return result;
  }
}

// Usage in production:
const orderService = new OrderService(new EmailSender(), new PaymentProcessor());

// Usage in tests:
const mockEmail = { sendConfirmation: jest.fn() };
const mockPayment = { charge: jest.fn().mockReturnValue('success') };
const orderService = new OrderService(mockEmail, mockPayment);
```

### Replace Conditional with Polymorphism

**Before: Switch statement**

```javascript
function calculateArea(shape) {
  switch (shape.type) {
    case 'circle':
      return Math.PI * shape.radius * shape.radius;
    case 'rectangle':
      return shape.width * shape.height;
    case 'triangle':
      return 0.5 * shape.base * shape.height;
    default:
      throw new Error('Unknown shape type');
  }
}
```

**After: Polymorphic classes**

```javascript
class Circle {
  constructor(radius) {
    this.radius = radius;
  }

  calculateArea() {
    return Math.PI * this.radius * this.radius;
  }
}

class Rectangle {
  constructor(width, height) {
    this.width = width;
    this.height = height;
  }

  calculateArea() {
    return this.width * this.height;
  }
}

class Triangle {
  constructor(base, height) {
    this.base = base;
    this.height = height;
  }

  calculateArea() {
    return 0.5 * this.base * this.height;
  }
}
```
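
A brief usage sketch of the polymorphic version (illustrative): callers no longer branch on a type tag, and adding a new shape means adding a class rather than editing a shared switch.

```javascript
const shapes = [new Circle(2), new Rectangle(3, 4), new Triangle(6, 2)];

// Dispatch happens through each object's own calculateArea method
const totalArea = shapes.reduce((sum, shape) => sum + shape.calculateArea(), 0);
console.log(totalArea.toFixed(2)); // 30.57
```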

## Refactoring Safety Checks

### Before Each Refactoring Step

```bash
# 1. Ensure all tests are green
npm test
pytest
go test ./...

# 2. Consider impact
# - Will this change external interfaces?
# - Are there hidden dependencies?
# - Could this affect performance significantly?

# 3. Plan the smallest possible step
# - What's the minimal change that improves code?
# - Can this be broken into smaller steps?
```

### After Each Refactoring Step

```bash
# 1. Run tests immediately
npm test

# 2. If tests fail:
git checkout -- . # Revert changes
# Plan a smaller refactoring step

# 3. If tests pass:
git add .
git commit -m "REFACTOR: Extract validateEmail function [maintains UC-001, UC-002]"
```

## Refactoring Anti-Patterns

### Don't Change Behavior

```javascript
// Wrong: Changing logic during refactoring
function calculateDiscount(amount) {
  // Original: 10% discount
  return amount * 0.1;

  // Refactored: DON'T change the discount rate
  return amount * 0.15; // This changes behavior!
}

// Right: Only improve structure
const DISCOUNT_RATE = 0.1; // Extract constant
function calculateDiscount(amount) {
  return amount * DISCOUNT_RATE; // Same behavior
}
```

### Don't Add Features

```javascript
// Wrong: Adding features during refactoring
function validateUser(userData) {
  validateEmail(userData.email); // Existing
  validateName(userData.name); // Existing
  validateAge(userData.age); // DON'T add new validation
}

// Right: Only improve existing code
function validateUser(userData) {
  validateEmail(userData.email);
  validateName(userData.name);
  // Age validation needs its own failing test first
}
```

### Don't Make Large Changes

```javascript
// Wrong: Massive refactoring in one step
class UserService {
  // Completely rewrite entire class structure
}

// Right: Small, incremental improvements
class UserService {
  // Extract one method at a time
  // Rename one variable at a time
  // Improve one code smell at a time
}
```

## Refactor Phase Checklist

Before considering refactoring complete:

- [ ] **All tests remain green** - No test failures introduced
- [ ] **Code quality improved** - Measurable improvement in readability/maintainability
- [ ] **No behavior changes** - External behavior is identical
- [ ] **Technical debt reduced** - Specific code smells addressed
- [ ] **Small commits made** - Each improvement committed separately
- [ ] **Documentation updated** - Comments and docs reflect changes
- [ ] **Performance maintained** - No significant performance degradation
- [ ] **Story metadata updated** - Refactoring notes and improvements documented

## Success Indicators

**Refactoring is successful when:**

1. **All tests consistently pass** throughout the process
2. **Code is noticeably easier to read** and understand
3. **Duplication has been eliminated** or significantly reduced
4. **Method/class sizes are more reasonable** (functions < 15 lines)
5. **Variable and function names clearly express intent**
6. **Code complexity has decreased** (fewer nested conditions)
7. **Future changes will be easier** due to better structure

**Refactoring is complete when:**

- No obvious code smells remain in the story scope
- Code quality metrics show improvement
- Tests provide comprehensive safety net
- Ready for next TDD cycle or story completion

Remember: Refactoring is about improving design, not adding features. Keep tests green, make small changes, and focus on making the code better for the next developer!

@@ -0,0 +1,261 @@
# <!-- Powered by BMAD™ Core -->
|
||||
name: TDD Story Development Workflow
|
||||
description: Test-Driven Development workflow for story implementation
|
||||
version: "1.0"
|
||||
type: story_workflow
|
||||
|
||||
# TDD-specific workflow that orchestrates Red-Green-Refactor cycles
|
||||
workflow:
|
||||
prerequisites:
|
||||
- tdd.enabled: true
|
||||
- story.status: ["ready", "inprogress"]
|
||||
- story.acceptance_criteria: "defined"
|
||||
|
||||
phases:
|
||||
# Phase 1: RED - Write failing tests first
|
||||
red_phase:
|
||||
description: "Write failing tests that describe expected behavior"
|
||||
agent: qa
|
||||
status_check: "tdd.status != 'red'"
|
||||
|
||||
tasks:
|
||||
- name: test-design
|
||||
description: "Design comprehensive test strategy"
|
||||
inputs:
|
||||
- story_id
|
||||
- acceptance_criteria
|
||||
outputs:
|
||||
- test_design_document
|
||||
- test_scenarios
|
||||
|
||||
- name: write-failing-tests
|
||||
description: "Implement failing tests for story scope"
|
||||
inputs:
|
||||
- story_id
|
||||
- test_scenarios
|
||||
- codebase_context
|
||||
outputs:
|
||||
- test_files
|
||||
- failing_test_report
|
||||
|
||||
completion_criteria:
|
||||
- "At least one test is failing"
|
||||
- "Tests fail for correct reasons (missing implementation)"
|
||||
- "All external dependencies mocked"
|
||||
- "Story tdd.status = 'red'"
|
||||
|
||||
gates:
|
||||
pass_conditions:
|
||||
- tests_created: true
|
||||
- tests_failing_correctly: true
|
||||
- mocking_strategy_applied: true
|
||||
- story_metadata_updated: true
|
||||
|
||||
fail_conditions:
|
||||
- tests_passing_unexpectedly: true
|
||||
- syntax_errors_in_tests: true
|
||||
- missing_test_runner: true
|
||||
|
||||
# Phase 2: GREEN - Make tests pass with minimal code
|
||||
green_phase:
|
||||
description: "Implement minimal code to make all tests pass"
|
||||
agent: dev
|
||||
status_check: "tdd.status != 'green'"
|
||||
|
||||
prerequisites:
|
||||
- "tdd.status == 'red'"
|
||||
- "failing_tests.count > 0"
|
||||
|
||||
tasks:
|
||||
- name: tdd-implement
|
||||
description: "Write simplest code to make tests pass"
|
||||
inputs:
|
||||
- story_id
|
||||
- failing_tests
|
||||
- codebase_context
|
||||
outputs:
|
||||
- implementation_files
|
||||
- passing_test_report
|
||||
|
||||
completion_criteria:
|
||||
- "All tests are passing"
|
||||
- "No feature creep beyond test requirements"
|
||||
- "Code follows basic standards"
|
||||
- "Story tdd.status = 'green'"
|
||||
|
||||
gates:
|
||||
pass_conditions:
|
||||
- all_tests_passing: true
|
||||
- implementation_minimal: true
|
||||
- no_breaking_changes: true
|
||||
- story_metadata_updated: true
|
||||
|
||||
fail_conditions:
|
||||
- tests_still_failing: true
|
||||
- feature_creep_detected: true
|
||||
- regression_introduced: true
|
||||
|
||||
# Phase 3: REFACTOR - Improve code quality while keeping tests green
|
||||
refactor_phase:
|
||||
description: "Improve code quality while maintaining green tests"
|
||||
agents: [dev, qa] # Collaborative phase
|
||||
status_check: "tdd.status != 'refactor'"
|
||||
|
||||
prerequisites:
|
||||
- "tdd.status == 'green'"
|
||||
- "all_tests_passing == true"
|
||||
|
||||
tasks:
|
||||
- name: tdd-refactor
|
||||
description: "Safely refactor code with test coverage"
|
||||
inputs:
|
||||
- story_id
|
||||
- passing_tests
|
||||
- implementation_files
|
||||
- code_quality_metrics
|
||||
outputs:
|
||||
- refactored_files
|
||||
- quality_improvements
|
||||
- maintained_test_coverage
|
||||
|
||||
completion_criteria:
|
||||
- "All tests remain green throughout"
|
||||
- "Code quality improved"
|
||||
- "Technical debt addressed"
|
||||
- "Story tdd.status = 'done' or ready for next cycle"
|
||||
|
||||
gates:
|
||||
pass_conditions:
|
||||
- tests_remain_green: true
|
||||
- quality_metrics_improved: true
|
||||
- refactoring_documented: true
|
||||
- commits_atomic: true
|
||||
|
||||
fail_conditions:
|
||||
- tests_broken_by_refactoring: true
|
||||
- code_quality_degraded: true
|
||||
- feature_changes_during_refactor: true
|
||||
|
||||
# Cycle management - can repeat Red-Green-Refactor for complex stories
|
||||
cycle_management:
|
||||
max_cycles: 5 # Reasonable limit to prevent infinite cycles
|
||||
|
||||
next_cycle_conditions:
|
||||
- "More acceptance criteria remain unimplemented"
|
||||
- "Story scope requires additional functionality"
|
||||
- "Technical complexity requires iterative approach"
|
||||
|
||||
cycle_completion_check:
|
||||
- "All acceptance criteria have tests and implementation"
|
||||
- "Code quality meets project standards"
|
||||
- "No remaining technical debt from TDD cycles"
|
||||
|
||||
# Quality gates for phase transitions
|
||||
transition_gates:
|
||||
red_to_green:
|
||||
required:
|
||||
- failing_tests_exist: true
|
||||
- tests_fail_for_right_reasons: true
|
||||
- external_dependencies_mocked: true
|
||||
blocked_by:
|
||||
- no_failing_tests: true
|
||||
- syntax_errors: true
|
||||
- missing_test_infrastructure: true
|
||||
|
||||
green_to_refactor:
|
||||
required:
|
||||
- all_tests_passing: true
|
||||
- implementation_complete: true
|
||||
- basic_quality_standards_met: true
|
||||
blocked_by:
|
||||
- failing_tests: true
|
||||
- incomplete_implementation: true
|
||||
- major_quality_violations: true
|
||||
|
||||
refactor_to_done:
|
||||
required:
|
||||
- tests_remain_green: true
|
||||
- code_quality_improved: true
|
||||
- all_acceptance_criteria_met: true
|
||||
blocked_by:
|
||||
- broken_tests: true
|
||||
- degraded_code_quality: true
|
||||
- incomplete_acceptance_criteria: true
|
||||
|
||||
# Error handling and recovery
|
||||
error_handling:
|
||||
phase_failures:
|
||||
red_phase_failure:
|
||||
- "Review acceptance criteria clarity"
|
||||
- "Check test runner configuration"
|
||||
- "Verify mocking strategy"
|
||||
- "Consult with SM for requirements clarification"
|
||||
|
||||
green_phase_failure:
|
||||
- "Review test expectations vs implementation"
|
||||
- "Check for missing dependencies"
|
||||
- "Verify implementation approach"
|
||||
- "Consider breaking down into smaller cycles"
|
||||
|
||||
refactor_phase_failure:
|
||||
- "Immediately revert breaking changes"
|
||||
- "Use smaller refactoring steps"
|
||||
- "Review test coverage adequacy"
|
||||
- "Consider technical debt acceptance"

# Agent coordination
agent_handoffs:
  qa_to_dev:
    trigger: "tdd.status == 'red'"
    handoff_artifacts:
      - failing_test_suite
      - test_execution_report
      - story_with_updated_metadata
      - mocking_strategy_documentation

  dev_back_to_qa:
    trigger: "questions about test expectations or refactoring safety"
    collaboration_points:
      - test_clarification_needed
      - refactoring_impact_assessment
      - additional_test_coverage_discussion

  both_agents:
    trigger: "tdd.status == 'refactor'"
    joint_activities:
      - code_quality_assessment
      - refactoring_safety_validation
      - test_maintenance_discussion
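
  # Illustrative handoff (hypothetical story): when the QA agent sets
  # tdd.status to 'red', qa_to_dev fires and the Dev agent might receive:
  #
  #   failing_test_suite: tests/unit/test_login.py (3 failing)
  #   test_execution_report: reports/red-phase-run.txt
  #
  # The file names here are invented for illustration only.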

# Integration with existing BMAD workflows
bmad_integration:
  extends: "story_workflow_base"

  modified_sections:
    story_creation:
      - "Use story-tdd-template.md when tdd.enabled=true"
      - "Initialize TDD metadata in story frontmatter"

    quality_gates:
      - "Apply tdd-dod-checklist.md instead of standard DoD"
      - "Include TDD-specific review criteria"

    agent_selection:
      - "Route to QA agent first for Red phase"
      - "Enforce phase-based agent assignment"

# Configuration and customization
configuration:
  tdd_settings:
    cycle_timeout: "2 days" # Maximum time per TDD cycle
    required_coverage_minimum: 0.8 # 80% default
    max_failing_tests_per_cycle: 10 # Prevent scope creep

  quality_thresholds:
    complexity_increase_limit: 10 # Max complexity increase per cycle
    duplication_tolerance: 5 # Max acceptable code duplication, in percent

  automation_hooks:
    test_execution: "Run tests automatically on file changes"
    coverage_reporting: "Generate coverage reports per cycle"
    quality_metrics: "Track metrics before/after refactoring"

@@ -0,0 +1,296 @@

# <!-- Powered by BMAD™ Core -->
# Test Runner Auto-Detection Configuration
# Used by the BMAD TDD framework to detect and configure test runners

detection_rules:
  # JavaScript/TypeScript ecosystem
  javascript:
    priority: 1
    detection_files:
      - "package.json"
    detection_logic:
      - check_dependencies: ["jest", "vitest", "mocha", "cypress", "@testing-library"]
      - check_scripts: ["test", "test:unit", "test:integration"]

    runners:
      jest:
        detection_patterns:
          - dependency: "jest"
          - config_file: ["jest.config.js", "jest.config.json"]
        commands:
          test: "npm test"
          test_single_file: "npm test -- {file_path}"
          test_watch: "npm test -- --watch"
          test_coverage: "npm test -- --coverage"
        file_patterns:
          unit: ["**/*.test.js", "**/*.spec.js", "**/*.test.ts", "**/*.spec.ts"]
          integration: ["**/*.integration.test.js", "**/*.int.test.js"]
        report_paths:
          coverage: "coverage/lcov-report/index.html"
          junit: "coverage/junit.xml"

      vitest:
        detection_patterns:
          - dependency: "vitest"
          - config_file: ["vitest.config.js", "vitest.config.ts"]
        commands:
          test: "npm run test"
          test_single_file: "npx vitest run {file_path}"
          test_watch: "npx vitest"
          test_coverage: "npx vitest run --coverage"
        file_patterns:
          unit: ["**/*.test.js", "**/*.spec.js", "**/*.test.ts", "**/*.spec.ts"]
          integration: ["**/*.integration.test.js", "**/*.int.test.js"]
        report_paths:
          coverage: "coverage/index.html"

      mocha:
        detection_patterns:
          - dependency: "mocha"
          - config_file: [".mocharc.json", ".mocharc.yml"]
        commands:
          test: "npx mocha"
          test_single_file: "npx mocha {file_path}"
          test_watch: "npx mocha --watch"
          test_coverage: "npx nyc mocha"
        file_patterns:
          unit: ["test/**/*.js", "test/**/*.ts"]
          integration: ["test/integration/**/*.js"]
        report_paths:
          coverage: "coverage/index.html"

  # Python ecosystem
  python:
    priority: 2
    detection_files:
      - "requirements.txt"
      - "requirements-dev.txt"
      - "pyproject.toml"
      - "setup.py"
      - "pytest.ini"
      - "tox.ini"
    detection_logic:
      - check_requirements: ["pytest", "unittest2", "nose2"]
      - check_pyproject: ["pytest", "unittest"]

    runners:
      pytest:
        detection_patterns:
          - requirement: "pytest"
          - config_file: ["pytest.ini", "pyproject.toml", "setup.cfg"]
        commands:
          test: "pytest"
          test_single_file: "pytest {file_path}"
          test_watch: "pytest-watch"
          test_coverage: "pytest --cov=."
        file_patterns:
          unit: ["test_*.py", "*_test.py", "tests/unit/**/*.py"]
          integration: ["tests/integration/**/*.py", "tests/int/**/*.py"]
        report_paths:
          coverage: "htmlcov/index.html"
          junit: "pytest-report.xml"

      unittest:
        detection_patterns:
          - python_version: ">=2.7"
          - fallback: true
        commands:
          test: "python -m unittest discover"
          test_single_file: "python -m unittest {module_path}"
          test_coverage: "coverage run -m unittest discover && coverage html"
        file_patterns:
          unit: ["test_*.py", "*_test.py"]
          integration: ["integration_test_*.py"]
        report_paths:
          coverage: "htmlcov/index.html"

  # Go ecosystem
  go:
    priority: 3
    detection_files:
      - "go.mod"
      - "go.sum"
    detection_logic:
      - check_go_files: ["*_test.go"]

    runners:
      go_test:
        detection_patterns:
          - files_exist: ["*.go", "*_test.go"]
        commands:
          test: "go test ./..."
          test_single_package: "go test {package_path}"
          test_single_file: "go test -run {test_function}"
          test_coverage: "go test -coverprofile=coverage.out ./... && go tool cover -html=coverage.out -o coverage.html"
          test_watch: "gotestsum --watch"
        file_patterns:
          unit: ["*_test.go"]
          integration: ["*_integration_test.go", "*_int_test.go"]
        report_paths:
          coverage: "coverage.html"

  # Java ecosystem
  java:
    priority: 4
    detection_files:
      - "pom.xml"
      - "build.gradle"
      - "build.gradle.kts"
    detection_logic:
      - check_maven_dependencies: ["junit", "testng", "junit-jupiter"]
      - check_gradle_dependencies: ["junit", "testng", "junit-platform"]

    runners:
      maven:
        detection_patterns:
          - file: "pom.xml"
        commands:
          test: "mvn test"
          test_single_class: "mvn test -Dtest={class_name}"
          test_coverage: "mvn clean jacoco:prepare-agent test jacoco:report"
        file_patterns:
          unit: ["src/test/java/**/*Test.java", "src/test/java/**/*Tests.java"]
          integration: ["src/test/java/**/*IT.java", "src/integration-test/java/**/*.java"]
        report_paths:
          coverage: "target/site/jacoco/index.html"
          surefire: "target/surefire-reports"

      gradle:
        detection_patterns:
          - file: ["build.gradle", "build.gradle.kts"]
        commands:
          test: "gradle test"
          test_single_class: "gradle test --tests {class_name}"
          test_coverage: "gradle test jacocoTestReport"
        file_patterns:
          unit: ["src/test/java/**/*Test.java", "src/test/java/**/*Tests.java"]
          integration: ["src/integrationTest/java/**/*.java"]
        report_paths:
          coverage: "build/reports/jacoco/test/html/index.html"
          junit: "build/test-results/test"

  # .NET ecosystem
  dotnet:
    priority: 5
    detection_files:
      - "*.csproj"
      - "*.sln"
      - "global.json"
    detection_logic:
      - check_project_references: ["Microsoft.NET.Test.Sdk", "xunit", "NUnit", "MSTest"]

    runners:
      dotnet_test:
        detection_patterns:
          - files_exist: ["*.csproj"]
          - test_project_reference: ["Microsoft.NET.Test.Sdk"]
        commands:
          test: "dotnet test"
          test_single_project: "dotnet test {project_path}"
          test_coverage: 'dotnet test --collect:"XPlat Code Coverage"'
          test_watch: "dotnet watch test"
        file_patterns:
          unit: ["**/*Tests.cs", "**/*Test.cs"]
          integration: ["**/*IntegrationTests.cs", "**/*.Integration.Tests.cs"]
        report_paths:
          coverage: "TestResults/*/coverage.cobertura.xml"
          trx: "TestResults/*.trx"

  # Ruby ecosystem
  ruby:
    priority: 6
    detection_files:
      - "Gemfile"
      - "*.gemspec"
    detection_logic:
      - check_gems: ["rspec", "minitest", "test-unit"]

    runners:
      rspec:
        detection_patterns:
          - gem: "rspec"
          - config_file: [".rspec", "spec/spec_helper.rb"]
        commands:
          test: "rspec"
          test_single_file: "rspec {file_path}"
          test_coverage: "rspec --coverage"
        file_patterns:
          unit: ["spec/**/*_spec.rb"]
          integration: ["spec/integration/**/*_spec.rb"]
        report_paths:
          coverage: "coverage/index.html"

      minitest:
        detection_patterns:
          - gem: "minitest"
        commands:
          test: "ruby -Itest test/test_*.rb"
          test_single_file: "ruby -Itest {file_path}"
        file_patterns:
          unit: ["test/test_*.rb", "test/*_test.rb"]
        report_paths:
          coverage: "coverage/index.html"

# Auto-detection algorithm
detection_algorithm:
  steps:
    - scan_project_root: "Look for detection files in the project root"
    - check_subdirectories: "Scan up to 2 levels deep for test indicators"
    - apply_priority_rules: "Check higher-priority languages first"
    - validate_runner: "Ensure the detected runner actually works"
    - fallback_to_custom: "Use a custom command if no runner is detected"

  validation_commands:
    - run_help_command: "Check that the runner responds to --help"
    - run_version_command: "Verify the runner version"
    - check_sample_test: "Try to run a simple test if available"
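
# Example outcome (hypothetical project): a root package.json whose
# devDependencies include "jest" resolves at the first step; validation then
# runs something like `npx jest --version` before the runner is accepted.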

# Fallback configuration
fallback:
  enabled: true
  custom_command: null # Prompted from the user or read from config

  prompt_user:
    - "No test runner detected. Please specify test command:"
    - "Example: 'npm test' or 'pytest' or 'go test ./...'"
    - "Leave blank to skip test execution"

# TDD-specific settings
tdd_configuration:
  preferred_test_types:
    - unit # Fastest, most isolated
    - integration # Component interactions
    - e2e # Full user journeys

  test_execution_timeout: 300 # 5 minutes max per test run

  coverage_thresholds:
    minimum: 0.0 # No hard minimum by default
    warning: 70.0 # Warn below 70%
    target: 80.0 # Target 80%
    excellent: 90.0 # Excellent above 90%

  watch_mode:
    enabled: true
    file_patterns: ["src/**/*", "test/**/*", "tests/**/*"]
    ignore_patterns: ["node_modules/**", "coverage/**", "dist/**"]
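
# Reading the thresholds above (hypothetical run): 76.0% coverage clears the
# warning level (70.0) but falls short of the target (80.0); with
# minimum: 0.0 it never fails the run outright, it is only flagged.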

# Integration with BMAD agents
agent_integration:
  qa_agent:
    commands_available:
      - "run_failing_tests"
      - "verify_test_isolation"
      - "check_mocking_strategy"

  dev_agent:
    commands_available:
      - "run_tests_for_implementation"
      - "check_coverage_improvement"
      - "validate_no_feature_creep"

  both_agents:
    commands_available:
      - "run_full_regression_suite"
      - "generate_coverage_report"
      - "validate_test_performance"
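
# Illustrative mapping (assumed, not part of this config): a QA agent call to
# run_failing_tests on a Jest project might resolve to the detected runner's
# command, e.g. `npm test -- --onlyFailures`; the exact flag depends on the
# runner and is not guaranteed by this file.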