# Developer Quality Standards
**BMAD Method Documentation**
## Introduction
This document defines the quality standards for developers working within the BMAD Method framework. These standards ensure consistent, maintainable, and high-quality code across all projects. The standards are designed to be measurable, actionable, and aligned with industry best practices.
## Quality Dimensions
Developer quality is measured across six key dimensions, each with a specific weight in the overall quality assessment:

| Dimension | Weight | Description |
|-----------|--------|-------------|
| Code Quality | 25% | Structural quality of the code itself |
| Functionality | 20% | Correctness and completeness of implementation |
| Performance | 15% | Efficiency and responsiveness |
| Maintainability | 15% | Ease of understanding and modifying code |
| Testing | 15% | Test coverage and quality |
| Documentation | 10% | Completeness and clarity of documentation |
## Quality Standards by Dimension
### 1. Code Quality (25%)
#### 1.1 Code Structure
- **Standard**: Code follows the project's architectural patterns and design principles
- **Measurement**: Automated static analysis tools and manual review
- **Threshold**: No critical or high-severity structural issues
#### 1.2 Code Complexity
- **Standard**: Functions and components maintain reasonable complexity
- **Measurement**: Cyclomatic complexity < 15, cognitive complexity < 10
- **Threshold**: 90% of code meets complexity targets
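
These limits can typically be enforced automatically in CI. Below is a minimal sketch assuming an ESLint flat-config setup, using the core `complexity` rule for cyclomatic complexity and the `eslint-plugin-sonarjs` rule for cognitive complexity; adapt the rule names and limits to the project's actual toolchain.
```ts
// eslint.config.ts — sketch only; assumes an ESLint version that accepts a TypeScript flat config
import sonarjs from "eslint-plugin-sonarjs";

export default [
  {
    plugins: { sonarjs },
    rules: {
      // Core ESLint rule: flag functions whose cyclomatic complexity exceeds 15
      complexity: ["error", { max: 15 }],
      // SonarJS rule: flag functions whose cognitive complexity exceeds 10
      "sonarjs/cognitive-complexity": ["error", 10],
    },
  },
];
```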
#### 1.3 Code Duplication
- **Standard**: Minimal code duplication across the codebase
- **Measurement**: Automated duplication detection
- **Threshold**: < 5% duplication
#### 1.4 Coding Standards Compliance
- **Standard**: Code adheres to project style guide and linting rules
- **Measurement**: Automated linting
- **Threshold**: Zero linting errors, < 5 warnings
#### 1.5 Clean Code Principles
- **Standard**: Code follows clean code principles (meaningful names, single responsibility, etc.)
- **Measurement**: Manual review using clean code checklist
- **Threshold**: 90% compliance with clean code checklist
### 2. Functionality (20%)
#### 2.1 Requirements Fulfillment
- **Standard**: Implementation meets all specified requirements
- **Measurement**: Requirements traceability matrix
- **Threshold**: 100% of requirements implemented correctly
#### 2.2 Error Handling
- **Standard**: Robust error handling for all operations
- **Measurement**: Error scenario testing and code review
- **Threshold**: All identified error scenarios handled appropriately
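
As an illustration of "handled appropriately", the sketch below wraps a network call so that both transport failures and non-success responses surface as a typed error; `fetchUserProfile`, `UserProfile`, and `ProfileError` are hypothetical names, not part of any BMAD API.
```ts
interface UserProfile {
  id: string;
  displayName: string;
}

class ProfileError extends Error {
  constructor(message: string, readonly details?: unknown) {
    super(message);
    this.name = "ProfileError";
  }
}

async function fetchUserProfile(userId: string): Promise<UserProfile> {
  let response: Response;
  try {
    response = await fetch(`/api/users/${encodeURIComponent(userId)}`);
  } catch (err) {
    // Transport failures become a typed, actionable error instead of an unhandled rejection
    throw new ProfileError("Network error while loading profile", err);
  }
  if (!response.ok) {
    // Non-2xx responses are an expected error scenario and are handled explicitly
    throw new ProfileError(`Profile request failed with status ${response.status}`);
  }
  return (await response.json()) as UserProfile;
}
```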
#### 2.3 Edge Case Handling
- **Standard**: Code handles edge cases and boundary conditions
- **Measurement**: Edge case test coverage
- **Threshold**: All identified edge cases handled correctly
#### 2.4 Accessibility Compliance
- **Standard**: Implementation meets accessibility requirements
- **Measurement**: Automated accessibility testing and manual review
- **Threshold**: WCAG 2.1 AA compliance
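
Automated tooling catches only a subset of WCAG 2.1 AA issues, so it complements rather than replaces manual review. A minimal sketch of an automated check, assuming a React, Testing Library, and jest-axe setup with Jest globals; `SignupForm` is a placeholder component.
```tsx
import { render } from "@testing-library/react";
import { axe, toHaveNoViolations } from "jest-axe";
import { SignupForm } from "./SignupForm"; // placeholder component under test

expect.extend(toHaveNoViolations);

it("has no detectable accessibility violations", async () => {
  const { container } = render(<SignupForm />);
  // axe runs its automated WCAG rule set against the rendered DOM
  expect(await axe(container)).toHaveNoViolations();
});
```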
#### 2.5 Cross-browser/Cross-device Compatibility
- **Standard**: Code works consistently across required platforms
- **Measurement**: Cross-platform testing
- **Threshold**: Consistent behavior across all specified platforms
### 3. Performance (15%)
#### 3.1 Rendering Performance
- **Standard**: UI renders and updates efficiently
- **Measurement**: Render timing metrics
- **Threshold**: Initial render < 200ms, updates < 50ms
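
One way to collect these metrics is the standard User Timing API, sketched below; the mark names and `renderDashboard` call are illustrative placeholders.
```ts
declare function renderDashboard(): void; // placeholder for the actual render entry point

performance.mark("dashboard-render-start");
renderDashboard();
performance.mark("dashboard-render-end");

// performance.measure returns the PerformanceMeasure entry in current browsers
const measure = performance.measure(
  "dashboard-initial-render",
  "dashboard-render-start",
  "dashboard-render-end",
);

if (measure.duration > 200) {
  console.warn(`Initial render took ${measure.duration.toFixed(1)}ms (budget: 200ms)`);
}
```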
#### 3.2 Memory Usage
- **Standard**: Efficient memory usage without leaks
- **Measurement**: Memory profiling
- **Threshold**: No memory leaks, memory usage within defined limits
#### 3.3 Network Efficiency
- **Standard**: Efficient network resource usage
- **Measurement**: Network request count, payload size, and cache effectiveness
- **Threshold**: Meets project-specific network performance budgets
#### 3.4 Bundle Size
- **Standard**: Optimized bundle size
- **Measurement**: Bundle analysis
- **Threshold**: Meets project-specific bundle size budgets
#### 3.5 Algorithm Efficiency
- **Standard**: Efficient algorithms and data structures
- **Measurement**: Performance profiling and complexity analysis
- **Threshold**: O(n) or better for common operations where possible
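
For illustration, the sketch below shows a common refactor from a nested scan to a Set-based lookup; the functions and data are hypothetical.
```ts
// O(n·m): every user is compared against the whole active list
function findInactiveUsersSlow(allUsers: string[], activeUsers: string[]): string[] {
  return allUsers.filter((user) => !activeUsers.includes(user));
}

// O(n + m): one pass to build the Set, then one pass to filter
function findInactiveUsersFast(allUsers: string[], activeUsers: string[]): string[] {
  const active = new Set(activeUsers);
  return allUsers.filter((user) => !active.has(user));
}
```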
### 4. Maintainability (15%)
#### 4.1 Code Readability
- **Standard**: Code is easy to read and understand
- **Measurement**: Manual review using readability checklist
- **Threshold**: 90% compliance with readability standards
#### 4.2 Modularity
- **Standard**: Code is properly modularized with clear boundaries
- **Measurement**: Module coupling and cohesion metrics
- **Threshold**: High cohesion, low coupling across modules
#### 4.3 Extensibility
- **Standard**: Code can be extended without significant modification
- **Measurement**: Manual review of extension points
- **Threshold**: Key extension points identified and implemented
#### 4.4 Configuration vs. Hardcoding
- **Standard**: Configurable elements are not hardcoded
- **Measurement**: Hardcoded value detection
- **Threshold**: No business rules or configurable values hardcoded
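
A minimal sketch of the intent, assuming a Node-style runtime where configuration arrives through environment variables; the names and defaults are placeholders rather than project settings.
```ts
interface RetryConfig {
  maxAttempts: number; // how many times a failed request is retried
  baseDelayMs: number; // initial backoff delay between attempts
}

// Avoid: the business rule lives in the call site
//   if (attempt < 3) { scheduleRetry(500); }

// Prefer: the values come from configuration, with explicit, documented defaults
const retryConfig: RetryConfig = {
  maxAttempts: Number(process.env.RETRY_MAX_ATTEMPTS ?? 3),
  baseDelayMs: Number(process.env.RETRY_BASE_DELAY_MS ?? 500),
};
```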
#### 4.5 Technical Debt
- **Standard**: Minimal technical debt in implementation
- **Measurement**: Technical debt tracking tools
- **Threshold**: Technical debt ratio < 5%
### 5. Testing (15%)
#### 5.1 Test Coverage
- **Standard**: Comprehensive test coverage
- **Measurement**: Code coverage metrics
- **Threshold**: > 80% line coverage, 100% coverage of critical paths
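
Coverage thresholds can be enforced by the test runner so that the build fails when they are missed. A minimal sketch assuming Jest with a TypeScript config file; the critical-path directory is a placeholder.
```ts
// jest.config.ts — sketch only
import type { Config } from "jest";

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: { lines: 80 },                            // overall line-coverage floor
    "./src/payments/": { lines: 100, branches: 100 }, // placeholder critical path held to 100%
  },
};

export default config;
```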
#### 5.2 Test Quality
- **Standard**: Tests are reliable, maintainable, and meaningful
- **Measurement**: Manual review of test quality
- **Threshold**: 90% compliance with test quality checklist
#### 5.3 Test Types
- **Standard**: Appropriate mix of test types (unit, integration, e2e)
- **Measurement**: Test type distribution
- **Threshold**: Balanced test pyramid with appropriate coverage at each level
#### 5.4 Test Independence
- **Standard**: Tests are independent and deterministic
- **Measurement**: Test flakiness metrics
- **Threshold**: < 1% flaky tests
#### 5.5 Test Maintainability
- **Standard**: Tests are maintainable and follow testing best practices
- **Measurement**: Manual review of test maintainability
- **Threshold**: 90% compliance with test maintainability standards
### 6. Documentation (10%)
#### 6.1 Code Comments
- **Standard**: Code is appropriately commented
- **Measurement**: Comment quality and coverage review
- **Threshold**: All complex logic and non-obvious code commented
#### 6.2 API Documentation
- **Standard**: Complete and accurate API documentation
- **Measurement**: API documentation coverage
- **Threshold**: 100% of public APIs documented
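
For TypeScript code this usually means a TSDoc/JSDoc block on every exported symbol; the function below is a hypothetical example, not part of the BMAD codebase.
```ts
/**
 * Calculates the total price of a line item after applying a discount.
 *
 * @param unitPrice - Price of a single unit, in the account's currency
 * @param quantity - Number of units; expected to be a positive integer
 * @param discountRate - Fractional discount between 0 and 1 (e.g. 0.1 = 10%)
 * @returns The line total after the discount is applied
 * @throws RangeError if `discountRate` is outside the range [0, 1]
 */
export function discountedPrice(unitPrice: number, quantity: number, discountRate: number): number {
  if (discountRate < 0 || discountRate > 1) {
    throw new RangeError("discountRate must be between 0 and 1");
  }
  return unitPrice * quantity * (1 - discountRate);
}
```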
#### 6.3 README and Setup Documentation
- **Standard**: Clear setup and usage documentation
- **Measurement**: Documentation completeness review
- **Threshold**: All setup steps and common usage scenarios documented
#### 6.4 Architecture Documentation
- **Standard**: Component architecture and relationships documented
- **Measurement**: Architecture documentation review
- **Threshold**: All major components and interactions documented
#### 6.5 Knowledge Sharing
- **Standard**: Complex implementations include knowledge sharing
- **Measurement**: Knowledge sharing artifacts
- **Threshold**: Knowledge sharing completed for all complex features
## Quality Measurement Framework
### Quality Scoring System
Each quality dimension is scored on a 5-point scale:

| Score | Description |
|-------|-------------|
| 5 | Exceptional - Exceeds all standards |
| 4 | Strong - Meets all standards and exceeds some |
| 3 | Satisfactory - Meets all minimum standards |
| 2 | Needs Improvement - Falls short on some standards |
| 1 | Unsatisfactory - Falls short on multiple standards |
### Calculation of Overall Quality Score
The overall quality score is calculated as a weighted average of the dimension scores:
```
Overall Score = (Code Quality × 0.25) + (Functionality × 0.20) + (Performance × 0.15) +
                (Maintainability × 0.15) + (Testing × 0.15) + (Documentation × 0.10)
```
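
As a worked example, scores of 4 in Code Quality, Functionality, and Performance with 3s elsewhere give 4(0.25) + 4(0.20) + 4(0.15) + 3(0.15) + 3(0.15) + 3(0.10) = 3.6. A minimal sketch of the same calculation (the property names are illustrative):
```ts
const WEIGHTS = {
  codeQuality: 0.25,
  functionality: 0.2,
  performance: 0.15,
  maintainability: 0.15,
  testing: 0.15,
  documentation: 0.1,
} as const;

type DimensionScores = Record<keyof typeof WEIGHTS, number>;

function overallScore(scores: DimensionScores): number {
  return (Object.keys(WEIGHTS) as (keyof typeof WEIGHTS)[]).reduce(
    (total, dimension) => total + scores[dimension] * WEIGHTS[dimension],
    0,
  );
}

overallScore({
  codeQuality: 4,
  functionality: 4,
  performance: 4,
  maintainability: 3,
  testing: 3,
  documentation: 3,
}); // ≈ 3.6, which falls into the "Strong" level defined below
```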
### Quality Levels
Based on the overall score, work is classified into one of four quality levels:

| Level | Score Range | Description |
|-------|-------------|-------------|
| Exceptional | 4.5 - 5.0 | Exemplary quality that can serve as a reference |
| Strong | 3.5 - 4.4 | High-quality work that meets all standards |
| Satisfactory | 3.0 - 3.4 | Acceptable quality that meets minimum standards |
| Needs Improvement | < 3.0 | Quality issues that need to be addressed |
## Quality Assurance Process
### 1. Self-Assessment
Developers perform a self-assessment against quality standards:
- Complete the quality checklist for each dimension
- Document any known deviations from standards
- Prepare justification for any standards that cannot be met
### 2. Peer Review
Code undergoes peer review before being considered complete:
- At least one peer reviewer must approve
- Reviewer evaluates against all quality dimensions
- Reviewer provides specific feedback on areas for improvement
### 3. Automated Checks
Automated tools verify compliance with measurable standards:
- Static code analysis
- Linting and style checking
- Test coverage analysis
- Performance benchmarking
- Accessibility testing
### 4. Quality Gates
Implementation must pass quality gates at key milestones:
- **Development Complete**: Self-assessment and automated checks pass
- **Review Complete**: Peer review approval and resolution of feedback
- **Release Ready**: Final quality verification and documentation complete
## Continuous Improvement
### Quality Metrics Tracking
Quality metrics are tracked over time to identify trends:
- Team-level quality metrics dashboard
- Individual developer quality profiles
- Project-specific quality trends
### Learning and Development
Continuous improvement is supported through:
- Regular code quality workshops
- Sharing of best practices and lessons learned
- Recognition of exceptional quality work
- Targeted training for areas needing improvement
### Standards Evolution
Quality standards evolve based on:
- Retrospective feedback
- Industry best practice changes
- Technology stack evolution
- Project-specific requirements
## Appendix: Quality Checklists
### Code Quality Checklist
- [ ] Code follows architectural patterns
- [ ] Functions/components have single responsibility
- [ ] Naming is clear and consistent
- [ ] Error handling is comprehensive
- [ ] No code smells (long methods, large classes, etc.)
- [ ] No duplication of logic
- [ ] Consistent formatting and style
- [ ] No commented-out code
- [ ] No hardcoded values that should be configurable
- [ ] Appropriate use of language/framework features
### Functionality Checklist
- [ ] All requirements implemented correctly
- [ ] Edge cases handled appropriately
- [ ] Error states managed gracefully
- [ ] Accessibility requirements met
- [ ] Cross-browser/device compatibility verified
- [ ] Internationalization supported if required
- [ ] Security best practices followed
- [ ] User experience consistent with design specifications
- [ ] Performance requirements met
- [ ] Integration points work correctly
### Testing Checklist
- [ ] Unit tests cover core functionality
- [ ] Integration tests verify component interactions
- [ ] Edge cases and error conditions tested
- [ ] Performance tests for critical paths
- [ ] Accessibility tests included
- [ ] Tests are deterministic (no flakiness)
- [ ] Test descriptions are clear and meaningful
- [ ] Mocks and test doubles used appropriately
- [ ] Test coverage meets minimum thresholds
- [ ] Tests follow AAA (Arrange-Act-Assert) pattern (see the sketch below)
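
As an illustration of the AAA structure in the last item, the sketch below tests the hypothetical `discountedPrice` function from the API documentation example, assuming a Jest test environment; the module path is a placeholder.
```ts
import { describe, expect, it } from "@jest/globals";
import { discountedPrice } from "./pricing"; // hypothetical module from the earlier example

describe("discountedPrice", () => {
  it("applies a 10% discount to the line total", () => {
    // Arrange: set up inputs and the expected outcome
    const unitPrice = 20;
    const quantity = 3;
    const discountRate = 0.1;

    // Act: invoke the unit under test exactly once
    const total = discountedPrice(unitPrice, quantity, discountRate);

    // Assert: verify the observable result, not implementation details
    expect(total).toBeCloseTo(54);
  });
});
```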
---
*Last Updated: June 2025*