common-defects
Reference guide for common software defects and their characteristics.
Defect Classification System
By Origin
- Requirements Defects - Ambiguous, incomplete, or incorrect requirements
- Design Defects - Architectural flaws, poor design decisions
- Coding Defects - Implementation errors, logic mistakes
- Testing Defects - Inadequate test coverage, wrong test assumptions
- Deployment Defects - Configuration errors, environment issues
- Documentation Defects - Outdated, incorrect, or missing documentation
By Type
Logic Defects
- Algorithm Errors: Incorrect implementation of business logic
- Control Flow Issues: Wrong branching, loop errors
- Boundary Violations: Off-by-one, overflow, underflow (see the sketch after this list)
- State Management: Invalid state transitions, race conditions
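As a concrete illustration of the off-by-one boundary violation above, the hypothetical `sum_first_n` helper below silently drops its last term because of a wrong range bound; the fix adjusts only the boundary:

```python
def sum_first_n_buggy(n):
    # Off-by-one: range(1, n) stops at n - 1, silently dropping the last term.
    return sum(range(1, n))

def sum_first_n_fixed(n):
    # Correct upper bound: range(1, n + 1) includes n itself.
    return sum(range(1, n + 1))

assert sum_first_n_buggy(5) == 10   # defect: caller expected 15
assert sum_first_n_fixed(5) == 15
```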
Data Defects
- Input Validation: Missing or incorrect validation (example after this list)
- Data Corruption: Incorrect data manipulation
- Type Errors: Wrong data types, failed conversions
- Persistence Issues: Failed saves, data loss
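A minimal input-validation sketch, using a hypothetical `parse_quantity` helper: untrusted input is converted and range-checked explicitly instead of being assumed to be a well-formed positive integer:

```python
def parse_quantity(raw):
    """Validate untrusted input rather than assuming it converts cleanly."""
    try:
        value = int(raw)
    except (TypeError, ValueError):
        raise ValueError(f"quantity must be an integer, got {raw!r}")
    if value < 1:
        raise ValueError(f"quantity must be positive, got {value}")
    return value

print(parse_quantity("3"))    # 3
try:
    parse_quantity("-1")
except ValueError as exc:
    print(exc)                # quantity must be positive, got -1
```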
Interface Defects
- API Misuse: Incorrect parameter passing, wrong method calls (sketched after this list)
- Integration Errors: Component communication failures
- Protocol Violations: Incorrect message formats
- Version Incompatibility: Breaking changes not handled
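One lightweight guard against incorrect parameter passing is to make arguments keyword-only, so a caller who swaps values gets an immediate TypeError instead of silently misusing the interface. A sketch with a hypothetical `transfer` function:

```python
def transfer(*, amount, source, destination):
    # Keyword-only parameters turn swapped-argument misuse into a hard error.
    return f"moving {amount} from {source} to {destination}"

# transfer(100, "savings", "checking")  # TypeError: takes 0 positional arguments
print(transfer(amount=100, source="savings", destination="checking"))
```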
Performance Defects
- Memory Leaks: Unreleased resources
- Inefficient Algorithms: O(n²) where O(n) possible (shown in code below)
- Database Issues: N+1 queries, missing indexes
- Resource Contention: Deadlocks, bottlenecks
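The O(n²)-versus-O(n) point above in code: a duplicate finder whose membership tests rescan a list on every iteration, next to the linear version that uses a set:

```python
def find_duplicates_quadratic(items):
    # O(n^2): "item in seen" scans the whole list on every iteration.
    seen, dupes = [], []
    for item in items:
        if item in seen:
            dupes.append(item)
        else:
            seen.append(item)
    return dupes

def find_duplicates_linear(items):
    # O(n): set membership checks are constant time on average.
    seen, dupes = set(), []
    for item in items:
        if item in seen:
            dupes.append(item)
        else:
            seen.add(item)
    return dupes

print(find_duplicates_linear([1, 2, 2, 3, 3, 3]))  # [2, 3, 3]
```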
Security Defects
- Injection Flaws: SQL, XSS, command injection (example after this list)
- Authentication Issues: Weak auth, session problems
- Authorization Flaws: Privilege escalation, insecure direct object references (IDOR)
- Data Exposure: Sensitive data leaks, weak encryption
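A classic injection flaw and its fix, sketched with Python's built-in sqlite3 module: concatenating user input into the SQL string lets an attacker rewrite the query, while a parameterized query treats the same input as plain data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "nobody' OR '1'='1"

# Injection flaw: the input is spliced directly into the SQL text.
unsafe = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())               # [('admin',)] -- leaked

# Fix: a parameterized query binds the input as a value, not as SQL.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # []
```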
Severity Classification
Critical (P0)
- Definition: System unusable, data loss, security breach
- Response Time: Immediate
- Examples:
- Application crash on startup
- Data corruption or loss
- Security vulnerability actively exploited
- Complete feature failure
High (P1)
- Definition: Major feature broken, significant impact
- Response Time: Within 24 hours
- Examples:
- Core functionality impaired
- Performance severely degraded
- Workaround exists but difficult
- Affects many users
Medium (P2)
- Definition: Feature impaired, moderate impact
- Response Time: Within sprint
- Examples:
- Non-core feature broken
- Easy workaround available
- Cosmetic issues with functional impact
- Affects some users
Low (P3)
- Definition: Minor issue, minimal impact
- Response Time: Next release
- Examples:
- Cosmetic issues
- Minor inconvenience
- Edge case scenarios
- Documentation errors
Root Cause Categories
Development Process
- Inadequate Requirements: Missing acceptance criteria
- Poor Communication: Misunderstood requirements
- Insufficient Review: Code review missed issues
- Time Pressure: Rushed implementation
Technical Factors
- Complexity: System too complex to understand fully
- Technical Debt: Accumulated shortcuts causing issues
- Tool Limitations: Development tools inadequate
- Knowledge Gap: Team lacks necessary expertise
Testing Gaps
- Missing Tests: Scenario not covered
- Wrong Assumptions: Tests based on incorrect understanding
- Environment Differences: Works in test, fails in production
- Data Issues: Test data not representative
Organizational Issues
- Process Failures: Procedures not followed
- Resource Constraints: Insufficient time/people
- Training Gaps: Team not properly trained
- Culture Issues: Quality not prioritized
Detection Methods
Static Analysis
- Code Review: Manual inspection by peers
- Linting: Automated style and error checking
- Security Scanning: Static application security testing (SAST) tools
- Complexity Analysis: Cyclomatic complexity metrics (see the sketch below)
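A rough sketch of what a complexity metric counts, using Python's ast module to tally branch points; production analyzers such as SonarQube derive cyclomatic complexity from the full control-flow graph, so treat this only as an approximation:

```python
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try,
                ast.BoolOp, ast.IfExp, ast.ExceptHandler)

def approximate_complexity(source):
    """Crude cyclomatic-complexity proxy: 1 plus the number of branch points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(approximate_complexity(snippet))  # 3: one base path plus two decisions
```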
Dynamic Analysis
- Unit Testing: Component-level testing (example after this list)
- Integration Testing: Component interaction testing
- System Testing: End-to-end testing
- Performance Testing: Load and stress testing
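A minimal unit-test sketch in Pytest style, covering both the happy path and an error path of a hypothetical `divide` function; saved as e.g. `test_divide.py` and run with `pytest`:

```python
import pytest

def divide(numerator, denominator):
    if denominator == 0:
        raise ValueError("denominator must be non-zero")
    return numerator / denominator

def test_divide_happy_path():
    assert divide(10, 4) == 2.5

def test_divide_rejects_zero_denominator():
    with pytest.raises(ValueError):
        divide(10, 0)
```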
Runtime Monitoring
- Error Tracking: Sentry, Rollbar
- APM Tools: Application performance monitoring
- Log Analysis: Centralized logging (see the sketch below)
- User Reports: Bug reports from users
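A minimal stand-in for centralized error tracking, using only the standard-library logging module rather than the Sentry or Rollbar APIs: the failure is logged with its traceback so a log-analysis pipeline can group and count occurrences:

```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
logger = logging.getLogger("checkout")

def charge_card(order_id, amount):
    try:
        raise TimeoutError("payment gateway did not respond")  # simulated failure
    except TimeoutError:
        # exc_info=True attaches the traceback so errors can be grouped later.
        logger.error("charge failed order_id=%s amount=%s",
                     order_id, amount, exc_info=True)

charge_card("ord-42", 19.99)
```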
Formal Methods
- Fagan Inspection: Systematic peer review
- Code Walkthroughs: Step-by-step review
- Pair Programming: Real-time review
- Test-Driven Development: Test-first approach
Prevention Strategies
Process Improvements
- Clear Requirements: Use user stories with acceptance criteria
- Design Reviews: Architecture review before coding
- Code Standards: Enforce coding guidelines
- Automated Testing: CI/CD with comprehensive tests
Technical Practices
- Defensive Programming: Validate inputs, handle errors (example after this list)
- Design Patterns: Use proven solutions
- Refactoring: Regular code improvement
- Documentation: Keep docs current
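A defensive-programming sketch built around a hypothetical `load_config` helper and a hypothetical required `timeout_seconds` key: inputs are checked up front and failures raise precise errors instead of surfacing later as a vague crash:

```python
import json
from pathlib import Path

def load_config(path):
    """Fail fast with a specific error rather than crash vaguely downstream."""
    config_path = Path(path)
    if not config_path.is_file():
        raise FileNotFoundError(f"config file not found: {config_path}")
    try:
        config = json.loads(config_path.read_text(encoding="utf-8"))
    except json.JSONDecodeError as exc:
        raise ValueError(f"{config_path} is not valid JSON: {exc}") from exc
    if "timeout_seconds" not in config:
        raise KeyError("config is missing required key 'timeout_seconds'")
    return config
```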
Team Practices
- Knowledge Sharing: Regular tech talks, documentation
- Pair Programming: Collaborative development
- Code Reviews: Mandatory peer review
- Retrospectives: Learn from mistakes
Tool Support
- Static Analyzers: SonarQube, ESLint
- Test Frameworks: Jest, Pytest, JUnit
- CI/CD Pipelines: Jenkins, GitHub Actions
- Monitoring Tools: Datadog, New Relic
Defect Metrics
Detection Metrics
- Defect Density: Defects per KLOC (thousand lines of code); see the worked example below
- Detection Rate: Defects found per time period
- Escape Rate: Defects reaching production
- Mean Time to Detect: Average detection time
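A worked example of the detection metrics with illustrative numbers: defect density normalizes the defect count by code size, and escape rate is the share of defects that reached production:

```python
# Illustrative numbers only.
total_defects_found = 48
defects_escaped_to_production = 6
lines_of_code = 32_000

defect_density = total_defects_found / (lines_of_code / 1000)   # defects per KLOC
escape_rate = defects_escaped_to_production / total_defects_found

print(f"Defect density: {defect_density:.2f} defects/KLOC")  # 1.50
print(f"Escape rate:    {escape_rate:.1%}")                  # 12.5%
```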
Resolution Metrics
- Fix Rate: Defects fixed per time period
- Mean Time to Fix: Average fix time (worked example below)
- Reopen Rate: Defects reopened after fix
- Fix Effectiveness: First-time fix success rate
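A worked example of the resolution metrics, using illustrative reported/fixed timestamps and reopen flags for four defects:

```python
from datetime import datetime

# (reported, fixed, reopened) for four illustrative defects.
defects = [
    (datetime(2024, 3, 1, 9),  datetime(2024, 3, 1, 17), False),
    (datetime(2024, 3, 2, 10), datetime(2024, 3, 4, 10), True),
    (datetime(2024, 3, 3, 8),  datetime(2024, 3, 3, 20), False),
    (datetime(2024, 3, 5, 9),  datetime(2024, 3, 6, 9),  False),
]

fix_hours = [(fixed - reported).total_seconds() / 3600
             for reported, fixed, _ in defects]
mean_time_to_fix = sum(fix_hours) / len(fix_hours)
reopen_rate = sum(reopened for _, _, reopened in defects) / len(defects)

print(f"Mean time to fix: {mean_time_to_fix:.1f} hours")  # 23.0
print(f"Reopen rate:      {reopen_rate:.0%}")             # 25%
```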
Quality Metrics
- Test Coverage: Percentage of code tested
- Code Complexity: Average cyclomatic complexity
- Technical Debt: Estimated remediation effort
- Customer Satisfaction: User-reported issues