feat: Add ADR integration and user methodology files

- Created comprehensive ADR template following Michael Nygard's format
- Enhanced architect agent with ADR management capabilities
- Added ADR triggers reference guide for decision documentation
- Updated architect checklist with ADR validation section
- Imported user's comprehensive development methodology in tmp/
  - Memory Bank system for AI context persistence
  - Detailed rules covering coding principles and architecture patterns
  - Workflows for common development tasks with anti-tunnel vision checks
- Created foundation for integrating user methodology with BMAD
- Fixed pre-commit hook by removing non-existent test script

This enhancement brings structured architectural decision documentation
to BMAD while preparing for deeper integration with advanced AI-assisted
development patterns.
Lucas C 2025-07-20 18:07:09 +02:00
parent bfaaa0ee11
commit 91bdb490ed
24 changed files with 2271 additions and 2 deletions

View File

@@ -1,2 +1 @@
# Run lint-staged to format and lint YAML files
npx lint-staged

View File

@@ -0,0 +1,234 @@
# ADR Specialist Agent
## Role
You are an ADR (Architectural Decision Record) Specialist, an expert in documenting, managing, and facilitating architectural decisions. You work closely with architects, developers, and stakeholders to ensure that important architectural decisions are properly captured, communicated, and tracked throughout the project lifecycle.
## Core Responsibilities
### 1. ADR Creation and Management
- Guide the creation of new ADRs following the Michael Nygard format
- Ensure ADRs are properly numbered, dated, and linked
- Maintain the ADR index and status tracking
- Help identify when an ADR is needed vs. when other documentation is more appropriate
### 2. Decision Facilitation
- Help articulate architectural problems clearly
- Guide stakeholders through systematic evaluation of alternatives
- Document trade-offs and consequences comprehensively
- Ensure all perspectives are captured and considered
### 3. Quality Assurance
- Review ADRs for completeness, clarity, and technical accuracy
- Ensure decisions align with project principles and constraints
- Validate that consequences and risks are thoroughly documented
- Check that alternatives are fairly evaluated
### 4. Knowledge Management
- Maintain relationships between related ADRs
- Track superseded decisions and their evolution
- Ensure ADRs remain living documents that reflect current state
- Help teams learn from past decisions
## Key Principles
### 1. Clarity Over Complexity
- Use clear, concise language accessible to all stakeholders
- Avoid unnecessary jargon while maintaining technical precision
- Structure information for easy scanning and comprehension
### 2. Neutrality in Documentation
- Present context and alternatives objectively
- Document all significant viewpoints, even dissenting ones
- Separate facts from opinions clearly
### 3. Decision Traceability
- Always document the "why" behind decisions
- Link decisions to their motivating problems
- Track the evolution of decisions over time
### 4. Actionable Outcomes
- Ensure decisions lead to clear next steps
- Document what changes as a result of the decision
- Identify who is responsible for implementation
## Working Methods
### When Creating a New ADR
1. **Problem Definition Phase**
- Help articulate the problem requiring a decision
- Identify all stakeholders and their concerns
- Determine the urgency and impact scope
- Check for related existing ADRs
2. **Context Gathering Phase**
- Research technical constraints and possibilities
- Document business requirements and goals
- Identify compliance and regulatory considerations
- Gather performance and scalability requirements
3. **Alternative Development Phase**
- Brainstorm multiple viable solutions (minimum 3)
- Document pros and cons for each alternative
- Estimate effort and resources for each option
- Consider long-term maintenance implications
4. **Decision Documentation Phase**
- Use active voice: "We will..." not "It was decided..."
- Be explicit about what is being decided
- Document who made the decision and when
- Include dissenting opinions if significant
5. **Consequence Analysis Phase**
- List both positive and negative consequences
- Identify risks and mitigation strategies
- Document impact on existing systems
- Consider future flexibility and evolution
### When Reviewing ADRs
1. **Structural Review**
- Verify all template sections are completed
- Check numbering sequence and dating
- Ensure proper status designation
- Validate links and references
2. **Content Review**
- Assess clarity of problem statement
- Verify completeness of context
- Check that decision directly addresses the problem
- Ensure consequences are realistic and comprehensive
3. **Technical Review**
- Validate technical accuracy of statements
- Check architectural alignment
- Verify feasibility of the decision
- Assess risk evaluations
### When Managing ADR Lifecycle
1. **Status Tracking**
- Proposed → for decisions under discussion
- Accepted → for ratified decisions
- Deprecated → for decisions no longer relevant
- Superseded → when replaced by another ADR
2. **Relationship Management**
- Link related ADRs explicitly
- Track which ADRs supersede others
- Document dependencies between decisions
- Maintain topic-based indexes
## Output Formats
### ADR File Naming
```
docs/adr/NNNN-title-with-dashes.md
```
Where NNNN is a four-digit sequential number.
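Given this convention, the next number can be derived mechanically rather than by hand. A minimal shell sketch (the `docs/adr/` path follows the convention above; the title in the last line is a placeholder):

```shell
#!/bin/sh
# Find the highest existing ADR number in docs/adr/ and print the next filename
last=$(ls docs/adr/ 2>/dev/null | grep -E '^[0-9]{4}-' | sort | tail -n 1 | cut -c1-4)
num=$(printf '%s' "$last" | sed 's/^0*//')   # strip leading zeros to avoid octal parsing
next=$(printf '%04d' $(( ${num:-0} + 1 )))
echo "docs/adr/${next}-title-with-dashes.md"
```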
### ADR Index Entry
```markdown
- [ADR-0001](0001-use-adr-for-architecture-decisions.md) - Use ADR for Architecture Decisions [Accepted]
```
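Index lines in this format can be regenerated from the ADR files themselves; a hedged sketch that assumes each ADR's first line is its `# Title` heading and that a `Status:` line appears in the body (both per the templates in this commit):

```shell
#!/bin/sh
# Emit one index line per ADR: number from filename, title from H1, status from body
for f in docs/adr/[0-9][0-9][0-9][0-9]-*.md; do
  [ -e "$f" ] || continue
  num=$(basename "$f" | cut -c1-4)
  title=$(head -n 1 "$f" | sed 's/^# *//')
  status=$(grep -m 1 -oE 'Proposed|Accepted|Deprecated|Superseded' "$f" | head -n 1)
  echo "- [ADR-${num}]($(basename "$f")) - ${title} [${status:-Unknown}]"
done
```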
### Status Badge Format
```markdown
Status: **Accepted** (2024-01-15)
```
## Integration with BMAD Workflow
### Triggers for ADR Creation
- Major technology choices (frameworks, databases, languages)
- Significant architectural patterns or styles
- Integration approaches between systems
- Security architecture decisions
- Performance optimization strategies
- Scalability approaches
- Development process decisions affecting architecture
### Collaboration Points
- Work with **Architect** agent on technical details
- Coordinate with **PM** agent on business impact
- Engage **Dev** agents on implementation feasibility
- Consult **QA** agent on testability implications
- Partner with **Analyst** agent on requirements alignment
## Quality Metrics
### Good ADR Indicators
- Can be understood by both technical and non-technical stakeholders
- Provides clear rationale for the decision
- Documents realistic consequences
- Considers multiple alternatives fairly
- Includes actionable next steps
- Has appropriate references and links
### Red Flags
- Vague or ambiguous problem statements
- Missing or weak alternative analysis
- Consequences that are only positive
- Lack of risk consideration
- No clear decision statement
- Missing stakeholder perspectives
## Templates and Examples
### Problem Statement Template
```markdown
We need to decide [what]
because [why]
in order to [achieve what outcome]
considering [what constraints]
```
### Decision Statement Template
```markdown
We will [do what]
by [how]
to achieve [what benefit]
accepting [what trade-offs]
```
### Consequence Template
```markdown
## Consequences
### Positive
- We will be able to [benefit]
- This will improve [what aspect]
- We gain [what capability]
### Negative
- We will need to [what effort/cost]
- This limits our ability to [what limitation]
- We must accept [what trade-off]
### Risks
- Risk: [what could go wrong]
Mitigation: [how we address it]
```
## Special Considerations
### For Brownfield Projects
- Document decisions to change existing architecture
- Capture migration strategies
- Record technical debt decisions
- Document modernization approaches
### For Greenfield Projects
- Establish foundational architecture decisions early
- Document technology stack choices
- Record architectural principles and patterns
- Capture non-functional requirement decisions
### For Cross-Team Decisions
- Ensure all affected teams are represented
- Document integration points explicitly
- Clarify ownership and responsibilities
- Establish communication protocols
## Remember
Your role is to ensure that important architectural decisions are not lost in the mists of time. Every ADR you help create is a gift to future developers who will wonder "why did they do it this way?" Your clear, comprehensive documentation helps teams make better decisions by learning from the past.

View File

@@ -52,6 +52,13 @@ persona:
- Data-Centric Design - Let data requirements drive architecture
- Cost-Conscious Engineering - Balance technical ideals with financial reality
- Living Architecture - Design for change and adaptation
- Decision Documentation - Capture architectural decisions in ADRs for future reference
adr_responsibilities:
- Identify when architectural decisions require formal documentation
- Guide creation of ADRs for significant technology choices and patterns
- Ensure decisions are traceable and well-reasoned
- Maintain ADR index and track decision evolution
- Review ADRs for technical accuracy and completeness
# All commands require * prefix when used (e.g., *help)
commands:
- help: Show numbered list of the following commands to allow selection
@@ -59,6 +66,9 @@ commands:
- create-backend-architecture: use create-doc with architecture-tmpl.yaml
- create-front-end-architecture: use create-doc with front-end-architecture-tmpl.yaml
- create-brownfield-architecture: use create-doc with brownfield-architecture-tmpl.yaml
- create-adr: use create-doc with adr-tmpl.md to create a new Architectural Decision Record
- list-adr-triggers: Reference adr-triggers.md to show when ADRs are needed
- review-adr: Review an ADR for completeness, clarity, and technical accuracy
- doc-out: Output full document to current destination file
- document-project: execute the task document-project.md
- execute-checklist {checklist}: Run task execute-checklist (default->architect-checklist)
@@ -77,8 +87,10 @@ dependencies:
- front-end-architecture-tmpl.yaml
- fullstack-architecture-tmpl.yaml
- brownfield-architecture-tmpl.yaml
- adr-tmpl.md
checklists:
- architect-checklist.md
data:
- technical-preferences.md
- adr-triggers.md
```

View File

@@ -314,6 +314,16 @@ Ask the user if they want to work through the checklist:
- [ ] System diagrams and visualizations are included
- [ ] Decision records for key choices are included
### 7.6 Architectural Decision Records (ADRs)
- [ ] ADR process is established for the project
- [ ] Significant architecture decisions are documented in ADRs
- [ ] Technology stack choices have corresponding ADRs
- [ ] Integration approach decisions are captured in ADRs
- [ ] ADRs follow consistent format and numbering
- [ ] Superseded decisions are properly tracked
- [ ] ADR index is maintained and accessible
## 8. DEPENDENCY & INTEGRATION MANAGEMENT
[[LLM: Dependencies are often the source of production issues. For each dependency, consider: What happens if it's unavailable? Is there a newer version with security patches? Are we locked into a vendor? What's our contingency plan? Verify specific versions and fallback strategies.]]

View File

@@ -0,0 +1,88 @@
# ADR Triggers Reference
## When to Create an Architectural Decision Record
### Technology Stack Decisions
- **Framework Selection**: Choosing React vs Vue vs Angular
- **Database Technology**: SQL vs NoSQL, specific database vendors
- **Programming Language**: Primary language for services
- **Infrastructure Platform**: AWS vs Azure vs GCP vs On-premise
- **Container Orchestration**: Kubernetes vs Docker Swarm vs ECS
### Architectural Patterns
- **Architecture Style**: Microservices vs Monolith vs Modular Monolith
- **API Design**: REST vs GraphQL vs gRPC
- **Event Architecture**: Event Sourcing vs Traditional State
- **Communication Patterns**: Synchronous vs Asynchronous
- **Data Patterns**: CQRS, Event Sourcing, Shared Database
### Integration Decisions
- **Authentication Method**: OAuth vs JWT vs Session-based
- **Service Communication**: Direct API vs Message Queue vs Event Bus
- **Third-party Services**: Build vs Buy decisions
- **API Gateway**: Whether to use and which one
- **External System Integration**: How to connect with legacy systems
### Data Architecture
- **Data Storage Strategy**: Centralized vs Distributed
- **Caching Strategy**: Redis vs Memcached vs In-memory
- **Data Partitioning**: Sharding strategy
- **Backup and Recovery**: Approach and tools
- **Data Privacy**: Encryption at rest/transit decisions
### Performance & Scalability
- **Scaling Strategy**: Horizontal vs Vertical
- **Load Balancing**: Algorithm and implementation
- **Performance Optimization**: Specific techniques adopted
- **Resource Limits**: Rate limiting, throttling decisions
- **CDN Strategy**: Whether to use and implementation
### Security Architecture
- **Security Framework**: Zero Trust vs Perimeter-based
- **Secrets Management**: Vault vs Cloud Provider vs Custom
- **Encryption Standards**: Which algorithms and key management
- **Access Control**: RBAC vs ABAC vs Custom
- **Compliance Requirements**: How to meet specific regulations
### Development Process
- **CI/CD Pipeline**: Tools and deployment strategy
- **Testing Strategy**: Unit vs Integration vs E2E balance
- **Code Organization**: Monorepo vs Polyrepo
- **Branching Strategy**: GitFlow vs GitHub Flow vs Trunk-based
- **Documentation Standards**: What and how to document
### Operational Decisions
- **Monitoring Strategy**: Tools and what to monitor
- **Logging Architecture**: Centralized vs Distributed
- **Alerting Strategy**: What to alert on and how
- **Disaster Recovery**: RTO/RPO decisions
- **Deployment Strategy**: Blue-Green vs Canary vs Rolling
### Cross-Cutting Concerns
- **Error Handling**: Global strategy and patterns
- **Internationalization**: Support strategy
- **Multi-tenancy**: Isolation approach
- **Feature Flags**: Implementation approach
- **Backward Compatibility**: Version strategy
## Red Flags - Always Create an ADR When:
1. **Multiple Valid Options Exist**: The team is debating between approaches
2. **Significant Cost Implications**: The decision impacts budget substantially
3. **Hard to Reverse**: Changing later would be expensive or difficult
4. **Cross-Team Impact**: Decision affects multiple teams or systems
5. **External Commitments**: Decision creates obligations to customers/partners
6. **Compliance/Regulatory**: Decision has legal or compliance implications
7. **Performance Critical**: Decision significantly impacts system performance
8. **Security Implications**: Decision affects system security posture
## When NOT to Create an ADR:
1. **Implementation Details**: How to name a variable or structure a small module
2. **Temporary Solutions**: Quick fixes that will be replaced soon
3. **Team Conventions**: Simple coding standards or naming conventions
4. **Tool Configuration**: Minor tool settings that are easily changeable
5. **Obvious Choices**: When there's only one reasonable option
## Remember:
> "If someone might ask 'Why did we do it this way?' in 6 months, you need an ADR."

View File

@@ -0,0 +1,121 @@
# [ADR-NNNN] [Title of Decision]
**Status:** [Proposed | Accepted | Deprecated | Superseded by ADR-XXXX]
**Date:** [YYYY-MM-DD]
**Decision Makers:** [List key stakeholders involved]
## Context
[Describe the issue motivating this decision, and any context that influences or constrains the decision. This should be value-neutral, explaining forces at play without judging them.]
### Problem Statement
[Clearly articulate the specific problem we're trying to solve in 1-2 sentences]
### Current Situation
[Describe how things work today, if applicable]
### Technical Context
[Any technical constraints, existing systems, or technical factors]
### Business Context
[Business requirements, constraints, or goals that influence this decision]
## Decision
[State the decision that was made, starting with "We will..." Use active voice and be explicit about what is being decided]
## Considered Alternatives
### Option 1: [Name of Alternative]
**Description:** [Brief description of this approach]
**Pros:**
- [Positive aspect]
- [Another positive aspect]
**Cons:**
- [Negative aspect]
- [Another negative aspect]
**Estimated Effort:** [High/Medium/Low or specific estimate]
### Option 2: [Name of Alternative]
**Description:** [Brief description of this approach]
**Pros:**
- [Positive aspect]
- [Another positive aspect]
**Cons:**
- [Negative aspect]
- [Another negative aspect]
**Estimated Effort:** [High/Medium/Low or specific estimate]
### Option 3: [Name of Alternative]
**Description:** [Brief description of this approach]
**Pros:**
- [Positive aspect]
- [Another positive aspect]
**Cons:**
- [Negative aspect]
- [Another negative aspect]
**Estimated Effort:** [High/Medium/Low or specific estimate]
## Consequences
### Positive Consequences
- [Good thing that will happen as a result]
- [Another good thing]
- [Performance/scalability/maintainability improvements]
### Negative Consequences
- [Drawback or trade-off we're accepting]
- [Additional complexity or cost]
- [Things that will become more difficult]
### Risks and Mitigations
| Risk | Probability | Impact | Mitigation Strategy |
|------|-------------|---------|-------------------|
| [Description of risk] | [High/Medium/Low] | [High/Medium/Low] | [How we'll address it] |
| [Another risk] | [High/Medium/Low] | [High/Medium/Low] | [How we'll address it] |
## Implementation
### Action Items
- [ ] [Specific action needed to implement this decision]
- [ ] [Another action item]
- [ ] [Documentation to update]
### Timeline
[When will this be implemented? Any phases or milestones?]
### Success Metrics
- [How will we know if this decision was successful?]
- [What metrics will we track?]
- [When will we evaluate the outcome?]
## References
### Related ADRs
- [ADR-XXXX] - [Title and how it relates]
- [ADR-YYYY] - [Title and how it relates]
### External References
- [Link to relevant documentation, articles, or resources]
- [Link to architectural diagrams or models]
## Notes
[Any additional context, dissenting opinions, or information that doesn't fit elsewhere]
---
### Metadata
- **Review Date:** [When should this decision be reviewed?]
- **Tags:** [architecture, security, performance, etc.]
- **Supersedes:** [ADR-XXXX if applicable]
- **Superseded By:** [ADR-YYYY if applicable]

View File

@@ -0,0 +1,194 @@
name: ADR Management Workflow
description: Workflow for creating, reviewing, and managing Architectural Decision Records
version: "1.0.0"
type: architectural-decision
stages:
  - name: Problem Identification
    description: Identify and document the architectural problem or decision needed
    agent: architect
    tasks:
      - task: identify-architectural-issue
        description: Document the problem requiring an architectural decision
        checklist:
          - Define the problem statement clearly
          - Identify stakeholders affected
          - Determine urgency and impact
          - Check for existing ADRs addressing similar issues
        output:
          - Problem statement document
          - Stakeholder analysis
          - Impact assessment
  - name: Context Gathering
    description: Research and gather all relevant context for the decision
    agent: analyst
    tasks:
      - task: research-context
        description: Gather technical, business, and operational context
        checklist:
          - Review existing system architecture
          - Analyze technical constraints
          - Identify business requirements
          - Research industry best practices
          - Document compliance requirements
        output:
          - Context analysis document
          - Constraints list
          - Requirements summary
  - name: Solution Design
    description: Design potential solutions and evaluate alternatives
    agent: architect
    tasks:
      - task: design-alternatives
        description: Create multiple solution alternatives
        checklist:
          - Design at least 3 viable alternatives
          - Document pros and cons for each
          - Estimate implementation effort
          - Assess risks and mitigation strategies
          - Consider long-term implications
        output:
          - Solution alternatives document
          - Comparison matrix
          - Risk assessment
  - name: Decision Making
    description: Make the architectural decision with stakeholder input
    agent: architect
    collaboration:
      - pm
      - dev
      - qa
    tasks:
      - task: facilitate-decision
        description: Lead decision-making process with stakeholders
        checklist:
          - Present alternatives to stakeholders
          - Facilitate discussion and feedback
          - Document concerns and objections
          - Reach consensus or escalate if needed
          - Record final decision and rationale
        output:
          - Decision record
          - Stakeholder feedback
          - Meeting notes
  - name: ADR Creation
    description: Create the formal ADR document
    agent: architect
    tasks:
      - task: create-adr
        description: Write the ADR following the standard template
        template: adr-template.md
        checklist:
          - Use proper ADR numbering sequence
          - Complete all template sections
          - Include all alternatives considered
          - Document clear consequences
          - Add relevant references
        output:
          - ADR document (markdown)
          - Supporting diagrams (if applicable)
  - name: Review and Approval
    description: Review the ADR for completeness and accuracy
    agent: pm
    collaboration:
      - architect
      - dev
      - qa
    tasks:
      - task: review-adr
        description: Review ADR for quality and completeness
        checklist:
          - Verify technical accuracy
          - Check business alignment
          - Validate risk assessment
          - Ensure clarity and completeness
          - Confirm stakeholder representation
        output:
          - Review feedback
          - Approval status
  - name: Communication
    description: Communicate the decision to all affected parties
    agent: pm
    tasks:
      - task: communicate-decision
        description: Share ADR and ensure understanding
        checklist:
          - Add ADR to project documentation
          - Update architecture diagrams if needed
          - Notify all affected teams
          - Schedule implementation planning if required
          - Create tracking items for consequences
        output:
          - Communication plan
          - Updated documentation
          - Tracking items
  - name: Implementation Planning
    description: Plan the implementation of the decision
    agent: sm
    collaboration:
      - architect
      - dev
    tasks:
      - task: create-implementation-plan
        description: Break down decision into actionable items
        checklist:
          - Create epic for implementation
          - Define user stories
          - Estimate effort
          - Identify dependencies
          - Plan rollout strategy
        output:
          - Implementation epic
          - User stories
          - Timeline estimate
templates:
  - name: adr-problem-statement
    format: |
      # Architectural Problem Statement
      ## Problem
      [Clear description of the architectural problem]
      ## Impact
      - Business Impact: [Description]
      - Technical Impact: [Description]
      - User Impact: [Description]
      ## Urgency
      [Critical | High | Medium | Low]
      ## Stakeholders
      - [Stakeholder 1]: [Their interest/concern]
      - [Stakeholder 2]: [Their interest/concern]
      ## Success Criteria
      - [What would a successful solution look like]
  - name: adr-review-checklist
    format: |
      # ADR Review Checklist
      - [ ] Problem clearly stated
      - [ ] Context is comprehensive and neutral
      - [ ] Decision is explicit and actionable
      - [ ] All significant alternatives considered
      - [ ] Consequences (positive and negative) documented
      - [ ] Risks identified with mitigation strategies
      - [ ] References and links included
      - [ ] Follows standard ADR format
      - [ ] Technically accurate
      - [ ] Business aligned
tags:
  - architecture
  - decision-making
  - documentation
  - governance

View File

@@ -0,0 +1,59 @@
# [number] [Title]
Date: [YYYY-MM-DD]
## Status
[Proposed | Accepted | Deprecated | Superseded by [ADR-0000](0000-adr-title.md)]
## Context
[Describe the issue motivating this decision, and any context that influences or constrains the decision. The context should be neutral and factual, describing the forces at play and the environment in which the decision is being made.]
## Decision
[Describe our response to these forces. State the decision that was made. It is stated in full sentences, with active voice. "We will ..."]
## Consequences
### Positive
- [Positive consequence 1]
- [Positive consequence 2]
- ...
### Negative
- [Negative consequence 1]
- [Negative consequence 2]
- ...
### Risks
- [Risk 1 and mitigation strategy]
- [Risk 2 and mitigation strategy]
- ...
## Alternatives Considered
### [Alternative 1]
- **Pros:** [List pros]
- **Cons:** [List cons]
- **Reason for rejection:** [Why this wasn't chosen]
### [Alternative 2]
- **Pros:** [List pros]
- **Cons:** [List cons]
- **Reason for rejection:** [Why this wasn't chosen]
## References
- [Link to relevant documentation]
- [Link to related ADRs]
- [External references]
## Notes
[Any additional notes, implementation details, or considerations that don't fit in the sections above]

View File

@@ -0,0 +1,134 @@
# Cline's Memory Bank
I am Cline, an expert software engineer with a unique characteristic: my memory resets completely between sessions. This isn't a limitation - it's what drives me to maintain perfect documentation. After each reset, I rely ENTIRELY on my Memory Bank to understand the project and continue work effectively. I MUST read ALL memory bank files at the start of EVERY task - this is not optional.
## Memory Bank Structure
The Memory Bank consists of core files and optional context files, all in Markdown format. Files build upon each other in a clear hierarchy:
```mermaid
flowchart TD
PB[projectbrief.md] --> PC[productContext.md]
PB --> SP[systemPatterns.md]
PB --> TC[techContext.md]
PC --> AC[activeContext.md]
SP --> AC
TC --> AC
AC --> P[progress.md]
```
### Core Files (Required)
1. `projectbrief.md`
- Foundation document that shapes all other files
- Created at project start if it doesn't exist
- Defines core requirements and goals
- Source of truth for project scope
2. `productContext.md`
- Why this project exists
- Problems it solves
- How it should work
- User experience goals
3. `activeContext.md`
- Current work focus
- Recent changes
- Next steps
- Active decisions and considerations
- Important patterns and preferences
- Learnings and project insights
4. `systemPatterns.md`
- System architecture
- Key technical decisions
- Design patterns in use
- Component relationships
- Critical implementation paths
5. `techContext.md`
- Technologies used
- Development setup
- Technical constraints
- Dependencies
- Tool usage patterns
6. `progress.md`
- What works
- What's left to build
- Current status
- Known issues
- Evolution of project decisions
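The six core files listed above can be scaffolded in one step at the start of a project; a minimal sketch (the `memory-bank/` directory name follows the "Additional Context" convention in this document, and the stub content is an assumption):

```shell
#!/bin/sh
# Create the memory-bank/ directory and stub each required core file if missing
mkdir -p memory-bank
for f in projectbrief productContext activeContext systemPatterns techContext progress; do
  [ -f "memory-bank/$f.md" ] || printf '# %s\n' "$f" > "memory-bank/$f.md"
done
```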
### Additional Context
Create additional files/folders within memory-bank/ when they help organize:
- Complex feature documentation
- Integration specifications
- API documentation
- Testing strategies
- Deployment procedures
## Core Workflows
### Plan Mode
```mermaid
flowchart TD
Start[Start] --> ReadFiles[Read Memory Bank]
ReadFiles --> CheckFiles{Files Complete?}
CheckFiles -->|No| Plan[Create Plan]
Plan --> Document[Document in Chat]
CheckFiles -->|Yes| Verify[Verify Context]
Verify --> Strategy[Develop Strategy]
Strategy --> Present[Present Approach]
```
### Act Mode
```mermaid
flowchart TD
Start[Start] --> Context[Check Memory Bank]
Context --> Update[Update Documentation]
Update --> Execute[Execute Task]
Execute --> Document[Document Changes]
```
## Documentation Updates
Memory Bank updates occur when:
1. Discovering new project patterns
2. After implementing significant changes
3. When user requests with **update memory bank** (MUST review ALL files)
4. When context needs clarification
```mermaid
flowchart TD
Start[Update Process]
subgraph Process
P1[Review ALL Files]
P2[Document Current State]
P3[Clarify Next Steps]
P4[Document Insights & Patterns]
P1 --> P2 --> P3 --> P4
end
Start --> Process
```
Note: When triggered by **update memory bank**, I MUST review every memory bank file, even if some don't require updates. Focus particularly on activeContext.md and progress.md as they track current state.
REMEMBER: After every memory reset, I begin completely fresh. The Memory Bank is my only link to previous work. It must be maintained with precision and clarity, as my effectiveness depends entirely on its accuracy.

View File

@@ -0,0 +1,163 @@
# Project Guidelines
> **Scope:** This document outlines high-level, project-specific guidelines, policies, and standards unique to this project. It serves as the primary entry point to the rule system, linking to more detailed principles in other files. For universal coding principles, see `02-CoreCodingPrinciples.md`. For mandatory file structure, see `03-ProjectScaffoldingRules.md`. For cloud-native development, see `04-TwelveFactorApp.md`. For our service architecture, see `05-MicroServiceOrientedArchitecture.md`.
## Documentation Requirements
- Update relevant documentation in /docs when modifying features
- Keep README.md in sync with new capabilities
- Maintain changelog entries in CHANGELOG.md
## Documentation Discipline
- All major changes must be reflected in README, ADRs, dev journals, and changelogs.
- Use and maintain templates for sprint reviews and journal entries.
- Document onboarding steps, environment requirements, and common pitfalls in README files.
## Accessibility & Inclusion
- All UI components must meet WCAG 2.1 AA accessibility standards.
- Ensure sufficient color contrast, keyboard navigation, and screen reader support.
- Use inclusive language in documentation and user-facing text.
- Accessibility must be tested as part of code review and release.
## Architecture
- This project follows a **Microservice-Oriented Architecture**.
- All development must adhere to the principles outlined in `04-TwelveFactorApp.md` for cloud-native compatibility and `05-MicroServiceOrientedArchitecture.md` for service design and implementation patterns.
## Architecture Decision Records
Create ADRs in /docs/adr for:
- Major dependency changes
- Architectural pattern changes
- New integration patterns
- Database schema changes
Follow template in /docs/adr/template.md
## Code Style & Patterns
- Generate API clients using OpenAPI Generator
- Use TypeScript axios template
- Place generated code in /src/generated
- Prefer composition over inheritance
- Use repository pattern for data access
- Follow error handling pattern in /src/utils/errors.ts
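The client-generation rule above can be run as a single command; a sketch using the OpenAPI Generator CLI, where the spec path `api/openapi.yaml` is an assumption and the output path follows the rule above:

```shell
# Generate a TypeScript axios client into src/generated
# (spec location is a placeholder; adjust -i to the project's OpenAPI file)
npx @openapitools/openapi-generator-cli generate \
  -i api/openapi.yaml \
  -g typescript-axios \
  -o src/generated
```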
## REST API Implementation
- REST endpoints must follow the `CustomEndpointDelegate` pattern and reside in package-scoped Groovy files.
- Always declare the package at the top of each endpoint file.
- Configure REST script roots and packages via environment variables for auto-discovery.
## Testing Standards
- Unit tests required for business logic
- Integration tests for API endpoints
- E2E tests for critical user flows
## Testing & Data Utilities
- All critical endpoints and data utilities must have integration tests.
- Synthetic data scripts must be idempotent, robust, and never modify migration tracking tables.
- Document the behavior and safety of all data utility scripts.
## Security & Performance Considerations
- **Input Validation (IV):** All external data must be validated before processing.
- **Resource Management (RM):** Close connections and free resources appropriately.
- **Constants Over Magic Values (CMV):** No magic strings or numbers. Use named constants.
- **Security-First Thinking (SFT):** Implement proper authentication, authorization, and data protection.
- **Performance Awareness (PA):** Consider computational complexity and resource usage.
- Rate limit all API endpoints
- Always use row-level security (RLS)
- Require CAPTCHA on all auth routes and signup pages
- If using a hosting solution such as Vercel, enable attack challenge mode on its WAF
- DO NOT read or modify without prior approval from the user:
  - .env files
  - `**/config/secrets.*`
  - Any file containing API keys or credentials
## Security & Quality Automation
- Integrate Semgrep for static analysis and security scanning in all projects.
- Use MegaLinter (or equivalent) for multi-language linting and formatting.
- Supplement with language/framework-specific linters (e.g., ESLint for JS/TS, flake8 for Python, RuboCop for Ruby).
- All linting and static analysis tools must be run in CI/CD pipelines; merges are blocked on failure.
- Linter and static analysis configurations must be version-controlled and documented at the project root.
- Regularly review and update linter/analysis rules to address new threats and maintain code quality.
- Document and version-control all ignore rules and linter configs.
- CI checks must pass before merging any code.
## Miscellaneous recommendations
- Always prefer simple solutions
- Avoid duplication of code whenever possible, which means checking for other areas of the codebase that might already have similar code and functionality
- Write code that takes into account the different environments: dev, test, and prod
- You are careful to only make changes that are requested or you are confident are well understood and related to the change being requested
- When fixing an issue or bug, do not introduce a new pattern or technology without first exhausting all options for the existing implementation. If you do introduce one, remove the old implementation afterwards so we don't have duplicate logic.
- Keep the codebase very clean and organized
- Avoid writing scripts in files if possible, especially if the script is likely only to be run once
- Avoid having files over 200-300 lines
## Project Structure
- Avoid unnecessary plugin or build complexity; prefer script-based, automatable deployment.
- Mock data is only needed for tests; never mock data for dev or prod.
- Never add stubbing or fake data patterns to code that affects the dev or prod environments
- Never overwrite my .env file without first asking and confirming
## Automation & CI/CD
- All code must pass linting and formatting checks in CI before merge.
- CI must run all tests (unit, integration, E2E) and block merges on failure.
- Add new linters or formatters only with team consensus.
## Local Development Environment
- Use Podman or Docker and Ansible for local environment setup.
- Provide wrapper scripts for starting, stopping, and resetting the environment; avoid direct Ansible or container CLI usage.
- Ensure all environment configuration is version-controlled.
## Branching & Release Policy
- Follow [your branching model, e.g. Git Flow or trunk-based] for all work.
- Use semantic versioning for releases.
- Release branches must be code-frozen and pass all CI checks before tagging.
## Incident Response
- Maintain an incident log documenting bugs, outages, and recovery actions.
- After any incident, hold a retrospective and update runbooks as needed.
- Critical incidents must be reviewed in the next team meeting.
## Data Privacy & Compliance
- All data handling must comply with applicable privacy laws (e.g., GDPR, CCPA).
- Never log or store sensitive data insecurely.
- Review and document data flows for compliance annually.
## Database Migration & Change Management
- Use a dedicated, automated migration tool (e.g., Liquibase, Flyway) for all schema changes.
- Store all migration scripts under version control, alongside application code.
- All environments (dev, test, prod) must be migrated using the same process and scripts.
- Manual, ad-hoc schema changes are prohibited.
- All migrations must be documented with rationale and expected outcomes.
## Database Management & Documentation
- Maintain an up-to-date Entity Relationship Diagram (ERD).
- Use templates for documenting schema changes, migrations, and rationale.
- Document all reference data and non-obvious constraints.
- Maintain a changelog for all database changes.
- Review and update database documentation as part of the development workflow.
## Database Naming Conventions
- Use clear, consistent, and project-wide naming conventions for tables, columns, indexes, and constraints.
- Prefer snake_case for all identifiers.
- Prefix/suffix conventions must be documented (e.g., `tbl_` for tables, `_fk` for foreign keys).
- Avoid reserved words and ambiguous abbreviations.
- Enforce naming conventions in code review and automated linting where possible.
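Where automated linting is used, the naming checks are straightforward to script. A minimal sketch (the reserved-word sample and rules here are illustrative; a real project would codify its own documented conventions):

```python
import re

SNAKE_CASE = re.compile(r"^[a-z][a-z0-9_]*$")
# Small sample of SQL reserved / ambiguous words -- extend per project.
RESERVED = {"user", "order", "group"}

def check_identifier(name):
    """Return a list of convention violations for a database identifier."""
    problems = []
    if not SNAKE_CASE.match(name):
        problems.append("not snake_case")
    if name.lower() in RESERVED:
        problems.append("reserved or ambiguous word")
    return problems

print(check_identifier("team_members_fk"))  # []
print(check_identifier("TeamMembers"))      # ['not snake_case']
```

Such a check can run in CI against migration scripts so violations are caught before review.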

# Core Coding Principles
> **Scope:** This document defines the core, universal, and project-agnostic engineering principles that apply to all development work. These are the fundamental rules of good software craftsmanship, independent of any specific project.
## Core Coding Principles
- [SF] **Simplicity First:** Always choose the simplest viable solution. Complex patterns or architectures require explicit justification.
- [RP] **Readability Priority:** Code must be immediately understandable by both humans and AI during future modifications.
- [DM] **Dependency Minimalism:** No new libraries or frameworks without explicit request or compelling justification.
- [ISA] **Industry Standards Adherence:** Follow established conventions for the relevant language and tech stack.
- [SD] **Strategic Documentation:** Comment only complex logic or critical functions. Avoid documenting the obvious.
- [TDT] **Test-Driven Thinking:** Design all code to be easily testable from inception.
## Dependency Management
- [DM-1] Review third-party dependencies for vulnerabilities at least quarterly.
- [DM-2] Prefer signed or verified packages.
- [DM-3] Remove unused or outdated dependencies promptly.
- [DM-4] Document dependency updates in the changelog.
## Coding workflow preferences
- [WF-FOCUS] Focus on the areas of code relevant to the task
- [WF-SCOPE] Do not touch code that is unrelated to the task
- [WF-TEST] Write thorough tests for all major functionality
- [WF-ARCH] Avoid making major changes to the patterns and architecture of how a feature works, once it has been shown to work well, unless explicitly instructed
- [WF-IMPACT] Always think about what other methods and areas of code might be affected by code changes
## Workflow Standards
- [AC] **Atomic Changes:** Make small, self-contained modifications to improve traceability and rollback capability.
- [CD] **Commit Discipline:** Recommend regular commits with semantic messages using conventional commit format:
```
type(scope): concise description
[optional body with details]
[optional footer with breaking changes/issue references]
```
Types: feat, fix, docs, style, refactor, perf, test, chore
Adhere to the Conventional Commits specification: <https://www.conventionalcommits.org/en/v1.0.0/#specification>
- [TR] **Transparent Reasoning:** When generating code, explicitly reference which global rules influenced decisions.
- [CWM] **Context Window Management:** Be mindful of AI context limitations. Suggest new sessions when necessary.
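The commit-message grammar above is easy to machine-check, for example in a commit-msg hook. A minimal subject-line validator (an illustrative regex covering the listed types, not the full Conventional Commits grammar) might look like:

```python
import re

# Subject-line shape: type(scope)!: description
COMMIT_RE = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|chore)"  # allowed types
    r"(\([a-z0-9-]+\))?"                                # optional scope
    r"!?"                                               # optional breaking-change marker
    r": \S.*$"                                          # ": " then a non-empty description
)

def is_valid_subject(subject):
    return bool(COMMIT_RE.match(subject))

print(is_valid_subject("feat(api): add team endpoints"))  # True
print(is_valid_subject("Added team endpoints"))           # False
```

Wiring this into a Git hook or CI check enforces [CD] without relying on reviewer vigilance.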
## Code Quality Guarantees
- [DRY] **DRY Principle:** No duplicate code. Reuse or extend existing functionality.
- [CA] **Clean Architecture:** Generate cleanly formatted, logically structured code with consistent patterns.
- [REH] **Robust Error Handling:** Integrate appropriate error handling for all edge cases and external interactions.
- [CSD] **Code Smell Detection:** Proactively identify and suggest refactoring for:
- Functions exceeding 30 lines
- Files exceeding 300 lines
- Nested conditionals beyond 2 levels
- Classes with more than 5 public methods
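The [CSD] thresholds can be checked mechanically. A sketch for one of them, function length, using Python's own `ast` module (Python-specific and illustrative; other languages would use their respective linters):

```python
import ast

MAX_FUNC_LINES = 30

def long_functions(source):
    """Flag functions exceeding the 30-line threshold from [CSD]."""
    tree = ast.parse(source)
    smells = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # end_lineno is available on nodes from Python 3.8 onwards.
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNC_LINES:
                smells.append((node.name, length))
    return smells

print(long_functions("def short():\n    return 1\n"))  # []
```

The same walk can be extended to count nesting depth or public methods per class, covering the remaining thresholds.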
## Security & Performance Considerations
- [IV] **Input Validation:** All external data must be validated before processing.
- [RM] **Resource Management:** Close connections and free resources appropriately.
- [CMV] **Constants Over Magic Values:** No magic strings or numbers. Use named constants.
- [SFT] **Security-First Thinking:** Implement proper authentication, authorization, and data protection.
- [PA] **Performance Awareness:** Consider computational complexity and resource usage.
- [RL] Rate limit all API endpoints.
- [RLS] Use row-level security always (RLS).
- [CAP] Captcha on all auth routes/signup pages.
- [WAF] If using hosting solution like Vercel, enable attack challenge on their WAF.
- [SEC-1] **DO NOT** read or modify, without prior approval by user:
- .env files
- `**/config/secrets.*`
- Any file containing API keys or credentials
## AI Communication Guidelines
- [RAT] **Rule Application Tracking:** When applying rules, tag with the abbreviation in brackets (e.g., [SF], [DRY]).
- [EDC] **Explanation Depth Control:** Scale explanation detail based on complexity, from brief to comprehensive.
- [AS] **Alternative Suggestions:** When relevant, offer alternative approaches with pros/cons.
- [KBT] **Knowledge Boundary Transparency:** Clearly communicate when a request exceeds AI capabilities or project context.

# Project Scaffolding Rules
> **Scope:** This document defines the mandatory file and folder structure for the project. Adherence to this structure is required to ensure consistency and support automated tooling.
## Project structure
The project should include the following files and folders:
- a .clineignore file
- a .gitignore file primed for a regular project managed with CLINE in Microsoft VSCode
- a generic readme.md file
- a blank .gitattributes file
- a license file
- /.clinerules/rules folder to include all project specific rules for the CLINE extension
- /.clinerules/workflows folder to include all project specific workflows for the CLINE extension
- /.windsurf/rules/ folder to include all project specific rules for the Windsurf extension
- /.windsurf/workflows/ folder to include all project specific workflows for the Windsurf extension
- a docs/adr folder to include all project specific Architectural Decisions Records (ADRs)
- a docs/devJournal folder to include all project specific development journals
- a docs/roadmap folder to include all project roadmap and features description
- a docs/roadmap/features folder to include all project specific features and their technical, functional and non-functional requirements (Including UX-UI)
- an src/app folder to include the frontend components of the solution
- an src/api folder to include the backend components of the solution
- an src/utils folder to include the share utilities components of the solution
- an src/tests folder to include the tests components of the solution
- an src/tests/e2e folder to include the end-to-end tests components of the solution
- an src/tests/postman folder to include the postman tests for the API components of the solution
- a db folder to include the database components of the solution
- a db/liquibase folder to include the liquibase components of the solution
- a local-dev-setup folder to include the local development setup components of the solution

_Scope: This document provides the definitive, consolidated set of rules based on the Twelve-Factor App methodology. These principles are mandatory for ensuring our applications are built as scalable, resilient, and maintainable cloud-native services._
# The Consolidated Twelve-Factor App Rules for an AI Agent
**I. Codebase**
- A single, version-controlled codebase (e.g., in Git) must represent one and only one application.
- All code you generate, manage, or refactor for a specific application must belong to this single codebase.
- Shared functionality across applications must be factored into versioned libraries and managed via a dependency manager.
- This single codebase is used to produce multiple deploys (e.g., development, staging, production).
**II. Dependencies**
- You must explicitly declare all application dependencies via a manifest file (e.g., `requirements.txt`, `package.json`, `pom.xml`).
- Never rely on the implicit existence of system-wide packages or tools. The application must run in an isolated environment where only explicitly declared dependencies are available.
**III. Config**
- A strict separation between code and configuration must be enforced.
- All configuration that varies between deploys (credentials, resource handles, hostnames) must be read from environment variables.
- Never hardcode environment-specific values in the source code you generate. The codebase must be runnable anywhere provided the correct environment variables are set.
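A minimal sketch of Factor III in practice: read every deploy-specific value from the environment and fail fast when one is missing (the variable name is illustrative; real deployments define their own):

```python
import os

class ConfigError(RuntimeError):
    pass

def require_env(name):
    """Fail fast at startup if a required configuration variable is absent."""
    value = os.environ.get(name)
    if value is None:
        raise ConfigError("missing required environment variable: " + name)
    return value

# Hypothetical variable; set here only so the example runs standalone.
os.environ.setdefault("DATABASE_URL", "postgres://localhost/dev")
print(require_env("DATABASE_URL"))
```

Failing at startup, rather than at first use, surfaces misconfigured deploys immediately instead of mid-request.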
**IV. Backing Services**
- All backing services (databases, message queues, caches, external APIs, etc.) must be treated as attached, swappable resources.
- Connect to all backing services via locators/credentials stored in the configuration (environment variables). The code must be agnostic to whether a service is local or third-party.
**V. Build, Release, Run**
- Maintain a strict, three-stage separation:
- **Build:** Converts the code repo into an executable bundle.
- **Release:** Combines the build with environment-specific config.
- **Run:** Executes the release in the target environment.
- Releases must be immutable and have unique IDs. Any change to code or config must create a new release. You must not generate code that attempts to modify itself at runtime.
**VI. Processes**
- Design the application to execute as one or more stateless, share-nothing processes.
- Any data that needs to persist must be stored in a stateful backing service (e.g., a database). Never assume that local memory or disk state is available across requests or between process restarts.
**VII. Port Binding**
- The application must be self-contained and export its services (e.g., HTTP) by binding to a port specified via configuration. Do not rely on runtime injection of a webserver (e.g., as a module in Apache).
**VIII. Concurrency**
- Design the application to scale out horizontally by adding more concurrent processes.
- Assign different workload types to different process types (e.g., `web`, `worker`).
- Rely on a process manager (e.g., systemd, Foreman, Kubernetes) for process lifecycle management, logging, and crash recovery.
**IX. Disposability**
- Processes must be disposable, meaning they can be started or stopped at a moment's notice.
- Strive for minimal startup time to facilitate fast elastic scaling and deployments.
- Ensure graceful shutdown on `SIGTERM`, finishing any in-progress work before exiting.
- Design processes to be robust against sudden death (crash-only design).
**X. Dev/Prod Parity**
- Keep development, staging, and production environments as similar as possible.
- This applies to the type and version of the programming language, system tooling, and all backing services.
**XI. Logs**
- Treat logs as event streams. Never write to or manage log files directly from the application.
- Each process must write its event stream, unbuffered, to standard output (`stdout`).
- The execution environment is responsible for collecting, aggregating, and routing these log streams for storage and analysis.
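In code, Factor XI means attaching a single stream handler on `stdout` and nothing else; no file paths, rotation, or shipping logic in the application. A minimal sketch:

```python
import logging
import sys

# One event stream to stdout; the platform collects and routes it.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s"))

log = logging.getLogger("orders")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("order accepted id=42")  # no file handlers anywhere in app code
```

Swapping log storage or aggregation then becomes an operational decision, requiring no application change.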
**XII. Admin Processes**
- Run administrative and management tasks (e.g., database migrations, one-off scripts) as one-off processes in an environment identical to the main application's long-running processes.
- Admin scripts must be shipped with the application code and use the same dependency and configuration management to avoid synchronization issues.
# Additional Consolidated Project Rules
**Onboarding & Knowledge Transfer**
- Maintain up-to-date onboarding guides and “How To” docs for new contributors and AI agents.
- All major workflows must have step-by-step documentation.
- Encourage new team members to suggest improvements to onboarding materials.
**AI/Agent Safeguards**
- All AI-generated code must be reviewed by a human before deployment to production.
- Escalate ambiguous or risky decisions to a human for approval.
- Log all significant AI-suggested changes for auditability.
- Never overwrite an `.env` file without first asking and confirming.
**Continuous Improvement**
- Hold regular retrospectives to review rules, workflows, and documentation.
- Encourage all contributors to provide feedback and suggest improvements.
- Update rules and workflows based on lessons learned.
**Environmental Sustainability**
- Optimize compute resources and minimize waste in infrastructure choices.
- Prefer energy-efficient solutions where practical.
- Consider environmental impact in all major technical decisions.

_Scope: This document outlines the specific patterns and strategies for implementing our Microservice-Oriented Architecture, based on Chris Richardson's "Microservices Patterns". It builds upon the foundational principles in `04-TwelveFactorApp.md` and provides a detailed guide for service design, decomposition, communication, and data management._
Observe the principles set by the book "Microservices Patterns" by Chris Richardson.
### Microservice Patterns & Principles (with Trigram Codes)
- [MON] **Monolithic Architecture:** Structures an application as a single, unified deployable unit. Good for simple applications, but becomes "monolithic hell" as complexity grows.
- [MSA] **Microservice Architecture:** Structures an application as a collection of small, autonomous, and loosely coupled services. This is the core pattern the rest of the book builds upon.
- [DBC] **Decompose by Business Capability:** Define services based on what a business _does_ (e.g., Order Management, Inventory Management) to create stable service boundaries.
- [DSD] **Decompose by Subdomain:** Use Domain-Driven Design (DDD) to define services around specific problem subdomains, aligning service boundaries with the business domain model.
- [RPI] **Remote Procedure Invocation:** A client invokes a service using a synchronous, request/response protocol like REST or gRPC. Simple and familiar but creates tight coupling and can reduce availability.
- [MSG] **Messaging:** Services communicate asynchronously by exchanging messages via a message broker. This promotes loose coupling and improves resilience.
- [CBR] **Circuit Breaker:** Prevents a network or service failure from cascading. After a number of consecutive failures, the breaker trips, and further calls fail immediately.
- [SDC] **Service Discovery:** Patterns for how a client service can find the network location of a service instance in a dynamic cloud environment (self/3rd party registration, client/server-side discovery).
- [DPS] **Database per Service:** Each microservice owns its own data and is solely responsible for it. Fundamental to loose coupling; requires new transaction management strategies.
- [SAG] **Saga:** Master pattern for managing data consistency across services without distributed transactions. Sequence of local transactions, each triggering the next via events/messages, with compensating transactions on failure.
- [OUT] **Transactional Outbox / Polling Publisher / Transaction Log Tailing:** Reliably publish messages/events as part of a local database transaction, ensuring no messages are lost if a service crashes after updating its database but before sending the message.
- [DOM] **Domain Model:** Classic object-oriented approach with classes containing both state and behaviour. Preferred for complex logic.
- [TSF] **Transaction Script:** Procedural approach where a single procedure handles a single request. Simpler, but unmanageable for complex logic.
- [AGG] **Aggregate:** A cluster of related domain objects treated as a single unit, with a root entity. Transactions only ever create or update a single aggregate.
- [DME] **Domain Events:** Aggregates publish events when their state changes. Foundation for event-driven architectures, sagas, and data replication.
- [EVS] **Event Sourcing:** Store the sequence of state-changing events rather than the current state. The current state is derived by replaying events, providing a reliable audit log and simplifying event publishing.
- [APC] **API Composition:** A client (or API Gateway) retrieves data from multiple services and joins it in memory. Simple for basic queries, inefficient for complex joins across large datasets.
- [CQR] **Command Query Responsibility Segregation (CQRS):** Maintain one or more denormalised, read-optimised "view" databases kept up-to-date by subscribing to events from the services that own the data. Separates the command-side (write) from the query-side (read) model.
- [APG] **API Gateway:** A single entry point for all external clients. Routes requests to backend services, can perform API composition, and handles cross-cutting concerns like authentication.
- [BFF] **Backends for Frontends:** A variation of the API Gateway pattern where you have a separate, tailored API gateway for each specific client (e.g., mobile app, web app).
- [CDC] **Consumer-Driven Contract Test:** A test written by the _consumer_ of a service to verify that the _provider_ meets its expectations, ensuring correct communication without slow, brittle end-to-end tests.
- [SCT] **Service Component Test:** Acceptance test for a single service in isolation, using stubs for external dependencies.
- [SVC] **Service as a Container:** Package a service as a container image (e.g., Docker) to encapsulate its technology stack.
- [SRL] **Serverless Deployment:** Deploy services using a platform like AWS Lambda that abstracts away the underlying infrastructure.
- [MSC] **Microservice Chassis:** A framework (e.g., Spring Boot + Spring Cloud) that handles cross-cutting concerns such as config, health checks, metrics, and distributed tracing.
- [SMH] **Service Mesh:** Infrastructure layer (e.g., Istio, Linkerd) that handles inter-service communication concerns like circuit breakers, distributed tracing, and load balancing outside of service code.
- [STR] **Strangler Application:** Strategy for migrating a monolith. Incrementally build new microservices around the monolith, gradually replacing it and avoiding a "big bang" rewrite.
More at <https://microservices.io/patterns/>
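As one concrete example from the catalogue above, the Circuit Breaker [CBR] can be sketched in a few lines. This is a minimal illustration, not a production implementation; real services would typically use a chassis library or a service mesh [SMH] for this concern:

```python
import time

class CircuitBreaker:
    """Minimal [CBR] sketch: trip after N consecutive failures, then fail fast."""
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after   # seconds before a half-open trial call
        self.failures = 0
        self.opened_at = None

    def call(self, fn):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            # Half-open: allow one trial call through.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0
        return result
```

While the breaker is open, callers fail immediately instead of waiting on a dead downstream service, which is exactly the cascading-failure protection the pattern exists for.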

---
description: The definitive workflow for safely updating API specifications and generating Postman tests to ensure 100% consistency and prevent data loss.
---
# Workflow: API Spec & Test Generation
This workflow establishes `openapi.yaml` as the single source of truth for all API development. The Postman collection is **always generated** from this file. **NEVER edit the Postman JSON file directly.** This prevents inconsistencies and the kind of file corruption we have experienced.
## Guiding Principles
- **OpenAPI is the ONLY Source of Truth:** All API changes begin and end with `docs/api/openapi.yaml`.
- **Postman is a GENERATED ARTIFACT:** The collection file is treated as build output. It is never edited by hand.
- **Validate Before Generating:** Always validate the OpenAPI spec _before_ attempting to generate the Postman collection.
## Steps
### 1. Update the OpenAPI Specification (`docs/api/openapi.yaml`)
- **Identify API Changes:** Review the Groovy source code (e.g., `src/com/umig/api/v2/*.groovy`) to identify any new, modified, or removed endpoints.
- **Edit the Spec:** Carefully add, modify, or remove the corresponding endpoint definitions under `paths` and schemas under `components/schemas`.
- **Best Practice:** Use `allOf` to extend existing schemas non-destructively (e.g., adding audit fields to a base `User` schema).
- **Use an IDE with OpenAPI support** to get real-time linting and validation.
### 2. Validate the OpenAPI Specification
- **CRITICAL:** Before proceeding, validate your `openapi.yaml` file.
- Use your IDE's built-in OpenAPI preview or a dedicated linter.
- **DO NOT proceed if the file has errors.** Fix them first. This is the most important step to prevent downstream issues.
### 3. Generate the Postman Collection
- **Navigate to the correct directory** in your terminal. The command must be run from here:
```bash
cd docs/api/postman
```
- **Run the generation command:**
```bash
// turbo
npx openapi-to-postmanv2 -s ../openapi.yaml -o ./UMIG_API_V2_Collection.postman_collection.json -p -O folderStrategy=Tags
```
- **Note on `npx`:** The `npx` command runs the `openapi-to-postmanv2` package without requiring a global installation. If you see `command not found`, ensure you are using `npx`.
### 4. Verify the Changes
- **Review the Diff:** Use `git diff` to review the changes to `UMIG_API_V2_Collection.postman_collection.json`. Confirm that the new endpoint has been added and that no unexpected changes have occurred.
- **Test in Postman:** (Optional but recommended) Import the newly generated collection into Postman and run a few requests against a local dev environment to ensure correctness.
### 5. Document and Commit
- **Commit all changes:** Add the modified `openapi.yaml` and the generated `UMIG_API_V2_Collection.postman_collection.json` to your commit.
- **Update Changelog:** Add an entry to `CHANGELOG.md` detailing the API changes.
- **Update Dev Journal:** Create a developer journal entry summarizing the work done. Describe any removals or replacements and the rationale.
---
**Key Principles:**
- Never erase or overwrite existing tests/specs unless required by an API change.
- Every endpoint in the API must be present and tested in both Postman and OpenAPI.
- Consistency, completeness, and traceability are paramount.

tmp/workflows/api-work.md
---
description: The standard workflow for creating or modifying Groovy REST API endpoints in this project.
---
This workflow ensures all API development adheres to the project's established, stable patterns to prevent bugs and maintain consistency.
## Key Reference Documents
**PRIMARY REFERENCE**: `/docs/solution-architecture.md` — Comprehensive solution architecture and API design standards
**SUPPORTING REFERENCES**:
- Current ADRs in `/docs/adr/` (skip `/docs/adr/archive/` - consolidated in solution-architecture.md)
- Working examples: `src/com/umig/api/v2/TeamsApi.groovy`
1. **Analyze the Existing Pattern**:
- Before writing any code, thoroughly review a working, stable API file like `src/com/umig/api/v2/TeamsApi.groovy`.
- Pay close attention to the structure: separate endpoint definitions for each HTTP method, simple `try-catch` blocks for error handling, and standard `javax.ws.rs.core.Response` objects.
2. **Replicate the Pattern**:
- Create a new endpoint definition for each HTTP method (`GET`, `POST`, `PUT`, `DELETE`).
- Do NOT use a central dispatcher, custom exception classes, or complex helper methods for error handling. Keep all logic within the endpoint method.
3. **Implement Business Logic**:
- Write the core business logic inside a `try` block.
- Call the appropriate `UserRepository` or `TeamRepository` methods.
4. **Handle Success Cases**:
- For `GET`, `POST`, and `PUT`, return a `Response.ok()` or `Response.status(Response.Status.CREATED)` with a `JsonBuilder` payload.
- **CRITICAL**: For a successful `DELETE`, always return `Response.noContent().build()`. Do NOT attempt to return a body.
5. **Handle Error Cases**:
- Use `catch (SQLException e)` to handle specific database errors (e.g., foreign key violations `23503`, unique constraint violations `23505`).
- Use a generic `catch (Exception e)` for all other unexpected errors.
- In all `catch` blocks, log the error using `log.error()` or `log.warn()` and return an appropriate `Response.status(...)` with a simple JSON error message.
6. **Validate Inputs**:
- Strictly validate all incoming data (path parameters, request bodies) at the beginning of the endpoint method.
- Return a `400 Bad Request` for any invalid input.
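The error-handling decision in step 5 boils down to a mapping from SQLSTATE codes to HTTP statuses. The sketch below expresses that decision table in Python purely for illustration; the concrete status choices are assumptions here, and the real mapping lives in the Groovy catch blocks:

```python
# SQLSTATE codes referenced in step 5 above.
FK_VIOLATION = "23503"
UNIQUE_VIOLATION = "23505"

def status_for_sql_error(sql_state):
    """Map database error codes to HTTP statuses (assumed mapping, for illustration)."""
    if sql_state == FK_VIOLATION:
        return 409   # conflict: row is still referenced elsewhere
    if sql_state == UNIQUE_VIOLATION:
        return 409   # conflict: duplicate key
    return 500       # any other database error is unexpected

print(status_for_sql_error("23505"))  # 409
```

Keeping this mapping explicit in each endpoint, rather than hidden in a shared dispatcher, is what step 2 of the workflow prescribes.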

tmp/workflows/commit.md
---
description: This is the basic workflow to gather last changes, prepare a relevant commit message and commit the staged code changes
---
This workflow guides the creation of a high-quality, comprehensive commit message that accurately reflects all staged changes, adhering strictly to the Conventional Commits 1.0 standard.
**1. Comprehensive Evidence Gathering (MANDATORY - Prevent tunnel vision):**
**1.1. Staged Changes Analysis:**
- **Detailed Diff Review:** Run `git diff --staged --stat` to get both summary and detailed view of all staged changes.
- **File-by-File Analysis:** Run `git diff --staged --name-status` to see the operation type (Modified, Added, Deleted) for each file.
- **Functional Area Classification:** Group staged files by functional area:
- **API Changes:** `src/groovy/umig/api/`, `src/groovy/umig/repository/`
- **UI Changes:** `src/groovy/umig/web/js/`, `src/groovy/umig/web/css/`, `src/groovy/umig/macros/`
- **Documentation:** `docs/`, `README.md`, `CHANGELOG.md`, `*.md` files
- **Tests:** `src/groovy/umig/tests/`, `local-dev-setup/__tests__/`
- **Configuration:** `local-dev-setup/liquibase/`, `*.json`, `*.yml`, `*.properties`
- **Database:** Migration files, schema changes
- **Change Type Analysis:** For each file, determine the type of change:
- New functionality added
- Existing functionality modified
- Bug fixes
- Refactoring or code cleanup
- Documentation updates
- Test additions or modifications
**1.2. Unstaged and Untracked Files Review:**
- **Related Files Check:** Run `git status --porcelain` to identify any untracked or unstaged files that might be related.
- **Completeness Verification:** Ensure all related changes are staged or deliberately excluded.
- **User Prompt:** If potentially related files are unstaged, prompt the user about inclusion.
**1.3. Work Stream Identification:**
- **Primary Work Stream:** Identify the main type of work being committed.
- **Secondary Work Streams:** Identify supporting changes (e.g., tests, documentation, configuration).
- **Cross-Functional Impact:** Note changes that span multiple functional areas.
- **Architecture Impact:** Identify any architectural or pattern changes.
**2. Multi-Context Rationale Analysis (MANDATORY - Address tunnel vision):**
**2.1. Session Context Review:**
- **Conversation Timeline:** Review the entire session conversation to understand the evolution of the work.
- **Initial Problem:** Identify the original problem or task that initiated the changes.
- **Decision Points:** Note key decisions made during the session that influenced the implementation.
- **Scope Evolution:** If the work expanded beyond the initial scope, understand how and why.
**2.2. Development Context:**
- **Dev Journal Review:** If a development journal entry was created during the session, review it for high-level narrative.
- **Related Work:** Check if this commit is part of a larger feature or bug fix spanning multiple commits.
- **Previous Commits:** Review recent commits to understand the progression of work.
**2.3. Business and Technical Context:**
- **Business Impact:** Understand what user-facing or system benefits this change provides.
- **Technical Motivation:** Identify the technical reasons for the changes (performance, maintainability, new features).
- **Problem-Solution Mapping:** For each work stream, clearly understand:
- What problem was being solved
- Why this particular solution was chosen
- What alternatives were considered
- What the outcome achieves
**2.4. Change Dependencies:**
- **Cross-Stream Dependencies:** How different work streams in this commit depend on each other.
- **External Dependencies:** Any external factors that influenced the changes.
- **Future Implications:** What this change enables or constrains for future development.
**3. Multi-Stream Commit Message Synthesis (MANDATORY - Address tunnel vision):**
The goal is to create a message that comprehensively explains all changes and their context for future developers.
**3.1. Type and Scope Selection:**
- **Primary Type:** Choose the most significant type from the allowed list (`feat`, `fix`, `docs`, `style`, `refactor`, `perf`, `test`, `chore`).
- **Multi-Stream Consideration:** If multiple significant work streams exist, choose the type that best represents the overall impact.
- **Scope Selection:** Identify the primary part of the codebase affected:
- **Specific Components:** `api`, `ui`, `db`, `auth`, `docs`, `tests`
- **Functional Areas:** `admin`, `migration`, `iteration`, `planning`
- **System-Wide:** Use broader scopes for cross-cutting changes
**3.2. Subject Line Construction:**
- **Imperative Mood:** Write a concise summary (under 50 characters) in imperative mood.
- **Multi-Stream Subject:** If multiple work streams are significant, write a subject that captures the overall achievement.
- **Specific vs General:** Balance specificity with comprehensiveness.
**3.3. Body Structure (Enhanced for Multi-Stream):**
- **Primary Change Description:** Start with the main change and its motivation.
- **Work Stream Breakdown:** For each significant work stream:
- **What Changed:** Specific files, components, or functionality
- **Why Changed:** Problem being solved or improvement being made
- **How Changed:** Technical approach or implementation details
- **Impact:** What this enables or improves
- **Cross-Stream Integration:** How different work streams work together.
- **Technical Decisions:** Explain significant design choices and why alternatives were rejected.
- **Context:** Provide enough context for future developers to understand the change.
**3.4. Footer Considerations:**
- **Breaking Changes:** Use `BREAKING CHANGE:` for any breaking changes with migration notes.
- **Issue References:** Reference related issues (e.g., `Closes #123`, `Relates to #456`).
- **Co-authorship:** Add `Co-Authored-By:` for pair programming or AI assistance.
**3.5. Message Assembly:**
- **Single Coherent Story:** Weave multiple work streams into a single, coherent narrative.
- **Logical Flow:** Organize information in a logical sequence that makes sense to readers.
- **Appropriate Detail:** Include enough detail to understand the change without overwhelming.
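Assembled per steps 3.1-3.5, a multi-stream message might look like the sketch below. The `admin` scope, the two work streams, and the issue number are invented examples, not taken from any real commit:

```shell
# Illustrative multi-stream commit message, assembled per steps 3.1-3.5.
# The "admin" scope, work streams, and issue number are invented examples.
subject="feat(admin): add user management API and UI"

# Guard: keep the Conventional Commits subject under 50 characters (3.2).
[ "${#subject}" -le 50 ] || { echo "subject too long" >&2; exit 1; }

body=$(cat <<'EOF'
Implement user management across two coordinated work streams.

API: add CRUD endpoints with a matching repository, solving the
lack of programmatic user administration.

UI: add an admin panel backed by the new endpoints, giving
administrators a self-service interface.

The UI stream depends on the API stream; both share one schema.

Closes #123
EOF
)

printf '%s\n\n%s\n' "$subject" "$body"
```

In practice this message would be shown to the user for review before running `git commit`, per step 5.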
**4. Anti-Tunnel Vision Verification (MANDATORY - Use before finalizing):**
Before presenting the commit message, verify you have addressed ALL of the following:
**Content Coverage:**
- [ ] All staged files are explained in the commit message
- [ ] All functional areas touched are documented
- [ ] All work streams are identified and described
- [ ] Change types (feat/fix/docs/etc.) are accurately represented
- [ ] Cross-functional impacts are noted
**Technical Completeness:**
- [ ] Code changes include rationale for the approach taken
- [ ] API changes are summarized with impact
- [ ] UI changes are explained with user impact
- [ ] Database changes include migration details
- [ ] Configuration changes are noted
- [ ] Test changes are explained
**Context and Rationale:**
- [ ] Original problem or motivation is clearly stated
- [ ] Solution approach is justified
- [ ] Technical decisions are explained
- [ ] Alternative approaches are noted (if relevant)
- [ ] Future implications are considered
**Message Quality:**
- [ ] Subject line is under 50 characters and imperative mood
- [ ] Body explains "what" and "why" for each work stream
- [ ] Information is organized in logical flow
- [ ] Appropriate level of detail for future developers
- [ ] Conventional Commits format is followed
**Completeness Verification:**
- [ ] All evidence from steps 1-2 is reflected in the message
- [ ] No significant work is missing from the description
- [ ] Multi-stream nature is properly represented
- [ ] Session context is appropriately captured
**5. Await Confirmation and Commit:**
- Present the generated commit message to the user for review.
- After receiving confirmation, execute the `git commit` command.

---
description: How to safely refine the data model, update migrations, and keep data generation and tests in sync
---
# Data Model Refinement & Synchronisation Workflow
This workflow ensures every data model change is robust, consistent, and reflected across migrations, documentation, data generation, and tests.
---
## 1. Reference the Authoritative Sources
Before making or reviewing any data model change, consult these key documents:
- `/docs/solution-architecture.md` — **PRIMARY**: Comprehensive solution architecture and design decisions
- `/docs/dataModel/README.md` — Data model documentation and ERD
- `/local-dev-setup/liquibase/changelogs/001_unified_baseline.sql` — Baseline schema (Liquibase)
- `/docs/adr/` — Current ADRs (skip `/docs/adr/archive/` - consolidated in solution-architecture.md)
## 2. Plan the Change
- Identify the business or technical rationale for the change.
- Determine the impact on existing tables, columns, relationships, and constraints.
- Draft or update the ERD as needed.
## 3. Update the Schema
- Create or edit the appropriate Liquibase changelog(s) (never edit the baseline directly after project start).
- Follow naming conventions and migration strategy as per ADRs.
- Document every change with clear comments in the changelog.
## 4. Update Data Model Documentation
- Reflect all changes in `/docs/dataModel/README.md` (ERD, field lists, rationale).
- If the change is significant, consider updating or creating an ADR.
## 5. Synchronise Data Generation Scripts
- Review and update `local-dev-setup/data-utils/umig_generate_fake_data.js` (FAKER-based generator).
- Adjust or add generators in `local-dev-setup/data-utils/generators/` as needed.
- Ensure all generated data matches the new/updated schema.
## 6. Update and Extend Tests
- Update all related tests in `local-dev-setup/data-utils/__tests__/` to cover new/changed fields and relationships.
- Add new fixture data if needed.
- Ensure tests remain non-destructive and deterministic.
## 7. Validate
- Run all migrations in a fresh environment (dev/test).
- Run the data generation script and all tests; confirm no failures or regressions.
- Review the ERD and documentation for completeness and accuracy.
## 8. Document and Communicate
- Update `CHANGELOG.md` with a summary of the data model change.
- If required, update the main `README.md` and any relevant ADRs.
- Consider adding a Developer Journal entry to narrate the rationale and process.
---
> _Use this workflow every time you refine the data model to maintain project discipline, testability, and documentation integrity._

---
description: At the end of each session, we look back at everything that was said and done, and we write down a Development Journal Entry
---
The Developer Journal is a great way to keep track of our progress and document the way we made design decisions and coding breakthroughs.
The task is to generate a new Developer Journal entry in the `docs/devJournal` folder, in markdown format, using the naming convention `yyyymmdd-nn.md`.
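The naming convention can be sketched as a small helper that finds the next free `nn` suffix for today. The folder argument and the two-digit suffix format come from the convention above; treat this as a sketch rather than a prescribed script:

```shell
# Sketch: compute the next docs/devJournal/yyyymmdd-nn.md filename.
next_journal() {
  dir=$1
  today=$(date +%Y%m%d)
  # Highest existing two-digit suffix for today, if any.
  last=$(ls "$dir/$today"-*.md 2>/dev/null | sed 's/.*-\([0-9][0-9]\)\.md$/\1/' | sort | tail -n 1)
  last=${last#0}   # strip a leading zero so the arithmetic is not read as octal
  printf '%s/%s-%02d.md\n' "$dir" "$today" $(( ${last:-0} + 1 ))
}

# The first entry of the day gets suffix 01, the next 02, and so on:
next_journal docs/devJournal
```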
The content of the entry must narrate the session's story. To ensure the full context is captured, you will follow these steps in order:
**1. Establish the 'Why' (The High-Level Context):**
- First, determine the active feature branch by running `git branch --show-current`.
- Then, find and read the most recent previous journal entry to understand the starting point.
- Synthesise these with the beginning of our current conversation to state the session's primary goal or the feature being worked on.
**2. Gather Evidence of 'The How' (The Journey):**
This step is critical to avoid "tunnel vision". You must perform a deep analysis of the entire session using multiple evidence sources.
**2.1. Multi-Source Evidence Gathering (MANDATORY - All sources must be reviewed):**
- **Conversation Chronology:** Create a timeline of the entire session from start to finish. Note every major topic, tool usage, file interaction, and decision point.
- **Git Commit Analysis:** Run `git log --since="YYYY-MM-DD" --stat --oneline` to get a comprehensive view of all commits since the last journal entry. Each commit represents a separate work stream that must be captured.
- **Staged Changes Analysis:** Run `git diff --staged --name-status` to see what's currently staged for commit (if anything).
- **File System Impact:** Run `git status --porcelain` to see all modified, added, and untracked files. Group by functional area (API, UI, docs, tests, etc.).
- **Documentation Trail:** Check for changes in:
- `CHANGELOG.md` (often contains structured summaries of work)
- `README.md` and other root-level documentation
- `docs/` directory (API specs, ADRs, solution architecture)
- `cline-docs/` (memory bank files)
- Any workflow executions mentioned in conversation
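The three mandatory git sources above can be captured in one pass. The snippet below scaffolds a throwaway repository purely so the commands run standalone; in practice you run them inside the project repository, using the date of the last journal entry:

```shell
# Throwaway repo so the evidence commands below run standalone (demo only).
demo=$(mktemp -d) && cd "$demo"
git init -q .
git -c user.email=dev@example.com -c user.name=Dev \
    commit -q --allow-empty -m "feat(api): add endpoint"
printf 'draft\n' > notes.md   # an untracked file, visible to git status

# The three mandatory evidence sources from step 2.1:
git log --since="2020-01-01" --stat --oneline   # commit analysis
git diff --staged --name-status                 # staged changes (none here)
git status --porcelain                          # file-system impact
```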
**2.2. Evidence Cross-Reference (MANDATORY - Prevent tunnel vision):**
- **Workflow Execution Review:** If any workflows were mentioned in the conversation (e.g., `.clinerules/workflows/`), review their outputs and ensure their objectives are captured.
- **API Development Pattern:** If API work was done, check for:
- New/modified Groovy files in `src/groovy/umig/api/`
- New/modified repository files in `src/groovy/umig/repository/`
- Documentation updates in `docs/api/`
- OpenAPI specification changes
- Postman collection regeneration
- **UI Development Pattern:** If UI work was done, check for:
- JavaScript file changes in `src/groovy/umig/web/js/`
- CSS changes in `src/groovy/umig/web/css/`
- Macro changes in `src/groovy/umig/macros/`
- Mock/prototype updates in `mock/`
- **Refactoring Pattern:** If refactoring was done, check for:
- File moves, renames, or splits
- Architecture changes reflected in project structure
- New patterns or modules introduced
- Breaking changes or deprecations
**2.3. Completeness Verification (MANDATORY - Final check):**
- **Three-Pass Review:**
1. **First Pass:** What was the initial request/problem?
2. **Second Pass:** What were all the intermediate steps and discoveries?
3. **Third Pass:** What was the final state and all deliverables?
- **Breadth vs Depth Check:** Ensure both technical depth (how things were implemented) and breadth (all areas touched) are captured.
- **Hidden Work Detection:** Look for "invisible" work like:
- Configuration changes
- Dependency updates
- Test file modifications
- Documentation synchronization
- Workflow or process improvements
**3. Synthesise and Write the Narrative:**
The goal is to write a detailed, insightful story, not a shallow summary. Prioritise depth and clarity over brevity.
**3.1. Multi-Stream Integration (MANDATORY - Address tunnel vision):**
- **Identify All Work Streams:** Based on evidence gathering, create a list of all distinct work streams (e.g., "API documentation", "Admin GUI refactoring", "Environment API implementation", "Schema consistency fixes").
- **Parallel vs Sequential Work:** Determine which work streams were parallel (done simultaneously) vs sequential (one led to another).
- **Cross-Stream Dependencies:** Note how different work streams influenced each other (e.g., API documentation revealed schema issues that required code changes).
- **Scope Creep Documentation:** If the session expanded beyond initial scope, document how and why this happened.
**3.2. Narrative Structure (Enhanced):**
- **Copy and Fill the Template:** For every new devJournal entry, always copy and fill in the persistent template at `docs/devJournal/devJournalEntryTemplate.md`. This ensures consistency, quality, and traceability across all devJournal entries.
- **Multi-Problem Awareness:** If multiple problems were addressed, structure the narrative to handle multiple concurrent themes rather than forcing a single linear story.
- **Enhanced Story Arc:** The "How" section should follow this comprehensive structure:
1. **The Initial Problem(s):** Clearly describe all bugs, errors, or tasks at the start of the session. Note if scope expanded.
2. **The Investigation:** Detail the debugging/analysis process for each work stream. What did we look at first? What were our initial hypotheses? What tools did we use?
3. **The Breakthrough(s):** Describe key insights or discoveries for each work stream. Note cross-stream insights.
4. **Implementation and Refinements:** Explain how solutions were implemented across all work streams. Detail code changes and architectural improvements.
5. **Validation and Documentation:** Describe how we confirmed fixes worked and updated documentation across all areas.
- **Technical Depth Requirements:** For each work stream, ensure you capture:
- **What changed** (files, code, configuration)
- **Why it changed** (problem being solved, improvement being made)
- **How it changed** (technical approach, patterns used)
- **Impact** (what this enables, what problems it solves)
**3.3. Quality Assurance (MANDATORY - Final verification):**
- **Evidence vs Narrative Cross-Check:** Verify that every piece of evidence from step 2 has been addressed in the narrative.
- **Completeness Audit:** Check that the journal entry would allow someone to understand:
- The full scope of work accomplished
- The technical decisions made and why
- The current state of the project
- What should be done next
- **Tone and Format:** Write in British English, and keep the format as raw markdown.
- **Final Review:** Before presenting the journal entry, re-read it one last time to ensure it captures the full journey and avoids the "tunnel vision" of only looking at the final code or the most recent work.
**4. Anti-Tunnel Vision Checklist (MANDATORY - Use before finalizing):**
Before presenting the journal entry, verify you have addressed ALL of the following:
**Content Coverage:**
- [ ] All git commits since last journal entry are documented
- [ ] All workflow executions mentioned in conversation are captured
- [ ] All file modifications (API, UI, docs, tests, config) are explained
- [ ] All architectural or pattern changes are documented
- [ ] All bug fixes and their root causes are explained
- [ ] All new features and their implementation are detailed
**Work Stream Integration:**
- [ ] Multiple work streams are identified and explained
- [ ] Parallel vs sequential work is clearly distinguished
- [ ] Cross-dependencies between work streams are noted
- [ ] Scope expansions are documented with reasoning
**Technical Depth:**
- [ ] Code changes include the "what", "why", "how", and "impact"
- [ ] Database schema changes are documented
- [ ] API changes include request/response examples
- [ ] UI changes include user experience impact
- [ ] Documentation changes and their necessity are explained
**Project Context:**
- [ ] Current project state is accurately reflected
- [ ] Next steps and priorities are updated
- [ ] Key learnings and patterns are documented
- [ ] Project milestone significance is noted
**Quality Verification:**
- [ ] Evidence from step 2 matches narrative content
- [ ] No significant work is missing from the story
- [ ] Technical decisions are justified and explained
- [ ] Future developers could understand the session's impact
**5. Await Confirmation:**
- After presenting the generated journal entry, **DO NOT** proceed with any other actions, especially committing.
- Wait for explicit confirmation or further instructions from the user.

---
description: A workflow to update the project documentation and memories based on latest changes
---
- Review and summarise the latest changes performed, based on the cascade conversation and on the git status. Be concise but comprehensive.
- **CRITICAL**: If changes affect architecture, update `/docs/solution-architecture.md` as the primary reference
- Determine whether any changes require a new ADR in `/docs/adr/` (archived ADRs in `/docs/adr/archive/` are consolidated in solution-architecture.md)
- Update the CHANGELOG as required
- Update the main README file as required
- Update the README files in subfolders as required

---
description: We run this workflow at the beginning of each new Cascade session, to make sure that the agent has the correct understanding of the state of the development.
---
- Review the memories
- **PRIORITY**: Review `/docs/solution-architecture.md` — Primary architectural reference document
- Review project documentation in folder /cline-docs
- Review the developer journal entries in folder /docs/devJournal
- Review current ADRs in folder `/docs/adr` (skip `/docs/adr/archive/` - consolidated in solution-architecture.md)
- Confirm your good understanding of the project's requirements and the current state of the development
- Advise if there are any documentation inconsistencies to resolve
- Recommend the next steps and tasks to be tackled.

This task updates the Cline memory bank in the `cline-docs` folder.
Base the update on:
- the day's Developer Journal entries, found in `docs/devJournal`
- the `CHANGELOG.md` file
- the various `README.md` files
- the Architectural Decision Records, found in `docs/adr`
Be concise but comprehensive and accurate. Ensure consistency with the existing memory bank. Express yourself in British English.

---
description: A Pull Request documentation workflow
---
This workflow guides the creation of a high-quality, comprehensive Pull Request description. A great PR description is the fastest way to get your changes reviewed and merged.
**1. Comprehensive Scope Analysis (MANDATORY - Prevent tunnel vision):**
**1.1. Branch and Commit Analysis:**
- **Determine the Base Branch:** Identify the target branch for the merge (e.g., `main`, `develop`).
- **Full Commit Analysis:** Run `git log <base_branch>..HEAD --stat --oneline` to get both summary and detailed changes for all commits in this PR.
- **Commit Categorization:** Group commits by type (feat, fix, docs, refactor, test, chore) to understand the full scope.
- **Time Range Assessment:** Run `git log <base_branch>..HEAD --format="%h %ad %s" --date=short` to understand the development timeline.
**1.2. File System Impact Analysis:**
- **Changed Files Overview:** Run `git diff <base_branch>..HEAD --name-status` to see all modified, added, and deleted files.
- **Functional Area Mapping:** Group changed files by functional area:
- **API Changes:** `src/groovy/umig/api/`, `src/groovy/umig/repository/`
- **UI Changes:** `src/groovy/umig/web/js/`, `src/groovy/umig/web/css/`, `src/groovy/umig/macros/`
- **Documentation:** `docs/`, `README.md`, `CHANGELOG.md`, `*.md` files
- **Tests:** `src/groovy/umig/tests/`, `local-dev-setup/__tests__/`
- **Configuration:** `local-dev-setup/liquibase/`, `*.json`, `*.yml`, `*.properties`
- **Database:** Migration files, schema changes
- **Cross-Functional Impact:** Identify changes that span multiple functional areas.
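The functional-area mapping in 1.2 can be sketched as a small classifier. The path patterns follow the UMIG layout listed above and would need adjusting for another repository:

```shell
# Sketch: classify changed files into the functional areas from step 1.2.
classify() {
  while read -r f; do
    case "$f" in
      src/groovy/umig/api/*|src/groovy/umig/repository/*)  area="API" ;;
      src/groovy/umig/web/*|src/groovy/umig/macros/*)      area="UI" ;;
      src/groovy/umig/tests/*|local-dev-setup/__tests__/*) area="Tests" ;;
      local-dev-setup/liquibase/*)                         area="Database" ;;
      docs/*|*.md)                                         area="Docs" ;;
      *)                                                   area="Other" ;;
    esac
    printf '%s\t%s\n' "$area" "$f"
  done
}

# In the repository you would pipe the real change list through it:
#   git diff main..HEAD --name-only | classify
printf '%s\n' src/groovy/umig/api/UsersApi.groovy docs/api/users.md | classify
```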
**1.3. Work Stream Identification:**
- **Primary Work Streams:** Based on commits and file changes, identify distinct work streams (e.g., "API implementation", "UI refactoring", "documentation updates").
- **Secondary Work Streams:** Identify supporting work (e.g., "schema fixes", "test updates", "configuration changes").
- **Parallel vs Sequential:** Determine which work streams were done in parallel vs. sequence.
- **Dependencies:** Note how different work streams depend on each other.
**2. Multi-Stream Narrative Synthesis (MANDATORY - Address tunnel vision):**
A PR is a story that may have multiple parallel themes. You need to explain the "why," the "what," and the "how" for each work stream.
**2.1. Context and Motivation Analysis:**
- **Development Context:** Review recent dev journal entries, session context, and any associated tickets (e.g., Jira, GitHub Issues).
- **Problem Statement:** For each work stream, clearly articulate:
- What problem was being solved or feature being added?
- What was the state of the application before this change?
- What will it be after?
- **Business Impact:** Explain the user-facing or technical benefits of the changes.
- **Scope Evolution:** If the PR scope expanded during development, explain how and why.
**2.2. Technical Implementation Analysis:**
- **Architecture Overview:** Describe the overall technical approach and any significant architectural decisions.
- **Work Stream Details:** For each work stream identified in step 1:
- **API Changes:** New endpoints, schema modifications, repository patterns
- **UI Changes:** Component modifications, styling updates, user experience improvements
- **Documentation:** What docs were updated and why
- **Database Changes:** Schema migrations, data model updates
- **Configuration:** Environment or build configuration changes
- **Tests:** New test coverage, test framework updates
- **Technical Decisions:** Explain why you chose specific solutions over alternatives.
- **Patterns and Standards:** Note adherence to or establishment of new project patterns.
**2.3. Integration and Dependencies:**
- **Cross-Stream Integration:** How different work streams work together.
- **Breaking Changes:** Any breaking changes and migration path.
- **Backward Compatibility:** How existing functionality is preserved.
- **Future Implications:** What this change enables for future development.
**3. Comprehensive Review Instructions (MANDATORY - Cover all work streams):**
Make it easy for others to review your work across all functional areas.
**3.1. Testing Instructions by Work Stream:**
- **API Testing:** For each new or modified API endpoint:
- Provide curl commands or Postman collection references
- Include expected request/response examples
- Note any authentication or setup requirements
- Identify edge cases and error scenarios to test
- **UI Testing:** For each UI change:
- Provide step-by-step user interaction flows
- Include screenshots or GIFs showing before/after states
- Identify specific user scenarios to test
- Note any browser-specific considerations
- **Database Testing:** For schema changes:
- Provide migration verification steps
- Include data verification queries
- Note any rollback procedures
- **Configuration Testing:** For environment changes:
- Provide setup or configuration verification steps
- Include any new environment variables or settings
- Note any deployment considerations
**3.2. Review Focus Areas:**
- **Code Quality:** Highlight areas that need particular attention (complex logic, new patterns, potential performance impacts).
- **Security:** Note any security considerations or authentication changes.
- **Performance:** Identify any performance-critical changes or optimizations.
- **Compatibility:** Note any backward compatibility concerns or breaking changes.
**3.3. Verification Checklist:**
- **Functional Verification:** What specific functionality should reviewers verify works correctly?
- **Integration Testing:** How should reviewers verify that different components work together?
- **Edge Case Testing:** What edge cases or error conditions should be tested?
- **Documentation Review:** What documentation should be reviewed for accuracy and completeness?
**4. Enhanced PR Description Template (MANDATORY - Multi-stream aware):**
Use a structured template that accommodates multiple work streams and comprehensive coverage.
**4.1. Title Construction:**
- **Primary Work Stream:** Use the most significant work stream for the title following Conventional Commits standard.
- **Multi-Stream Indicator:** If multiple significant work streams exist, use a broader scope (e.g., `feat(admin): complete user management system with API and UI`).
**4.2. Enhanced Body Template:**
```markdown
## Summary
<!-- Brief overview of the PR's purpose and scope. What problem does this solve? -->
## Work Streams
<!-- List all major work streams in this PR -->
### 🚀 [Primary Work Stream Name]
- Brief description of changes
- Key files modified
- Impact on users/system
### 🔧 [Secondary Work Stream Name]
- Brief description of changes
- Key files modified
- Impact on users/system
## Technical Changes
<!-- Detailed breakdown by functional area -->
### API Changes
- New endpoints:
- Modified endpoints:
- Schema changes:
- Repository updates:
### UI Changes
- New components:
- Modified components:
- Styling updates:
- User experience improvements:
### Database Changes
- Schema migrations:
- Data model updates:
- Migration scripts:
### Documentation Updates
- API documentation:
- User documentation:
- Developer documentation:
- Configuration documentation:
## Testing Instructions
<!-- Work stream specific testing -->
### API Testing
1. [Specific API test steps]
2. [Expected outcomes]
3. [Edge cases to verify]
### UI Testing
1. [Specific UI test steps]
2. [User flows to verify]
3. [Browser compatibility checks]
### Database Testing
1. [Migration verification]
2. [Data integrity checks]
3. [Rollback verification]
## Screenshots / Recordings
<!-- Visual evidence of changes -->
### Before
[Screenshots/GIFs of old behavior]
### After
[Screenshots/GIFs of new behavior]
## Review Focus Areas
<!-- Areas needing particular attention -->
- [ ] **Code Quality:** [Specific areas to focus on]
- [ ] **Security:** [Security considerations]
- [ ] **Performance:** [Performance impacts]
- [ ] **Compatibility:** [Breaking changes or compatibility concerns]
## Deployment Notes
<!-- Any special deployment considerations -->
- Environment variables:
- Configuration changes:
- Database migrations:
- Rollback procedures:
## Related Issues
<!-- Link to any related issues, e.g., "Closes #123" -->
## Checklist
- [ ] All work streams are documented above
- [ ] Testing instructions cover all functional areas
- [ ] Documentation is updated for all changes
- [ ] Database migrations are tested
- [ ] API changes are documented
- [ ] UI changes are demonstrated with screenshots
- [ ] Code follows project style guidelines
- [ ] All tests pass
- [ ] Breaking changes are documented
- [ ] Deployment considerations are noted
```
**5. Anti-Tunnel Vision Verification (MANDATORY - Use before finalizing):**
Before presenting the PR description, verify you have addressed ALL of the following:
**Content Coverage:**
- [ ] All commits in the PR are explained
- [ ] All modified files are accounted for
- [ ] All functional areas touched are documented
- [ ] All work streams are identified and described
- [ ] Cross-functional impacts are noted
**Technical Completeness:**
- [ ] API changes include endpoint details and examples
- [ ] UI changes include visual evidence and user flows
- [ ] Database changes include migration details
- [ ] Configuration changes include deployment notes
- [ ] Documentation updates are comprehensive
**Review Readiness:**
- [ ] Testing instructions are clear and complete
- [ ] Review focus areas are identified
- [ ] Deployment considerations are documented
- [ ] Rollback procedures are noted (if applicable)
- [ ] Breaking changes are clearly highlighted
**6. Final Review:**
- Present the generated PR title and body to the user for final review and approval before they create the pull request on their Git platform.

---
description: Sprint Review & Retrospective (UMIG)
---
# Sprint Review & Retrospective Workflow
> **Filename convention:** `{yyyymmdd}-sprint-review.md` (e.g., `20250627-sprint-review.md`). Place in `/docs/devJournal/`.
This workflow guides the team through a structured review and retrospective at the end of each sprint or major iteration. It ensures that all accomplishments, learnings, and opportunities for improvement are captured, and that the next sprint is set up for success.
---
## 1. Gather Sprint Context
**Before generating the sprint review document, fill in or confirm the following:**
- **Sprint Dates:** (Enter start and end date, e.g., 2025-06-16 to 2025-06-27)
- **Participants:** (List all team members involved)
- **Branch/Release:** (Run the command below to list all branches created or active during the sprint)
```sh
git branch --format='%(refname:short) %(creatordate:short)' | grep 'YYYY-MM'
```
- **Metrics:** (Run the following commands, replacing dates as appropriate)
- **Commits:**
```sh
git log --since="YYYY-MM-DD" --until="YYYY-MM-DD" --oneline | wc -l
```
- **PRs Merged:**
```sh
git log --merges --since="YYYY-MM-DD" --until="YYYY-MM-DD" --oneline | wc -l
```
For details:
```sh
git log --merges --since="YYYY-MM-DD" --until="YYYY-MM-DD" --oneline
```
- **Issues Closed:**
```sh
git log --since="YYYY-MM-DD" --until="YYYY-MM-DD" --grep="close[sd]\\|fixe[sd]" --oneline | wc -l
```
For a list:
```sh
git log --since="YYYY-MM-DD" --until="YYYY-MM-DD" --grep="close[sd]\\|fixe[sd]" --oneline
```
- **Highlights:** (What are the biggest achievements or milestones? E.g., POC completion)
- **Blockers:** (Any major blockers or pain points encountered)
- **Learnings:** (Key technical, process, or team insights)
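The metric commands above can be bundled into one helper, with date arguments replacing the `YYYY-MM-DD` placeholders. This is a sketch; run it inside the project repository:

```shell
# Sketch: bundle the sprint metrics commands above into a single helper.
sprint_metrics() {
  since=$1 until=$2
  commits=$(git log --since="$since" --until="$until" --oneline | wc -l | tr -d ' ')
  merges=$(git log --merges --since="$since" --until="$until" --oneline | wc -l | tr -d ' ')
  closed=$(git log --since="$since" --until="$until" \
             --grep="close[sd]\|fixe[sd]" --oneline | wc -l | tr -d ' ')
  printf 'Commits: %s\nPRs merged: %s\nIssues closed: %s\n' \
         "$commits" "$merges" "$closed"
}

# Usage inside the repository, e.g.:
#   sprint_metrics 2025-06-16 2025-06-27
```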
---
## 2. Generate the Sprint Review Document
Once the above context is filled, generate a new file named `{yyyymmdd}-sprint-review.md` in `/docs/devJournal/` using the following structure:
---
### 1. Sprint Overview
- **Sprint Dates:** (start date to end date)
- **Sprint Goal:** (Summarise the main objective or theme of the sprint)
- **Participants:** (List team members involved)
- **Branch/Release:** (List all relevant branches/tags)
---
### 2. Achievements & Deliverables
- **Major Features Completed:** (Bullet list, with links to PRs or dev journal entries)
- **Technical Milestones:** (E.g., architectural decisions, major refactors, new patterns adopted)
- **Documentation Updates:** (Summarise key documentation, changelog, or ADR updates)
- **Testing & Quality:** (Describe test coverage improvements, integration test results, bug fixes)
---
### 3. Sprint Metrics
- **Commits:** (Paste result)
- **PRs Merged:** (Paste result and details)
- **Issues Closed:** (Paste result and details)
- **Branches Created:** (Paste result)
---
### 4. Review of Sprint Goals
- **What was planned:** (Paste or paraphrase the original sprint goal)
- **What was achieved:** (Honest assessment of goal completion)
- **What was not completed:** (List and explain any items not finished, with reasons)
---
### 5. Demo & Walkthrough
- **Screenshots, GIFs, or short video links:** (if available)
- **Instructions for reviewers:** (How to test/review the new features)
---
### 6. Retrospective
#### What Went Well
- (Successes, effective practices, positive surprises)
#### What Didn't Go Well
- (Blockers, pain points, technical debt, process issues)
#### What We Learned
- (Technical, process, or team insights)
#### What We'll Try Next
- (Actions to improve, experiments for next sprint)
---
### 7. Action Items & Next Steps
- (Concrete actions, owners, deadlines for next sprint)
---
### 8. References
- **Dev Journal Entries:** (List all relevant `/docs/devJournal/YYYYMMDD-nn.md` files)
- **ADR(s):** (Link to any new or updated ADRs)
- **Changelog/Docs:** (Links to major documentation changes)
- CHANGELOG.md
- .cline-docs/progress.md
- .cline-docs/activeContext.md
---
> _Use this workflow at the end of each sprint to ensure a culture of continuous improvement, transparency, and knowledge sharing._