feat: comprehensive BMAD enhancements with Memory Bank, ADRs, and Sprint workflows
This commit introduces major enhancements to the BMAD framework, focusing on continuous context management, architectural decision recording, and agile sprint execution.

## Major Additions

### Memory Bank Pattern Integration

- Added session-kickoff task for consistent agent initialization
- Updated all agents to include Memory Bank awareness
- Created session-kickoff-checklist.md for validation
- Enhanced all templates with Memory Bank cross-references
- Integrated Memory Bank updates into workflow handoffs

### Architectural Decision Records (ADRs)

- Added ADR triggers and patterns to data files
- Updated architect agent with ADR creation responsibilities
- Enhanced templates to reference ADR documentation
- Created ADR-aware checklists for architectural reviews

### Sprint Ceremonies & Reviews

- Added conduct-sprint-review task
- Created sprint-review-checklist.md
- Added sprint-planning-tmpl.yaml and sprint-review-tmpl.yaml
- Updated PM and SM agents with sprint review capabilities
- Added sprint-review-triggers.md for automation

### Development Journals

- Enhanced dev journal template with sprint cross-references
- Updated create-dev-journal task with Memory Bank integration
- Added sprint-end journal documentation requirements

### Technical Principles & Standards

- Added twelve-factor-principles.md
- Added microservice-patterns.md
- Added coding-standards.md
- Created project-scaffolding-preference.md from generic rules
- Updated technical-preferences.md with cloud-native patterns

### New Workflows

- sprint-execution.yaml - End-to-end sprint workflow
- documentation-update.yaml - Documentation maintenance
- technical-debt.yaml - Debt reduction workflow
- performance-optimization.yaml - Performance improvement
- system-migration.yaml - Legacy system migration
- quick-fix.yaml - Rapid issue resolution

## Framework-Wide Updates

### Agent Enhancements

- All agents updated with new checklist dependencies
- Added data file dependencies for technical standards
- Enhanced startup instructions with context awareness
- Improved handoff protocols between agents

### Template Improvements

- All templates updated with Memory Bank sections
- Added LLM instructions for AI-specific guidance
- Enhanced YAML templates with new validation rules
- Improved cross-referencing between documents

### Checklist Updates

- Updated all existing checklists with new sections
- Added Memory Bank verification steps
- Enhanced validation criteria
- Added sprint-specific checkpoints

### Data Files

- Reorganized and enhanced all data files
- Added new reference documents for standards
- Updated bmad-kb.md with comprehensive documentation
- Added elicitation method improvements

### Expansion Pack Updates

- Updated all expansion pack templates
- Added Memory Bank awareness to game dev packs
- Enhanced infrastructure DevOps templates

## Technical Improvements

### Tools & Build System

- Updated all tool files with latest patterns
- Enhanced web builder for new file types
- Improved dependency resolution
- Updated IDE configurations

### Cleanup

- Removed tmp/ directory with obsolete rules
- Consolidated rules into framework data files
- Added megalinter-reports/ to .gitignore

## Breaking Changes

None - all changes are backward compatible.

## Testing

- All file modifications validated
- Dependencies verified across framework
- Installation process tested

This represents a major evolution of the BMAD framework, providing comprehensive support for continuous context management, architectural decision tracking, and agile sprint execution while maintaining full backward compatibility.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
parent b6ca88a608
commit ded53af686
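The Memory Bank referenced throughout this commit is a small set of markdown context files. As a minimal sketch of the layout, assembled only from the paths and template names that appear in the hunks below (docs/memory-bank/ plus the projectbrief, productContext, systemPatterns, techContext, activeContext, and progress templates):

```
docs/memory-bank/
├── projectbrief.md    # project foundation
├── productContext.md  # product goals and user context
├── systemPatterns.md  # architectural patterns and decisions
├── techContext.md     # technology stack context and constraints
├── activeContext.md   # current priorities
└── progress.md        # project state tracking
```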
@@ -1,5 +1,5 @@
 name: Release
-'on':
+"on":
   push:
     branches:
       - main
@@ -22,7 +22,7 @@ permissions:
 jobs:
   release:
     runs-on: ubuntu-latest
-    if: '!contains(github.event.head_commit.message, ''[skip ci]'')'
+    if: "!contains(github.event.head_commit.message, '[skip ci]')"
     steps:
       - name: Checkout
         uses: actions/checkout@v4
@@ -32,7 +32,7 @@ jobs:
       - name: Setup Node.js
         uses: actions/setup-node@v4
         with:
-          node-version: '18'
+          node-version: "18"
          cache: npm
           registry-url: https://registry.npmjs.org
       - name: Install dependencies
@@ -28,3 +28,4 @@ sample-project/*
 .gemini
 .bmad*/.cursor/
 web-bundles/
+megalinter-reports/
@@ -9,7 +9,12 @@
     [
       "@semantic-release/git",
       {
-        "assets": ["package.json", "package-lock.json", "tools/installer/package.json", "CHANGELOG.md"],
+        "assets": [
+          "package.json",
+          "package-lock.json",
+          "tools/installer/package.json",
+          "CHANGELOG.md"
+        ],
         "message": "chore(release): ${nextRelease.version} [skip ci]\n\n${nextRelease.notes}"
       }
     ],
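Reflowing the `assets` array is formatting-only; the `message` format is unchanged, and it works together with the release workflow's `if: "!contains(github.event.head_commit.message, '[skip ci]')"` guard above: semantic-release stamps its own release commits with `[skip ci]` so they do not retrigger the release job. A minimal illustration of the resulting commit message (version number hypothetical):

```
chore(release): 1.2.3 [skip ci]

<release notes substituted from ${nextRelease.notes}>
```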
@@ -4,7 +4,7 @@ bundle:
   description: Includes every core system agent.
 agents:
   - bmad-orchestrator
-  - '*'
+  - "*"
 workflows:
   - brownfield-fullstack.yaml
   - brownfield-service.yaml
@@ -12,3 +12,9 @@ workflows:
   - greenfield-fullstack.yaml
   - greenfield-service.yaml
   - greenfield-ui.yaml
+  - sprint-execution.yaml
+  - quick-fix.yaml
+  - technical-debt.yaml
+  - documentation-update.yaml
+  - system-migration.yaml
+  - performance-optimization.yaml
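The six new workflow files are only registered in these team bundles; their contents are not part of this diff. A purely hypothetical sketch of the shape such a file takes, using field names from the existing greenfield/brownfield workflows (the real quick-fix.yaml may differ):

```yaml
# Hypothetical sketch -- quick-fix.yaml itself is not shown in this diff.
workflow:
  id: quick-fix
  name: Quick Fix
  description: Rapid issue resolution (per the commit message above)
  sequence:
    - agent: dev
      action: implement the fix        # assumed step
    - agent: qa
      action: review and verify fix    # assumed step
```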
@@ -16,3 +16,9 @@ workflows:
   - greenfield-fullstack.yaml
   - greenfield-service.yaml
   - greenfield-ui.yaml
+  - sprint-execution.yaml
+  - quick-fix.yaml
+  - technical-debt.yaml
+  - documentation-update.yaml
+  - system-migration.yaml
+  - performance-optimization.yaml
@@ -11,3 +11,9 @@ agents:
 workflows:
   - greenfield-service.yaml
   - brownfield-service.yaml
+  - sprint-execution.yaml
+  - quick-fix.yaml
+  - technical-debt.yaml
+  - documentation-update.yaml
+  - system-migration.yaml
+  - performance-optimization.yaml
@@ -51,12 +51,19 @@ persona:
     - Maintaining a Broad Perspective - Stay aware of market trends and dynamics
     - Integrity of Information - Ensure accurate sourcing and representation
     - Numbered Options Protocol - Always use numbered lists for selections
+  memory_bank_awareness:
+    - Project briefs can form foundation of Memory Bank projectbrief.md
+    - Consider initializing Memory Bank when creating comprehensive project briefs
+    - Use session-kickoff to understand existing project context
+    - Market research and analysis feed into productContext.md
 # All commands require * prefix when used (e.g., *help)
 commands:
   - help: Show numbered list of the following commands to allow selection
+  - session-kickoff: Execute task session-kickoff.md for comprehensive session initialization
   - create-project-brief: use task create-doc with project-brief-tmpl.yaml
   - perform-market-research: use task create-doc with market-research-tmpl.yaml
   - create-competitor-analysis: use task create-doc with competitor-analysis-tmpl.yaml
+  - initialize-memory-bank: Execute task initialize-memory-bank.md to create Memory Bank structure
   - yolo: Toggle Yolo Mode
   - doc-out: Output full document in progress to current destination file
   - research-prompt {topic}: execute task create-deep-research-prompt.md
@@ -70,12 +77,17 @@ dependencies:
     - create-doc.md
     - advanced-elicitation.md
     - document-project.md
+    - session-kickoff.md
+    - initialize-memory-bank.md
   templates:
     - project-brief-tmpl.yaml
     - market-research-tmpl.yaml
     - competitor-analysis-tmpl.yaml
     - brainstorming-output-tmpl.yaml
+    - projectbrief-tmpl.yaml
+    - productContext-tmpl.yaml
   data:
     - bmad-kb.md
     - brainstorming-techniques.md
+    - project-scaffolding-preference.md
 ```
@@ -53,6 +53,12 @@ persona:
     - Cost-Conscious Engineering - Balance technical ideals with financial reality
     - Living Architecture - Design for change and adaptation
     - Decision Documentation - Capture architectural decisions in ADRs for future reference
+  technical_principles_awareness:
+    - Apply coding standards from data/coding-standards.md to all generated code
+    - Follow twelve-factor principles for cloud-native applications
+    - Consider microservice patterns for distributed systems when appropriate
+    - Reference principles when making architectural decisions
+    - Document pattern choices and rationale in ADRs
   adr_responsibilities:
     - Identify when architectural decisions require formal documentation
     - Guide creation of ADRs for significant technology choices and patterns
@@ -68,6 +74,7 @@ persona:
 # All commands require * prefix when used (e.g., *help)
 commands:
   - help: Show numbered list of the following commands to allow selection
+  - session-kickoff: Execute task session-kickoff.md for comprehensive session initialization
   - create-full-stack-architecture: use create-doc with fullstack-architecture-tmpl.yaml
   - create-backend-architecture: use create-doc with architecture-tmpl.yaml
   - create-front-end-architecture: use create-doc with front-end-architecture-tmpl.yaml
@@ -97,6 +104,7 @@ dependencies:
     - create-comprehensive-pr.md
     - initialize-memory-bank.md
     - update-memory-bank.md
+    - session-kickoff.md
   templates:
     - architecture-tmpl.yaml
     - front-end-architecture-tmpl.yaml
@@ -111,7 +119,12 @@ dependencies:
     - progress-tmpl.yaml
   checklists:
     - architect-checklist.md
+    - session-kickoff-checklist.md
   data:
     - technical-preferences.md
     - adr-triggers.md
+    - coding-standards.md
+    - twelve-factor-principles.md
+    - microservice-patterns.md
+    - project-scaffolding-preference.md
 ```
@@ -46,6 +46,14 @@ persona:
     - Expert knowledge of all BMad resources if using *kb
     - Always presents numbered lists for choices
     - Process (*) commands immediately, All commands require * prefix when used (e.g., *help)
+  enhanced_capabilities_awareness:
+    - Memory Bank pattern for context persistence across sessions
+    - Architectural Decision Records (ADRs) for decision documentation
+    - Development Journals for session documentation
+    - Comprehensive commit and PR workflows
+    - Technical principles (coding standards, twelve-factor, microservices)
+    - Session kickoff protocol for proper agent initialization
+    - Sprint reviews and retrospectives for continuous improvement
 
 commands:
   - help: Show these listed commands in a numbered list
@@ -56,6 +64,14 @@ commands:
   - document-project: execute the task document-project.md
   - execute-checklist {checklist}: Run task execute-checklist (no checklist = ONLY show available checklists listed under dependencies/checklist below)
   - shard-doc {document} {destination}: run the task shard-doc against the optionally provided document to the specified destination
+  - session-kickoff: Execute task session-kickoff.md for comprehensive session initialization
+  - initialize-memory-bank: Execute task initialize-memory-bank.md to create Memory Bank structure
+  - update-memory-bank: Execute task update-memory-bank.md to update project context
+  - create-adr: Execute task create-adr.md to create an Architectural Decision Record
+  - create-dev-journal: Execute task create-dev-journal.md to document session work
+  - comprehensive-commit: Execute task create-comprehensive-commit for high-quality commit messages
+  - comprehensive-pr: Execute task create-comprehensive-pr for detailed pull request descriptions
+  - sprint-review: Execute task conduct-sprint-review.md to facilitate sprint review
   - yolo: Toggle Yolo Mode
   - exit: Exit (confirm)
 
@@ -74,6 +90,14 @@ dependencies:
     - generate-ai-frontend-prompt.md
     - index-docs.md
     - shard-doc.md
+    - session-kickoff.md
+    - initialize-memory-bank.md
+    - update-memory-bank.md
+    - create-adr.md
+    - create-dev-journal.md
+    - create-comprehensive-commit.md
+    - create-comprehensive-pr.md
+    - conduct-sprint-review.md
   templates:
     - architecture-tmpl.yaml
     - brownfield-architecture-tmpl.yaml
@@ -86,11 +110,25 @@ dependencies:
     - prd-tmpl.yaml
     - project-brief-tmpl.yaml
     - story-tmpl.yaml
+    - adr-tmpl.yaml
+    - dev-journal-tmpl.yaml
+    - productContext-tmpl.yaml
+    - systemPatterns-tmpl.yaml
+    - techContext-tmpl.yaml
+    - activeContext-tmpl.yaml
+    - progress-tmpl.yaml
+    - sprint-review-tmpl.yaml
   data:
     - bmad-kb.md
     - brainstorming-techniques.md
     - elicitation-methods.md
     - technical-preferences.md
+    - adr-triggers.md
+    - memory-bank-triggers.md
+    - coding-standards.md
+    - twelve-factor-principles.md
+    - microservice-patterns.md
+    - project-scaffolding-preference.md
   workflows:
     - brownfield-fullstack.md
     - brownfield-service.md
@@ -103,6 +141,8 @@ dependencies:
     - change-checklist.md
     - pm-checklist.md
     - po-master-checklist.md
+    - session-kickoff-checklist.md
+    - sprint-review-checklist.md
     - story-dod-checklist.md
     - story-draft-checklist.md
 ```
@@ -52,6 +52,14 @@ persona:
     - Always use numbered lists for choices
     - Process commands starting with * immediately
     - Always remind users that commands require * prefix
+  enhanced_capabilities_awareness:
+    - Memory Bank pattern for context persistence across sessions
+    - Architectural Decision Records (ADRs) for decision documentation
+    - Development Journals for session documentation
+    - Comprehensive commit and PR workflows
+    - Technical principles (coding standards, twelve-factor, microservices)
+    - Session kickoff protocol for proper agent initialization
+    - Sprint reviews and retrospectives for continuous improvement
 commands: # All commands require * prefix when used (e.g., *help, *agent pm)
   help: Show this guide with available agents and workflows
   chat-mode: Start conversational mode for detailed assistance
@@ -66,6 +74,14 @@ commands: # All commands require * prefix when used (e.g., *help, *agent pm)
   plan-status: Show current workflow plan progress
   plan-update: Update workflow plan status
   checklist: Execute a checklist (list if name not specified)
+  session-kickoff: Execute session initialization protocol
+  initialize-memory-bank: Create Memory Bank structure for context persistence
+  update-memory-bank: Update project context in Memory Bank
+  create-adr: Create an Architectural Decision Record
+  create-dev-journal: Document session work in development journal
+  comprehensive-commit: Create high-quality commit messages
+  comprehensive-pr: Create detailed pull request descriptions
+  sprint-review: Conduct comprehensive sprint review and retrospective
   yolo: Toggle skip confirmations mode
   party-mode: Group chat with all agents
   doc-out: Output full document
@@ -92,6 +108,16 @@ help-display-template: |
   *plan-status ........ Show current workflow plan progress
   *plan-update ........ Update workflow plan status
 
+  Enhanced Capabilities:
+  *session-kickoff .... Initialize session with full context
+  *initialize-memory-bank Create Memory Bank structure
+  *update-memory-bank . Update project context
+  *create-adr ......... Create Architectural Decision Record
+  *create-dev-journal . Document session work
+  *comprehensive-commit Create quality commit messages
+  *comprehensive-pr ... Create detailed PR descriptions
+  *sprint-review ...... Conduct sprint review/retrospective
+
   Other Commands:
   *yolo ............... Toggle skip confirmations mode
   *party-mode ......... Group chat with all agents
@@ -142,9 +168,32 @@ dependencies:
     - advanced-elicitation.md
     - create-doc.md
     - kb-mode-interaction.md
+    - session-kickoff.md
+    - initialize-memory-bank.md
+    - update-memory-bank.md
+    - create-adr.md
+    - create-dev-journal.md
+    - create-comprehensive-commit.md
+    - create-comprehensive-pr.md
+    - conduct-sprint-review.md
+  templates:
+    - adr-tmpl.yaml
+    - dev-journal-tmpl.yaml
+    - projectbrief-tmpl.yaml
+    - productContext-tmpl.yaml
+    - systemPatterns-tmpl.yaml
+    - techContext-tmpl.yaml
+    - activeContext-tmpl.yaml
+    - progress-tmpl.yaml
+    - sprint-review-tmpl.yaml
   data:
     - bmad-kb.md
     - elicitation-methods.md
+    - adr-triggers.md
+    - memory-bank-triggers.md
+    - coding-standards.md
+    - twelve-factor-principles.md
+    - microservice-patterns.md
   utils:
     - workflow-management.md
 ```
@@ -64,10 +64,17 @@ core_principles:
     - Numbered Options - Always use numbered lists when presenting choices to the user
     - Session Documentation - Create dev journal entries for significant development sessions
     - Knowledge Preservation - Document decisions, patterns, and learnings for future reference
+  coding_standards_awareness:
+    - Apply all coding standards from data/coding-standards.md
+    - Follow security principles [SFT], [IV], [RL], [RLS] by default
+    - Maintain code quality standards [DRY], [SF], [RP], [CA]
+    - Use conventional commit format [CD] for all commits
+    - Write testable code [TDT] with appropriate test coverage
 
 # All commands require * prefix when used (e.g., *help)
 commands:
   - help: Show numbered list of the following commands to allow selection
+  - session-kickoff: Execute task session-kickoff.md for comprehensive session initialization
   - run-tests: Execute linting and tests
   - explain: teach me what and why you did whatever you just did in detail so I can learn. Explain to me as if you were training a junior engineer.
   - create-dev-journal: Create a development journal entry documenting the session's work
@@ -94,10 +101,15 @@ dependencies:
     - create-comprehensive-commit.md
     - create-comprehensive-pr.md
     - update-memory-bank.md
+    - session-kickoff.md
   checklists:
     - story-dod-checklist.md
+    - session-kickoff-checklist.md
   templates:
     - dev-journal-tmpl.yaml
     - activeContext-tmpl.yaml
     - progress-tmpl.yaml
+  data:
+    - coding-standards.md
+    - project-scaffolding-preference.md
 ```
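The bracketed codes ([SFT], [IV], [DRY], [CD], [TDT], and so on) appear to be shorthand identifiers defined in the new coding-standards.md, which is not shown in this diff. A hypothetical example of the conventional commit format the [CD] rule asks for:

```
fix(installer): validate package path before resolving dependencies

Hypothetical example only: type(scope): summary on the first line,
followed by an explanatory body -- the format semantic-release parses.
```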
@@ -47,9 +47,21 @@ persona:
     - Collaborative & iterative approach
     - Proactive risk identification
     - Strategic thinking & outcome-oriented
+  memory_bank_awareness:
+    - PRDs inform Memory Bank productContext.md and projectbrief.md
+    - Use session-kickoff to understand existing product direction
+    - Update activeContext.md when priorities shift
+    - Product decisions should align with Memory Bank documented goals
+  sprint_review_awareness:
+    - Collaborate with SM on sprint reviews for product insights
+    - Document product-related achievements and learnings
+    - Identify feature adoption and user feedback patterns
+    - Update product roadmap based on sprint outcomes
+    - Ensure product goals align with sprint accomplishments
 # All commands require * prefix when used (e.g., *help)
 commands:
   - help: Show numbered list of the following commands to allow selection
+  - session-kickoff: Execute task session-kickoff.md for comprehensive session initialization
   - create-prd: run task create-doc.md with template prd-tmpl.yaml
   - create-brownfield-prd: run task create-doc.md with template brownfield-prd-tmpl.yaml
   - create-epic: Create epic for brownfield projects (task brownfield-create-epic)
@@ -57,6 +69,8 @@ commands:
   - doc-out: Output full document to current destination file
   - shard-prd: run the task shard-doc.md for the provided prd.md (ask if not found)
   - correct-course: execute the correct-course task
+  - update-memory-bank: Execute task update-memory-bank.md to update project context
+  - sprint-review: Collaborate on sprint reviews (task conduct-sprint-review.md)
   - yolo: Toggle Yolo Mode
   - exit: Exit (confirm)
 dependencies:
@@ -68,12 +82,22 @@ dependencies:
     - brownfield-create-story.md
     - execute-checklist.md
     - shard-doc.md
+    - session-kickoff.md
+    - update-memory-bank.md
+    - conduct-sprint-review.md
   templates:
     - prd-tmpl.yaml
     - brownfield-prd-tmpl.yaml
+    - productContext-tmpl.yaml
+    - activeContext-tmpl.yaml
+    - sprint-review-tmpl.yaml
   checklists:
     - pm-checklist.md
     - change-checklist.md
+    - session-kickoff-checklist.md
+    - sprint-review-checklist.md
   data:
     - technical-preferences.md
+    - sprint-review-triggers.md
+    - project-scaffolding-preference.md
 ```
@@ -56,9 +56,16 @@ persona:
     - Update activeContext.md when priorities shift
     - Ensure stories align with Memory Bank documented goals
     - Use Memory Bank for consistency validation
+  sprint_review_awareness:
+    - Validate story completion against acceptance criteria
+    - Document requirement changes and adaptations
+    - Review backlog priorities based on sprint outcomes
+    - Identify patterns in story completion rates
+    - Collaborate with SM on retrospective insights
 # All commands require * prefix when used (e.g., *help)
 commands:
   - help: Show numbered list of the following commands to allow selection
+  - session-kickoff: Execute task session-kickoff.md for comprehensive session initialization
   - execute-checklist-po: Run task execute-checklist (checklist po-master-checklist)
   - shard-doc {document} {destination}: run the task shard-doc against the optionally provided document to the specified destination
   - correct-course: execute the correct-course task
@@ -68,6 +75,7 @@ commands:
   - validate-story-draft {story}: run the task validate-next-story against the provided story file
   - initialize-memory-bank: Execute task initialize-memory-bank.md to create Memory Bank structure
   - update-memory-bank: Execute task update-memory-bank.md to update project context
+  - sprint-review: Participate in sprint reviews (task conduct-sprint-review.md)
   - yolo: Toggle Yolo Mode off on - on will skip doc section confirmations
   - exit: Exit (confirm)
 dependencies:
@@ -78,13 +86,21 @@ dependencies:
     - validate-next-story.md
     - initialize-memory-bank.md
     - update-memory-bank.md
+    - session-kickoff.md
+    - conduct-sprint-review.md
   templates:
     - story-tmpl.yaml
     - project-brief-tmpl.yaml
     - productContext-tmpl.yaml
     - activeContext-tmpl.yaml
     - progress-tmpl.yaml
+    - sprint-review-tmpl.yaml
   checklists:
     - po-master-checklist.md
     - change-checklist.md
+    - session-kickoff-checklist.md
+    - sprint-review-checklist.md
+  data:
+    - sprint-review-triggers.md
+    - project-scaffolding-preference.md
 ```
@@ -50,6 +50,13 @@ persona:
     - Risk-Based Testing - Prioritize testing based on risk and critical areas
     - Continuous Improvement - Balance perfection with pragmatism
     - Architecture & Design Patterns - Ensure proper patterns and maintainable code structure
+  coding_standards_awareness:
+    - Apply all coding standards from data/coding-standards.md during reviews
+    - Enforce security principles [SFT], [IV], [RL], [RLS] in all code
+    - Verify code quality standards [DRY], [SF], [RP], [CA] are met
+    - Check for proper error handling [REH] and resource management [RM]
+    - Ensure twelve-factor principles compliance for cloud-native apps
+    - Validate testing standards [TDT] and test coverage
 story-file-permissions:
   - CRITICAL: When reviewing stories, you are ONLY authorized to update the "QA Results" section of story files
   - CRITICAL: DO NOT modify any other sections including Status, Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Testing, Dev Agent Record, Change Log, or any other sections
@@ -64,6 +71,8 @@ dependencies:
     - review-story.md
   data:
     - technical-preferences.md
+    - coding-standards.md
+    - twelve-factor-principles.md
   templates:
     - story-tmpl.yaml
 ```
@@ -32,31 +32,53 @@ agent:
   id: sm
   title: Scrum Master
   icon: 🏃
-  whenToUse: Use for story creation, epic management, retrospectives in party-mode, and agile process guidance
+  whenToUse: Use for story creation, epic management, sprint reviews, retrospectives, and agile process guidance
   customization: null
 persona:
-  role: Technical Scrum Master - Story Preparation Specialist
-  style: Task-oriented, efficient, precise, focused on clear developer handoffs
-  identity: Story creation expert who prepares detailed, actionable stories for AI developers
-  focus: Creating crystal-clear stories that dumb AI agents can implement without confusion
+  role: Technical Scrum Master - Story & Sprint Facilitator
+  style: Task-oriented, efficient, precise, focused on clear developer handoffs and team success
+  identity: Scrum expert who prepares actionable stories and facilitates sprint ceremonies
+  focus: Creating crystal-clear stories and conducting effective sprint reviews/retrospectives
   core_principles:
     - Rigorously follow `create-next-story` procedure to generate the detailed user story
     - Will ensure all information comes from the PRD and Architecture to guide the dumb dev agent
     - You are NOT allowed to implement stories or modify code EVER!
+    - Facilitate sprint reviews to capture achievements, learnings, and improvements
+    - Drive continuous improvement through effective retrospectives
+    - Maintain sprint momentum and team morale
+  sprint_review_awareness:
+    - Conduct sprint reviews at end of each iteration
+    - Document achievements and metrics in dev journal
+    - Facilitate retrospectives for continuous improvement
+    - Update Memory Bank with sprint outcomes
+    - Create actionable improvement items for next sprint
 # All commands require * prefix when used (e.g., *help)
 commands:
   - help: Show numbered list of the following commands to allow selection
   - draft: Execute task create-next-story.md
   - correct-course: Execute task correct-course.md
   - story-checklist: Execute task execute-checklist.md with checklist story-draft-checklist.md
+  - sprint-review: Execute task conduct-sprint-review.md to facilitate sprint review
+  - session-kickoff: Execute task session-kickoff.md for session initialization
+  - update-memory-bank: Execute task update-memory-bank.md after sprint review
   - exit: Say goodbye as the Scrum Master, and then abandon inhabiting this persona
 dependencies:
   tasks:
     - create-next-story.md
     - execute-checklist.md
     - correct-course.md
+    - conduct-sprint-review.md
+    - session-kickoff.md
+    - update-memory-bank.md
   templates:
     - story-tmpl.yaml
+    - sprint-review-tmpl.yaml
+    - activeContext-tmpl.yaml
+    - progress-tmpl.yaml
   checklists:
     - story-draft-checklist.md
+    - session-kickoff-checklist.md
+    - sprint-review-checklist.md
+  data:
+    - sprint-review-triggers.md
 ```
@@ -81,6 +81,8 @@ Ask the user if they want to work through the checklist:
 - [ ] Component interactions and dependencies are mapped
 - [ ] Data flows are clearly illustrated
 - [ ] Technology choices for each component are specified
+- [ ] Memory Bank systemPatterns.md captures architectural patterns
+- [ ] Technical principles and preferences are documented and referenced
 
 ### 2.2 Separation of Concerns
 
@@ -100,10 +102,13 @@ Ask the user if they want to work through the checklist:
 
 ### 2.4 Modularity & Maintainability
 
+[[LLM: Reference project-scaffolding-preference.md for standard project organization principles.]]
+
 - [ ] System is divided into cohesive, loosely-coupled modules
 - [ ] Components can be developed and tested independently
 - [ ] Changes can be localized to specific components
 - [ ] Code organization promotes discoverability
+- [ ] Project structure follows project-scaffolding-preference.md
 - [ ] Architecture specifically designed for AI agent implementation
 
 ## 3. TECHNICAL STACK & DECISIONS
@@ -158,7 +163,10 @@ Ask the user if they want to work through the checklist:
 
 ### 4.2 Frontend Structure & Organization
 
+[[LLM: Reference project-scaffolding-preference.md for standard project structure guidelines.]]
+
 - [ ] Directory structure is clearly documented with ASCII diagram
+- [ ] Structure aligns with project-scaffolding-preference.md standards
 - [ ] Component organization follows stated patterns
 - [ ] File naming conventions are explicit
 - [ ] Structure supports chosen framework's best practices
@@ -316,13 +324,17 @@ Ask the user if they want to work through the checklist:
 
 ### 7.6 Architectural Decision Records (ADRs)
 
-- [ ] ADR process is established for the project
-- [ ] Significant architecture decisions are documented in ADRs
-- [ ] Technology stack choices have corresponding ADRs
-- [ ] Integration approach decisions are captured in ADRs
-- [ ] ADRs follow consistent format and numbering
-- [ ] Superseded decisions are properly tracked
-- [ ] ADR index is maintained and accessible
+- [ ] ADR process is established for the project with clear templates
+- [ ] All significant architecture decisions are documented in ADRs
+- [ ] Technology stack choices have corresponding ADRs with alternatives considered
+- [ ] Integration approach decisions are captured in ADRs with rationale
+- [ ] ADRs follow consistent format (Context, Decision, Consequences) and numbering
+- [ ] Superseded decisions are properly tracked with links to new decisions
+- [ ] ADR index is maintained and accessible in docs/adr/
+- [ ] Each ADR includes status (proposed, accepted, deprecated, superseded)
+- [ ] Trade-offs and implications are clearly documented
+- [ ] ADRs are linked from relevant architecture sections
+- [ ] Review process for ADRs is defined and followed
 
 ## 8. DEPENDENCY & INTEGRATION MANAGEMENT
 
@@ -388,11 +400,41 @@ Ask the user if they want to work through the checklist:
 - [ ] Testing patterns are clearly defined
 - [ ] Debugging guidance is provided
 
-## 10. ACCESSIBILITY IMPLEMENTATION [[FRONTEND ONLY]]
+## 10. KNOWLEDGE MANAGEMENT & DOCUMENTATION
+
+[[LLM: Architecture is a living document that must evolve with the project. Knowledge management ensures decisions are captured, patterns are reusable, and the team learns from experience.]]
+
+### 10.1 Memory Bank Integration
+
+- [ ] Memory Bank directory structure established at docs/memory-bank/
+- [ ] systemPatterns.md documents architectural patterns and decisions
+- [ ] techContext.md captures technology stack context and constraints
+- [ ] activeContext.md maintained with current architectural priorities
+- [ ] Architectural evolution tracked in progress.md
+- [ ] Cross-references between Memory Bank and architecture docs
+
+### 10.2 Dev Journal Requirements
+
+- [ ] Dev Journal process established for architectural decisions
+- [ ] Template for architectural Dev Journal entries defined
+- [ ] Key decision points identified for documentation
+- [ ] Learning capture process for architectural insights
+- [ ] Regular review cadence for Dev Journal entries
+
+### 10.3 Technical Principles Alignment
+
+- [ ] Core technical principles documented and accessible
+- [ ] Architecture aligns with established coding standards
+- [ ] Microservice patterns (if applicable) properly applied
+- [ ] Twelve-factor principles considered and documented
+- [ ] Security and performance principles integrated
+- [ ] Deviations from principles justified in ADRs
+
+## 11. ACCESSIBILITY IMPLEMENTATION [[FRONTEND ONLY]]
 
 [[LLM: Skip this section for backend-only projects. Accessibility is a core requirement for any user interface.]]
 
-### 10.1 Accessibility Standards
+### 11.1 Accessibility Standards
 
 - [ ] Semantic HTML usage is emphasized
 - [ ] ARIA implementation guidelines provided
@@ -400,7 +442,7 @@ Ask the user if they want to work through the checklist:
 - [ ] Focus management approach specified
 - [ ] Screen reader compatibility addressed
 
-### 10.2 Accessibility Testing
+### 11.2 Accessibility Testing
 
 - [ ] Accessibility testing tools identified
 - [ ] Testing process integrated into workflow
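The checklist items above pin down the expected ADR shape: numbered entries indexed under docs/adr/, a status field, and a consistent Context/Decision/Consequences format. A minimal skeleton consistent with those items (the adr-tmpl.yaml added elsewhere in this commit is not shown here):

```markdown
# ADR-0001: <decision title>

Status: proposed | accepted | deprecated | superseded (link to successor)

## Context
Forces, constraints, and the alternatives that were considered.

## Decision
The choice made, with rationale and trade-offs.

## Consequences
What becomes easier or harder; follow-up work and links back to the
relevant architecture sections.
```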
@@ -93,6 +93,8 @@ Be thorough - missed conflicts cause future problems.]]
   - [ ] Does the technology list need updating?
   - [ ] Do data models or schemas need revision?
   - [ ] Are external API integrations affected?
+  - [ ] Do existing ADRs need to be superseded or updated?
+  - [ ] Is a new ADR required to document the technical change decision?
 - [ ] **Review Frontend Spec (if applicable):**
   - [ ] Does the issue conflict with the FE architecture, component library choice, or UI/UX design?
   - [ ] Are specific FE components or user flows impacted?
@@ -151,6 +153,8 @@ This proposal guides all subsequent work.]]
 - [ ] **PRD MVP Impact:** Changes to scope/goals (if any).
 - [ ] **High-Level Action Plan:** Next steps for stories/updates.
 - [ ] **Agent Handoff Plan:** Identify roles needed (PM, Arch, Design Arch, PO).
+- [ ] **Memory Bank Updates Required:** Which Memory Bank files need updating (activeContext, systemPatterns, etc.).
+- [ ] **Dev Journal Entry Plan:** Key decisions and rationale to document.
 
 ## 6. Final Review & Handoff
 
@@ -218,11 +218,14 @@ Ask the user if they want to work through the checklist:
 
 ### 6.3 First Epic Completeness
 
+[[LLM: Reference project-scaffolding-preference.md for comprehensive project structure and initialization guidelines.]]
+
 - [ ] First epic includes all necessary setup steps
-- [ ] Project scaffolding and initialization addressed
+- [ ] Project scaffolding follows project-scaffolding-preference.md
 - [ ] Core infrastructure setup included
 - [ ] Development environment setup addressed
 - [ ] Local testability established early
+- [ ] BMAD-specific directories included in setup (Memory Bank, ADRs, Dev Journals)
 
 ## 7. TECHNICAL GUIDANCE
 
@@ -234,6 +237,7 @@ Ask the user if they want to work through the checklist:
 - [ ] Performance considerations highlighted
 - [ ] Security requirements articulated
 - [ ] Known areas of high complexity or technical risk flagged for architectural deep-dive
+- [ ] ADR (Architecture Decision Record) templates prepared for key decisions
 
 ### 7.2 Technical Decision Framework
 
@@ -243,6 +247,8 @@ Ask the user if they want to work through the checklist:
 - [ ] Non-negotiable technical requirements highlighted
 - [ ] Areas requiring technical investigation identified
 - [ ] Guidance on technical debt approach provided
+- [ ] ADR creation process integrated into decision-making
+- [ ] Technical principles and preferences aligned with project goals
 
 ### 7.3 Implementation Considerations
 
@@ -288,6 +294,7 @@ Ask the user if they want to work through the checklist:
 - [ ] Technical terms are defined where necessary
 - [ ] Diagrams/visuals included where helpful
 - [ ] Documentation is versioned appropriately
+- [ ] Technical principles clearly documented for team reference
 
 ### 9.2 Stakeholder Alignment
 
@@ -296,6 +303,16 @@ Ask the user if they want to work through the checklist:
 - [ ] Potential areas of disagreement addressed
 - [ ] Communication plan for updates established
 - [ ] Approval process defined
+- [ ] Sprint Review cadence and format agreed upon
+
+### 9.3 Sprint Review Planning
+
+- [ ] Sprint Review schedule established and communicated
+- [ ] Review format aligned with stakeholder preferences
+- [ ] Success metrics for sprint defined
+- [ ] Demo scenarios planned for completed features
+- [ ] Retrospective process integrated into sprint planning
+- [ ] Documentation updates planned for sprint outcomes
 
 ## PRD & EPIC VALIDATION SUMMARY
 
@@ -350,7 +367,7 @@ After presenting the report, ask if the user wants:
 ### Category Statuses
 
 | Category                         | Status | Critical Issues |
-| -------------------------------- | ------ | --------------- |
+|----------------------------------|--------|-----------------|
 | 1. Problem Definition & Context  | _TBD_  |                 |
 | 2. MVP Scope Definition          | _TBD_  |                 |
 | 3. User Experience Requirements  | _TBD_  |                 |
@ -59,17 +59,51 @@ Ask the user if they want to work through the checklist:
|
||||||
- Section by section (interactive mode) - Review each section, get confirmation before proceeding
|
- Section by section (interactive mode) - Review each section, get confirmation before proceeding
|
||||||
- All at once (comprehensive mode) - Complete full analysis and present report at end]]
|
- All at once (comprehensive mode) - Complete full analysis and present report at end]]
|
||||||
|
|
||||||
|
## 0. SESSION INITIALIZATION & CONTEXT
|
||||||
|
|
||||||
|
[[LLM: Before any validation, ensure complete project understanding through systematic session kickoff. This prevents context gaps that lead to suboptimal decisions.]]
|
||||||
|
|
||||||
|
### 0.1 Session Kickoff Completion
|
||||||
|
|
||||||
|
- [ ] Session kickoff task completed to establish project context
|
||||||
|
- [ ] Memory Bank files reviewed (if they exist)
|
||||||
|
- [ ] Recent Dev Journal entries reviewed for current state
|
||||||
|
- [ ] Architecture documentation reviewed and understood
|
||||||
|
- [ ] Git status and recent commits analyzed
|
||||||
|
- [ ] Documentation inconsistencies identified and noted
|
||||||
|
|
||||||
|
### 0.2 Memory Bank Initialization [[NEW PROJECT]]
|
||||||
|
|
||||||
|
- [ ] Memory Bank directory structure created at `docs/memory-bank/`
|
||||||
|
- [ ] Initial `projectbrief.md` created with project foundation
|
||||||
|
- [ ] `activeContext.md` initialized with current priorities
|
||||||
|
- [ ] `progress.md` started to track project state
|
||||||
|
- [ ] `systemPatterns.md` prepared for architecture decisions
|
||||||
|
- [ ] `techContext.md` and `productContext.md` initialized
|
||||||
|
|
||||||
|
### 0.3 Technical Principles Alignment
|
||||||
|
|
||||||
|
- [ ] Technical principles and preferences documented
|
||||||
|
- [ ] Coding standards established and referenced
|
||||||
|
- [ ] Microservice patterns (if applicable) documented
|
||||||
|
- [ ] Twelve-factor principles considered and applied
|
||||||
|
- [ ] Security and performance standards defined
|
||||||
|
|
||||||
## 1. PROJECT SETUP & INITIALIZATION

[[LLM: Project setup is the foundation. For greenfield, ensure a clean start. For brownfield, ensure safe integration with the existing system. Verify setup matches project type.]]

### 1.1 Project Scaffolding [[GREENFIELD ONLY]]

[[LLM: Reference project-scaffolding-preference.md in data dependencies for comprehensive project structure guidelines. Ensure the project follows the standardized directory structure and documentation practices.]]

- [ ] Epic 1 includes explicit steps for project creation/initialization
- [ ] Project structure follows project-scaffolding-preference.md guidelines
- [ ] If using a starter template, steps for cloning/setup are included
- [ ] If building from scratch, all necessary scaffolding steps are defined
- [ ] Initial README or documentation setup is included
- [ ] Repository setup and initial commit processes are defined
- [ ] BMAD-specific directories created (docs/memory-bank, docs/adr, docs/devJournal)

### 1.2 Existing System Integration [[BROWNFIELD ONLY]]

@@ -295,14 +329,15 @@ Ask the user if they want to work through the checklist:

## 9. DOCUMENTATION & HANDOFF

[[LLM: Good documentation enables smooth development. For brownfield, documentation of integration points is critical. Include Dev Journal and Sprint Review processes.]]

### 9.1 Developer Documentation

- [ ] API documentation created alongside implementation
- [ ] Setup instructions are comprehensive
- [ ] Architecture decisions documented with ADRs
- [ ] Patterns and conventions documented
- [ ] Dev Journal maintained with daily/weekly updates
- [ ] [[BROWNFIELD ONLY]] Integration points documented in detail

### 9.2 User Documentation

@@ -314,11 +349,22 @@ Ask the user if they want to work through the checklist:

### 9.3 Knowledge Transfer

- [ ] Dev Journal entries capture key decisions and learnings
- [ ] Sprint Review documentation prepared for stakeholders
- [ ] [[BROWNFIELD ONLY]] Existing system knowledge captured
- [ ] [[BROWNFIELD ONLY]] Integration knowledge documented
- [ ] Code review knowledge sharing planned
- [ ] Deployment knowledge transferred to operations
- [ ] Historical context preserved in Memory Bank

### 9.4 Sprint Review Preparation

- [ ] Sprint objectives and completion status documented
- [ ] Key achievements and blockers identified
- [ ] Technical decisions and their rationale captured
- [ ] Lessons learned documented for future sprints
- [ ] Next sprint priorities aligned with project goals
- [ ] Memory Bank updated with sprint outcomes

## 10. POST-MVP CONSIDERATIONS

@@ -414,7 +460,8 @@ After presenting the report, ask if the user wants:

### Category Statuses

| Category                                 | Status | Critical Issues |
|------------------------------------------|--------|-----------------|
| 0. Session Initialization & Context      | _TBD_  |                 |
| 1. Project Setup & Initialization        | _TBD_  |                 |
| 2. Infrastructure & Deployment           | _TBD_  |                 |
| 3. External Dependencies & Integrations  | _TBD_  |                 |

@@ -0,0 +1,267 @@

# Session Kickoff Checklist

This checklist ensures AI agents have complete project context and understanding before starting work. It provides systematic session initialization across all agent types.

[[LLM: INITIALIZATION INSTRUCTIONS - SESSION KICKOFF

This is the FIRST checklist to run when starting any new AI agent session. It prevents context gaps, reduces mistakes, and ensures efficient work.

IMPORTANT: This checklist is mandatory for:

- New AI sessions on existing projects
- After significant time gaps (>24 hours)
- When switching between major project areas
- After major changes or pivots
- When onboarding new team members

The goal is to establish complete context BEFORE any work begins.]]

## 1. MEMORY BANK REVIEW

[[LLM: The Memory Bank is the primary source of project truth. Review it systematically, noting dates and potential staleness.]]

### 1.1 Core Memory Bank Files

- [ ] **projectbrief.md** reviewed - Project foundation, goals, and scope understood
- [ ] **activeContext.md** reviewed - Current priorities and immediate work identified
- [ ] **progress.md** reviewed - Project state and completed features understood
- [ ] **systemPatterns.md** reviewed - Architecture patterns and decisions noted
- [ ] **techContext.md** reviewed - Technology stack and constraints clear
- [ ] **productContext.md** reviewed - Problem space and user needs understood
- [ ] Last update timestamps noted for each file
- [ ] Potential inconsistencies between files identified

### 1.2 Memory Bank Health Assessment

- [ ] Files exist and are accessible
- [ ] Information appears current (updated within the last sprint)
- [ ] No major gaps in documentation identified
- [ ] Cross-references between files are consistent
- [ ] Action items for updates noted if needed

### 1.3 Project Structure Verification

[[LLM: Reference project-scaffolding-preference.md for the standard project structure. Verify the actual structure aligns with BMAD conventions.]]

- [ ] Project follows standard directory structure
- [ ] BMAD-specific directories exist (docs/memory-bank, docs/adr, docs/devJournal)
- [ ] Documentation directories properly organized
- [ ] Source code organization follows conventions
- [ ] Test structure aligns with project type

## 2. ARCHITECTURE DOCUMENTATION

[[LLM: Architecture drives implementation. Understand the system design thoroughly.]]

### 2.1 Architecture Documents

- [ ] Primary architecture document located and reviewed
- [ ] Document type identified (greenfield, brownfield, frontend, fullstack)
- [ ] Core architectural decisions understood
- [ ] System components and relationships clear
- [ ] Technology choices and versions noted
- [ ] API documentation reviewed if it exists
- [ ] Database schemas understood if applicable

### 2.2 Architecture Alignment

- [ ] Architecture aligns with Memory Bank information
- [ ] Recent changes or updates identified
- [ ] ADRs reviewed for architectural decisions
- [ ] Integration points clearly understood
- [ ] Deployment architecture reviewed

## 3. DEVELOPMENT HISTORY

[[LLM: Recent history provides context for current work and challenges.]]

### 3.1 Dev Journal Review

- [ ] Dev Journal entries located (last 3-5)
- [ ] Recent work and decisions understood
- [ ] Challenges and blockers identified
- [ ] Technical debt or issues noted
- [ ] Patterns in development identified
- [ ] Key learnings extracted

### 3.2 ADR Review

- [ ] Recent ADRs reviewed (last 3-5)
- [ ] Current architectural decisions understood
- [ ] Superseded decisions noted
- [ ] Pending decisions identified
- [ ] ADR alignment with architecture verified

## 4. CURRENT PROJECT STATE

[[LLM: Understanding the current state prevents duplicate work and conflicts.]]

### 4.1 Git Status Check

- [ ] Current branch identified
- [ ] Clean working directory confirmed
- [ ] Recent commits reviewed (last 10)
- [ ] Outstanding changes understood
- [ ] Merge conflicts checked
- [ ] Remote synchronization status verified

### 4.2 Project Health

- [ ] Build status checked
- [ ] Test suite status verified
- [ ] Known failing tests documented
- [ ] Blocking issues identified
- [ ] Dependencies up to date
- [ ] Security vulnerabilities checked

## 5. SPRINT/ITERATION CONTEXT

[[LLM: Align work with current sprint goals and priorities.]]

### 5.1 Sprint Status

- [ ] Current sprint identified
- [ ] Sprint goals understood
- [ ] User stories in progress identified
- [ ] Completed stories this sprint noted
- [ ] Sprint timeline clear
- [ ] Team velocity understood

### 5.2 Priority Alignment

- [ ] Immediate priorities identified
- [ ] Blockers and dependencies clear
- [ ] Next planned work understood
- [ ] Risk areas identified
- [ ] Resource constraints noted

## 6. CONSISTENCY VALIDATION

[[LLM: Inconsistencies cause confusion and errors. Identify and flag them.]]

### 6.1 Cross-Reference Check

- [ ] Memory Bank aligns with codebase reality
- [ ] Architecture matches implementation
- [ ] ADRs reflected in current code
- [ ] Dev Journal matches git history
- [ ] Documentation current with changes

### 6.2 Gap Identification

- [ ] Missing documentation identified
- [ ] Outdated sections flagged
- [ ] Undocumented decisions noted
- [ ] Knowledge gaps listed
- [ ] Update requirements documented

## 7. AGENT-SPECIFIC CONTEXT

[[LLM: Different agents need different context emphasis.]]

### 7.1 Role-Based Focus

**For Architect:**

- [ ] Architectural decisions and rationale clear
- [ ] Technical debt understood
- [ ] Scalability considerations reviewed
- [ ] System boundaries defined

**For Developer:**

- [ ] Current implementation tasks clear
- [ ] Coding patterns understood
- [ ] Testing requirements known
- [ ] Local setup verified

**For PM/PO:**

- [ ] Requirements alignment verified
- [ ] User stories prioritized
- [ ] Stakeholder needs understood
- [ ] Timeline constraints clear

**For QA:**

- [ ] Test coverage understood
- [ ] Quality gates defined
- [ ] Known issues documented
- [ ] Testing strategy clear

### 7.2 Handoff Context

- [ ] Previous agent's work understood
- [ ] Pending decisions identified
- [ ] Open questions documented
- [ ] Next steps clear

## 8. RECOMMENDED ACTIONS

[[LLM: Based on the review, what should happen next?]]

### 8.1 Immediate Actions

- [ ] Most urgent task identified
- [ ] Blockers that need resolution listed
- [ ] Available quick wins noted
- [ ] Needed risk mitigation specified

### 8.2 Documentation Updates

- [ ] Needed Memory Bank updates listed
- [ ] Required architecture updates noted
- [ ] ADRs to be created identified
- [ ] Dev Journal entries planned

### 8.3 Strategic Considerations

- [ ] Technical debt to address
- [ ] Architectural improvements needed
- [ ] Process improvements suggested
- [ ] Knowledge gaps to fill

## SESSION KICKOFF SUMMARY

[[LLM: Generate a concise summary report with:

1. **Project Context**
   - Project name and purpose
   - Current phase/sprint
   - Key technologies

2. **Documentation Health**
   - Memory Bank status (Current/Outdated/Missing)
   - Architecture status
   - Overall documentation quality

3. **Current State**
   - Active work items
   - Recent completions
   - Immediate blockers

4. **Inconsistencies Found**
   - List any misalignments
   - Documentation gaps
   - Update requirements

5. **Recommended Next Steps**
   - Priority order
   - Estimated effort
   - Dependencies

Keep it action-oriented and concise.]]

### Summary Report

**Status:** [Complete/Partial/Blocked]

**Key Findings:**

- Documentation Health: [Good/Fair/Poor]
- Project State: [On Track/At Risk/Blocked]
- Context Quality: [Complete/Adequate/Insufficient]

**Priority Actions:**

1. [Most urgent action]
2. [Second priority]
3. [Third priority]

**Blockers:**

- [List any blocking issues]

**Agent Ready:** [Yes/No - with reason if No]

@@ -0,0 +1,363 @@

# Sprint Review Checklist

This checklist guides teams through conducting effective sprint reviews that capture achievements and learnings and set the next sprint up for success.

[[LLM: INITIALIZATION INSTRUCTIONS - SPRINT REVIEW

Sprint Reviews are critical ceremonies for:

- Demonstrating completed work to stakeholders
- Capturing lessons learned
- Adjusting project direction based on feedback
- Planning upcoming work
- Updating project documentation

This checklist should be used:

- At the end of each sprint/iteration
- Before major milestone reviews
- When significant changes occur
- For handoffs between teams

The goal is to create a comprehensive record of progress and decisions.]]

## 1. PRE-REVIEW PREPARATION

[[LLM: Good preparation ensures productive reviews. Complete these items 1-2 days before the review.]]

### 1.1 Sprint Metrics Collection

- [ ] Sprint goals documented and assessed
- [ ] User stories completed vs. planned tallied
- [ ] Story points delivered calculated
- [ ] Velocity compared to previous sprints
- [ ] Burndown/burnup charts prepared
- [ ] Blockers and impediments listed

### 1.2 Demo Preparation

- [ ] Completed features identified for demo
- [ ] Demo environment prepared and tested
- [ ] Demo scripts/scenarios written
- [ ] Demo order determined (highest value first)
- [ ] Presenters assigned for each feature
- [ ] Backup plans for demo failures prepared

### 1.3 Documentation Review

- [ ] Dev Journal entries for the sprint compiled
- [ ] ADRs created during the sprint listed
- [ ] Memory Bank updates identified
- [ ] Architecture changes documented
- [ ] Technical debt items logged

## 2. STAKEHOLDER COORDINATION

[[LLM: Effective reviews require the right people with the right information.]]

### 2.1 Attendee Management

- [ ] Required stakeholders identified and invited
- [ ] Product Owner availability confirmed
- [ ] Technical team members scheduled
- [ ] Optional attendees invited
- [ ] Meeting logistics communicated
- [ ] Pre-read materials distributed

### 2.2 Agenda Creation

- [ ] Review objectives defined
- [ ] Time allocated per demo/topic
- [ ] Q&A time built in
- [ ] Feedback collection method determined
- [ ] Next steps discussion included
- [ ] Time reserved for retrospective insights

## 3. SPRINT ACCOMPLISHMENTS

[[LLM: Focus on value delivered and outcomes achieved, not just features built.]]

### 3.1 Completed Work

- [ ] All completed user stories listed
- [ ] Business value of each story articulated
- [ ] Technical achievements highlighted
- [ ] Infrastructure improvements noted
- [ ] Bug fixes and resolved issues documented
- [ ] Performance improvements quantified

### 3.2 Partial/Incomplete Work

- [ ] Status of in-progress stories documented
- [ ] Reasons for incompletion analyzed
- [ ] Carry-over plan determined
- [ ] Re-estimation completed if needed
- [ ] Dependencies identified
- [ ] Risk mitigation planned

### 3.3 Unplanned Work

- [ ] Emergency fixes documented
- [ ] Scope changes captured
- [ ] Technical discoveries noted
- [ ] Time impact assessed
- [ ] Process improvements identified
- [ ] Prevention strategies discussed

## 4. TECHNICAL DECISIONS & LEARNINGS

[[LLM: Capture the "why" behind decisions for future reference.]]

### 4.1 Architectural Decisions

- [ ] Key technical decisions documented
- [ ] ADRs created or referenced
- [ ] Trade-offs explained
- [ ] Alternative approaches noted
- [ ] Impact on future work assessed
- [ ] Technical debt created/resolved

### 4.2 Process Learnings

- [ ] What worked well identified
- [ ] What didn't work documented
- [ ] Process improvements suggested
- [ ] Tool effectiveness evaluated
- [ ] Communication gaps noted
- [ ] Team dynamics assessed

### 4.3 Technical Learnings

- [ ] New technologies evaluated
- [ ] Performance insights gained
- [ ] Security findings documented
- [ ] Integration challenges noted
- [ ] Best practices identified
- [ ] Anti-patterns discovered

## 5. STAKEHOLDER FEEDBACK

[[LLM: Stakeholder input shapes future direction. Capture it systematically.]]

### 5.1 Feature Feedback

- [ ] User reactions to demos captured
- [ ] Feature requests documented
- [ ] Priority changes noted
- [ ] Usability concerns raised
- [ ] Performance feedback received
- [ ] Gap analysis completed

### 5.2 Strategic Feedback

- [ ] Alignment with business goals verified
- [ ] Market changes discussed
- [ ] Competitive insights shared
- [ ] Resource concerns raised
- [ ] Timeline adjustments proposed
- [ ] Success metrics validated

## 6. NEXT SPRINT PLANNING

[[LLM: Use review insights to plan effectively for the next sprint.]]

### 6.1 Backlog Refinement

- [ ] Backlog prioritization updated
- [ ] New stories created from feedback
- [ ] Technical debt items prioritized
- [ ] Dependencies identified
- [ ] Estimation needs noted
- [ ] Spike stories defined

### 6.2 Sprint Goal Setting

- [ ] Next sprint theme determined
- [ ] Specific goals articulated
- [ ] Success criteria defined
- [ ] Risks identified
- [ ] Capacity confirmed
- [ ] Commitment level agreed

### 6.3 Process Adjustments

- [ ] Retrospective actions incorporated
- [ ] Process improvements planned
- [ ] Tool changes identified
- [ ] Communication plans updated
- [ ] Meeting cadence adjusted
- [ ] Team agreements updated

## 7. DOCUMENTATION UPDATES

[[LLM: Keep project documentation current with sprint outcomes.]]

### 7.1 Memory Bank Updates

- [ ] progress.md updated with completions
- [ ] activeContext.md refreshed for the next sprint
- [ ] systemPatterns.md updated with new patterns
- [ ] techContext.md updated if the stack changed
- [ ] productContext.md adjusted based on feedback
- [ ] All updates committed and pushed

### 7.2 Project Documentation

- [ ] README updated if needed
- [ ] CHANGELOG updated with sprint changes
- [ ] Architecture docs updated
- [ ] API documentation current
- [ ] Deployment guides updated
- [ ] User documentation refreshed

### 7.3 Knowledge Sharing

- [ ] Dev Journal entries completed
- [ ] Key decisions documented in ADRs
- [ ] Lessons learned captured
- [ ] Best practices documented
- [ ] Team wiki updated
- [ ] Knowledge gaps identified

## 8. METRICS & REPORTING

[[LLM: Data-driven insights improve future performance.]]

### 8.1 Sprint Metrics

- [ ] Velocity calculated and tracked
- [ ] Cycle time measured
- [ ] Defect rates analyzed
- [ ] Test coverage reported
- [ ] Performance metrics captured
- [ ] Technical debt quantified

### 8.2 Quality Metrics

- [ ] Code review effectiveness assessed
- [ ] Test automation coverage measured
- [ ] Security scan results reviewed
- [ ] Performance benchmarks compared
- [ ] User satisfaction gathered
- [ ] Stability metrics tracked

### 8.3 Trend Analysis

- [ ] Velocity trends analyzed
- [ ] Quality trends identified
- [ ] Estimation accuracy reviewed
- [ ] Bottlenecks identified
- [ ] Improvement areas prioritized
- [ ] Predictions made for the next sprint

## 9. ACTION ITEMS

[[LLM: Reviews without follow-through waste time. Ensure actions are specific and assigned.]]

### 9.1 Immediate Actions

- [ ] Critical fixes identified and assigned
- [ ] Blocker resolution planned
- [ ] Documentation updates assigned
- [ ] Communication tasks defined
- [ ] Tool/access issues addressed
- [ ] Quick wins identified

### 9.2 Short-term Actions (Next Sprint)

- [ ] Process improvements scheduled
- [ ] Technical debt items planned
- [ ] Training needs addressed
- [ ] Tool implementations planned
- [ ] Architecture updates scheduled
- [ ] Team changes coordinated

### 9.3 Long-term Actions

- [ ] Strategic changes documented
- [ ] Major refactoring planned
- [ ] Platform migrations scheduled
- [ ] Team scaling addressed
- [ ] Skill development planned
- [ ] Innovation initiatives defined

## SPRINT REVIEW SUMMARY

[[LLM: Generate a comprehensive but concise summary for stakeholders and team records.

Include:

1. **Sprint Overview**
   - Sprint number/name
   - Duration
   - Team composition
   - Overall outcome (successful/challenged/failed)

2. **Achievements**
   - Stories completed vs planned
   - Value delivered
   - Technical accomplishments
   - Quality improvements

3. **Challenges**
   - Major blockers faced
   - Incomplete work
   - Technical difficulties
   - Process issues

4. **Key Decisions**
   - Technical choices made
   - Priority changes
   - Process adjustments
   - Resource changes

5. **Stakeholder Feedback**
   - Satisfaction level
   - Major concerns
   - Feature requests
   - Priority shifts

6. **Next Sprint Focus**
   - Primary goals
   - Key risks
   - Dependencies
   - Success metrics

7. **Action Items**
   - Owner, action, due date
   - Priority level
   - Dependencies

Keep it scannable and action-oriented.]]

### Review Summary Template

**Sprint:** [Number/Name]
**Date:** [Review Date]
**Duration:** [Sprint Length]
**Attendees:** [List Key Attendees]

**Overall Assessment:** [Green/Yellow/Red]

**Completed:**

- X of Y stories (Z story points)
- Key features: [List]
- Technical achievements: [List]

**Incomplete:**

- X stories carried over
- Reasons: [Brief explanation]

**Key Feedback:**

**Next Sprint Focus:**

1. [Primary goal]
2. [Secondary goal]
3. [Technical focus]

**Critical Actions:**

| Action   | Owner  | Due Date |
|----------|--------|----------|
| [Action] | [Name] | [Date]   |

**Review Completed By:** [Name]
**Documentation Updated:** [Yes/No]

@@ -59,11 +59,15 @@ The goal is quality delivery, not just checking boxes.]]

5. **Story Administration:**

[[LLM: Documentation helps the next developer. What should they know? Update the Memory Bank and Dev Journal.]]

- [ ] All tasks within the story file are marked as complete.
- [ ] Any clarifications or decisions made during development are documented in the story file or linked appropriately.
- [ ] The story wrap-up section has been completed with notes on changes or information relevant to the next story or the overall project, the agent model primarily used during development, and a properly updated changelog.
- [ ] Dev Journal entry created documenting implementation decisions and challenges
- [ ] Memory Bank updated with new patterns, decisions, or technical context
- [ ] ADR created if significant architectural decisions were made
- [ ] Comprehensive commit workflow followed with descriptive commit messages

6. **Dependencies, Build & Configuration:**

@@ -38,13 +38,17 @@ We're checking for SUFFICIENT guidance, not exhaustive detail.]]

2. The business value or user benefit is clear
3. How this fits into the larger epic/product is explained
4. Dependencies are explicit ("requires Story X to be complete")
5. Success looks like something specific, not vague
6. Memory Bank context has been considered
7. Technical principles alignment is clear]]

- [ ] Story goal/purpose is clearly stated
- [ ] Relationship to epic goals is evident
- [ ] How the story fits into overall system flow is explained
- [ ] Dependencies on previous stories are identified (if applicable)
- [ ] Business context and value are clear
- [ ] Memory Bank context referenced where relevant
- [ ] Technical principles and preferences considered

## 2. TECHNICAL IMPLEMENTATION GUIDANCE

@@ -79,6 +83,8 @@ Note: We don't need every file listed - just the important ones.]]

- [ ] Critical information from previous stories is summarized (not just referenced)
- [ ] Context is provided for why references are relevant
- [ ] References use consistent format (e.g., `docs/filename.md#section`)
- [ ] ADR references included where architectural decisions apply
- [ ] Memory Bank files referenced appropriately (activeContext, systemPatterns, etc.)

## 4. SELF-CONTAINMENT ASSESSMENT

@@ -142,7 +148,7 @@ Generate a concise validation report:

Be pragmatic - perfect documentation doesn't exist, but it must be enough to give a dev agent the extensive context it needs to get the work done without creating a mess.]]

| Category                             | Status | Issues |
|--------------------------------------|--------|--------|
| 1. Goal & Context Clarity            | _TBD_  |        |
| 2. Technical Implementation Guidance | _TBD_  |        |
| 3. Reference Effectiveness           | _TBD_  |        |

@@ -18,3 +18,32 @@ devLoadAlwaysFiles:

devDebugLog: .ai/debug-log.md
devStoryLocation: docs/stories
slashPrefix: BMad

# Memory Bank configuration
memoryBank:
  location: docs/memory-bank
  autoLoadOnSessionStart: true
  files:
    - projectbrief.md
    - productContext.md
    - systemPatterns.md
    - techContext.md
    - activeContext.md
    - progress.md

# Development Journal configuration
devJournal:
  location: docs/devJournal
  namePattern: YYYYMMDD-NN.md
  autoCreateAfterSession: true
  includeInMemoryBank: true

# Architectural Decision Records configuration
adr:
  location: docs/adr
  namePattern: adr-NNNN.md
  templateFile: templates/adr-tmpl.md
  autoLinkToMemoryBank: true

# Sprint Management configuration
sprint:
  reviewChecklistPath: checklists/sprint-review-checklist.md
  sessionKickoffChecklistPath: checklists/session-kickoff-checklist.md
  enableSprintPlanning: true
  sprintPlanningTemplate: templates/sprint-planning-tmpl.yaml
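
As a rough sketch of how a tool might consume these new keys (illustrative only; the loader below, the config path, and the `js-yaml` dependency are assumptions, not part of this change):

```typescript
// Hypothetical loader for the new memoryBank section of core-config.yaml.
import * as fs from "fs";
import * as path from "path";
import * as yaml from "js-yaml";

interface MemoryBankConfig {
  location: string;
  autoLoadOnSessionStart: boolean;
  files: string[];
}

const raw = fs.readFileSync("bmad-core/core-config.yaml", "utf8");
const config = yaml.load(raw) as { memoryBank: MemoryBankConfig };

// Resolve each Memory Bank file against its configured location.
const memoryBankPaths = config.memoryBank.files.map((f) =>
  path.join(config.memoryBank.location, f)
);

if (config.memoryBank.autoLoadOnSessionStart) {
  memoryBankPaths.forEach((p) => console.log(`Session kickoff would load: ${p}`));
}
```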

@@ -65,7 +65,7 @@

- **Feature Flags**: Implementation approach
- **Backward Compatibility**: Version strategy

## Red Flags - Always Create an ADR When

1. **Multiple Valid Options Exist**: The team is debating between approaches
2. **Significant Cost Implications**: The decision impacts budget substantially

@@ -76,7 +76,7 @@

7. **Performance Critical**: Decision significantly impacts system performance
8. **Security Implications**: Decision affects system security posture

## When NOT to Create an ADR

1. **Implementation Details**: How to name a variable or structure a small module
2. **Temporary Solutions**: Quick fixes that will be replaced soon

@@ -84,5 +84,5 @@

4. **Tool Configuration**: Minor tool settings that are easily changeable
5. **Obvious Choices**: When there's only one reasonable option

## Remember

> "If someone might ask 'Why did we do it this way?' in 6 months, you need an ADR."

@@ -273,7 +273,7 @@ You are the "Vibe CEO" - thinking like a CEO with unlimited resources and a sing

### Core Development Team

| Agent       | Role               | Primary Functions                        | When to Use                             |
|-------------|--------------------|------------------------------------------|-----------------------------------------|
| `analyst`   | Business Analyst   | Market research, requirements gathering  | Project planning, competitive analysis  |
| `pm`        | Product Manager    | PRD creation, feature prioritization     | Strategic planning, roadmaps            |
| `architect` | Solution Architect | System design, technical architecture    | Complex systems, scalability planning   |

@@ -286,7 +286,7 @@ You are the "Vibe CEO" - thinking like a CEO with unlimited resources and a sing

### Meta Agents

| Agent               | Role             | Primary Functions                      | When to Use                        |
|---------------------|------------------|----------------------------------------|------------------------------------|
| `bmad-orchestrator` | Team Coordinator | Multi-agent workflows, role switching  | Complex multi-role tasks           |
| `bmad-master`       | Universal Expert | All capabilities without switching     | Single-session comprehensive work  |

@@ -349,6 +349,104 @@ You are the "Vibe CEO" - thinking like a CEO with unlimited resources and a sing

- **Use Case**: Backend services, APIs, system development
- **Bundle**: `team-no-ui.txt`

## Recent Enhancements (Quad Damage)

### Memory Bank Pattern

The Memory Bank provides persistent context across AI sessions, ensuring continuity when AI memory resets:

**Core Files** (in `docs/memory-bank/`):

- `projectbrief.md` - Project foundation and goals
- `productContext.md` - Problem space and user needs
- `systemPatterns.md` - Architecture and technical decisions
- `techContext.md` - Technology stack and constraints
- `activeContext.md` - Current work and priorities
- `progress.md` - Features completed and status

**Key Features**:

- Session initialization with the `session-kickoff` task
- Automatic updates through the `update-memory-bank` task
- Integration with dev journals and ADRs
- All agents have Memory Bank awareness

### Architectural Decision Records (ADRs)

Formal documentation of significant architectural decisions:

**Features**:

- Michael Nygard format in `docs/adr/`
- Numbered sequence (0001, 0002, etc.)
- Comprehensive template with alternatives analysis
- Integration with the architect agent
- Documented triggers for when to create ADRs

### Development Journals

Session documentation for knowledge sharing:

**Features**:

- Daily entries in `docs/devJournal/`
- Comprehensive session narratives
- Work stream tracking
- Technical decision documentation
- Anti-tunnel-vision mechanisms

### Enhanced Commit and PR Workflows

Professional git workflows with comprehensive context:

**Features**:

- Multi-stream commit synthesis
- Conventional Commits 1.0 standard
- Anti-tunnel-vision checks
- Comprehensive PR descriptions
- Cross-reference integration

### Technical Principles Integration

Three sets of architectural and coding principles:

**1. Coding Standards** (`data/coding-standards.md`):

- Core principles with tags ([SF], [DRY], etc.)
- Security best practices
- Testing standards
- Commit conventions

**2. Twelve-Factor Principles** (`data/twelve-factor-principles.md`):

- Cloud-native application design
- Environment parity
- Stateless processes
- Configuration management

**3. Microservice Patterns** (`data/microservice-patterns.md`):

- Service decomposition strategies
- Communication patterns
- Data management approaches
- Testing and deployment patterns

### Session Kickoff Protocol

Universal initialization for all agents:

**Process**:

1. Memory Bank review
2. Architecture documentation scan
3. Dev journal history check
4. ADR review
5. Current state assessment
6. Consistency validation
7. Next steps recommendation

**Usage**: Run `*session-kickoff` at the start of any agent session

### Integration Points

All enhancements work together:

- Memory Bank ← Dev Journals ← ADRs ← Code Changes
- Session Kickoff → Memory Bank → Agent Context
- Technical Principles → Architecture Decisions → ADRs
- Commit/PR Workflows → Dev Journals → Memory Bank

## Core Architecture

### System Overview

@@ -381,7 +479,7 @@ The BMad-Method is built around a modular architecture centered on the `bmad-cor

- **Templates** (`bmad-core/templates/`): Markdown templates for PRDs, architecture specs, user stories
- **Tasks** (`bmad-core/tasks/`): Instructions for specific repeatable actions like "shard-doc" or "create-next-story"
- **Checklists** (`bmad-core/checklists/`): Quality assurance checklists for validation and review
- **Data** (`bmad-core/data/`): Core knowledge base, technical preferences, and project scaffolding guidelines

### Dual Environment Architecture

@@ -409,13 +507,20 @@ BMad employs a sophisticated template system with three key components:

### Technical Preferences Integration

The framework includes two key preference files:

**`technical-preferences.md`** - Technology choices and patterns:

- Ensures consistency across all agents and projects
- Eliminates repetitive technology specification
- Provides personalized recommendations aligned with user preferences
- Evolves over time with lessons learned

**`project-scaffolding-preference.md`** - Project structure and organization:

- Defines standard directory structure for all projects
- Provides technology-agnostic scaffolding guidelines
- Ensures consistency in documentation organization
- Supports BMAD-specific structures (Memory Bank, ADRs, Dev Journals)
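
One possible layout consistent with the configuration above (illustrative only; the date-based journal name is a made-up instance of the `YYYYMMDD-NN` pattern, and the authoritative structure lives in project-scaffolding-preference.md):

```
docs/
├── memory-bank/   # projectbrief.md, activeContext.md, progress.md, ...
├── adr/           # adr-0001.md, adr-0002.md, ...
├── devJournal/    # e.g., 20250115-01.md
└── stories/       # user stories (devStoryLocation)
```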

### Build and Delivery Process

The `web-builder.js` tool creates web-ready bundles by:

@@ -0,0 +1,88 @@

# Coding Standards and Principles

> **Purpose:** This document defines the core coding standards and principles that apply to all development work in BMAD projects. These are fundamental rules of software craftsmanship that ensure consistency, quality, and maintainability.

## Core Coding Principles

### Simplicity and Readability

- **[SF] Simplicity First:** Always choose the simplest viable solution. Complex patterns require explicit justification.
- **[RP] Readability Priority:** Code must be immediately understandable by both humans and AI.
- **[CA] Clean Architecture:** Generate cleanly formatted, logically structured code with consistent patterns.

### Dependency Management

- **[DM] Dependency Minimalism:** No new libraries without explicit request or compelling justification.
- **[DM-1] Security Reviews:** Review third-party dependencies for vulnerabilities quarterly.
- **[DM-2] Package Verification:** Prefer signed or verified packages.
- **[DM-3] Cleanup:** Remove unused or outdated dependencies promptly.
- **[DM-4] Documentation:** Document dependency updates in the changelog.

### Development Workflow

- **[WF-FOCUS] Task Focus:** Focus on areas of code relevant to the task.
- **[WF-SCOPE] Scope Control:** Do not touch code unrelated to the task.
- **[WF-TEST] Testing:** Write thorough tests for all major functionality.
- **[WF-ARCH] Architecture Stability:** Avoid major changes to working patterns unless explicitly requested.
- **[WF-IMPACT] Impact Analysis:** Consider effects on other methods and code areas.

### Code Quality Standards

- **[DRY] DRY Principle:** No duplicate code. Reuse or extend existing functionality.
- **[REH] Error Handling:** Robust error handling for all edge cases and external interactions.
- **[CSD] Code Smell Detection:** Proactively identify and refactor (see the lint sketch after this list):
  - Functions exceeding 30 lines
  - Files exceeding 300 lines
  - Nested conditionals beyond 2 levels
  - Classes with more than 5 public methods
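
The first three thresholds map onto standard ESLint rules; a minimal `.eslintrc.json` sketch follows (the limits are this document's conventions, not ESLint defaults, and the class-method threshold has no core ESLint rule, so it needs a custom rule or manual review):

```json
{
  "rules": {
    "max-lines-per-function": ["warn", { "max": 30 }],
    "max-lines": ["warn", { "max": 300 }],
    "max-depth": ["warn", 2]
  }
}
```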

### Security Principles

- **[IV] Input Validation:** All external data must be validated before processing.
- **[SFT] Security-First:** Implement proper authentication, authorization, and data protection.
- **[RL] Rate Limiting:** Rate limit all API endpoints.
- **[RLS] Row-Level Security:** Always use row-level security.
- **[CAP] Captcha Protection:** Captcha on all auth routes/signup pages.
- **[WAF] WAF Protection:** Enable attack challenge on the hosting WAF when available.
- **[SEC-1] Sensitive Files:** DO NOT read or modify without prior approval:
  - .env files
  - */config/secrets.*
  - Any file containing API keys or credentials

### Performance and Resources

- **[PA] Performance Awareness:** Consider computational complexity and resource usage.
- **[RM] Resource Management:** Close connections and free resources appropriately.
- **[CMV] Constants Over Magic Values:** No magic strings or numbers. Use named constants.

### Commit Standards

- **[AC] Atomic Changes:** Make small, self-contained modifications.
- **[CD] Commit Discipline:** Use the conventional commit format:

```
type(scope): concise description

[optional body with details]

[optional footer with breaking changes/issue references]
```

Types: feat, fix, docs, style, refactor, perf, test, chore
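
For example, a filled-in instance of the format might read (illustrative message, not one from this repository):

```
feat(memory-bank): add session-kickoff task for agent initialization

Adds the session-kickoff checklist and wires Memory Bank review
into agent startup instructions.
```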

### Testing Standards

- **[TDT] Test-Driven Thinking:** Design all code to be easily testable from inception.
- **[ISA] Industry Standards:** Follow established conventions for the language and tech stack.

## Application to AI Development

### Communication Guidelines

- **[RAT] Rule Application Tracking:** Tag rule applications with abbreviations (e.g., [SF], [DRY]).
- **[EDC] Explanation Depth Control:** Scale explanation detail based on complexity.
- **[AS] Alternative Suggestions:** Offer alternative approaches with pros/cons when relevant.
- **[KBT] Knowledge Boundary Transparency:** Clearly communicate capability limits.

### Context Management

- **[TR] Transparent Reasoning:** Explicitly reference which rules influenced decisions.
- **[CWM] Context Window Management:** Be mindful of AI context limitations.
- **[SD] Strategic Documentation:** Comment only complex logic or critical functions.

## Integration with BMAD Workflows

These coding standards should be:

1. Referenced during architecture design decisions
2. Applied during story implementation
3. Validated during code reviews
4. Enforced through automated tooling where possible
5. Updated based on team learnings and retrospectives

@@ -1,5 +1,20 @@

# Elicitation Methods Data

## Context-Aware Elicitation

**Memory Bank Integration**

- Begin elicitation with Memory Bank context review
- Reference `activeContext.md` for current state understanding
- Check `systemPatterns.md` for established conventions
- Validate against `progress.md` for completed work
- Ensure consistency with historical decisions in ADRs

**Session Kickoff Prerequisite**

- Ensure `*session-kickoff` completed before deep elicitation
- Load relevant Dev Journal entries for recent context
- Review technical principles and coding standards
- Establish shared understanding of project state

## Core Reflective Methods

**Expand or Contract for Audience**

@@ -24,12 +39,20 @@

- Check internal consistency and coherence
- Identify and validate dependencies between elements
- Confirm effective ordering and sequencing
- Cross-reference with Memory Bank patterns for consistency

**Assess Alignment with Overall Goals**

- Evaluate content contribution to stated objectives
- Identify any misalignments or gaps
- Interpret alignment from specific role's perspective
- Suggest adjustments to better serve goals
- Validate against `projectbrief.md` for mission alignment

**Memory Bank Pattern Validation**

- Compare proposed approaches with documented patterns
- Identify deviations from established conventions
- Assess if new patterns should be documented
- Update `systemPatterns.md` with validated approaches

## Risk and Challenge Methods

@@ -126,9 +149,36 @@

- Identify minimum viable approach
- Discover innovative workarounds and optimizations

## Memory Bank Elicitation Methods

**Historical Context Mining**

- Extract insights from Dev Journal entries
- Identify recurring patterns across sessions
- Discover implicit knowledge in past decisions
- Build on previous architectural choices

**Progressive Context Building**

- Start with `projectbrief.md` for the foundation
- Layer in `techContext.md` for technical constraints
- Add `systemPatterns.md` for design conventions
- Integrate `activeContext.md` for current state

**ADR-Driven Discovery**

- Review ADRs for decision rationale
- Identify constraints from past choices
- Understand trade-offs already considered
- Build on established architectural principles

**Sprint Context Elicitation**

- Review sprint goals from planning documents
- Check progress against sprint commitments
- Identify blockers from Dev Journals
- Align new work with sprint objectives

## Process Control

**Proceed / No Further Actions**

- Acknowledge choice to finalize current work
- Accept output as-is or move to next step
- Prepare to continue without additional elicitation
- Update Memory Bank with elicitation outcomes

@@ -0,0 +1,125 @@

# Microservice Architecture Patterns

> **Purpose:** This document outlines specific patterns and strategies for implementing Microservice-Oriented Architecture, based on Chris Richardson's "Microservices Patterns". It provides detailed guidance for service design, decomposition, communication, and data management.

## Core Architecture Patterns

### Foundation Patterns

- **[MON] Monolithic Architecture:** Single deployable unit. Good for simple applications, becomes "monolithic hell" as complexity grows.
- **[MSA] Microservice Architecture:** Collection of small, autonomous, loosely coupled services. Core pattern for complex systems.

### Service Decomposition

- **[DBC] Decompose by Business Capability:** Define services based on business capabilities (e.g., Order Management, Inventory).
- **[DSD] Decompose by Subdomain:** Use Domain-Driven Design to define services around problem subdomains.

## Communication Patterns

### Synchronous Communication

- **[RPI] Remote Procedure Invocation:** Synchronous request/response (REST, gRPC). Simple but creates coupling.
- **[CBR] Circuit Breaker:** Prevent cascading failures. Trip after consecutive failures, fail fast (see the sketch below).
|
||||||
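
A minimal sketch of the circuit-breaker idea for a generic async call; the failure threshold and cool-down values are illustrative, not prescribed by this document:

```typescript
// Minimal circuit breaker: trips open after N consecutive failures,
// fails fast while open, and lets a probe attempt through after a cool-down.
type BreakerState = "closed" | "open";

class CircuitBreaker {
  private state: BreakerState = "closed";
  private failures = 0;
  private openedAt = 0;

  constructor(
    private readonly maxFailures = 5,         // illustrative threshold
    private readonly resetTimeoutMs = 30_000, // illustrative cool-down
  ) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.state === "open") {
      if (Date.now() - this.openedAt < this.resetTimeoutMs) {
        throw new Error("Circuit open: failing fast"); // no remote call made
      }
      this.state = "closed"; // half-open probe: allow one attempt through
    }
    try {
      const result = await fn();
      this.failures = 0; // success resets the failure count
      return result;
    } catch (err) {
      if (++this.failures >= this.maxFailures) {
        this.state = "open";
        this.openedAt = Date.now();
      }
      throw err;
    }
  }
}
```

Production-grade implementations usually add an explicit half-open state and per-endpoint configuration; libraries such as opossum (Node.js) or resilience4j (JVM) provide this off the shelf.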

### Asynchronous Communication
- **[MSG] Messaging:** Services communicate via a message broker. Promotes loose coupling and resilience.
- **[DME] Domain Events:** Aggregates publish events when their state changes. The foundation for event-driven architecture.

### Service Discovery
- **[SDC] Service Discovery:** Patterns for finding service instances in dynamic cloud environments:
  - Client-side discovery
  - Server-side discovery
  - Service registry patterns

## Data Management Patterns

### Data Architecture
- **[DPS] Database per Service:** Each service owns its data. Fundamental to loose coupling.
- **[AGG] Aggregate:** A cluster of domain objects treated as a single unit. Transactions only create/update a single aggregate.

### Data Consistency
- **[SAG] Saga:** Manage data consistency across services without distributed transactions (see the sketch below):
  - Sequence of local transactions
  - Event/message triggered
  - Compensating transactions on failure
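
A minimal orchestration-style saga sketch: each step pairs a local transaction with a compensating action, and on failure the completed steps are compensated in reverse order. The step names in the usage comment are hypothetical placeholders:

```typescript
// A saga step is a local transaction plus a compensation that undoes it.
interface SagaStep {
  name: string;
  execute: () => Promise<void>;
  compensate: () => Promise<void>;
}

// Run steps in order; on failure, compensate completed steps in reverse.
async function runSaga(steps: SagaStep[]): Promise<void> {
  const completed: SagaStep[] = [];
  for (const step of steps) {
    try {
      await step.execute();
      completed.push(step);
    } catch (err) {
      for (const done of completed.reverse()) {
        await done.compensate(); // compensations must be idempotent
      }
      throw new Error(`Saga failed at step "${step.name}": ${String(err)}`);
    }
  }
}

// Hypothetical usage: createOrder / reservePayment / reserveStock would each
// wrap one service's local transaction and its compensating counterpart.
```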

### Event Patterns
- **[EVS] Event Sourcing:** Store state-changing events rather than current state. Provides an audit log.
- **[OUT] Transactional Outbox:** Reliably publish messages as part of a local database transaction (see the sketch below).
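
A sketch of the outbox idea, assuming a relational store where the business write and the outbox insert share one local transaction; the `Db`/`Tx` interfaces and table names are illustrative stand-ins for any SQL client that supports transactions:

```typescript
// Illustrative minimal database interface; a client such as node-postgres
// can play this role in practice.
interface Tx {
  query(sql: string, params?: unknown[]): Promise<void>;
}
interface Db {
  transaction(fn: (tx: Tx) => Promise<void>): Promise<void>;
}

// The business row and the outbox message commit atomically, so the event
// is published if and only if the state change actually happened.
async function placeOrder(db: Db, order: { id: string; total: number }) {
  await db.transaction(async (tx) => {
    await tx.query("INSERT INTO orders (id, total) VALUES ($1, $2)", [
      order.id,
      order.total,
    ]);
    await tx.query("INSERT INTO outbox (topic, payload) VALUES ($1, $2)", [
      "order.placed",
      JSON.stringify(order),
    ]);
  });
}

// A separate relay process polls the outbox table and forwards rows to the
// message broker, marking or deleting them once acknowledged.
```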

### Query Patterns
- **[APC] API Composition:** The client retrieves and joins data from multiple services. Simple, but inefficient for complex queries (see the sketch below).
- **[CQR] CQRS:** Separate command (write) and query (read) models. Maintain denormalized read views.
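
A sketch of API composition, assuming hypothetical order and customer endpoints and Node's built-in `fetch`; the composer fans out the requests and joins the results in memory:

```typescript
// Call each owning service in parallel and join the responses in memory.
// Service URLs here are illustrative; real ones come from service discovery.
async function getOrderDetails(orderId: string) {
  const [order, customer] = await Promise.all([
    fetch(`http://orders/api/orders/${orderId}`).then((r) => r.json()),
    fetch(`http://customers/api/customers/by-order/${orderId}`).then((r) => r.json()),
  ]);
  return { ...order, customer }; // in-memory join of the two responses
}
```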

## API Patterns

### Gateway Patterns
- **[APG] API Gateway:** Single entry point for all clients. Routes requests and handles cross-cutting concerns.
- **[BFF] Backends for Frontends:** A separate API gateway for each client type (mobile, web).

## Domain Modeling

### Design Approaches
- **[DOM] Domain Model:** Object-oriented, with state and behavior. Preferred for complex logic.
- **[TSF] Transaction Script:** Procedural approach. Simpler, but unmanageable for complex logic.

## Testing Patterns

### Service Testing
- **[CDC] Consumer-Driven Contract Test:** The consumer writes tests to verify the provider meets its expectations.
- **[SCT] Service Component Test:** Acceptance test for a single service with stubbed dependencies.

## Deployment Patterns

### Container Patterns
- **[SVC] Service as Container:** Package the service as a container image to encapsulate its technology stack.
- **[SRL] Serverless Deployment:** Deploy using a serverless platform (e.g., AWS Lambda).

### Infrastructure Patterns
- **[MSC] Microservice Chassis:** A framework handling cross-cutting concerns (config, health, metrics).
- **[SMH] Service Mesh:** An infrastructure layer for inter-service communication (Istio, Linkerd).

## Migration Patterns

### Legacy Modernization
- **[STR] Strangler Application:** Incrementally build microservices around the monolith, gradually replacing it (see the sketch below).
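
One way to realise the strangler pattern is a routing table at the edge: paths for already-extracted capabilities go to new services, and everything else still falls through to the monolith. A sketch with illustrative hosts and paths:

```typescript
// Edge routing table for a strangler migration. Over time the table grows
// and the monolith default shrinks to nothing.
const routes: Array<{ prefix: string; target: string }> = [
  { prefix: "/api/payments", target: "http://payments-service" }, // extracted
  { prefix: "/api/catalog", target: "http://catalog-service" },   // extracted
];
const monolith = "http://legacy-monolith";

function resolveTarget(path: string): string {
  const match = routes.find((r) => path.startsWith(r.prefix));
  return match ? match.target : monolith; // default: not yet strangled
}
```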

## Best Practices

### Service Design
1. Keep services loosely coupled and highly cohesive
2. Ensure each service owns its own data and business logic
3. Communicate through well-defined interfaces
4. Make every service independently deployable

### Transaction Management
1. Avoid distributed transactions
2. Use the saga pattern for cross-service consistency
3. Design for eventual consistency
4. Implement idempotency

### Resilience
1. Implement circuit breakers
2. Use timeouts and retries wisely (see the sketch below)
3. Design for failure
4. Implement health checks
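
"Wisely" usually means bounded retries with backoff and a per-attempt timeout, applied only to idempotent operations. A minimal sketch; the attempt count, timeout, and backoff base are illustrative:

```typescript
// Wrap an async call with a per-attempt timeout and bounded, backed-off retries.
// Note: Promise.race abandons (does not cancel) a timed-out attempt.
async function withRetries<T>(
  fn: () => Promise<T>,
  attempts = 3,        // illustrative
  timeoutMs = 2_000,   // illustrative per-attempt timeout
  baseDelayMs = 200,   // illustrative backoff base
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      const timeout = new Promise<never>((_, reject) =>
        setTimeout(() => reject(new Error("timeout")), timeoutMs),
      );
      return await Promise.race([fn(), timeout]);
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i)); // exponential backoff
      }
    }
  }
  throw lastError;
}
```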

### Observability
1. Distributed tracing across services
2. Centralized logging
3. Service-level metrics
4. Business metrics

## Anti-Patterns to Avoid

1. **Distributed Monolith:** Microservices that must be deployed together
2. **Chatty Services:** Excessive inter-service communication
3. **Shared Database:** Multiple services accessing the same database
4. **Synchronous Communication Everywhere:** Over-reliance on RPI
5. **Missing Service Boundaries:** Services that don't align with business capabilities

## Integration with BMAD

These patterns should be:

1. Considered during the architecture design phase
2. Documented in Architecture Decision Records (ADRs)
3. Applied based on specific project requirements
4. Validated against twelve-factor principles
5. Reviewed for applicability to project scale and complexity

@@ -0,0 +1,264 @@

# Project Scaffolding Preferences

This document defines generic, technology-agnostic project scaffolding preferences that can be applied to any software project. These preferences promote consistency, maintainability, and best practices across different technology stacks.

## Documentation Structure

### Core Documentation
- **README**: Primary project documentation with setup instructions, architecture overview, and contribution guidelines
- **CHANGELOG**: Maintain a detailed changelog following semantic versioning principles
- **LICENSE**: Clear licensing information for the project
- **Contributing Guidelines**: How to contribute, code standards, and the review process

### BMAD Documentation Structure
- **Product Requirements Document (PRD)**:
  - Single source file: `docs/prd.md`
  - Can be sharded into the `docs/prd/` directory by level 2 sections
  - Contains epics, stories, requirements

- **Architecture Documentation**:
  - Single source file: `docs/architecture.md` or `docs/brownfield-architecture.md`
  - Can be sharded into the `docs/architecture/` directory
  - For brownfield: document the actual state, including technical debt

- **Memory Bank** (AI Context Persistence):
  - Location: `docs/memory-bank/`
  - Core files: projectbrief.md, productContext.md, systemPatterns.md, techContext.md, activeContext.md, progress.md
  - Provides persistent context across AI sessions

### Architectural Documentation
- **Architecture Decision Records (ADRs)**: Document significant architectural decisions
  - Location: `docs/adr/`
  - When to create: major dependency changes, pattern changes, integration approaches, schema modifications
  - Follow a consistent ADR template (e.g., Michael Nygard format)
  - Number sequentially (e.g., adr-0001.md)
  - Maintain an index

### Development Documentation
- **Development Journals**: Track daily/session work, decisions, and challenges
  - Location: `docs/devJournal/`
  - Named with date format: `YYYYMMDD-NN.md`
  - Include work completed, decisions made, blockers encountered
  - Reference relevant ADRs and feature documentation
  - Create after significant work sessions

### Feature Documentation
- **Roadmap**: High-level project direction and planned features
  - Location: `docs/roadmap/`
  - Feature details in `docs/roadmap/features/`
- **Epics and Stories**:
  - Epics extracted from the PRD to `docs/epics/`
  - Stories created from epics to `docs/stories/`
  - Follow naming: `epic-N-story-M.md`

## Source Code Organization

### Separation of Concerns
- **Frontend/UI**: Dedicated location for user interface components
- **Backend/API**: Separate backend logic and API implementations
- **Shared Utilities**: Common functionality used across layers
- **Configuration**: Centralized configuration management
- **Scripts**: Automation and utility scripts

### Testing Structure
- **Unit Tests**: Close to source code or in dedicated test directories
- **Integration Tests**: Test component interactions
- **End-to-End Tests**: Full workflow testing
- **Test Utilities**: Shared test helpers and fixtures
- **Test Documentation**: How to run tests, test strategies

## Project Root Structure

### Essential Files
- Version control ignore files (e.g., .gitignore)
- Editor/IDE configuration files
- Dependency management files
- Build/deployment configuration
- Environment configuration templates (never commit actual secrets)

### Standard Directories

```
/docs
  /adr              # Architecture Decision Records
  /devJournal       # Development journals
  /memory-bank      # Persistent AI context (BMAD-specific)
  /prd              # Sharded Product Requirements Documents
  /architecture     # Sharded Architecture Documents
  /stories          # User stories (from epics)
  /epics            # Epic documents
  /api              # API documentation
  /roadmap          # Project roadmap and features

/src
  /[frontend]       # UI/frontend code
  /[backend]        # Backend/API code
  /[shared]         # Shared utilities
  /[config]         # Configuration

/tests
  /unit             # Unit tests
  /integration      # Integration tests
  /e2e              # End-to-end tests

/scripts            # Build, deployment, utility scripts
/tools              # Development tools and utilities
/.bmad              # BMAD-specific configuration and overrides
```

## Development Practices

### Code Organization
- Keep files focused and manageable (typically under 300 lines)
- Prefer composition over inheritance
- Avoid code duplication - check for existing implementations first
- Use clear, consistent naming conventions throughout
- Document complex logic and non-obvious decisions

### Documentation Discipline
- Update documentation alongside code changes
- Document the "why", not just the "what"
- Keep examples current and working
- Review documentation in code reviews
- Maintain templates for consistency

### Security Considerations
- Never commit secrets or credentials
- Use environment variables for configuration
- Implement proper input validation
- Manage resources appropriately (close connections, free memory)
- Follow the principle of least privilege
- Document security considerations

### Quality Standards
- All code must pass linting and formatting checks
- Automated testing at multiple levels
- Code review required before merging
- Continuous integration for all changes
- Regular dependency updates

## Accessibility & Inclusion

### Universal Design
- Consider accessibility from the start
- Follow established accessibility standards (e.g., WCAG)
- Ensure keyboard navigation support
- Provide appropriate text alternatives
- Test with assistive technologies

### Inclusive Practices
- Use clear, inclusive language in documentation
- Consider diverse user needs and contexts
- Document accessibility requirements
- Include accessibility in testing

## Database/Data Management

### Schema Management
- Version control all schema changes
- Use migration tools for consistency
- Document schema decisions in ADRs
- Maintain a data dictionary
- Never make manual production changes

### Data Documentation
- Maintain current entity relationship diagrams
- Document data flows and dependencies
- Explain business rules and constraints
- Keep sample data separate from production

## Environment Management

### Environment Parity
- Development, test, and production should be as similar as possible
- Use the same deployment process across environments
- Configuration through environment variables
- Document environment-specific settings
- Automate environment setup

### Local Development
- Provide a scripted setup process
- Document all prerequisites
- Include reset/cleanup scripts
- Maintain environment templates
- Support multiple development environments

## Branching & Release Strategy

### Version Control
- Define a clear branching strategy
- Use semantic versioning
- Tag all releases
- Maintain release notes
- Document hotfix procedures

### Release Process
- Automated build and deployment
- Staged rollout capabilities
- Rollback procedures documented
- Release communication plan
- Post-release verification

## Incident Management

### Incident Response
- Maintain an incident log
- Document root cause analyses
- Update runbooks based on incidents
- Conduct retrospectives
- Share learnings across the team

### Monitoring & Observability
- Define key metrics
- Implement appropriate logging
- Set up alerting thresholds
- Document troubleshooting guides
- Review metrics regularly

## Compliance & Governance

### Data Privacy
- Document data handling practices
- Implement privacy by design
- Regular compliance reviews
- Clear data retention policies
- User consent management

### Audit Trail
- Maintain change history
- Document decision rationale
- Track access and modifications
- Regular security reviews
- Compliance documentation

## BMAD-Specific Considerations

### Session Management
- **Session Kickoff**: Always start new AI sessions with proper context initialization
- **Memory Bank Maintenance**: Keep context files current throughout development
- **Dev Journal Creation**: Document significant work sessions
- **Sprint Reviews**: Regular quality and progress assessments

### Document Sharding
- **When to Shard**: Large PRDs and architecture documents (>1000 lines)
- **How to Shard**: By level 2 sections, maintaining an index.md
- **Naming Convention**: Convert section headings to lowercase-dash-case (see the sketch after this list)
- **Tool Support**: Use markdown-tree-parser when available
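
The lowercase-dash-case conversion mentioned above is mechanical; a sketch, assuming plain heading text as input:

```typescript
// Convert a level-2 section heading into a lowercase-dash-case filename.
function shardFileName(heading: string): string {
  return (
    heading
      .toLowerCase()
      .replace(/[^a-z0-9]+/g, "-") // runs of non-alphanumerics become dashes
      .replace(/^-+|-+$/g, "") +   // trim leading/trailing dashes
    ".md"
  );
}

// e.g. shardFileName("Epic 2: User Management") === "epic-2-user-management.md"
```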

### Brownfield vs Greenfield
- **Greenfield**: Start with PRD → Architecture → Implementation
- **Brownfield**: Document existing → Create focused PRD → Enhance
- **Documentation Focus**: Brownfield docs capture the actual state, not the ideal
- **Technical Debt**: Always document workarounds and constraints

## Best Practices Summary

1. **Simplicity First**: Choose the simplest solution that works
2. **Documentation as Code**: Treat documentation with the same rigor as code
3. **Automate Everything**: If it's done twice, automate it
4. **Security by Default**: Consider security implications in every decision
5. **Test Early and Often**: Multiple levels of testing for confidence
6. **Continuous Improvement**: Regular retrospectives and improvements
7. **Accessibility Always**: Build inclusive solutions from the start
8. **Clean as You Go**: Maintain code quality continuously
9. **Context Persistence**: Maintain the Memory Bank for AI continuity
10. **Reality Over Ideals**: Document what exists, not what should be

@@ -0,0 +1,128 @@

# Sprint Review Triggers

This document outlines when and how to conduct sprint reviews within the BMAD framework.

## When to Conduct Sprint Reviews

### Regular Cadence
- **End of Sprint**: Always conduct at the conclusion of each defined sprint period
- **Weekly/Bi-weekly**: Based on your sprint duration
- **After Major Milestones**: When significant features or phases complete

### Event-Based Triggers
- **Epic Completion**: When all stories in an epic are done
- **Release Preparation**: Before any production release
- **Team Changes**: When team composition changes significantly
- **Process Issues**: When recurring blockers or challenges arise
- **Client Reviews**: Before or after stakeholder demonstrations

## Sprint Review Components

### 1. **Metrics Gathering** (Automated)
- Git commit analysis
- PR merge tracking
- Issue closure rates
- Test coverage changes
- Build/deployment success rates

### 2. **Achievement Documentation**
- Feature completions with evidence
- Technical improvements made
- Documentation updates
- Bug fixes and resolutions

### 3. **Retrospective Elements**
- What went well (celebrate successes)
- What didn't go well (identify issues)
- What we learned (capture insights)
- What we'll try next (action items)

### 4. **Memory Bank Updates**
- Update progress.md with completed features
- Update activeContext.md with the current state
- Document new patterns in systemPatterns.md
- Reflect on technical decisions

## Sprint Review Best Practices

### Preparation
- Schedule the review 1-2 days before sprint end
- Gather metrics using git commands beforehand
- Review dev journals from the sprint
- Prepare demo materials if applicable

### Facilitation
- Keep to 60-90 minutes maximum
- Encourage all team members to contribute
- Focus on facts and evidence
- Balance successes and improvement areas
- Make action items specific and assignable

### Documentation
- Use consistent naming: `YYYYMMDD-sprint-review.md`
- Place in the `docs/devJournal/` directory
- Link to relevant PRs, issues, and commits
- Include screenshots or recordings when helpful

### Follow-up
- Assign owners to all action items
- Set deadlines for improvements
- Review the previous sprint's action items
- Update the project Memory Bank
- Share outcomes with stakeholders

## Integration with BMAD Workflow

### Before Sprint Review
1. Complete all story reviews
2. Update CHANGELOG.md
3. Ensure dev journals are current
4. Close completed issues/PRs

### During Sprint Review
1. Use the `*sprint-review` command as Scrum Master
2. Follow the guided template
3. Gather team input actively
4. Document honestly and thoroughly

### After Sprint Review
1. Update the Memory Bank (`*update-memory-bank`)
2. Create the next sprint's initial backlog
3. Communicate outcomes to stakeholders
4. Schedule action item check-ins
5. Archive sprint artifacts

## Anti-Patterns to Avoid

- **Skipping Reviews**: Even failed sprints need reviews
- **Solo Reviews**: Include the whole team when possible
- **Blame Sessions**: Focus on process, not people
- **No Action Items**: Every review should produce improvements
- **Lost Knowledge**: Always document in the standard location
- **Metrics Without Context**: Numbers need interpretation

## Quick Reference

### Git Commands for Metrics

```bash
# Commits in sprint
git log --since="2024-01-01" --until="2024-01-14" --oneline | wc -l

# PRs merged
git log --merges --since="2024-01-01" --until="2024-01-14" --oneline

# Issues closed
git log --since="2024-01-01" --until="2024-01-14" --grep="close[sd]\|fixe[sd]" --oneline

# Active branches
git branch --format='%(refname:short) %(creatordate:short)' | grep '2024-01'
```

### Review Checklist
- [ ] Sprint dates and goal documented
- [ ] All metrics gathered
- [ ] Features linked to PRs
- [ ] Retrospective completed
- [ ] Action items assigned
- [ ] Memory Bank updated
- [ ] Next sprint prepared

@@ -1,3 +1,43 @@

# Technical Preferences and Architectural Principles

## Core Principles References

The following technical principles and standards apply to all BMAD projects:

### 1. Coding Standards
- **Reference:** `coding-standards.md`
- **Purpose:** Fundamental coding principles, security practices, and quality standards
- **Key Areas:** Code simplicity, dependency management, security, testing, commit standards

### 2. Twelve-Factor App Principles
- **Reference:** `twelve-factor-principles.md`
- **Purpose:** Cloud-native application development principles
- **Key Areas:** Codebase management, dependencies, config, backing services, build/release/run

### 3. Microservice Patterns
- **Reference:** `microservice-patterns.md`
- **Purpose:** Patterns for distributed system architecture
- **Key Areas:** Service decomposition, communication patterns, data management, resilience

## Application Guidelines

1. **During Architecture Design:**
   - Apply twelve-factor principles for cloud-native applications
   - Consider microservice patterns for complex distributed systems
   - Document pattern choices in Architecture Decision Records (ADRs)

2. **During Implementation:**
   - Follow coding standards for all code generation
   - Apply security principles by default
   - Ensure testability and maintainability

3. **Technology Selection:**
   - Prefer simple, proven solutions over complex ones
   - Minimize dependencies unless explicitly justified
   - Consider operational complexity alongside technical capabilities

## User-Defined Preferences

_Add project-specific technical preferences below:_

None Listed

@@ -0,0 +1,123 @@

# Twelve-Factor App Principles

> **Purpose:** This document provides the definitive set of rules based on the Twelve-Factor App methodology. These principles are mandatory for ensuring applications are built as scalable, resilient, and maintainable cloud-native services.

## The Twelve Factors

### I. Codebase
- A single, version-controlled codebase must represent one application
- All code for a specific application belongs to this single codebase
- Shared functionality must be factored into versioned libraries
- One codebase produces multiple deploys (development, staging, production)

### II. Dependencies
- Explicitly declare all dependencies via manifest files (e.g., package.json, requirements.txt)
- Never rely on the implicit existence of system-wide packages
- The application must run in an isolated environment with only declared dependencies

### III. Config
- Strict separation between code and configuration
- All deploy-varying config must be read from environment variables (see the sketch below)
- Never hardcode environment-specific values in source code
- The codebase must be runnable anywhere given the correct environment variables
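
A sketch of reading deploy-varying config from the environment at startup, failing fast on anything required; the variable names are illustrative:

```typescript
// All deploy-varying configuration comes from the environment; the same
// build runs anywhere the right variables are set.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required env var: ${name}`);
  return value;
}

const config = {
  databaseUrl: requireEnv("DATABASE_URL"),  // differs per deploy
  port: Number(process.env.PORT ?? 3000),   // optional, with a local default
  logLevel: process.env.LOG_LEVEL ?? "info",
};
```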

### IV. Backing Services
- Treat all backing services as attached, swappable resources
- Connect via locators/credentials stored in environment variables
- Code must be agnostic to whether a service is local or third-party
- Examples: databases, message queues, caches, external APIs

### V. Build, Release, Run
Maintain strict three-stage separation:
- **Build:** Convert the code repo into an executable bundle
- **Release:** Combine the build with environment-specific config
- **Run:** Execute the release in the target environment
- Releases must be immutable, with unique IDs
- Any change requires a new release

### VI. Processes
- Execute as stateless, share-nothing processes
- Persistent data must be stored in a stateful backing service
- Never assume local memory/disk state is available across requests
- Process state is ephemeral

### VII. Port Binding
- The application must be self-contained
- Export services by binding to a port specified via configuration
- Do not rely on runtime injection of a webserver
- The application brings its own webserver library (see the sketch below)
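
A sketch of a self-contained service using only Node's standard library: it brings its own webserver and binds to a port taken from configuration rather than being injected into an external server:

```typescript
import { createServer } from "node:http";

const port = Number(process.env.PORT ?? 3000); // port comes from config

// The app exports its service purely by binding to the configured port.
createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ ok: true }));
}).listen(port, () => {
  console.log(`Listening on port ${port}`);
});
```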

### VIII. Concurrency
- Scale out horizontally by adding concurrent processes
- Assign different workload types to different process types
- Use a process manager for lifecycle management
- Design for horizontal scaling from the start

### IX. Disposability
- Processes must be disposable (start/stop quickly)
- Minimize startup time for fast elastic scaling
- Shut down gracefully on SIGTERM (see the sketch below)
- Be robust against sudden death (crash-only design)
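
A sketch of graceful shutdown: on SIGTERM the process stops accepting new connections, drains in-flight requests, then exits. The hard-kill timeout is illustrative:

```typescript
import { createServer } from "node:http";

const server = createServer((req, res) => res.end("ok"));
server.listen(Number(process.env.PORT ?? 3000));

process.on("SIGTERM", () => {
  server.close(() => process.exit(0)); // exits once in-flight requests finish
  // Safety net: force exit if draining takes too long (illustrative 10s).
  setTimeout(() => process.exit(1), 10_000).unref();
});
```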

### X. Dev/Prod Parity
Keep environments as similar as possible:
- Same programming language versions
- Same system tooling
- Same backing service types and versions
- Minimize time, personnel, and tool gaps

### XI. Logs
- Treat logs as event streams
- Never write to or manage log files directly
- Write unbuffered to stdout
- The execution environment handles collection and routing

### XII. Admin Processes
- Run admin tasks as one-off processes
- Use an identical environment to the long-running processes
- Ship admin scripts with the application code
- Use the same dependency and config management

## Additional Cloud-Native Principles

### Containerization
- **[SVC] Service as Container:** Package services as container images
- Encapsulate the technology stack in containers
- Ensure consistent deployment across environments

### Serverless Options
- **[SRL] Serverless Deployment:** Consider serverless platforms when appropriate
- Abstract away infrastructure management
- Focus on business logic over infrastructure

### Observability
- Implement comprehensive monitoring and metrics
- Use distributed tracing for microservices
- Ensure all services are observable by default

### Security
- Security must be built in, not bolted on
- Use the principle of least privilege
- Implement defense in depth
- Regular security audits and updates

## AI/Agent Safeguards
- All AI-generated code must be reviewed before production
- Escalate ambiguous or risky decisions for approval
- Log all significant AI-suggested changes
- Never overwrite .env files without confirmation

## Environmental Sustainability
- Optimize compute resources
- Minimize infrastructure waste
- Prefer energy-efficient solutions
- Consider environmental impact in technical decisions

## Integration with BMAD

These principles should be:

1. Applied during architecture design
2. Validated during implementation
3. Enforced through CI/CD pipelines
4. Reviewed in Architecture Decision Records (ADRs)
5. Considered in all technical decisions

@@ -0,0 +1,170 @@

# Conduct Sprint Review

This task guides the Scrum Master through conducting a comprehensive sprint review and retrospective at the end of each sprint or major iteration.

## Purpose

- Document sprint achievements and deliverables
- Analyze sprint metrics and goal completion
- Facilitate the team retrospective
- Capture learnings and action items
- Update the Memory Bank with sprint outcomes

## Process

### 1. Gather Sprint Context

Before starting the review, collect:

**Sprint Information**:
- Sprint dates (start and end)
- Sprint goal/theme
- Team participants
- Active branches/releases

**Metrics** (use git commands):

```bash
# Commits during sprint
git log --since="YYYY-MM-DD" --until="YYYY-MM-DD" --oneline | wc -l

# PRs merged
git log --merges --since="YYYY-MM-DD" --until="YYYY-MM-DD" --oneline | wc -l

# Issues closed
git log --since="YYYY-MM-DD" --until="YYYY-MM-DD" --grep="close[sd]\|fixe[sd]" --oneline | wc -l

# Branches created
git branch --format='%(refname:short) %(creatordate:short)' | grep 'YYYY-MM'
```

### 2. Review Dev Journals

Scan recent dev journal entries to identify:
- Major features completed
- Technical challenges overcome
- Patterns established
- Decisions made

```bash
ls -la docs/devJournal/*.md | tail -10
```

### 3. Review ADRs

Check for new architectural decisions:

```bash
ls -la docs/adr/*.md | tail -5
```

### 4. Create Sprint Review Document

Create the file: `docs/devJournal/YYYYMMDD-sprint-review.md`

Use the sprint-review-tmpl.yaml template (or create it manually), covering:

#### Essential Sections

**1. Sprint Overview**
- Sprint dates and goal
- Participants and roles
- Branch/release information

**2. Achievements & Deliverables**
- Major features completed (with PR links)
- Technical milestones reached
- Documentation updates
- Testing improvements

**3. Sprint Metrics**
- Commit count
- PRs merged (with details)
- Issues closed
- Test coverage changes

**4. Goal Review**
- What was planned vs achieved
- Items not completed (with reasons)
- Goal completion percentage

**5. Demo & Walkthrough**
- Screenshots/videos if available
- Instructions for reviewing features

**6. Retrospective**
- **What Went Well**: Successes and effective practices
- **What Didn't Go Well**: Blockers and pain points
- **What We Learned**: Technical and process insights
- **What We'll Try Next**: Improvement experiments

**7. Action Items**
- Concrete actions with owners
- Deadlines for the next sprint
- Process improvements to implement

**8. References**
- Dev journal entries from the sprint
- New/updated ADRs
- CHANGELOG updates
- Memory Bank updates

### 5. Update Memory Bank

After the sprint review, update:

**activeContext.md**:
- Current sprint outcomes
- Next sprint priorities
- Active action items

**progress.md**:
- Features completed this sprint
- Overall project progress
- Velocity trends

**systemPatterns.md** (if applicable):
- New patterns adopted
- Technical decisions from the retrospective

### 6. Facilitate Team Discussion

If in party-mode or a team setting:
- Share the sprint review with the team
- Gather additional feedback
- Refine action items collaboratively
- Celebrate achievements

### 7. Prepare for Next Sprint

Based on review outcomes:
- Update backlog priorities
- Create the next sprint goal
- Schedule action item follow-ups
- Communicate decisions to stakeholders

## Quality Checklist

- [ ] All sprint metrics gathered and documented
- [ ] Achievements clearly linked to the sprint goal
- [ ] Honest assessment of what wasn't completed
- [ ] Retrospective captures diverse perspectives
- [ ] Action items are specific and assigned
- [ ] Memory Bank updated with outcomes
- [ ] Document follows the naming convention
- [ ] References to related documentation included

## Output

The sprint review document serves as:
- A historical record of sprint progress
- Input for project reporting
- A source for continuous improvement
- Knowledge transfer for future sprints
- An update source for the Memory Bank

## Notes

- Conduct reviews even for partial sprints
- Include both technical and process perspectives
- Be honest about challenges and failures
- Focus on actionable improvements
- Link to specific evidence (PRs, commits, journals)

@@ -4,6 +4,8 @@ This task guides the creation of an ADR to document significant architectural de

## Initial Setup (if needed)

[[LLM: The ADR location follows the standard defined in project-scaffolding-preference.md]]

If the /docs/adr directory doesn't exist in the project:
1. Create the directory: `mkdir -p docs/adr`
2. Create a README.md explaining ADR purpose and structure

|
|
@ -6,6 +6,9 @@ This task guides the creation of a development journal entry to document the ses
|
||||||
- Have git access to review commits and changes
|
- Have git access to review commits and changes
|
||||||
|
|
||||||
## Initial Setup (if needed)
|
## Initial Setup (if needed)
|
||||||
|
|
||||||
|
[[LLM: The Dev Journal location follows the standard defined in project-scaffolding-preference.md]]
|
||||||
|
|
||||||
If the /docs/devJournal directory doesn't exist in the project:
|
If the /docs/devJournal directory doesn't exist in the project:
|
||||||
1. Create the directory: `mkdir -p docs/devJournal`
|
1. Create the directory: `mkdir -p docs/devJournal`
|
||||||
2. Create a README.md in that directory explaining its purpose
|
2. Create a README.md in that directory explaining its purpose
|
||||||
|
|
@ -33,14 +36,16 @@ Check existing entries: `ls docs/devJournal/YYYYMMDD-*.md`
|
||||||
|
|
||||||
Use the dev-journal-tmpl.yaml template to create a comprehensive entry covering:
|
Use the dev-journal-tmpl.yaml template to create a comprehensive entry covering:
|
||||||
|
|
||||||
#### Essential Sections:
|
#### Essential Sections
|
||||||
1. **Session Overview** - Brief summary of accomplishments
|
1. **Session Overview** - Brief summary of accomplishments
|
||||||
2. **Work Streams** - Detailed breakdown of each area of work
|
2. **Work Streams** - Detailed breakdown of each area of work
|
||||||
3. **Implementation Details** - Key code changes and decisions
|
3. **Implementation Details** - Key code changes and decisions
|
||||||
4. **Validation & Testing** - What was tested and verified
|
4. **Validation & Testing** - What was tested and verified
|
||||||
5. **Current State & Next Steps** - Where we are and what's next
|
5. **Current State & Next Steps** - Where we are and what's next
|
||||||
|
|
||||||
#### Evidence Gathering:
|
**Sprint Journal Entries**: For end-of-sprint dev journal entries, cross-reference with `sprint-review-checklist.md` to ensure all sprint accomplishments and learnings are captured.
|
||||||
|
|
||||||
|
#### Evidence Gathering
|
||||||
- Review all commits made during session
|
- Review all commits made during session
|
||||||
- Check modified files by functional area
|
- Check modified files by functional area
|
||||||
- Note any new patterns or architectural decisions
|
- Note any new patterns or architectural decisions
|
||||||
|
|
@ -68,6 +73,7 @@ Before finalizing, ensure:
|
||||||
- Include enough detail for context without overwhelming
|
- Include enough detail for context without overwhelming
|
||||||
- Cross-reference related stories, ADRs, or PRs
|
- Cross-reference related stories, ADRs, or PRs
|
||||||
- Use British English for consistency
|
- Use British English for consistency
|
||||||
|
- For sprint-end entries, ensure alignment with sprint review documentation using `sprint-review-checklist.md`
|
||||||
|
|
||||||
## Memory Bank Integration
|
## Memory Bank Integration
|
||||||
|
|
||||||
|
|
|
||||||
|
|
@@ -4,6 +4,12 @@

To identify the next logical story based on project progress and epic definitions, and then to prepare a comprehensive, self-contained, and actionable story file using the `Story Template`. This task ensures the story is enriched with all necessary technical context, requirements, and acceptance criteria, making it ready for efficient implementation by a Developer Agent with minimal need for additional research or finding its own context.

## Prerequisites

Before creating stories, ensure proper session context:
- **Session Kickoff**: If this is a new session or after significant time gap (>24 hours), first run the `session-kickoff` task to establish complete project context
- **Memory Bank**: Verify Memory Bank files are current for accurate story creation

## SEQUENTIAL Task Execution (Do not proceed until current Task is complete)

### 0. Load Core Configuration and Check Workflow

@@ -4,6 +4,12 @@

Generate comprehensive documentation for existing projects optimized for AI development agents. This task creates structured reference materials that enable AI agents to understand project context, conventions, and patterns for effective contribution to any codebase.

## Prerequisites

Before documenting a project, ensure proper session context:
- **Session Kickoff**: If this is a new session or after significant time gap (>24 hours), first run the `session-kickoff` task to establish complete project context
- **Memory Bank Review**: Check if a Memory Bank exists to understand project history and context

## Task Instructions

### 1. Initial Project Analysis

@@ -112,7 +118,7 @@ This document captures the CURRENT STATE of the [Project Name] codebase, includi

### Change Log

| Date   | Version | Description                 | Author    |
|--------|---------|-----------------------------|-----------|
| [Date] | 1.0     | Initial brownfield analysis | [Analyst] |

## Quick Reference - Key Files and Entry Points

@@ -137,7 +143,7 @@ This document captures the CURRENT STATE of the [Project Name] codebase, includi

### Actual Tech Stack (from package.json/requirements.txt)

| Category  | Technology | Version | Notes                      |
|-----------|------------|---------|----------------------------|
| Runtime   | Node.js    | 16.x    | [Any constraints]          |
| Framework | Express    | 4.18.2  | [Custom middleware?]       |
| Database  | PostgreSQL | 13      | [Connection pooling setup] |

@@ -209,7 +215,7 @@ Instead of duplicating, reference actual model files:

### External Services

| Service  | Purpose  | Integration Type | Key Files                      |
|----------|----------|------------------|--------------------------------|
| Stripe   | Payments | REST API         | `src/integrations/stripe/`     |
| SendGrid | Emails   | SDK              | `src/services/emailService.js` |

@@ -335,13 +341,14 @@ Apply the advanced elicitation task after major sections to refine based on user

## Memory Bank Integration

After documenting a project:
1. Ensure proper session context via the `session-kickoff` task (references `session-kickoff-checklist.md`)
2. Consider initializing the Memory Bank if it does not exist (`initialize-memory-bank` task)
3. Use the brownfield architecture document to populate:
   - `projectbrief.md` - Extract project goals and constraints
   - `systemPatterns.md` - Document architecture and patterns
   - `techContext.md` - Capture technology stack and environment
   - `progress.md` - Note current state and technical debt
4. This provides AI agents with both detailed architecture docs and a quick-reference Memory Bank

## Notes

@@ -15,6 +15,8 @@ The Memory Bank serves as persistent memory for AI agents, containing:

### 1. Create Directory Structure

[[LLM: The Memory Bank location follows the standard defined in project-scaffolding-preference.md]]

```bash
mkdir -p docs/memory-bank
```

@@ -157,11 +159,13 @@ The Memory Bank integrates with:

- [ ] Next steps clearly defined
- [ ] Technical decisions documented
- [ ] Progress accurately reflected
- [ ] Verified against session-kickoff-checklist.md requirements

## Notes

- Memory Bank is the foundation for AI continuity
- Must be updated regularly to maintain value
- All agents should read before starting work (via session-kickoff task)
- Updates should be comprehensive but concise
- British English for consistency
- Use session-kickoff-checklist.md to verify proper initialization

@ -0,0 +1,219 @@
|
||||||
|
# Session Kickoff
|
||||||
|
|
||||||
|
This task ensures AI agents have complete project context and understanding before starting work. It provides systematic session initialization across all agent types.
|
||||||
|
|
||||||
|
## Purpose
|
||||||
|
|
||||||
|
- Establish comprehensive project understanding
|
||||||
|
- Validate documentation consistency
|
||||||
|
- Identify current project state and priorities
|
||||||
|
- Recommend next steps based on evidence
|
||||||
|
- Prevent context gaps that lead to suboptimal decisions
|
||||||
|
|
||||||
|
## Process
|
||||||
|
|
||||||
|
### 1. Memory Bank Review (Primary Context)
|
||||||
|
|
||||||
|
**Priority Order**:
|
||||||
|
1. **Memory Bank Files** (if they exist): `docs/memory-bank/`
|
||||||
|
- `projectbrief.md` - Project foundation and scope
|
||||||
|
- `activeContext.md` - Current work and immediate priorities
|
||||||
|
- `progress.md` - Project state and completed features
|
||||||
|
- `systemPatterns.md` - Architecture and technical decisions
|
||||||
|
- `techContext.md` - Technology stack and constraints
|
||||||
|
- `productContext.md` - Problem space and user needs
|
||||||
|
|
||||||
|
**Analysis Required**:
|
||||||
|
- When were these last updated?
|
||||||
|
- Is information current and accurate?
|
||||||
|
- Any apparent inconsistencies between files?
|
||||||
|
|
||||||
|
### 2. Architecture Documentation Review
|
||||||
|
|
||||||
|
**Primary References** (check which exists):
|
||||||
|
- `/docs/architecture.md` - General backend/system architecture (greenfield)
|
||||||
|
- `/docs/brownfield-architecture.md` - Enhancement architecture for existing systems
|
||||||
|
- `/docs/frontend-architecture.md` - Frontend-specific architecture
|
||||||
|
- `/docs/fullstack-architecture.md` - Complete full-stack architecture
|
||||||
|
|
||||||
|
**Key Elements to Review**:
|
||||||
|
- Core architectural decisions and patterns
|
||||||
|
- System design and component relationships
|
||||||
|
- Technology choices and constraints
|
||||||
|
- Integration points and data flows
|
||||||
|
- API documentation
|
||||||
|
- Database schemas
|
||||||
|
|
||||||
|
### 3. Development History Review
|
||||||
|
|
||||||
|
**Recent Dev Journals**: `docs/devJournal/`
|
||||||
|
- Read last 3-5 entries to understand recent work
|
||||||
|
- Identify patterns in challenges and decisions
|
||||||
|
- Note any unresolved issues or technical debt
|
||||||
|
- Understand development velocity and blockers
|
||||||
|
|
||||||
|
**Current ADRs**: `docs/adr/`
|
||||||
|
- Review recent architectural decisions
|
||||||
|
- Check for pending or superseded decisions
|
||||||
|
- Validate alignment with current architecture
|
||||||
|
- Skip archived ADRs (consolidated in architecture docs)
|
||||||
|
|
||||||
|
### 4. Project Documentation Scan
|
||||||
|
|
||||||
|
**Core Documentation**:
|
||||||
|
- `README.md` - Project overview and setup
|
||||||
|
- `CHANGELOG.md` - Recent changes and releases
|
||||||
|
- Package manifests (`package.json`, `requirements.txt`, etc.)
|
||||||
|
- Configuration files
|
||||||
|
|
||||||
|
**Additional Context**:
|
||||||
|
- Issue trackers or project boards
|
||||||
|
- Recent commits and branches
|
||||||
|
- Test results and coverage reports
|
||||||
|
|
||||||
|
### 5. Current State Assessment
|
||||||
|
|
||||||
|
**Development Environment**:
|
||||||
|
```bash
|
||||||
|
# Check git status
|
||||||
|
git status
|
||||||
|
git log --oneline -10
|
||||||
|
|
||||||
|
# Check current branch and commits
|
||||||
|
git branch -v
|
||||||
|
|
||||||
|
# Review recent changes
|
||||||
|
git diff --name-status HEAD~5
|
||||||
|
```
|
||||||
|
|
||||||
|
**Project Health**:
|
||||||
|
- Are there failing tests or builds?
|
||||||
|
- Any urgent issues or blockers?
|
||||||
|
- Current sprint/iteration status
|
||||||
|
- Outstanding pull requests
|
||||||
|
|
||||||
|
### 6. Consistency Validation
|
||||||
|
|
||||||
|
**Cross-Reference Checks**:
|
||||||
|
- Does Memory Bank align with actual codebase?
|
||||||
|
- Are ADRs reflected in current architecture?
|
||||||
|
- Do dev journals match git history?
|
||||||
|
- Is documentation current with recent changes?
|
||||||
|
|
||||||
|
**Identify Gaps**:
|
||||||
|
- Missing or outdated documentation
|
||||||
|
- Undocumented architectural decisions
|
- Inconsistencies between sources
- Knowledge gaps requiring clarification

### 7. Agent-Specific Context

**For Architect Agent**:

- Focus on architectural decisions and system design
- Review technical debt and improvement opportunities
- Assess scalability and performance considerations

**For Developer Agent**:

- Focus on current work items and immediate tasks
- Review recent implementation patterns
- Understand testing and deployment processes

**For Product Owner Agent**:

- Focus on requirements and user stories
- Review product roadmap and priorities
- Assess feature completion and user feedback

### 8. Next Steps Recommendation

**Based on Evidence**:

- What are the most urgent priorities?
- Are there any blockers or dependencies?
- What documentation needs updating?
- What architectural decisions are pending?

**Recommended Actions**:

1. **Immediate Tasks** - Ready to start now
2. **Dependency Resolution** - What needs clarification
3. **Documentation Updates** - What needs to be updated
4. **Strategic Items** - Longer-term considerations

## Quality Checklist

- [ ] Memory Bank reviewed (or noted if missing)
- [ ] Architecture documentation understood
- [ ] Recent development history reviewed
- [ ] Current project state assessed
- [ ] Documentation inconsistencies identified
- [ ] Agent-specific context established
- [ ] Next steps clearly recommended
- [ ] Any urgent issues flagged

## Output Template

```markdown
# Session Kickoff Summary

## Project Understanding

- **Project**: [Name and core purpose]
- **Current Phase**: [Development stage]
- **Last Updated**: [When Memory Bank was last updated]

## Documentation Health

- **Memory Bank**: [Exists/Missing/Outdated]
- **Architecture Docs**: [Current/Needs Update]
- **Dev Journals**: [Last entry date]
- **ADRs**: [Recent decisions noted]

## Current State

- **Active Branch**: [Git branch]
- **Recent Work**: [Summary from dev journals]
- **Project Health**: [Green/Yellow/Red with reasons]
- **Immediate Blockers**: [Any urgent issues]

## Inconsistencies Found

[List any documentation inconsistencies or gaps]

## Agent-Specific Context

[Relevant context for current agent role]

## Recommended Next Steps

1. [Most urgent priority]
2. [Secondary priority]
3. [Documentation updates needed]
```

## Integration Points

This task integrates with:

- **Memory Bank**: Primary source of project context
- **All Agents**: Universal session initialization
- **Document Project**: Can trigger if documentation missing
- **Update Memory Bank**: Can trigger if information outdated
- **Agent Activation**: Called at start of agent sessions (see the workflow sketch after this list)
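
In workflow YAML, the activation hook is the `session_initialization` step that the enhancement workflows in this change declare; a representative excerpt (fields as used by those workflows):

```yaml
sequence:
  - step: session_initialization
    agent: bmad-master
    action: session_kickoff
    uses: session-kickoff
    notes: |
      Initialize AI session context:
      - Review Memory Bank if exists
      - Check recent dev journals and ADRs
      Required for new sessions or after 24+ hour gaps
```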

## Usage Patterns

**New Agent Session**:

1. Agent activates
2. Runs `session-kickoff` task
3. Reviews output and confirms understanding
4. Proceeds with informed context

**Project Handoff**:

1. New team member or AI session
2. Runs comprehensive kickoff
3. Identifies knowledge gaps
4. Updates documentation as needed

**Quality Gate**:

1. Before major feature work
2. After significant time gap
3. When context seems incomplete
4. As part of regular project health checks

## Notes

- This task should be lightweight for daily use but comprehensive for major handoffs
- Adapt depth based on project complexity and available time
- Can be automated as part of agent startup routines
- Helps prevent tunnel vision and context loss
@@ -134,6 +134,8 @@ Triggered by:
- Significant project pivot
- Before major feature work

**Sprint Review Integration**: For sprint-end updates, use the `sprint-review-checklist.md` to ensure all sprint accomplishments, learnings, and technical decisions are captured in the Memory Bank.

## Quality Checklist

- [ ] All recent dev journals reviewed

@@ -151,8 +153,9 @@ This task integrates with:
- **Dev Journal Creation**: Triggers selective activeContext update
- **ADR Creation**: Triggers systemPatterns update
- **Story Completion**: Triggers progress update
- **Sprint End**: Triggers comprehensive update (use `sprint-review-checklist.md`)
- **Architecture Changes**: Triggers multiple file updates
- **Sprint Reviews**: Reference `sprint-review-checklist.md` to ensure comprehensive capture of sprint outcomes (the trigger-to-file mapping is sketched below)
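
The trigger-to-file mapping above could be summarized declaratively; a hypothetical sketch only, since the task defines these rules in prose rather than a shipped config:

```yaml
# Illustrative only: which Memory Bank files each trigger touches
update_triggers:
  dev_journal_created: [activeContext.md]
  adr_created: [systemPatterns.md]
  story_completed: [progress.md]
  sprint_end: comprehensive        # all core files; use sprint-review-checklist.md
  architecture_changed: multiple   # systemPatterns.md and related files
```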

## Example Update Flow
@@ -34,6 +34,8 @@ sections:
      - Boilerplate projects or scaffolding tools
      - Previous projects to be cloned or adapted

      NOTE: Reference project-scaffolding-preference.md for standard project structure guidelines regardless of starter template choice.

      2. If a starter template or existing project is mentioned:
         - Ask the user to provide access via one of these methods:
           - Link to the starter template documentation
@@ -141,7 +141,14 @@ sections:
    title: Feature Comparison Matrix
    instruction: Create a detailed comparison table of key features across competitors
    type: table
    columns:
      [
        "Feature Category",
        "{{your_company}}",
        "{{competitor_1}}",
        "{{competitor_2}}",
        "{{competitor_3}}",
      ]
    rows:
      - category: "Core Functionality"
        items:

@@ -153,7 +160,13 @@ sections:
          - ["Onboarding Time", "{{time}}", "{{time}}", "{{time}}", "{{time}}"]
      - category: "Integration & Ecosystem"
        items:
          - [
              "API Availability",
              "{{availability}}",
              "{{availability}}",
              "{{availability}}",
              "{{availability}}",
            ]
          - ["Third-party Integrations", "{{number}}", "{{number}}", "{{number}}", "{{number}}"]
      - category: "Pricing & Plans"
        items:
@@ -75,19 +75,34 @@ sections:
    rows:
      - ["Framework", "{{framework}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
      - ["UI Library", "{{ui_library}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
      - [
          "State Management",
          "{{state_management}}",
          "{{version}}",
          "{{purpose}}",
          "{{why_chosen}}",
        ]
      - ["Routing", "{{routing_library}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
      - ["Build Tool", "{{build_tool}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
      - ["Styling", "{{styling_solution}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
      - ["Testing", "{{test_framework}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
      - [
          "Component Library",
          "{{component_lib}}",
          "{{version}}",
          "{{purpose}}",
          "{{why_chosen}}",
        ]
      - ["Form Handling", "{{form_library}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
      - ["Animation", "{{animation_lib}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
      - ["Dev Tools", "{{dev_tools}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]

  - id: project-structure
    title: Project Structure
    instruction: |
      Define exact directory structure for AI tools based on the chosen framework. Be specific about where each type of file goes. Generate a structure that follows the framework's best practices and conventions.

      NOTE: Reference project-scaffolding-preference.md for standard project structure guidelines. Ensure to include BMAD-specific directories (docs/memory-bank, docs/adr, docs/devJournal) in addition to frontend-specific structure.
    elicit: true
    type: code
    language: plaintext
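
For illustration, the BMAD-specific slice of the structure that NOTE asks for might look like this (directory names come from the NOTE; the framework layout around them is project-dependent):

```plaintext
docs/
├── memory-bank/   # persistent AI context files
├── adr/           # Architectural Decision Records
└── devJournal/    # session journals
src/               # framework-specific frontend structure
```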
@@ -483,7 +483,10 @@ sections:

  - id: unified-project-structure
    title: Unified Project Structure
    instruction: |
      Create a monorepo structure that accommodates both frontend and backend. Adapt based on chosen tools and frameworks.

      NOTE: Reference project-scaffolding-preference.md for standard project structure guidelines and ensure alignment with BMAD conventions including docs/memory-bank, docs/adr, and docs/devJournal directories.
    elicit: true
    type: code
    language: plaintext
@@ -0,0 +1,240 @@
template:
  type: document
  id: sprint-planning-tmpl
  name: Sprint Planning Template
  version: 1.0.0
  description: >-
    Template for sprint planning sessions. Documents sprint goals, selected stories,
    capacity planning, risk assessment, and success criteria. Integrates with Memory Bank
    for context and creates Dev Journal entries for tracking.
  category: planning

sections:
  - id: sprint-overview
    name: Sprint Overview
    content: |
      # Sprint [Number] Planning

      ## Sprint Information
      - **Sprint Number**: [X]
      - **Sprint Name**: [Descriptive name]
      - **Start Date**: [YYYY-MM-DD]
      - **End Date**: [YYYY-MM-DD]
      - **Duration**: [X weeks]
      - **Team**: [Team name/composition]

      ## Sprint Theme
      [Brief description of the sprint's primary focus or theme]

  - id: context-review
    name: Context Review
    content: |
      ## Context from Previous Sprint

      ### Previous Sprint Outcomes
      [Summary of what was completed in the previous sprint]

      ### Carried Over Items
      - [ ] [Story/task carried from previous sprint]
      - [ ] [Technical debt item]

      ### Memory Bank Context
      - **Active Context**: [Key points from activeContext.md]
      - **Recent Patterns**: [Relevant patterns from systemPatterns.md]
      - **Progress Status**: [Current state from progress.md]

      ### Recent Dev Journal Insights
      [Key learnings or decisions from recent dev journals]

  - id: sprint-goals
    name: Sprint Goals
    content: |
      ## Sprint Goals

      ### Primary Goal
      [Main objective for this sprint - should be achievable and measurable]

      ### Secondary Goals
      1. [Secondary objective 1]
      2. [Secondary objective 2]
      3. [Secondary objective 3]

      ### Success Criteria
      - [ ] [Specific measurable outcome 1]
      - [ ] [Specific measurable outcome 2]
      - [ ] [Specific measurable outcome 3]

      ### Definition of Done
      - [ ] All code reviewed and approved
      - [ ] All tests passing (unit, integration, E2E)
      - [ ] Documentation updated
      - [ ] Memory Bank updated with progress
      - [ ] Dev Journal entry created

  - id: capacity-planning
    name: Capacity Planning
    content: |
      ## Team Capacity

      ### Available Hours
      | Team Member | Role | Available Hours | Planned Time Off |
      |-------------|------|-----------------|------------------|
      | [Name] | [Role] | [Hours] | [Dates if any] |

      ### Velocity Reference
      - **Previous Sprint Velocity**: [Story points]
      - **3-Sprint Average**: [Story points]
      - **Target Velocity**: [Story points]

      ### Capacity Allocation
      - **Feature Development**: [X]%
      - **Bug Fixes**: [X]%
      - **Technical Debt**: [X]%
      - **Meetings/Ceremonies**: [X]%
      - **Buffer**: [X]%

  - id: selected-stories
    name: Selected Stories
    content: |
      ## Selected User Stories

      ### Committed Stories
      | Story ID | Title | Points | Assignee | Dependencies |
      |----------|-------|--------|----------|--------------|
      | [ID] | [Title] | [Points] | [Name] | [Dependencies] |

      ### Total Story Points: [X]

      ### Story Details
      #### [Story ID]: [Story Title]
      - **Acceptance Criteria**:
        - [ ] [Criterion 1]
        - [ ] [Criterion 2]
      - **Technical Considerations**: [Notes]
      - **Integration Points**: [Systems/APIs]
      - **Testing Strategy**: [Approach]

  - id: technical-planning
    name: Technical Planning
    content: |
      ## Technical Considerations

      ### Architecture Impacts
      - [ ] [Component/system that will be modified]
      - [ ] [New integrations required]
      - [ ] [Database changes needed]

      ### Technical Dependencies
      | Dependency | Status | Owner | Due Date |
      |------------|--------|-------|----------|
      | [Item] | [Status] | [Name] | [Date] |

      ### Technical Debt Items
      - [ ] [Debt item to address this sprint]
      - [ ] [Refactoring opportunity]

      ### Performance Considerations
      - [Performance impacts to monitor]
      - [Optimization opportunities]

  - id: risk-assessment
    name: Risk Assessment
    content: |
      ## Risk Assessment

      ### Identified Risks
      | Risk | Probability | Impact | Mitigation Strategy | Owner |
      |------|-------------|--------|---------------------|-------|
      | [Risk description] | High/Med/Low | High/Med/Low | [Strategy] | [Name] |

      ### Dependencies on External Teams
      - **Team**: [External team name]
      - **Dependency**: [What we need]
      - **Status**: [Current status]
      - **Fallback**: [Plan B if blocked]

      ### Technical Risks
      - [ ] [Complex integration risk]
      - [ ] [Performance risk]
      - [ ] [Security consideration]

  - id: communication-plan
    name: Communication Plan
    content: |
      ## Communication Plan

      ### Sprint Ceremonies
      - **Daily Standup**: [Time] @ [Location/Link]
      - **Sprint Review**: [Date/Time] @ [Location/Link]
      - **Retrospective**: [Date/Time] @ [Location/Link]

      ### Stakeholder Updates
      - **Format**: [Email/Slack/Meeting]
      - **Frequency**: [Daily/Weekly]
      - **Recipients**: [List of stakeholders]

      ### Escalation Path
      1. [First level - e.g., Scrum Master]
      2. [Second level - e.g., Product Owner]
      3. [Third level - e.g., Engineering Manager]

  - id: sprint-backlog
    name: Sprint Backlog
    content: |
      ## Sprint Backlog

      ### Ready for Development
      - [ ] [Task ready to start]
      - [ ] [Another task]

      ### In Progress
      - [ ] [Task being worked on]

      ### Stretch Goals
      - [ ] [Additional story if capacity allows]
      - [ ] [Nice-to-have feature]

      ### Not Selected (Documented for Next Sprint)
      - [Story/task deferred with reason]
      - [Another deferred item]

  - id: acceptance-criteria
    name: Sprint Acceptance Criteria
    content: |
      ## Sprint Acceptance Criteria

      ### Must Complete
      - [ ] All committed stories meet Definition of Done
      - [ ] No critical bugs in production
      - [ ] Memory Bank updated with sprint progress
      - [ ] Dev Journals document key decisions
      - [ ] All integration tests passing

      ### Should Complete
      - [ ] Technical debt items addressed
      - [ ] Documentation updated
      - [ ] Performance benchmarks met

      ### Sprint Review Preparation
      - [ ] Demo environment ready
      - [ ] Demo script prepared
      - [ ] Metrics collected
      - [ ] Stakeholder invites sent

usage:
  instructions: |
    This template should be used at the beginning of each sprint during sprint planning.

    1. Review previous sprint outcomes and Memory Bank context
    2. Define clear, measurable sprint goals
    3. Assess team capacity realistically
    4. Select stories that align with goals and capacity
    5. Identify and plan for risks
    6. Establish clear communication plans
    7. Document all decisions in Dev Journal

  integration:
    - memory-bank: Update activeContext.md with sprint goals
    - dev-journal: Create sprint planning entry
    - workflows: Referenced by sprint-execution workflow
    - checklists: Links to sprint-review-checklist for end of sprint
@@ -0,0 +1,256 @@
template:
  id: sprint-review-template-v1
  name: Sprint Review & Retrospective
  version: 1.0
  output:
    format: markdown
    filename: docs/devJournal/{{sprint_end_date}}-sprint-review.md
    title: "Sprint Review: {{sprint_start_date}} - {{sprint_end_date}}"
  description: |
    Template for conducting comprehensive sprint reviews and retrospectives,
    capturing achievements, learnings, and action items for continuous improvement.

workflow:
  mode: guided
  instruction: |
    Conduct a thorough sprint review by gathering metrics, reviewing achievements,
    facilitating retrospective, and planning improvements. Use git commands to
    gather accurate metrics before starting.

sections:
  - id: header
    title: Sprint Review Header
    instruction: Capture sprint metadata
    template: |
      # Sprint Review: {{sprint_start_date}} - {{sprint_end_date}}

      **Sprint Name:** {{sprint_name}}
      **Sprint Goal:** {{sprint_goal}}
      **Duration:** {{sprint_duration}} weeks
      **Date of Review:** {{review_date}}

  - id: overview
    title: Sprint Overview
    instruction: Summarize the sprint context
    template: |
      ## 1. Sprint Overview

      - **Sprint Dates:** {{sprint_start_date}} – {{sprint_end_date}}
      - **Sprint Goal:** {{sprint_goal_detailed}}
      - **Participants:** {{participants}}
      - **Branch/Release:** {{branch_release}}

  - id: achievements
    title: Achievements & Deliverables
    instruction: Document what was accomplished
    template: |
      ## 2. Achievements & Deliverables

      ### Major Features Completed
      {{#each features_completed}}
      - {{this.feature}} ({{this.pr_link}})
      {{/each}}

      ### Technical Milestones
      {{#each technical_milestones}}
      - {{this}}
      {{/each}}

      ### Documentation Updates
      {{#each documentation_updates}}
      - {{this}}
      {{/each}}

      ### Testing & Quality
      - **Tests Added:** {{tests_added}}
      - **Coverage Change:** {{coverage_change}}
      - **Bugs Fixed:** {{bugs_fixed}}

  - id: metrics
    title: Sprint Metrics
    instruction: Present quantitative sprint data
    template: |
      ## 3. Sprint Metrics

      | Metric | Count | Details |
      |--------|-------|---------|
      | Commits | {{commit_count}} | {{commit_details}} |
      | PRs Merged | {{pr_count}} | {{pr_details}} |
      | Issues Closed | {{issues_closed}} | {{issue_details}} |
      | Story Points Completed | {{story_points}} | {{velocity_trend}} |

      ### Git Activity Summary
      ```
      {{git_summary}}
      ```

  - id: goal-review
    title: Review of Sprint Goals
    instruction: Assess goal completion honestly
    template: |
      ## 4. Review of Sprint Goals

      ### What Was Planned
      {{sprint_planned}}

      ### What Was Achieved
      {{sprint_achieved}}

      ### What Was Not Completed
      {{#each incomplete_items}}
      - **{{this.item}}**: {{this.reason}}
      {{/each}}

      **Goal Completion:** {{completion_percentage}}%

  - id: demo
    title: Demo & Walkthrough
    instruction: Provide demonstration materials if available
    template: |
      ## 5. Demo & Walkthrough

      {{#if has_screenshots}}
      ### Screenshots/Videos
      {{demo_links}}
      {{/if}}

      ### How to Review Features
      {{review_instructions}}

  - id: retrospective
    title: Retrospective
    instruction: Facilitate honest team reflection
    template: |
      ## 6. Retrospective

      ### What Went Well 🎉
      {{#each went_well}}
      - {{this}}
      {{/each}}

      ### What Didn't Go Well 😔
      {{#each didnt_go_well}}
      - {{this}}
      {{/each}}

      ### What We Learned 💡
      {{#each learnings}}
      - {{this}}
      {{/each}}

      ### What We'll Try Next 🚀
      {{#each improvements}}
      - {{this}}
      {{/each}}

  - id: action-items
    title: Action Items & Next Steps
    instruction: Define concrete improvements
    template: |
      ## 7. Action Items & Next Steps

      | Action | Owner | Deadline | Priority |
      |--------|-------|----------|----------|
      {{#each action_items}}
      | {{this.action}} | {{this.owner}} | {{this.deadline}} | {{this.priority}} |
      {{/each}}

      ### Next Sprint Preparation
      - **Next Sprint Goal:** {{next_sprint_goal}}
      - **Key Focus Areas:** {{next_focus_areas}}

  - id: references
    title: References
    instruction: Link to supporting documentation
    template: |
      ## 8. References

      ### Dev Journal Entries
      {{#each dev_journals}}
      - [{{this.date}}]({{this.path}}) - {{this.summary}}
      {{/each}}

      ### ADRs Created/Updated
      {{#each adrs}}
      - [{{this.number}} - {{this.title}}]({{this.path}})
      {{/each}}

      ### Other Documentation
      - [CHANGELOG.md](../../CHANGELOG.md) - {{changelog_summary}}
      - [Memory Bank - Progress](../memory-bank/progress.md) - Updated with sprint outcomes
      - [Memory Bank - Active Context](../memory-bank/activeContext.md) - Updated with current state

      ---

      *Sprint review conducted by {{facilitator}} on {{review_date}}*

validation:
  required_fields:
    - sprint_start_date
    - sprint_end_date
    - sprint_goal
    - participants
    - features_completed
    - went_well
    - didnt_go_well
    - learnings
    - action_items

prompts:
  # Sprint metadata
  sprint_start_date: "Sprint start date (YYYY-MM-DD)"
  sprint_end_date: "Sprint end date (YYYY-MM-DD)"
  sprint_name: "Sprint name or number"
  sprint_goal: "Brief sprint goal"
  sprint_goal_detailed: "Detailed sprint goal description"
  sprint_duration: "Sprint duration in weeks"
  review_date: "Date of this review"
  participants: "List of sprint participants"
  branch_release: "Active branches or release tags"

  # Achievements
  features_completed: "List major features completed with PR links"
  technical_milestones: "List technical achievements"
  documentation_updates: "List documentation improvements"
  tests_added: "Number of tests added"
  coverage_change: "Test coverage change (e.g., +5%)"
  bugs_fixed: "Number of bugs fixed"

  # Metrics
  commit_count: "Total commits in sprint"
  commit_details: "Brief summary of commit types"
  pr_count: "Number of PRs merged"
  pr_details: "Notable PRs"
  issues_closed: "Number of issues closed"
  issue_details: "Types of issues resolved"
  story_points: "Story points completed"
  velocity_trend: "Velocity compared to previous sprints"
  git_summary: "Git log summary or statistics"

  # Goal review
  sprint_planned: "What was originally planned for the sprint"
  sprint_achieved: "Summary of what was actually achieved"
  incomplete_items: "List items not completed with reasons"
  completion_percentage: "Estimated percentage of goal completion"

  # Demo
  has_screenshots: "Are there screenshots or videos? (true/false)"
  demo_links: "Links to demo materials"
  review_instructions: "How to test or review the new features"

  # Retrospective
  went_well: "List what went well during the sprint"
  didnt_go_well: "List challenges and issues"
  learnings: "List key learnings and insights"
  improvements: "List experiments for next sprint"

  # Action items
  action_items: "List action items with owner, deadline, priority"
  next_sprint_goal: "Proposed goal for next sprint"
  next_focus_areas: "Key areas to focus on"

  # References
  dev_journals: "List relevant dev journal entries"
  adrs: "List ADRs created or updated"
  changelog_summary: "Brief summary of CHANGELOG updates"
  facilitator: "Person facilitating this review"
@@ -238,6 +238,98 @@ You will want to verify from sharding your architecture that these documents exist

As your project grows and the code starts to build consistent patterns, coding standards should be reduced to just the items the agent still gets wrong - with the better models, they will look at surrounding code in files and not need a rule from that file to guide them.

## Enhanced BMad Features

### Session Kickoff Protocol

Every new AI session should begin with proper context initialization:

```mermaid
graph TD
    A[New Session Start] --> B[bmad-master: session-kickoff]
    B --> C[Review Memory Bank]
    C --> D[Check Dev Journals]
    D --> E[Review Recent ADRs]
    E --> F[Load Technical Principles]
    F --> G[Session Ready]

    style A fill:#f5f5f5
    style B fill:#FF6B6B
    style C fill:#DDA0DD
    style D fill:#FFE4B5
    style E fill:#ADD8E6
    style F fill:#98FB98
    style G fill:#90EE90
```

**When to use**: Start of any new session, after 24+ hour gaps, or when switching contexts.

### Memory Bank Pattern

The Memory Bank provides persistent context across AI sessions:

- **Location**: `docs/memory-bank/`
- **Core Files**:
  - `projectbrief.md` - Project foundation and goals
  - `productContext.md` - User needs and problems
  - `systemPatterns.md` - Architecture and patterns
  - `techContext.md` - Technology stack
  - `activeContext.md` - Current work state
  - `progress.md` - Completed features and status

**Integration**: All agents automatically reference Memory Bank during session kickoff.
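
On disk, the core files above sit flat under the Memory Bank directory; the resulting layout:

```plaintext
docs/memory-bank/
├── projectbrief.md
├── productContext.md
├── systemPatterns.md
├── techContext.md
├── activeContext.md
└── progress.md
```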

### Development Journals

Track session work and decisions:

- **Location**: `docs/devJournal/`
- **Format**: `YYYYMMDD-NN.md` (e.g., `20240115-01.md`)
- **Created**: After significant work sessions
- **Contents**: Work completed, decisions made, challenges faced
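
A minimal entry skeleton consistent with the convention above (the headings are illustrative, not prescribed):

```markdown
# Dev Journal: 20240115-01

## Work Completed
- ...

## Decisions Made
- ...

## Challenges Faced
- ...
```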

### Architectural Decision Records (ADRs)

Document significant technical decisions:

- **Location**: `docs/adr/`
- **Format**: Michael Nygard ADR format
- **Triggers**: Major architecture changes, new patterns, technology choices
- **Integration**: Automatically referenced in Memory Bank
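
A minimal skeleton in the Nygard format (section names per that format; the numbering convention is assumed):

```markdown
# ADR-0001: [Decision title]

## Status
Proposed | Accepted | Deprecated | Superseded

## Context
[The forces at play and why a decision is needed]

## Decision
[The change we are making]

## Consequences
[What becomes easier or harder as a result]
```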

### Sprint Ceremonies

BMad now includes full sprint support:

1. **Sprint Planning**: Integrated into all workflows after validation
2. **Daily Development**: Tracked through dev journals
3. **Sprint Review**: Comprehensive review using `sprint-review-checklist.md`
4. **Retrospectives**: Optional but recommended for continuous improvement

### New Specialized Workflows

Beyond the standard development workflows, BMad now includes:

- **sprint-execution.yaml**: Full sprint-based agile development
- **quick-fix.yaml**: Streamlined hotfix process
- **technical-debt.yaml**: Systematic debt reduction
- **documentation-update.yaml**: Documentation maintenance
- **system-migration.yaml**: Platform and technology migrations
- **performance-optimization.yaml**: Performance improvement cycles

### Quality Gates

All workflows now include 8-step validation (a sketch of one possible encoding follows this list):

1. Syntax validation
2. Type checking
3. Linting
4. Security scanning
5. Test execution
6. Performance validation
7. Documentation checks
8. Integration testing
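
One way a workflow might encode these gates, as a hypothetical sketch only (the shipped workflow YAMLs may represent validation differently):

```yaml
# Hypothetical encoding of the 8-step validation gate, in the order listed above
quality_gates:
  - syntax_validation
  - type_checking
  - linting
  - security_scanning
  - test_execution
  - performance_validation
  - documentation_checks
  - integration_testing
```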

## Getting Help

- **Discord Community**: [Join Discord](https://discord.gg/gk8jAdXWmj)
@@ -12,6 +12,18 @@ workflow:
    - integration-enhancement

  sequence:
    - step: session_initialization
      agent: bmad-master
      action: session_kickoff
      uses: session-kickoff
      notes: |
        Initialize AI session context:
        - Review Memory Bank if exists
        - Understand project state and technical principles
        - Check recent dev journals and ADRs
        - Review sprint context if applicable
        Required for new sessions or after 24+ hour gaps

    - step: enhancement_classification
      agent: analyst
      action: classify enhancement scope

@@ -93,6 +105,17 @@ workflow:
      condition: po_checklist_issues
      notes: "If PO finds issues, return to relevant agent to fix and re-export updated documents to docs/ folder."

    - step: sprint_planning_check
      agent: sm
      action: verify_sprint_context
      condition: development_ready
      notes: |
        If part of sprint:
        - Align with sprint goals
        - Update sprint backlog
        - Plan for sprint review
        - Create ADRs for significant decisions

    - agent: po
      action: shard_documents
      creates: sharded_docs

@@ -159,6 +182,19 @@ workflow:
        - Dev Agent (New Chat): Address remaining items
        - Return to QA for final approval

    - agent: dev
      creates: dev_journal_entry
      action: document_session_work
      uses: create-dev-journal
      condition: significant_work_completed
      notes: |
        Create dev journal for:
        - Major feature completion
        - Complex problem solutions
        - Architectural decisions (trigger ADR creation)
        - End of work session
        Updates Memory Bank activeContext

    - repeat_development_cycle:
        action: continue_for_all_stories
        notes: |

@@ -176,18 +212,32 @@ workflow:
        - Validate epic was completed correctly
        - Document learnings and improvements

    - agent: sm
      action: conduct_sprint_review
      uses: conduct-sprint-review
      condition: sprint_boundary
      notes: |
        At sprint end:
        - Review accomplishments using sprint-review-checklist
        - Document learnings and technical decisions
        - Update Memory Bank comprehensively
        - Create sprint summary documentation
        - Plan next sprint priorities

    - workflow_end:
        action: project_complete
        notes: |
          All stories implemented and reviewed!
          Project development phase complete.
          Memory Bank and documentation updated.

          Reference: {root}/data/bmad-kb.md#IDE Development Workflow

  flow_diagram: |
    ```mermaid
    graph TD
        A[Start: Brownfield Enhancement] --> A1[bmad-master: session kickoff]
        A1 --> B[analyst: classify enhancement scope]
        B --> C{Enhancement Size?}

        C -->|Single Story| D[pm: brownfield-create-story]

@@ -209,8 +259,9 @@ workflow:
        L --> M{PO finds issues?}
        M -->|Yes| N[Fix issues]
        M -->|No| SP[sm: sprint planning]
        N --> L
        SP --> O[po: shard documents]

        O --> P[sm: create story]
        P --> Q{Story Type?}

@@ -231,10 +282,14 @@ workflow:
        Z -->|No| Y
        AA --> X
        Y -->|Yes| P
        Y -->|No| DJ[dev: dev journal]
        DJ --> AB{Retrospective?}
        AB -->|Yes| AC[po: retrospective]
        AB -->|No| SR{Sprint boundary?}
        AC --> SR
        SR -->|Yes| SRV[sm: sprint review]
        SR -->|No| AD[Complete]
        SRV --> AD

        style AD fill:#90EE90
        style END1 fill:#90EE90
@@ -13,6 +13,18 @@ workflow:
    - integration-enhancement

  sequence:
    - step: session_initialization
      agent: bmad-master
      action: session_kickoff
      uses: session-kickoff
      notes: |
        Initialize AI session context:
        - Review Memory Bank if exists
        - Understand service architecture and dependencies
        - Check recent dev journals and ADRs
        - Review technical principles and patterns
        Required for new sessions or after 24+ hour gaps

    - step: service_analysis
      agent: architect
      action: analyze existing project and use task document-project

@@ -41,6 +53,17 @@ workflow:
      condition: po_checklist_issues
      notes: "If PO finds issues, return to relevant agent to fix and re-export updated documents to docs/ folder."

    - step: sprint_planning_check
      agent: sm
      action: verify_sprint_context
      condition: development_ready
      notes: |
        If part of sprint:
        - Align service enhancements with sprint goals
        - Update sprint backlog
        - Plan for sprint review
        - Create ADRs for API changes

    - agent: po
      action: shard_documents
      creates: sharded_docs

@@ -105,6 +128,19 @@ workflow:
        - Dev Agent (New Chat): Address remaining items
        - Return to QA for final approval

    - agent: dev
      creates: dev_journal_entry
      action: document_session_work
      uses: create-dev-journal
      condition: significant_work_completed
      notes: |
        Create dev journal for:
        - API changes and integration updates
        - Performance improvements
        - Service architecture decisions (trigger ADR)
        - End of work session
        Updates Memory Bank systemPatterns

    - repeat_development_cycle:
        action: continue_for_all_stories
        notes: |

@@ -122,25 +158,40 @@ workflow:
        - Validate epic was completed correctly
        - Document learnings and improvements

    - agent: sm
      action: conduct_sprint_review
      uses: conduct-sprint-review
      condition: sprint_boundary
      notes: |
        At sprint end:
        - Review service enhancements and API changes
        - Document performance improvements
        - Update Memory Bank with architectural decisions
        - Create sprint summary with metrics
        - Plan next sprint priorities

    - workflow_end:
        action: project_complete
        notes: |
          All stories implemented and reviewed!
          Service development phase complete.
          Memory Bank and API documentation updated.

          Reference: {root}/data/bmad-kb.md#IDE Development Workflow

  flow_diagram: |
    ```mermaid
    graph TD
        A[Start: Service Enhancement] --> A1[bmad-master: session kickoff]
        A1 --> B[architect: analyze existing service]
        B --> C[pm: prd.md]
        C --> D[architect: architecture.md]
        D --> E[po: validate with po-master-checklist]
        E --> F{PO finds issues?}
        F -->|Yes| G[Return to relevant agent for fixes]
        F -->|No| SP[sm: sprint planning]
        G --> E
        SP --> H[po: shard documents]

        H --> I[sm: create story]
        I --> J{Review draft story?}

@@ -155,10 +206,14 @@ workflow:
        P -->|No| O
        Q --> N
        O -->|Yes| I
        O -->|No| DJ[dev: dev journal]
        DJ --> R{Epic retrospective?}
        R -->|Yes| S[po: epic retrospective]
        R -->|No| SR{Sprint boundary?}
        S --> SR
        SR -->|Yes| SRV[sm: sprint review]
        SR -->|No| T[Project Complete]
        SRV --> T

        style T fill:#90EE90
        style H fill:#ADD8E6
@@ -12,6 +12,18 @@ workflow:
    - frontend-enhancement

  sequence:
    - step: session_initialization
      agent: bmad-master
      action: session_kickoff
      uses: session-kickoff
      notes: |
        Initialize AI session context:
        - Review Memory Bank if exists
        - Understand project state and technical principles
        - Check recent dev journals and ADRs
        - Review sprint context if applicable
        Required for new sessions or after 24+ hour gaps

    - step: ui_analysis
      agent: architect
      action: analyze existing project and use task document-project

@@ -48,6 +60,17 @@ workflow:
      condition: po_checklist_issues
      notes: "If PO finds issues, return to relevant agent to fix and re-export updated documents to docs/ folder."

    - step: sprint_planning_check
      agent: sm
      action: verify_sprint_context
      condition: development_ready
      notes: |
        If part of sprint:
        - Align with sprint goals
        - Update sprint backlog
        - Plan for sprint review
        - Create ADRs for significant UI/UX decisions

    - agent: po
      action: shard_documents
      creates: sharded_docs

@@ -112,6 +135,19 @@ workflow:
        - Dev Agent (New Chat): Address remaining items
        - Return to QA for final approval

    - agent: dev
      creates: dev_journal_entry
      action: document_session_work
      uses: create-dev-journal
      condition: significant_work_completed
      notes: |
        Create dev journal for:
        - Major UI component completion
        - Complex frontend problem solutions
        - UI/UX architectural decisions (trigger ADR creation)
        - End of work session
        Updates Memory Bank activeContext

    - repeat_development_cycle:
        action: continue_for_all_stories
        notes: |

@@ -129,6 +165,18 @@ workflow:
        - Validate epic was completed correctly
        - Document learnings and improvements

    - agent: sm
      action: conduct_sprint_review
      uses: conduct-sprint-review
      condition: sprint_boundary
      notes: |
        At sprint end:
        - Review UI enhancement accomplishments using sprint-review-checklist
        - Document UI/UX learnings and design decisions
        - Update Memory Bank comprehensively
        - Create sprint summary documentation
        - Plan next sprint UI priorities

    - workflow_end:
        action: project_complete
        notes: |

@@ -140,15 +188,17 @@ workflow:
  flow_diagram: |
    ```mermaid
    graph TD
        A[Start: UI Enhancement] --> A1[bmad-master: session kickoff]
        A1 --> B[analyst: analyze existing UI]
        B --> C[pm: prd.md]
        C --> D[ux-expert: front-end-spec.md]
        D --> E[architect: architecture.md]
        E --> F[po: validate with po-master-checklist]
        F --> G{PO finds issues?}
        G -->|Yes| H[Return to relevant agent for fixes]
        G -->|No| SP[sm: sprint planning]
        H --> F
        SP --> I[po: shard documents]

        I --> J[sm: create story]
        J --> K{Review draft story?}

@@ -157,16 +207,20 @@ workflow:
        L --> M
        M --> N{QA review?}
        N -->|Yes| O[qa: review implementation]
        N -->|No| DJ[dev: dev journal]
        O --> Q{QA found issues?}
        Q -->|Yes| R[dev: address QA feedback]
        Q -->|No| DJ
        R --> O
        DJ --> P{More stories?}
        P -->|Yes| J
        P -->|No| S{Epic retrospective?}
        S -->|Yes| T[po: epic retrospective]
        S -->|No| SR{Sprint boundary?}
        T --> SR
        SR -->|Yes| SRV[sm: sprint review]
        SR -->|No| U[Project Complete]
        SRV --> U

        style U fill:#90EE90
        style I fill:#ADD8E6

@@ -178,6 +232,10 @@ workflow:
        style L fill:#F0E68C
        style O fill:#F0E68C
        style T fill:#F0E68C
        style A1 fill:#E6E6FA
        style SP fill:#E6E6FA
        style DJ fill:#E6E6FA
        style SRV fill:#E6E6FA
    ```

  decision_guidance:
@ -0,0 +1,287 @@
|
||||||
|
workflow:
|
||||||
|
id: documentation-update
|
||||||
|
name: Documentation Update Workflow
|
||||||
|
description: >-
|
||||||
|
Agent workflow for systematic documentation creation and updates. Handles
|
||||||
|
API documentation, user guides, architectural documentation, and knowledge
|
||||||
|
base maintenance with focus on accuracy and completeness.
|
||||||
|
type: documentation
|
||||||
|
project_types:
|
||||||
|
- api-documentation
|
||||||
|
- user-guides
|
||||||
|
- developer-docs
|
||||||
|
- architecture-docs
|
||||||
|
- knowledge-base
|
||||||
|
|
||||||
|
sequence:
|
||||||
|
- step: session_initialization
|
||||||
|
agent: bmad-master
|
||||||
|
action: session_kickoff
|
||||||
|
uses: session-kickoff
|
||||||
|
notes: |
|
||||||
|
Documentation context initialization:
|
||||||
|
- Review Memory Bank for current state
|
||||||
|
- Check existing documentation
|
||||||
|
- Understand system architecture
|
||||||
|
- Review recent changes
|
||||||
|
- Identify documentation standards
|
||||||
|
|
||||||
|
- agent: analyst
|
||||||
|
action: documentation_audit
|
||||||
|
creates: doc-audit-report.md
|
||||||
|
notes: |
|
||||||
|
Comprehensive documentation audit:
|
||||||
|
- Inventory existing documentation
|
||||||
|
- Identify gaps and outdated content
|
||||||
|
- Check accuracy against current code
|
||||||
|
- Review user feedback on docs
|
||||||
|
- Assess documentation coverage
|
||||||
|
- List priority areas
|
||||||
|
|
||||||
|
- agent: pm
|
||||||
|
action: documentation_planning
|
||||||
|
creates: doc-update-plan.md
|
||||||
|
requires: doc-audit-report.md
|
||||||
|
notes: |
|
||||||
|
Create documentation plan:
|
||||||
|
- Prioritize documentation needs
|
||||||
|
- Define target audiences
|
||||||
|
- Set documentation standards
|
||||||
|
- Create content outline
|
||||||
|
- Estimate effort required
|
||||||
|
- Plan review cycles
|
||||||
|
|
||||||
|
- agent: architect
|
||||||
|
action: technical_content_planning
|
||||||
|
creates: technical-doc-outline.md
|
||||||
|
condition: technical_documentation
|
||||||
|
notes: |
|
||||||
|
Technical documentation planning:
|
||||||
|
- Architecture diagrams needed
|
||||||
|
- API specifications to document
|
||||||
|
- Integration guides required
|
||||||
|
- Performance documentation
|
||||||
|
- Security documentation
|
||||||
|
- Deployment guides
|
||||||
|
|
||||||
|
- agent: ux-expert
|
||||||
|
action: user_documentation_planning
|
||||||
|
creates: user-doc-outline.md
|
||||||
|
condition: user_documentation
|
||||||
|
notes: |
|
||||||
|
User documentation planning:
|
||||||
|
- User journey mapping
|
||||||
|
- Tutorial structure
|
||||||
|
- FAQ compilation
|
||||||
|
- Troubleshooting guides
|
||||||
|
- Quick start guides
|
||||||
|
- Video script outlines
|
||||||
|
|
||||||
|
- agent: po
|
||||||
|
validates: documentation_plan
|
||||||
|
notes: |
|
||||||
|
Validate documentation approach:
|
||||||
|
- Confirm priorities align
|
||||||
|
- Approve resource allocation
|
||||||
|
- Sign off on timelines
|
||||||
|
- Verify compliance needs
|
||||||
|
|
||||||
|
- documentation_cycle:
|
||||||
|
repeats: for_each_doc_section
|
||||||
|
sequence:
|
||||||
|
- agent: scribe
|
||||||
|
action: create_documentation
|
||||||
|
creates: documentation_files
|
||||||
|
notes: |
|
||||||
|
Content creation:
|
||||||
|
- Write clear, concise content
|
||||||
|
- Follow style guide
|
||||||
|
- Include code examples
|
||||||
|
- Add diagrams where helpful
|
||||||
|
- Ensure accuracy
|
||||||
|
- Consider localization
|
||||||
|
|
||||||
|
- agent: architect
|
||||||
|
action: technical_review
|
||||||
|
validates: technical_accuracy
|
||||||
|
condition: technical_content
|
||||||
|
notes: |
|
||||||
|
Technical validation:
|
||||||
|
- Verify code examples work
|
||||||
|
- Check API accuracy
|
||||||
|
- Validate architecture diagrams
|
||||||
|
- Ensure version compatibility
|
||||||
|
- Review security implications
|
||||||
|
|
||||||
|
- agent: qa
|
||||||
|
action: documentation_testing
|
||||||
|
validates: documentation_usability
|
||||||
|
notes: |
|
||||||
|
Documentation QA:
|
||||||
|
- Test all code examples
|
||||||
|
- Verify links work
|
||||||
|
- Check formatting
|
||||||
|
            - Validate screenshots
            - Test tutorials end-to-end
            - Accessibility review

          - agent: mentor
            action: educational_review
            validates: learning_effectiveness
            optional: true
            notes: |
              Educational quality:
              - Check clarity for beginners
              - Verify learning progression
              - Assess completeness
              - Review examples
              - Suggest improvements

    - agent: scribe
      action: create_documentation_index
      creates: documentation-index.md
      notes: |
        Create navigation aids:
        - Table of contents
        - Cross-references
        - Search keywords
        - Category tags
        - Version mapping
        - Related resources

    - agent: dev
      action: integrate_documentation
      updates: project_documentation
      notes: |
        Documentation integration:
        - Update README files
        - Link from code comments
        - Update help systems
        - Deploy to doc sites
        - Configure search
        - Set up redirects

    - agent: scribe
      creates: dev_journal_entry
      action: document_doc_updates
      uses: create-dev-journal
      notes: |
        Document the documentation:
        - What was updated and why
        - Major changes made
        - Known gaps remaining
        - Feedback incorporated
        - Update Memory Bank

    - agent: po
      action: documentation_acceptance
      validates: final_documentation
      notes: |
        Final documentation review:
        - Verify completeness
        - Check quality standards
        - Approve for publication
        - Sign off on release

    - agent: sm
      action: documentation_release
      creates: doc-release-notes.md
      notes: |
        Documentation release:
        - Publish updates
        - Notify stakeholders
        - Update version notes
        - Plan maintenance cycle
        - Schedule next review

    - workflow_end:
        action: documentation_complete
        notes: |
          Documentation updated!
          - All sections complete
          - Quality validated
          - Published and indexed
          - Memory Bank updated
          - Maintenance scheduled

  flow_diagram: |
    ```mermaid
    graph TD
        A[Start: Documentation] --> B[bmad-master: session init]
        B --> C[analyst: audit docs]
        C --> D[pm: create plan]
        D --> E{Technical docs?}
        E -->|Yes| F[architect: tech outline]
        E -->|No| G{User docs?}
        F --> G
        G -->|Yes| H[ux-expert: user outline]
        G -->|No| I[po: validate plan]
        H --> I

        I --> J[Documentation Cycle]
        J --> K[scribe: create content]
        K --> L{Technical content?}
        L -->|Yes| M[architect: tech review]
        L -->|No| N[qa: test docs]
        M --> N
        N --> O{Educational review?}
        O -->|Yes| P[mentor: review]
        O -->|No| Q{More sections?}
        P --> Q
        Q -->|Yes| J
        Q -->|No| R[scribe: create index]

        R --> S[dev: integrate docs]
        S --> T[scribe: document updates]
        T --> U[po: final acceptance]
        U --> V[sm: release docs]
        V --> W[Documentation Complete]

        style W fill:#90EE90
        style B fill:#DDA0DD
        style K fill:#FFE4B5
        style M fill:#ADD8E6
        style N fill:#ADD8E6
        style T fill:#FFE4B5
        style V fill:#98FB98
    ```

  decision_guidance:
    when_to_use:
      - Major feature releases
      - API changes
      - User complaints about docs
      - Onboarding difficulties
      - Compliance requirements
      - Regular documentation maintenance

  handoff_prompts:
    audit_complete: |
      Documentation audit complete:
      - Total pages: {{page_count}}
      - Outdated sections: {{outdated_count}}
      - Missing topics: {{gap_count}}
      - Priority updates: {{priority_count}}

    plan_ready: |
      Documentation plan created:
      - Sections to update: {{section_count}}
      - New content needed: {{new_count}} pages
      - Estimated effort: {{effort_hours}} hours
      - Target completion: {{target_date}}

    section_complete: |
      Documentation section "{{section_name}}" complete:
      - Pages created/updated: {{page_count}}
      - Code examples: {{example_count}}
      - Diagrams added: {{diagram_count}}
      - Review status: {{review_status}}

    release_ready: |
      Documentation release ready:
      - Total updates: {{update_count}}
      - Quality score: {{quality_score}}/100
      - Coverage improvement: {{coverage_increase}}%
      - Ready for publication

    complete: "Documentation update complete. {{total_pages}} pages updated. Next review scheduled for {{next_review_date}}."
@@ -13,6 +13,18 @@ workflow:
     - mvp

   sequence:
+    - step: session_initialization
+      agent: bmad-master
+      action: session_kickoff
+      uses: session-kickoff
+      notes: |
+        Initialize AI session context:
+        - Review Memory Bank if exists
+        - Understand project state and technical principles
+        - Check recent dev journals and ADRs
+        - Review sprint context if applicable
+        Required for new sessions or after 24+ hour gaps
+
     - agent: analyst
       creates: project-brief.md
       optional_steps:
@@ -64,6 +76,17 @@ workflow:
       condition: po_checklist_issues
       notes: "If PO finds issues, return to relevant agent to fix and re-export updated documents to docs/ folder."

+    - step: sprint_planning_check
+      agent: sm
+      action: verify_sprint_context
+      condition: development_ready
+      notes: |
+        If part of sprint:
+        - Align with sprint goals
+        - Update sprint backlog
+        - Plan for sprint review
+        - Create ADRs for significant architectural decisions
+
     - project_setup_guidance:
         action: guide_project_structure
         condition: user_has_generated_ui
@@ -137,6 +160,19 @@ workflow:
         - Dev Agent (New Chat): Address remaining items
         - Return to QA for final approval

+    - agent: dev
+      creates: dev_journal_entry
+      action: document_session_work
+      uses: create-dev-journal
+      condition: significant_work_completed
+      notes: |
+        Create dev journal for:
+        - Major full-stack feature completion
+        - Complex integration solutions
+        - Architectural decisions (trigger ADR creation)
+        - End of work session
+        Updates Memory Bank activeContext
+
     - repeat_development_cycle:
         action: continue_for_all_stories
         notes: |
@@ -154,6 +190,18 @@ workflow:
         - Validate epic was completed correctly
         - Document learnings and improvements

+    - agent: sm
+      action: conduct_sprint_review
+      uses: conduct-sprint-review
+      condition: sprint_boundary
+      notes: |
+        At sprint end:
+        - Review full-stack accomplishments using sprint-review-checklist
+        - Document integration learnings and technical decisions
+        - Update Memory Bank comprehensively
+        - Create sprint summary documentation
+        - Plan next sprint priorities
+
     - workflow_end:
         action: project_complete
         notes: |
@@ -165,7 +213,8 @@ workflow:
   flow_diagram: |
     ```mermaid
     graph TD
-        A[Start: Greenfield Project] --> B[analyst: project-brief.md]
+        A[Start: Greenfield Project] --> A1[bmad-master: session kickoff]
+        A1 --> B[analyst: project-brief.md]
         B --> C[pm: prd.md]
         C --> D[ux-expert: front-end-spec.md]
         D --> D2{Generate v0 prompt?}
@@ -179,8 +228,9 @@ workflow:
         G --> H
         H --> I{PO finds issues?}
        I -->|Yes| J[Return to relevant agent for fixes]
-        I -->|No| K[po: shard documents]
+        I -->|No| SP[sm: sprint planning]
         J --> H
+        SP --> K[po: shard documents]

         K --> L[sm: create story]
         L --> M{Review draft story?}
@@ -189,16 +239,20 @@ workflow:
         N --> O
         O --> P{QA review?}
         P -->|Yes| Q[qa: review implementation]
-        P -->|No| R{More stories?}
+        P -->|No| DJ[dev: dev journal]
         Q --> S{QA found issues?}
         S -->|Yes| T[dev: address QA feedback]
-        S -->|No| R
+        S -->|No| DJ
         T --> Q
+        DJ --> R{More stories?}
         R -->|Yes| L
         R -->|No| U{Epic retrospective?}
         U -->|Yes| V[po: epic retrospective]
-        U -->|No| W[Project Complete]
+        U -->|No| SR{Sprint boundary?}
-        V --> W
+        V --> SR
+        SR -->|Yes| SRV[sm: sprint review]
+        SR -->|No| W[Project Complete]
+        SRV --> W

         B -.-> B1[Optional: brainstorming]
         B -.-> B2[Optional: market research]
@@ -218,6 +272,10 @@ workflow:
         style N fill:#F0E68C
         style Q fill:#F0E68C
         style V fill:#F0E68C
+        style A1 fill:#E6E6FA
+        style SP fill:#E6E6FA
+        style DJ fill:#E6E6FA
+        style SRV fill:#E6E6FA
     ```

   decision_guidance:
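The sequence entries added across these workflow diffs share a small step schema. A minimal annotated sketch follows; the key meanings are inferred from their usage in this commit rather than from a formal spec, and the values are illustrative:

```yaml
# Annotated sketch of the recurring step schema (inferred, not normative).
- step: sprint_planning_check     # optional identifier for the step
  agent: sm                       # agent persona that executes the step
  action: verify_sprint_context   # what the agent does
  uses: session-kickoff           # task or template the step invokes, if any
  creates: sprint-plan.md         # artifact produced, if any
  requires: prd.md                # artifact that must already exist
  condition: development_ready    # gate; step runs only when this holds
  optional: true                  # step may be skipped entirely
  notes: |
    Free-form guidance passed to the agent at execution time.
```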
@@ -14,6 +14,18 @@ workflow:
     - simple-service

   sequence:
+    - step: session_initialization
+      agent: bmad-master
+      action: session_kickoff
+      uses: session-kickoff
+      notes: |
+        Initialize AI session context:
+        - Review Memory Bank if exists
+        - Understand project state and technical principles
+        - Check recent dev journals and ADRs
+        - Review sprint context if applicable
+        Required for new sessions or after 24+ hour gaps
+
     - agent: analyst
       creates: project-brief.md
       optional_steps:
@@ -49,6 +61,17 @@ workflow:
       condition: po_checklist_issues
       notes: "If PO finds issues, return to relevant agent to fix and re-export updated documents to docs/ folder."

+    - step: sprint_planning_check
+      agent: sm
+      action: verify_sprint_context
+      condition: development_ready
+      notes: |
+        If part of sprint:
+        - Align with sprint goals
+        - Update sprint backlog
+        - Plan for sprint review
+        - Create ADRs for significant API/service decisions
+
     - agent: po
       action: shard_documents
       creates: sharded_docs
@@ -113,6 +136,19 @@ workflow:
         - Dev Agent (New Chat): Address remaining items
         - Return to QA for final approval

+    - agent: dev
+      creates: dev_journal_entry
+      action: document_session_work
+      uses: create-dev-journal
+      condition: significant_work_completed
+      notes: |
+        Create dev journal for:
+        - Major API/service endpoint completion
+        - Complex backend problem solutions
+        - Service architectural decisions (trigger ADR creation)
+        - End of work session
+        Updates Memory Bank activeContext
+
     - repeat_development_cycle:
         action: continue_for_all_stories
         notes: |
@@ -130,6 +166,18 @@ workflow:
         - Validate epic was completed correctly
         - Document learnings and improvements

+    - agent: sm
+      action: conduct_sprint_review
+      uses: conduct-sprint-review
+      condition: sprint_boundary
+      notes: |
+        At sprint end:
+        - Review service/API accomplishments using sprint-review-checklist
+        - Document backend learnings and technical decisions
+        - Update Memory Bank comprehensively
+        - Create sprint summary documentation
+        - Plan next sprint service priorities
+
     - workflow_end:
         action: project_complete
         notes: |
@@ -14,6 +14,18 @@ workflow:
     - simple-interface

   sequence:
+    - step: session_initialization
+      agent: bmad-master
+      action: session_kickoff
+      uses: session-kickoff
+      notes: |
+        Initialize AI session context:
+        - Review Memory Bank if exists
+        - Understand project state and technical principles
+        - Check recent dev journals and ADRs
+        - Review sprint context if applicable
+        Required for new sessions or after 24+ hour gaps
+
     - agent: analyst
       creates: project-brief.md
       optional_steps:
@@ -68,6 +80,17 @@ workflow:
       condition: user_has_generated_ui
       notes: "If user generated UI with v0/Lovable: For polyrepo setup, place downloaded project in separate frontend repo. For monorepo, place in apps/web or frontend/ directory. Review architecture document for specific guidance."

+    - step: sprint_planning_check
+      agent: sm
+      action: verify_sprint_context
+      condition: development_ready
+      notes: |
+        If part of sprint:
+        - Align UI development with sprint goals
+        - Update sprint backlog
+        - Plan for sprint review
+        - Create ADRs for significant UI decisions
+
     - agent: po
       action: shard_documents
       creates: sharded_docs
@@ -132,6 +155,19 @@ workflow:
         - Dev Agent (New Chat): Address remaining items
         - Return to QA for final approval

+    - agent: dev
+      creates: dev_journal_entry
+      action: document_session_work
+      uses: create-dev-journal
+      condition: significant_work_completed
+      notes: |
+        Create dev journal for:
+        - UI component completion
+        - Complex frontend solutions
+        - Design pattern decisions (trigger ADR)
+        - End of work session
+        Updates Memory Bank activeContext
+
     - repeat_development_cycle:
         action: continue_for_all_stories
         notes: |
@@ -149,18 +185,32 @@ workflow:
         - Validate epic was completed correctly
         - Document learnings and improvements

+    - agent: sm
+      action: conduct_sprint_review
+      uses: conduct-sprint-review
+      condition: sprint_boundary
+      notes: |
+        At sprint end:
+        - Review UI components and features delivered
+        - Document design decisions and patterns
+        - Update Memory Bank with UI patterns
+        - Create sprint summary with visual samples
+        - Plan next sprint UI priorities
+
     - workflow_end:
         action: project_complete
         notes: |
           All stories implemented and reviewed!
-          Project development phase complete.
+          UI development phase complete.
+          Memory Bank and design documentation updated.

           Reference: {root}/data/bmad-kb.md#IDE Development Workflow

   flow_diagram: |
     ```mermaid
     graph TD
-        A[Start: UI Development] --> B[analyst: project-brief.md]
+        A[Start: UI Development] --> A1[bmad-master: session kickoff]
+        A1 --> B[analyst: project-brief.md]
         B --> C[pm: prd.md]
         C --> D[ux-expert: front-end-spec.md]
         D --> D2{Generate v0 prompt?}
@@ -174,8 +224,9 @@ workflow:
         G --> H
         H --> I{PO finds issues?}
         I -->|Yes| J[Return to relevant agent for fixes]
-        I -->|No| K[po: shard documents]
+        I -->|No| SP[sm: sprint planning]
         J --> H
+        SP --> K[po: shard documents]

         K --> L[sm: create story]
         L --> M{Review draft story?}
@@ -190,10 +241,14 @@ workflow:
         S -->|No| R
         T --> Q
         R -->|Yes| L
-        R -->|No| U{Epic retrospective?}
+        R -->|No| DJ[dev: dev journal]
+        DJ --> U{Epic retrospective?}
         U -->|Yes| V[po: epic retrospective]
-        U -->|No| W[Project Complete]
+        U -->|No| SR{Sprint boundary?}
-        V --> W
+        V --> SR
+        SR -->|Yes| SRV[sm: sprint review]
+        SR -->|No| W[Project Complete]
+        SRV --> W

         B -.-> B1[Optional: brainstorming]
         B -.-> B2[Optional: market research]
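All three greenfield workflows now open with the same `session_initialization` step, which reads the Memory Bank. Based solely on the file names referenced elsewhere in this commit (activeContext, systemPatterns, progress), a Memory Bank layout might look like the sketch below; the framework may define additional files:

```yaml
# Hypothetical memory-bank/ contents inferred from files this commit references.
memory_bank:
  - activeContext.md   # current focus; updated by dev journal steps
  - systemPatterns.md  # technical decisions; updated when ADRs are created
  - progress.md        # sprint outcomes and velocity metrics
```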
@@ -0,0 +1,300 @@
workflow:
  id: performance-optimization
  name: Performance Optimization Workflow
  description: >-
    Agent workflow for systematic performance improvements. Focuses on identifying
    bottlenecks, implementing optimizations, and validating improvements through
    benchmarking and monitoring.
  type: optimization
  project_types:
    - performance-tuning
    - scalability-improvement
    - response-time-optimization
    - resource-optimization
    - throughput-enhancement

  sequence:
    - step: session_initialization
      agent: bmad-master
      action: session_kickoff
      uses: session-kickoff
      notes: |
        Performance-focused initialization:
        - Review Memory Bank for known issues
        - Check previous optimization efforts
        - Review performance requirements
        - Understand system architecture
        - Check monitoring setup

    - agent: architect
      action: performance_assessment
      creates: performance-baseline.md
      notes: |
        Establish performance baseline:
        - Current performance metrics
        - Resource utilization patterns
        - Response time distribution
        - Throughput measurements
        - Error rates
        - User experience metrics

    - agent: analyst
      action: bottleneck_analysis
      creates: bottleneck-analysis.md
      requires: performance-baseline.md
      notes: |
        Identify performance bottlenecks:
        - Profile application code
        - Analyze database queries
        - Review network latency
        - Check resource constraints
        - Identify hot paths
        - Memory leak detection

    - agent: architect
      action: optimization_strategy
      creates: optimization-strategy.md
      requires: bottleneck-analysis.md
      notes: |
        Design optimization approach:
        - Quick wins identification
        - Architectural improvements
        - Caching strategies
        - Query optimization plans
        - Resource scaling options
        - Code optimization targets

    - agent: pm
      creates: optimization-plan.md
      action: prioritize_optimizations
      requires: optimization-strategy.md
      notes: |
        Create optimization plan:
        - Impact vs effort matrix
        - Implementation sequence
        - Performance targets
        - Risk assessment
        - Testing requirements
        - Rollback procedures

    - agent: po
      validates: optimization_plan
      notes: |
        Validate approach:
        - Confirm performance goals
        - Approve resource usage
        - Accept risk levels
        - Sign off on timeline

    - agent: dev
      action: setup_benchmarks
      creates: benchmark-suite/
      notes: |
        Create benchmark suite:
        - Micro-benchmarks for hot paths
        - Load testing scenarios
        - Real-world usage patterns
        - Stress test configurations
        - Monitoring dashboards
        - Automated performance tests

    - optimization_cycle:
        repeats: for_each_optimization
        sequence:
          - agent: dev
            action: implement_optimization
            updates: optimized_code
            notes: |
              Careful implementation:
              - Apply optimization
              - Maintain functionality
              - Add performance tests
              - Document changes
              - Update monitoring

          - agent: qa
            action: performance_testing
            validates: optimization_impact
            notes: |
              Comprehensive testing:
              - Run benchmark suite
              - Load testing
              - Stress testing
              - Regression testing
              - Real-world scenarios
              - Monitor resource usage

          - agent: analyst
            action: measure_improvement
            creates: improvement-metrics.md
            notes: |
              Quantify improvements:
              - Before/after comparison
              - Statistical significance
              - Resource usage delta
              - Cost analysis
              - User impact assessment

          - agent: architect
            creates: adr.md
            action: document_optimization
            condition: significant_change
            notes: |
              Document decisions:
              - Optimization rationale
              - Trade-offs made
              - Implementation details
              - Performance gains
              - Side effects

    - agent: dev
      action: production_monitoring
      updates: monitoring_configuration
      notes: |
        Enhanced monitoring:
        - Add performance metrics
        - Set up alerts
        - Create dashboards
        - Configure profiling
        - Enable tracing
        - Set SLO targets

    - agent: dev
      creates: dev_journal_entry
      action: document_optimizations
      uses: create-dev-journal
      notes: |
        Document optimization work:
        - Optimizations applied
        - Performance improvements
        - Lessons learned
        - Monitoring setup
        - Update Memory Bank

    - agent: analyst
      action: final_performance_report
      creates: performance-report.md
      notes: |
        Comprehensive performance report:
        - Overall improvements
        - Individual optimization impacts
        - Resource usage changes
        - Cost implications
        - User experience improvements
        - Recommendations

    - agent: sm
      action: optimization_review
      uses: conduct-sprint-review
      creates: optimization-review.md
      notes: |
        Review optimization results:
        - Present improvements
        - Demonstrate benchmarks
        - Show monitoring dashboards
        - Discuss next steps
        - Plan maintenance

    - agent: architect
      action: update_performance_docs
      updates: architecture_documentation
      notes: |
        Update documentation:
        - Performance characteristics
        - Scaling guidelines
        - Optimization patterns
        - Monitoring requirements
        - Troubleshooting guides

    - workflow_end:
        action: optimization_complete
        notes: |
          Performance optimization complete!
          - Targets achieved
          - Monitoring active
          - Documentation updated
          - Team trained
          - Maintenance planned

  flow_diagram: |
    ```mermaid
    graph TD
        A[Start: Performance] --> B[bmad-master: session init]
        B --> C[architect: baseline assessment]
        C --> D[analyst: bottleneck analysis]
        D --> E[architect: optimization strategy]
        E --> F[pm: prioritize optimizations]
        F --> G[po: validate plan]

        G --> H[dev: setup benchmarks]
        H --> I[Optimization Cycle]
        I --> J[dev: implement optimization]
        J --> K[qa: performance testing]
        K --> L[analyst: measure improvement]
        L --> M{Significant change?}
        M -->|Yes| N[architect: create ADR]
        M -->|No| O{More optimizations?}
        N --> O
        O -->|Yes| I
        O -->|No| P[dev: production monitoring]

        P --> Q[dev: document work]
        Q --> R[analyst: final report]
        R --> S[sm: optimization review]
        S --> T[architect: update docs]
        T --> U[Optimization Complete]

        style U fill:#90EE90
        style B fill:#DDA0DD
        style C fill:#FFB6C1
        style D fill:#FFB6C1
        style L fill:#98FB98
        style K fill:#FFA500
        style R fill:#98FB98
        style S fill:#ADD8E6
    ```

  decision_guidance:
    when_to_use:
      - User complaints about speed
      - System not meeting SLOs
      - Scaling issues
      - High infrastructure costs
      - Before major traffic events
      - Regular performance maintenance

  handoff_prompts:
    baseline_complete: |
      Performance baseline established:
      - P95 response time: {{p95_response}}ms
      - Throughput: {{throughput}} req/s
      - Error rate: {{error_rate}}%
      - Resource usage: CPU {{cpu}}%, Memory {{memory}}%

    bottlenecks_found: |
      Top bottlenecks identified:
      1. {{bottleneck_1}}: {{impact_1}}% impact
      2. {{bottleneck_2}}: {{impact_2}}% impact
      3. {{bottleneck_3}}: {{impact_3}}% impact
      Total optimization potential: {{total_potential}}%

    optimization_complete: |
      Optimization "{{optimization_name}}" complete:
      - Performance gain: {{improvement}}%
      - Response time: {{old_time}}ms → {{new_time}}ms
      - Resource reduction: {{resource_saving}}%
      - Tests passing: {{test_status}}

    final_report: |
      Performance optimization summary:
      - Overall improvement: {{total_improvement}}%
      - P95 reduced by: {{p95_reduction}}ms
      - Throughput increased: {{throughput_increase}}%
      - Cost savings: ${{monthly_savings}}/month

    complete: |
      Performance optimization complete!
      - All targets met: {{targets_met}}
      - SLO compliance: {{slo_compliance}}%
      - Monitoring active: {{monitoring_status}}
      - Next review: {{next_review_date}}
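The production_monitoring step above asks for alerts and SLO targets without prescribing tooling. As one hedged illustration, assuming a Prometheus-based stack, an alert on the P95 latency tracked in these prompts could look like the following; the metric name and 500ms threshold are invented sample values:

```yaml
# Hypothetical Prometheus alerting rule for a P95 latency SLO; the workflow
# does not mandate Prometheus, and metric/threshold are sample assumptions.
groups:
  - name: latency-slo
    rules:
      - alert: P95LatencyHigh
        # 95th-percentile request latency over the last 5 minutes
        expr: histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le)) > 0.5
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "P95 latency above 500ms for 10 minutes"
```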
@@ -0,0 +1,220 @@
workflow:
  id: quick-fix
  name: Quick Fix / Hotfix Workflow
  description: >-
    Agent workflow for urgent production fixes and hotfixes. Streamlined process
    for critical issues that need immediate attention with minimal overhead
    while maintaining quality and documentation standards.
  type: hotfix
  project_types:
    - production-fix
    - security-patch
    - critical-bug
    - emergency-update
    - hotfix

  sequence:
    - step: abbreviated_session_init
      agent: bmad-master
      action: quick_context_load
      uses: session-kickoff
      notes: |
        Abbreviated initialization:
        - Quick Memory Bank scan
        - Focus on affected system areas
        - Review recent deployments
        - Check related ADRs
        Priority: Speed over comprehensive review

    - step: issue_triage
      agent: analyst
      action: analyze_issue
      creates: issue-analysis.md
      notes: |
        Rapid issue analysis:
        - Reproduce the issue
        - Identify root cause
        - Assess impact severity
        - Determine affected components
        - Estimate fix complexity
        Time box: 30 minutes max

    - agent: architect
      action: quick_solution_design
      creates: fix-approach.md
      condition: complex_fix
      optional: true
      notes: |
        For complex fixes only:
        - Design minimal viable fix
        - Consider rollback strategy
        - Identify testing requirements
        - Note any technical debt incurred

    - agent: dev
      action: implement_fix
      creates: fix_implementation
      notes: |
        Rapid implementation:
        - Minimal code changes
        - Focus on fixing issue only
        - Add regression test
        - Comment code thoroughly
        - Update affected documentation

    - agent: qa
      action: targeted_testing
      validates: fix_implementation
      notes: |
        Focused testing:
        - Test the specific fix
        - Verify no regressions
        - Check edge cases
        - Performance impact check
        - Security implications review

    - agent: dev
      action: address_test_issues
      updates: fix_implementation
      condition: qa_finds_issues
      notes: |
        Quick iteration:
        - Fix identified issues
        - Re-test specific areas
        - Update fix if needed

    - agent: architect
      creates: adr-hotfix.md
      action: document_emergency_decision
      notes: |
        Document fix decision:
        - Why this approach was chosen
        - Trade-offs accepted
        - Technical debt created
        - Future improvement notes
        Quick ADR format acceptable

    - agent: dev
      creates: deployment-notes.md
      action: prepare_deployment
      notes: |
        Deployment preparation:
        - Create deployment checklist
        - Document rollback procedure
        - List configuration changes
        - Note monitoring requirements

    - agent: dev
      creates: dev_journal_entry
      action: document_hotfix
      uses: create-dev-journal
      notes: |
        Quick documentation:
        - Issue description and impact
        - Fix approach and rationale
        - Deployment details
        - Lessons learned
        Updates Memory Bank activeContext

    - agent: po
      action: approve_emergency_release
      validates: fix_and_documentation
      notes: |
        Emergency approval:
        - Verify fix addresses issue
        - Accept documentation level
        - Approve for deployment
        - Sign off on risk acceptance

    - agent: sm
      action: post_mortem_planning
      creates: post-mortem-plan.md
      optional: true
      notes: |
        Schedule follow-up:
        - Plan full post-mortem
        - Schedule debt paydown
        - Identify process improvements
        - Add to next sprint backlog

    - workflow_end:
        action: hotfix_complete
        notes: |
          Hotfix deployed!
          - Issue resolved
          - Basic documentation complete
          - Post-mortem scheduled
          - Technical debt logged

  flow_diagram: |
    ```mermaid
    graph TD
        A[Start: Critical Issue] --> B[bmad-master: quick init]
        B --> C[analyst: triage issue]
        C --> D{Complex fix?}

        D -->|Yes| E[architect: design approach]
        D -->|No| F[dev: implement fix]
        E --> F

        F --> G[qa: targeted testing]
        G --> H{Issues found?}
        H -->|Yes| I[dev: fix issues]
        H -->|No| J[architect: quick ADR]
        I --> G

        J --> K[dev: deployment prep]
        K --> L[dev: document hotfix]
        L --> M[po: emergency approval]
        M --> N{Post-mortem needed?}
        N -->|Yes| O[sm: schedule post-mortem]
        N -->|No| P[Hotfix Complete]
        O --> P

        style P fill:#90EE90
        style B fill:#FF6B6B
        style C fill:#FF6B6B
        style F fill:#FFA500
        style G fill:#FFA500
        style M fill:#FFD700
    ```

  decision_guidance:
    when_to_use:
      - Production system is down
      - Security vulnerability discovered
      - Critical bug affecting users
      - Regulatory compliance issue
      - Revenue-impacting problem

    when_not_to_use:
      - Non-critical improvements
      - Feature requests
      - Technical debt cleanup
      - Performance optimizations (non-critical)

  handoff_prompts:
    triage_complete: |
      Issue analysis complete:
      - Severity: {{severity_level}}
      - Impact: {{affected_users}} users affected
      - Root cause: {{root_cause}}
      - Fix complexity: {{complexity_estimate}}

    fix_ready: "Fix implemented in {{time_taken}}. Ready for emergency testing."

    testing_complete: "Fix verified. {{test_count}} tests passed. No regressions found."

    approval_request: |
      Emergency release request:
      - Issue: {{issue_description}}
      - Fix: {{fix_summary}}
      - Risk: {{risk_level}}
      - Rollback available: {{yes/no}}

    complete: |
      Hotfix deployed successfully.
      - Deployment time: {{deployment_time}}
      - Issue resolved: {{confirmation}}
      - Post-mortem scheduled: {{date/time or N/A}}
      - Technical debt ticket: {{ticket_id}}
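The document_emergency_decision step allows a "quick ADR format" without defining one. A minimal shape covering the four points that step asks for might look like the sketch below, expressed as YAML for brevity; the field set is inferred from the notes above and the example decision is entirely invented:

```yaml
# Hypothetical quick ADR; the framework's actual ADR template may differ.
adr:
  title: "Hotfix: serve order lookups directly from the database"  # sample decision
  status: accepted
  context: Cache returned stale orders during the incident
  decision: Bypass the cache for order endpoints until keying is redesigned
  trade_offs: Higher database load accepted short-term
  technical_debt: Cache keying redesign logged for the next sprint
  follow_up: Full post-mortem scheduled
```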
@@ -0,0 +1,249 @@
workflow:
  id: sprint-execution
  name: Sprint-Based Development Workflow
  description: >-
    Agent workflow for sprint-based agile development. Handles sprint planning,
    daily development cycles, mid-sprint adjustments, sprint reviews, and retrospectives.
    Integrates Memory Bank updates and continuous documentation.
  type: sprint
  project_types:
    - agile-development
    - scrum-sprint
    - iterative-delivery
    - continuous-improvement

  sequence:
    - step: sprint_kickoff
      agent: bmad-master
      action: initialize_sprint_context
      uses: session-kickoff
      notes: |
        Sprint initialization:
        - Review Memory Bank for project state
        - Check previous sprint outcomes
        - Review product backlog
        - Understand technical context
        - Review team velocity and capacity

    - agent: sm
      action: sprint_planning
      creates: sprint-plan.md
      uses: sprint-planning-tmpl
      notes: |
        Sprint planning session:
        - Define sprint goals
        - Select stories from backlog
        - Estimate story points
        - Assign initial priorities
        - Create sprint backlog
        - Document in sprint-plan.md

    - agent: po
      validates: sprint_plan
      action: approve_sprint_scope
      notes: |
        Validate sprint plan:
        - Confirm goals align with product vision
        - Verify story selection
        - Approve priorities
        - Sign off on sprint commitment

    - daily_development_cycle:
        repeats: for_each_sprint_day
        sequence:
          - agent: sm
            action: daily_standup
            creates: daily-status.md
            notes: |
              Daily coordination:
              - Review yesterday's progress
              - Plan today's work
              - Identify blockers
              - Update sprint burndown

          - agent: dev
            action: implement_daily_work
            updates: implementation_files
            notes: |
              Development work:
              - Work on assigned stories
              - Follow coding standards
              - Create unit tests
              - Update story status

          - agent: dev
            creates: dev_journal_entry
            action: document_daily_progress
            uses: create-dev-journal
            condition: end_of_day
            notes: |
              Daily documentation:
              - Document work completed
              - Note challenges faced
              - Record decisions made
              - Update Memory Bank activeContext

    - step: mid_sprint_check
      agent: sm
      action: assess_sprint_health
      condition: sprint_midpoint
      notes: |
        Mid-sprint assessment:
        - Review burndown chart
        - Assess story completion rate
        - Identify at-risk items
        - Plan adjustments if needed

    - agent: po
      action: mid_sprint_adjustment
      condition: scope_change_needed
      optional: true
      notes: |
        Sprint adjustment:
        - Review proposed changes
        - Approve scope modifications
        - Update sprint goals if needed
        - Communicate changes

    - agent: qa
      action: continuous_testing
      updates: test_results
      repeats: as_stories_complete
      notes: |
        Ongoing quality assurance:
        - Test completed stories
        - Report issues immediately
        - Verify acceptance criteria
        - Update story status

    - agent: architect
      creates: adr.md
      action: document_decisions
      condition: architectural_decision_made
      notes: |
        ADR creation:
        - Document significant decisions
        - Record context and rationale
        - Update Memory Bank systemPatterns
        - Link to related stories

    - step: sprint_review_preparation
      agent: sm
      action: prepare_review_materials
      creates: sprint-review-prep.md
      condition: sprint_final_day
      notes: |
        Review preparation:
        - Compile completed stories
        - Prepare demonstration materials
        - Document impediments
        - Calculate velocity metrics

    - agent: sm
      action: conduct_sprint_review
      uses: conduct-sprint-review
      creates: sprint-review-summary.md
      notes: |
        Sprint review meeting:
        - Demonstrate completed work
        - Gather stakeholder feedback
        - Document acceptance status
        - Update product backlog
        - Use sprint-review-checklist.md

    - agent: po
      action: accept_sprint_deliverables
      validates: completed_stories
      notes: |
        Sprint acceptance:
        - Review demonstrated features
        - Confirm acceptance criteria met
        - Sign off on completed work
        - Update story status to Done

    - agent: sm
      action: sprint_retrospective
      creates: sprint-retrospective.md
      notes: |
        Sprint retrospective:
        - What went well
        - What could improve
        - Action items for next sprint
        - Team velocity analysis
        - Process improvements

    - agent: bmad-master
      action: update_memory_bank
      uses: update-memory-bank
      notes: |
        Comprehensive Memory Bank update:
        - Sprint outcomes to progress.md
        - Technical decisions to systemPatterns.md
        - Learnings to activeContext.md
        - Update project velocity metrics

    - workflow_end:
        action: sprint_complete
        notes: |
          Sprint completed!
          - All deliverables accepted
          - Memory Bank updated
          - Retrospective documented
          - Ready for next sprint planning

  flow_diagram: |
    ```mermaid
    graph TD
        A[Start: Sprint] --> B[bmad-master: sprint kickoff]
        B --> C[sm: sprint planning]
        C --> D[po: approve sprint scope]

        D --> E[Daily Cycle Start]
        E --> F[sm: daily standup]
        F --> G[dev: implement work]
        G --> H[dev: daily journal]
        H --> I{More days?}
        I -->|Yes| E
        I -->|No| J[sm: mid-sprint check]

        J --> K{Adjustment needed?}
        K -->|Yes| L[po: scope adjustment]
        K -->|No| M[Continue development]
        L --> M

        M --> N[qa: continuous testing]
        N --> O{Architecture decision?}
        O -->|Yes| P[architect: create ADR]
        O -->|No| Q[sm: prepare review]
        P --> Q

        Q --> R[sm: sprint review]
        R --> S[po: accept deliverables]
        S --> T[sm: retrospective]
        T --> U[bmad-master: update Memory Bank]
        U --> V[Sprint Complete]

        style V fill:#90EE90
        style B fill:#FFE4B5
        style C fill:#FFE4B5
        style R fill:#ADD8E6
        style T fill:#F0E68C
        style U fill:#DDA0DD
    ```

  decision_guidance:
    when_to_use:
      - Team follows agile/scrum methodology
      - Fixed-length iterations preferred
      - Regular stakeholder reviews needed
      - Continuous improvement focus
      - Multiple stories per iteration

  handoff_prompts:
    kickoff_to_planning: "Sprint context initialized. Ready for sprint planning with backlog items."
    planning_to_po: "Sprint plan created with {{story_count}} stories totaling {{story_points}} points. Please review and approve."
    daily_standup: "Yesterday: {{completed_tasks}}. Today: {{planned_tasks}}. Blockers: {{blockers}}."
    mid_sprint_alert: "Sprint health check: {{completion_percentage}}% complete. {{at_risk_count}} stories at risk."
    review_ready: "Sprint review prepared. {{completed_stories}} stories ready for demonstration."
    retrospective_complete: "Sprint retrospective complete. {{improvement_actions}} action items for next sprint."
    complete: "Sprint {{sprint_number}} complete. Velocity: {{velocity}} points. Ready for next sprint planning."
@@ -0,0 +1,340 @@
workflow:
  id: system-migration
  name: System Migration Workflow
  description: >-
    Agent workflow for technology migrations, platform upgrades, and system
    modernization. Handles careful planning, phased implementation, rollback
    procedures, and validation to ensure safe transitions.
  type: migration
  project_types:
    - platform-migration
    - framework-upgrade
    - database-migration
    - cloud-migration
    - technology-modernization

  sequence:
    - step: session_initialization
      agent: bmad-master
      action: comprehensive_context_load
      uses: session-kickoff
      notes: |
        Comprehensive initialization:
        - Full Memory Bank review
        - Current architecture understanding
        - Dependency analysis
        - Integration points mapping
        - Risk assessment preparation

    - agent: architect
      action: current_state_analysis
      creates: current-state-analysis.md
      uses: document-project
      notes: |
        Document current state:
        - Complete system architecture
        - Technology stack details
        - Dependencies and versions
        - Integration points
        - Performance baselines
        - Known issues and constraints

    - agent: architect
      action: target_state_design
      creates: target-state-design.md
      notes: |
        Design target architecture:
        - New technology stack
        - Migration architecture
        - Compatibility layers needed
        - Performance targets
        - Security improvements
        - Scalability goals

    - agent: analyst
      action: gap_analysis
      creates: migration-gap-analysis.md
      requires:
        - current-state-analysis.md
        - target-state-design.md
      notes: |
        Analyze migration gaps:
        - Technical gaps to bridge
        - Feature parity assessment
        - Data migration needs
        - Training requirements
        - Tool changes
        - Process updates needed

    - agent: pm
      action: migration_planning
      creates: migration-plan.md
      requires: migration-gap-analysis.md
      notes: |
        Create migration plan:
        - Phased approach design
        - Timeline with milestones
        - Resource requirements
        - Risk mitigation strategies
        - Rollback procedures
        - Success criteria

    - agent: architect
      action: migration_architecture
      creates: migration-architecture.md
      notes: |
        Technical migration design:
        - Parallel run strategy
        - Data migration approach
        - API compatibility layers
        - Feature flags design
        - Monitoring strategy
        - Cutover procedures

    - agent: po
      validates: migration_plan
      uses: po-master-checklist
      notes: |
        Validate migration approach:
        - Business continuity assured
        - Risk acceptable
        - Timeline realistic
        - Resources available
        - Rollback viable

    - agent: dev
      action: create_migration_tools
      creates: migration-tools/
      notes: |
        Build migration utilities:
        - Data migration scripts
        - Compatibility adapters
        - Validation tools
        - Performance benchmarks
        - Rollback scripts
        - Monitoring setup

    - migration_phases:
        repeats: for_each_phase
        sequence:
          - agent: sm
            action: phase_planning
            creates: phase-plan.md
            notes: |
              Plan migration phase:
              - Define phase scope
              - Create detailed tasks
              - Set success criteria
              - Plan validation steps
              - Schedule activities

          - agent: dev
            action: implement_phase
            updates: system_components
            notes: |
              Phase implementation:
              - Migrate components
              - Update configurations
              - Modify integrations
              - Apply compatibility layers
              - Enable feature flags

          - agent: qa
            action: phase_validation
            validates: migrated_components
            notes: |
              Comprehensive testing:
              - Functional validation
              - Performance testing
              - Integration testing
              - Security scanning
              - Load testing
              - Compatibility checks

          - agent: dev
            action: parallel_run_validation
            validates: system_parity
            condition: parallel_run_phase
            notes: |
              Parallel run checks:
              - Compare outputs
              - Verify data consistency
              - Performance comparison
              - Error rate analysis
              - Resource utilization

          - agent: architect
            creates: adr-migration-phase.md
            action: document_phase_decisions
            notes: |
              Document phase outcomes:
              - Decisions made
              - Issues encountered
              - Workarounds applied
              - Performance impacts
              - Lessons learned

          - agent: po
            action: phase_signoff
            validates: phase_completion
            notes: |
              Phase acceptance:
              - Verify success criteria
              - Review test results
              - Approve continuation
              - Update stakeholders

    - agent: dev
      action: cutover_preparation
      creates: cutover-checklist.md
      notes: |
        Prepare for cutover:
        - Final validation checklist
        - Cutover procedures
        - Communication plan
        - Rollback procedures
        - Monitoring alerts
        - Support readiness

    - agent: architect
      action: final_validation
      validates: system_readiness
      notes: |
        Pre-cutover validation:
        - All components migrated
        - Performance acceptable
        - Security validated
        - Integrations working
        - Data integrity verified
        - Rollback tested

    - agent: po
      action: cutover_approval
      validates: go_live_readiness
      notes: |
        Final go-live approval:
        - All criteria met
        - Risks acceptable
        - Team ready
        - Communications sent
        - Support prepared

    - agent: dev
      creates: dev_journal_entry
      action: document_migration
      uses: create-dev-journal
      notes: |
        Document migration:
        - Migration summary
        - Issues and resolutions
        - Performance changes
        - Lessons learned
        - Update Memory Bank

    - agent: sm
      action: post_migration_review
      creates: migration-review.md
      notes: |
        Post-migration review:
        - Success metrics
        - Issues encountered
        - Performance analysis
        - Cost analysis
        - Team feedback
        - Improvement recommendations

    - workflow_end:
        action: migration_complete
        notes: |
          Migration completed!
          - System successfully migrated
          - Performance validated
          - Documentation updated
          - Team trained
          - Monitoring active

  flow_diagram: |
    ```mermaid
    graph TD
        A[Start: Migration] --> B[bmad-master: comprehensive init]
        B --> C[architect: current state]
        C --> D[architect: target state]
        D --> E[analyst: gap analysis]
        E --> F[pm: migration plan]
        F --> G[architect: migration architecture]
        G --> H[po: validate plan]

        H --> I[dev: build tools]
        I --> J[Migration Phases]
        J --> K[sm: phase planning]
        K --> L[dev: implement phase]
        L --> M[qa: validate phase]
        M --> N{Parallel run?}
        N -->|Yes| O[dev: validate parity]
        N -->|No| P[architect: document phase]
        O --> P
        P --> Q[po: phase signoff]
        Q --> R{More phases?}
        R -->|Yes| J
        R -->|No| S[dev: cutover prep]

        S --> T[architect: final validation]
        T --> U[po: cutover approval]
        U --> V[dev: document migration]
        V --> W[sm: post-migration review]
        W --> X[Migration Complete]

        style X fill:#90EE90
        style B fill:#FF6B6B
        style H fill:#FFD700
        style Q fill:#FFD700
        style U fill:#FFD700
        style L fill:#FFA500
        style M fill:#FFA500
        style T fill:#98FB98
    ```

  decision_guidance:
    when_to_use:
      - End-of-life technology
      - Performance limitations
      - Security vulnerabilities
      - Scalability requirements
      - Cost optimization needs
      - Compliance requirements

  handoff_prompts:
    analysis_complete: |
      Migration analysis complete:
      - Current components: {{component_count}}
      - Migration complexity: {{complexity_score}}/10
      - Estimated phases: {{phase_count}}
      - Risk level: {{risk_level}}

    plan_ready: |
      Migration plan created:
      - Total phases: {{phase_count}}
      - Timeline: {{timeline_weeks}} weeks
      - Rollback points: {{rollback_count}}
      - Success criteria defined

    phase_complete: |
      Phase {{phase_number}} complete:
      - Components migrated: {{component_count}}
      - Tests passed: {{test_pass_rate}}%
      - Performance delta: {{performance_change}}%
      - Issues resolved: {{issue_count}}

    cutover_ready: |
      System ready for cutover:
      - All phases complete
      - Validation passed: {{validation_score}}/100
      - Rollback tested: Yes
      - Team prepared: Yes

    complete: |
      Migration successful!
      - Cutover completed: {{cutover_duration}}
      - System stability: {{stability_metric}}
      - Performance improvement: {{perf_improvement}}%
      - Zero data loss confirmed
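The migration architecture above calls for feature flags and a parallel run without naming a flag system. A hedged sketch of a phased-cutover flag follows; the service names, keys, and percentage rollout model are all assumptions, not part of the framework:

```yaml
# Hypothetical feature-flag entry for routing a slice of traffic to the
# migrated service during a phase; names and percentages are invented.
flags:
  orders-service-v2:
    enabled: true
    rollout:
      strategy: percentage
      value: 10                    # start the phase at 10% of traffic
    fallback: orders-service-v1    # rollback target if parity checks fail
```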
@ -0,0 +1,268 @@
workflow:
  id: technical-debt
  name: Technical Debt Reduction Workflow
  description: >-
    Agent workflow for systematic technical debt reduction. Focuses on identifying,
    prioritizing, and safely eliminating technical debt while maintaining system
    stability and documenting improvements.
  type: maintenance
  project_types:
    - debt-reduction
    - code-cleanup
    - refactoring
    - modernization
    - system-health

  sequence:
    - step: session_initialization
      agent: bmad-master
      action: session_kickoff
      uses: session-kickoff
      notes: |
        Initialize with debt focus:
        - Review Memory Bank for known debt
        - Check previous debt reduction efforts
        - Review system patterns and pain points
        - Understand current technical principles
        - Check ADRs for debt-inducing decisions

    - agent: architect
      action: debt_assessment
      creates: debt-assessment.md
      uses: document-project
      notes: |
        Comprehensive debt analysis:
        - Code quality metrics analysis
        - Dependency audit (outdated/vulnerable)
        - Architecture anti-patterns
        - Performance bottlenecks
        - Security vulnerabilities
        - Testing gaps
        - Documentation debt

    - agent: analyst
      action: debt_prioritization
      creates: debt-priorities.md
      requires: debt-assessment.md
      notes: |
        Prioritize debt items:
        - Risk assessment (security, stability)
        - Business impact analysis
        - Effort estimation
        - Dependency mapping
        - Quick wins identification
        - Create debt backlog

    - agent: pm
      creates: debt-reduction-plan.md
      action: create_debt_sprint_plan
      requires: debt-priorities.md
      notes: |
        Plan debt reduction:
        - Group related debt items
        - Create epic for major debt areas
        - Define success metrics
        - Set realistic timelines
        - Plan incremental improvements
        - Balance with feature work

    - agent: architect
      creates: refactoring-strategy.md
      action: design_refactoring_approach
      requires: debt-reduction-plan.md
      notes: |
        Technical approach:
        - Define refactoring patterns
        - Plan migration strategies
        - Design new architecture
        - Create rollback plans
        - Define testing strategy
        - Document constraints

    - agent: po
      validates: debt_reduction_plan
      uses: po-master-checklist
      notes: |
        Validate approach:
        - Confirm business value
        - Verify risk mitigation
        - Approve timeline
        - Sign off on approach

    - agent: sm
      action: create_debt_stories
      creates: debt-stories/
      uses: create-next-story
      notes: |
        Story creation:
        - Break down into manageable stories
        - Include refactoring in each story
        - Add comprehensive test requirements
        - Define clear acceptance criteria
        - Include documentation updates

    - development_cycle:
        repeats: for_each_debt_story
        sequence:
          - agent: dev
            action: implement_refactoring
            updates: codebase
            notes: |
              Careful implementation:
              - Follow Boy Scout Rule
              - Maintain backward compatibility
              - Add missing tests first
              - Refactor incrementally
              - Update documentation

          - agent: qa
            action: regression_testing
            validates: refactored_code
            notes: |
              Thorough testing:
              - Full regression suite
              - Performance benchmarks
              - Security scanning
              - Integration tests
              - Load testing if needed

          - agent: architect
            creates: adr.md
            action: document_improvements
            condition: significant_change
            notes: |
              Document decisions:
              - Why refactoring was needed
              - Approach taken
              - Trade-offs made
              - Patterns introduced
              - Update Memory Bank

    - agent: dev
      creates: dev_journal_entry
      action: document_debt_reduction
      uses: create-dev-journal
      condition: milestone_reached
      notes: |
        Document progress:
        - Debt eliminated
        - Patterns improved
        - Metrics before/after
        - Lessons learned
        - Update Memory Bank

    - agent: analyst
      action: measure_improvement
      creates: improvement-metrics.md
      notes: |
        Quantify improvements:
        - Code quality metrics
        - Performance improvements
        - Test coverage increase
        - Build time reduction
        - Reduced vulnerabilities
        - Developer productivity

    - agent: sm
      action: debt_sprint_review
      uses: conduct-sprint-review
      creates: debt-review-summary.md
      notes: |
        Review improvements:
        - Present metrics
        - Demonstrate improvements
        - Show risk reduction
        - Document remaining debt
        - Plan next iteration

    - agent: bmad-master
      action: comprehensive_update
      uses: update-memory-bank
      notes: |
        Update all documentation:
        - New patterns to systemPatterns.md
        - Progress to progress.md
        - Remaining debt to activeContext.md
        - Update technical context

    - workflow_end:
        action: debt_reduction_complete
        notes: |
          Debt reduction cycle complete!
          - Metrics improved
          - Documentation updated
          - System more maintainable
          - Team knowledge increased

  flow_diagram: |
    ```mermaid
    graph TD
        A[Start: Debt Reduction] --> B[bmad-master: session init]
        B --> C[architect: assess debt]
        C --> D[analyst: prioritize debt]
        D --> E[pm: create plan]
        E --> F[architect: refactoring strategy]
        F --> G[po: validate approach]

        G --> H[sm: create stories]
        H --> I[Development Cycle]
        I --> J[dev: implement refactoring]
        J --> K[qa: regression testing]
        K --> L{Significant change?}
        L -->|Yes| M[architect: create ADR]
        L -->|No| N{More stories?}
        M --> N
        N -->|Yes| I
        N -->|No| O[dev: document progress]

        O --> P[analyst: measure improvement]
        P --> Q[sm: debt sprint review]
        Q --> R[bmad-master: update Memory Bank]
        R --> S[Debt Reduction Complete]

        style S fill:#90EE90
        style B fill:#DDA0DD
        style C fill:#FFB6C1
        style D fill:#FFB6C1
        style P fill:#98FB98
        style Q fill:#ADD8E6
        style R fill:#DDA0DD
    ```

  decision_guidance:
    when_to_use:
      - System becoming hard to maintain
      - Frequent bugs in certain areas
      - Performance degradation
      - Security vulnerabilities accumulating
      - Developer velocity decreasing
      - Before major feature additions

  handoff_prompts:
    assessment_complete: |
      Debt assessment complete:
      - Critical items: {{critical_count}}
      - High priority: {{high_count}}
      - Total debt items: {{total_count}}
      - Estimated effort: {{total_effort}} hours

    plan_ready: |
      Debt reduction plan created:
      - Sprint 1 focus: {{sprint1_focus}}
      - Quick wins: {{quick_wins_count}}
      - Risk reduction: {{risk_reduction_percentage}}%

    story_complete: |
      Debt story {{story_id}} complete:
      - Code quality: {{before_score}} → {{after_score}}
      - Test coverage: {{before_coverage}}% → {{after_coverage}}%
      - Performance: {{improvement_percentage}}% faster

    review_summary: |
      Debt reduction sprint complete:
      - Stories completed: {{completed_count}}
      - Debt eliminated: {{debt_points}} points
      - System health: {{health_improvement}}% better
      - Remaining debt: {{remaining_debt}} items

    complete: "Technical debt reduction cycle complete. System health improved by {{overall_improvement}}%."
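# Note: the {{placeholders}} above are filled in at runtime. A rendered
# assessment_complete handoff could look like this (values are illustrative):
#
#   Debt assessment complete:
#   - Critical items: 4
#   - High priority: 11
#   - Total debt items: 37
#   - Estimated effort: 120 hours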
@ -26,6 +26,27 @@ If you have just completed an MVP with BMad, and you want to continue with post-

## The Complete Brownfield Workflow

### Session Initialization (NEW - Critical First Step)

Before starting any brownfield work, proper context initialization is essential:

```bash
@bmad-master
*session-kickoff
```

This will:

- Review existing Memory Bank files if available
- Check recent Dev Journal entries for current state
- Load relevant ADRs for architectural decisions
- Establish proper context for brownfield work

**Why This Matters for Brownfield**:

- Prevents duplicate work or conflicting decisions
- Ensures awareness of recent changes and patterns
- Maintains consistency with established practices
- Provides AI agents with critical historical context

### Choose Your Approach

#### Approach A: PRD-First (Recommended if adding very large and complex new features, single or multiple epics or massive changes)

@ -81,6 +102,7 @@ The analyst will:

- **Focus on relevant modules** identified in PRD or your description
- **Skip unrelated areas** to keep docs lean
- **Generate ONE architecture document** for all environments
- **Initialize Memory Bank** for the project if it does not already exist

The analyst creates:

@ -88,6 +110,13 @@ The analyst creates:

- **Covers all system aspects** in a single file
- **Easy to copy and save** as `docs/project-architecture.md`
- **Can be sharded later** in IDE if desired
- **Memory Bank initialization** with project context and patterns

**Memory Bank Integration** (a sketch of one such file follows this list):

- Creates `docs/memory-bank/projectbrief.md` with project overview
- Populates `docs/memory-bank/techContext.md` with discovered stack
- Documents patterns in `docs/memory-bank/systemPatterns.md`
- Sets up `docs/memory-bank/activeContext.md` for ongoing work
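As a rough sketch (headings here are illustrative; the actual structure comes from the Memory Bank templates), a freshly initialized `docs/memory-bank/activeContext.md` might look like:

```markdown
# Active Context

## Current Focus
Add payment processing to the user service (brownfield enhancement).

## Recent Changes
- Project architecture documented in docs/project-architecture.md

## Next Steps
- Draft the brownfield PRD for payment processing
- Identify integration points in the user service
```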
For example, if you say "Add payment processing to user service":

@ -221,6 +250,14 @@ Follow the enhanced IDE Development Workflow:

   - **SM** creates stories with integration awareness
   - **Dev** implements with respect for existing code
   - **QA** reviews for compatibility and improvements
   - **Dev Journals** track daily progress and decisions
   - **Sprint Reviews** ensure alignment and quality

4. **Continuous Documentation**:
   - Create Dev Journal entries after significant work
   - Update Memory Bank's `activeContext.md` with progress
   - Document decisions in ADRs when making architectural changes
   - Run sprint reviews to validate progress against goals

## Brownfield Best Practices

@ -268,6 +305,17 @@ Document:

- New patterns introduced
- Deprecation notices

### 6. Maintain Memory Bank (NEW)

Keep project context current (an example session follows this list):

- **Session Start**: Always run `*session-kickoff` to load context
- **During Work**: Update `activeContext.md` with current state
- **Pattern Discovery**: Document new patterns in `systemPatterns.md`
- **Progress Tracking**: Update `progress.md` with completed features
- **Tech Changes**: Keep `techContext.md` current with stack updates
- **Decision Records**: Create ADRs for significant architectural decisions
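Put together, a typical brownfield session brackets the work with the two Memory Bank commands (both are the commands documented in this guide):

```bash
# Start of session: load context
@bmad-master → *session-kickoff

# ...feature work, Dev Journal entries, ADRs as needed...

# End of session: persist patterns, progress, and active context
@bmad-master → *update-memory-bank
```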
## Common Brownfield Scenarios

### Scenario 1: Adding a New Feature

@ -321,6 +369,9 @@ Document:

### Brownfield-Specific Commands

```bash
# Initialize session with context (ALWAYS DO FIRST)
@bmad-master → *session-kickoff

# Document existing project
@analyst → *document-project

@ -335,6 +386,13 @@ Document:

# Single story creation
@pm → *brownfield-create-story

# Memory Bank operations
@bmad-master → *initialize-memory-bank
@bmad-master → *update-memory-bank

# Sprint ceremonies
@bmad-master → *conduct-sprint-review
```

### Decision Tree
@ -17,21 +17,21 @@ workflow:
      - brainstorming_session
      - game_research_prompt
      - player_research
    notes: "Start with brainstorming game concepts, then create comprehensive game brief. SAVE OUTPUT: Copy final game-brief.md to your project's docs/design/ folder."
  - agent: game-designer
    creates: game-design-doc.md
    requires: game-brief.md
    optional_steps:
      - competitive_analysis
      - technical_research
    notes: "Create detailed Game Design Document using game-design-doc-tmpl. Defines all gameplay mechanics, progression, and technical requirements. SAVE OUTPUT: Copy final game-design-doc.md to your project's docs/design/ folder."
  - agent: game-designer
    creates: level-design-doc.md
    requires: game-design-doc.md
    optional_steps:
      - level_prototyping
      - difficulty_analysis
    notes: "Create level design framework using level-design-doc-tmpl. Establishes content creation guidelines and performance requirements. SAVE OUTPUT: Copy final level-design-doc.md to your project's docs/design/ folder."
  - agent: solution-architect
    creates: game-architecture.md
    requires:

@ -41,7 +41,7 @@ workflow:
      - technical_research_prompt
      - performance_analysis
      - platform_research
    notes: "Create comprehensive technical architecture using game-architecture-tmpl. Defines Phaser 3 systems, performance optimization, and code structure. SAVE OUTPUT: Copy final game-architecture.md to your project's docs/architecture/ folder."
  - agent: game-designer
    validates: design_consistency
    requires: all_design_documents

@ -66,7 +66,7 @@ workflow:
    optional_steps:
      - quick_brainstorming
      - concept_validation
    notes: "Create focused game brief for prototype. Emphasize core mechanics and immediate playability. SAVE OUTPUT: Copy final game-brief.md to your project's docs/ folder."
  - agent: game-designer
    creates: prototype-design.md
    uses: create-doc prototype-design OR create-game-story

@ -44,7 +44,7 @@ workflow:
    notes: Implement stories in priority order. Test frequently and adjust design based on what feels fun. Document discoveries.
  workflow_end:
    action: prototype_evaluation
    notes: "Prototype complete. Evaluate core mechanics, gather feedback, and decide next steps: iterate, expand, or archive."
  game_jam_sequence:
    - step: jam_concept
      agent: game-designer
@ -17,21 +17,21 @@ workflow:
      - brainstorming_session
      - game_research_prompt
      - player_research
    notes: "Start with brainstorming game concepts, then create comprehensive game brief. SAVE OUTPUT: Copy final game-brief.md to your project's docs/design/ folder."
  - agent: game-designer
    creates: game-design-doc.md
    requires: game-brief.md
    optional_steps:
      - competitive_analysis
      - technical_research
    notes: "Create detailed Game Design Document using game-design-doc-tmpl. Defines all gameplay mechanics, progression, and technical requirements. SAVE OUTPUT: Copy final game-design-doc.md to your project's docs/design/ folder."
  - agent: game-designer
    creates: level-design-doc.md
    requires: game-design-doc.md
    optional_steps:
      - level_prototyping
      - difficulty_analysis
    notes: "Create level design framework using level-design-doc-tmpl. Establishes content creation guidelines and performance requirements. SAVE OUTPUT: Copy final level-design-doc.md to your project's docs/design/ folder."
  - agent: solution-architect
    creates: game-architecture.md
    requires:

@ -41,7 +41,7 @@ workflow:
      - technical_research_prompt
      - performance_analysis
      - platform_research
    notes: "Create comprehensive technical architecture using game-architecture-tmpl. Defines Unity systems, performance optimization, and code structure. SAVE OUTPUT: Copy final game-architecture.md to your project's docs/architecture/ folder."
  - agent: game-designer
    validates: design_consistency
    requires: all_design_documents

@ -66,7 +66,7 @@ workflow:
    optional_steps:
      - quick_brainstorming
      - concept_validation
    notes: "Create focused game brief for prototype. Emphasize core mechanics and immediate playability. SAVE OUTPUT: Copy final game-brief.md to your project's docs/ folder."
  - agent: game-designer
    creates: prototype-design.md
    uses: create-doc prototype-design OR create-game-story

@ -44,7 +44,7 @@ workflow:
    notes: Implement stories in priority order. Test frequently in the Unity Editor and adjust design based on what feels fun. Document discoveries.
  workflow_end:
    action: prototype_evaluation
    notes: "Prototype complete. Evaluate core mechanics, gather feedback, and decide next steps: iterate, expand, or archive."
  game_jam_sequence:
    - step: jam_concept
      agent: game-designer
@ -1,134 +0,0 @@
# Cline's Memory Bank

I am Cline, an expert software engineer with a unique characteristic: my memory resets completely between sessions. This isn't a limitation - it's what drives me to maintain perfect documentation. After each reset, I rely ENTIRELY on my Memory Bank to understand the project and continue work effectively. I MUST read ALL memory bank files at the start of EVERY task - this is not optional.

## Memory Bank Structure

The Memory Bank consists of core files and optional context files, all in Markdown format. Files build upon each other in a clear hierarchy:

```mermaid
flowchart TD
    PB[projectbrief.md] --> PC[productContext.md]
    PB --> SP[systemPatterns.md]
    PB --> TC[techContext.md]

    PC --> AC[activeContext.md]
    SP --> AC
    TC --> AC

    AC --> P[progress.md]
```

### Core Files (Required)

1. `projectbrief.md`

   - Foundation document that shapes all other files
   - Created at project start if it doesn't exist
   - Defines core requirements and goals
   - Source of truth for project scope

2. `productContext.md`

   - Why this project exists
   - Problems it solves
   - How it should work
   - User experience goals

3. `activeContext.md`

   - Current work focus
   - Recent changes
   - Next steps
   - Active decisions and considerations
   - Important patterns and preferences
   - Learnings and project insights

4. `systemPatterns.md`

   - System architecture
   - Key technical decisions
   - Design patterns in use
   - Component relationships
   - Critical implementation paths

5. `techContext.md`

   - Technologies used
   - Development setup
   - Technical constraints
   - Dependencies
   - Tool usage patterns

6. `progress.md`
   - What works
   - What's left to build
   - Current status
   - Known issues
   - Evolution of project decisions

### Additional Context

Create additional files/folders within memory-bank/ when they help organize (one possible layout follows this list):

- Complex feature documentation
- Integration specifications
- API documentation
- Testing strategies
- Deployment procedures
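A layout following this structure might look like the tree below (the `integrations/` subfolder is a hypothetical example of such an additional folder):

```
memory-bank/
├── projectbrief.md
├── productContext.md
├── systemPatterns.md
├── techContext.md
├── activeContext.md
├── progress.md
└── integrations/
    └── payment-gateway.md
```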
## Core Workflows

### Plan Mode

```mermaid
flowchart TD
    Start[Start] --> ReadFiles[Read Memory Bank]
    ReadFiles --> CheckFiles{Files Complete?}

    CheckFiles -->|No| Plan[Create Plan]
    Plan --> Document[Document in Chat]

    CheckFiles -->|Yes| Verify[Verify Context]
    Verify --> Strategy[Develop Strategy]
    Strategy --> Present[Present Approach]
```

### Act Mode

```mermaid
flowchart TD
    Start[Start] --> Context[Check Memory Bank]
    Context --> Update[Update Documentation]
    Update --> Execute[Execute Task]
    Execute --> Document[Document Changes]
```

## Documentation Updates

Memory Bank updates occur when:

1. Discovering new project patterns
2. After implementing significant changes
3. When user requests with **update memory bank** (MUST review ALL files)
4. When context needs clarification

```mermaid
flowchart TD
    Start[Update Process]

    subgraph Process
        P1[Review ALL Files]
        P2[Document Current State]
        P3[Clarify Next Steps]
        P4[Document Insights & Patterns]

        P1 --> P2 --> P3 --> P4
    end

    Start --> Process
```

Note: When triggered by **update memory bank**, I MUST review every memory bank file, even if some don't require updates. Focus particularly on activeContext.md and progress.md as they track current state.

REMEMBER: After every memory reset, I begin completely fresh. The Memory Bank is my only link to previous work. It must be maintained with precision and clarity, as my effectiveness depends entirely on its accuracy.

@ -1,163 +0,0 @@
# Project Guidelines

> **Scope:** This document outlines high-level, project-specific guidelines, policies, and standards unique to this project. It serves as the primary entry point to the rule system, linking to more detailed principles in other files. For universal coding principles, see `02-CoreCodingPrinciples.md`. For mandatory file structure, see `03-ProjectScaffoldingRules.md`. For cloud-native development, see `04-TwelveFactorApp.md`. For our service architecture, see `05-MicroServiceOrientedArchitecture.md`.

## Documentation Requirements

- Update relevant documentation in /docs when modifying features
- Keep README.md in sync with new capabilities
- Maintain changelog entries in CHANGELOG.md

## Documentation Discipline

- All major changes must be reflected in README, ADRs, dev journals, and changelogs.
- Use and maintain templates for sprint reviews and journal entries.
- Document onboarding steps, environment requirements, and common pitfalls in README files.

## Accessibility & Inclusion

- All UI components must meet WCAG 2.1 AA accessibility standards.
- Ensure sufficient color contrast, keyboard navigation, and screen reader support.
- Use inclusive language in documentation and user-facing text.
- Accessibility must be tested as part of code review and release.

## Architecture

- This project follows a **Microservice-Oriented Architecture**.
- All development must adhere to the principles outlined in `04-TwelveFactorApp.md` for cloud-native compatibility and `05-MicroServiceOrientedArchitecture.md` for service design and implementation patterns.

## Architecture Decision Records

Create ADRs in /docs/adr for:

- Major dependency changes
- Architectural pattern changes
- New integration patterns
- Database schema changes

Follow the template in /docs/adr/template.md

## Code Style & Patterns

- Generate API clients using OpenAPI Generator (an example invocation follows this list)
- Use TypeScript axios template
- Place generated code in /src/generated
- Prefer composition over inheritance
- Use repository pattern for data access
- Follow error handling pattern in /src/utils/errors.ts
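An invocation satisfying these rules could look like the following (the CLI wrapper and flags are a sketch; adapt to your toolchain):

```bash
# Generate a TypeScript axios client from the OpenAPI spec into /src/generated
npx @openapitools/openapi-generator-cli generate \
  -i docs/api/openapi.yaml \
  -g typescript-axios \
  -o src/generated
```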
## REST API Implementation

- REST endpoints must follow the `CustomEndpointDelegate` pattern and reside in package-scoped Groovy files.
- Always declare the package at the top of each endpoint file.
- Configure REST script roots and packages via environment variables for auto-discovery.

## Testing Standards

- Unit tests required for business logic
- Integration tests for API endpoints
- E2E tests for critical user flows

## Testing & Data Utilities

- All critical endpoints and data utilities must have integration tests.
- Synthetic data scripts must be idempotent, robust, and never modify migration tracking tables.
- Document the behavior and safety of all data utility scripts.

## Security & Performance Considerations

- **Input Validation (IV):** All external data must be validated before processing.
- **Resource Management (RM):** Close connections and free resources appropriately.
- **Constants Over Magic Values (CMV):** No magic strings or numbers. Use named constants.
- **Security-First Thinking (SFT):** Implement proper authentication, authorization, and data protection.
- **Performance Awareness (PA):** Consider computational complexity and resource usage.
- Rate limit all API endpoints
- Always use row-level security (RLS)
- Require CAPTCHA on all auth routes/signup pages
- If using a hosting solution like Vercel, enable attack challenge on their WAF
- DO NOT read or modify, without prior approval by the user:
  - .env files
  - `**/config/secrets.*`
  - Any file containing API keys or credentials

## Security & Quality Automation

- Integrate Semgrep for static analysis and security scanning in all projects.
- Use MegaLinter (or equivalent) for multi-language linting and formatting.
- Supplement with language/framework-specific linters (e.g., ESLint for JS/TS, flake8 for Python, RuboCop for Ruby).
- All linting and static analysis tools must be run in CI/CD pipelines; merges are blocked on failure.
- Linter and static analysis configurations must be version-controlled and documented at the project root.
- Regularly review and update linter/analysis rules to address new threats and maintain code quality.
- Document and version-control all ignore rules and linter configs.
- CI checks must pass before merging any code.

## Miscellaneous recommendations

- Always prefer simple solutions
- Avoid duplication of code whenever possible, which means checking whether other areas of the codebase already have similar code and functionality
- Write code that takes into account the different environments: dev, test, and prod
- Only make changes that are requested, or that you are confident are well understood and related to the change being requested
- When fixing an issue or bug, do not introduce a new pattern or technology without first exhausting all options for the existing implementation. If you do introduce one, remove the old implementation afterwards so there is no duplicate logic.
- Keep the codebase very clean and organized
- Avoid writing scripts in files if possible, especially if the script is likely only to be run once
- Avoid having files over 200-300 lines

## Project Structure

- Avoid unnecessary plugin or build complexity; prefer script-based, automatable deployment.
- Mock data is only needed for tests; never mock data for dev or prod
- Never add stubbing or fake data patterns to code that affects the dev or prod environments
- Never overwrite my .env file without first asking and confirming

## Automation & CI/CD

- All code must pass linting and formatting checks in CI before merge.
- CI must run all tests (unit, integration, E2E) and block merges on failure.
- Add new linters or formatters only with team consensus.

## Local Development Environment

- Use Podman or Docker and Ansible for local environment setup.
- Provide wrapper scripts for starting, stopping, and resetting the environment; avoid direct Ansible or container CLI usage.
- Ensure all environment configuration is version-controlled.

## Branching & Release Policy

- Follow [your branching model, e.g. Git Flow or trunk-based] for all work.
- Use semantic versioning for releases.
- Release branches must be code-frozen and pass all CI checks before tagging.

## Incident Response

- Maintain an incident log documenting bugs, outages, and recovery actions.
- After any incident, hold a retrospective and update runbooks as needed.
- Critical incidents must be reviewed in the next team meeting.

## Data Privacy & Compliance

- All data handling must comply with applicable privacy laws (e.g., GDPR, CCPA).
- Never log or store sensitive data insecurely.
- Review and document data flows for compliance annually.

## Database Migration & Change Management

- Use a dedicated, automated migration tool (e.g., Liquibase, Flyway) for all schema changes.
- Store all migration scripts under version control, alongside application code.
- All environments (dev, test, prod) must be migrated using the same process and scripts.
- Manual, ad-hoc schema changes are prohibited.
- All migrations must be documented with rationale and expected outcomes (a sample changeset follows this list).
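For Liquibase specifically, a changeset meeting these rules might look like this (table and column names are illustrative):

```yaml
databaseChangeLog:
  - changeSet:
      id: add-user-email-column
      author: dev-team
      comment: "Rationale: support email-based login; expected outcome: new nullable email column"
      changes:
        - addColumn:
            tableName: app_user
            columns:
              - column:
                  name: email
                  type: varchar(255)
      rollback:
        - dropColumn:
            tableName: app_user
            columnName: email
```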
## Database Management & Documentation

- Maintain an up-to-date Entity Relationship Diagram (ERD).
- Use templates for documenting schema changes, migrations, and rationale.
- Document all reference data and non-obvious constraints.
- Maintain a changelog for all database changes.
- Review and update database documentation as part of the development workflow.

## Database Naming Conventions

- Use clear, consistent, and project-wide naming conventions for tables, columns, indexes, and constraints.
- Prefer snake_case for all identifiers.
- Prefix/suffix conventions must be documented (e.g., `tbl_` for tables, `_fk` for foreign keys).
- Avoid reserved words and ambiguous abbreviations.
- Enforce naming conventions in code review and automated linting where possible.

@ -1,82 +0,0 @@
# Core Coding Principles

> **Scope:** This document defines the core, universal, and project-agnostic engineering principles that apply to all development work. These are the fundamental rules of good software craftsmanship, independent of any specific project.

## Core Coding Principles

- [SF] **Simplicity First:** Always choose the simplest viable solution. Complex patterns or architectures require explicit justification.
- [RP] **Readability Priority:** Code must be immediately understandable by both humans and AI during future modifications.
- [DM] **Dependency Minimalism:** No new libraries or frameworks without explicit request or compelling justification.
- [ISA] **Industry Standards Adherence:** Follow established conventions for the relevant language and tech stack.
- [SD] **Strategic Documentation:** Comment only complex logic or critical functions. Avoid documenting the obvious.
- [TDT] **Test-Driven Thinking:** Design all code to be easily testable from inception.

## Dependency Management

- [DM-1] Review third-party dependencies for vulnerabilities at least quarterly.
- [DM-2] Prefer signed or verified packages.
- [DM-3] Remove unused or outdated dependencies promptly.
- [DM-4] Document dependency updates in the changelog.

## Coding workflow preferences

- [WF-FOCUS] Focus on the areas of code relevant to the task
- [WF-SCOPE] Do not touch code that is unrelated to the task
- [WF-TEST] Write thorough tests for all major functionality
- [WF-ARCH] Avoid making major changes to the patterns and architecture of how a feature works, after it has shown to work well, unless explicitly instructed
- [WF-IMPACT] Always think about what other methods and areas of code might be affected by code changes

## Workflow Standards

- [AC] **Atomic Changes:** Make small, self-contained modifications to improve traceability and rollback capability.
- [CD] **Commit Discipline:** Recommend regular commits with semantic messages using conventional commit format:

  ```
  type(scope): concise description

  [optional body with details]

  [optional footer with breaking changes/issue references]
  ```

  Types: feat, fix, docs, style, refactor, perf, test, chore
  Adhere to the ConventionalCommit specification: <https://www.conventionalcommits.org/en/v1.0.0/#specification>
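  A commit message following this format might look like (scope and details are illustrative):

  ```
  feat(auth): add rate limiting to login endpoint

  Applies a sliding-window limiter to the login route to curb
  credential-stuffing attempts.

  Refs: #142
  ```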
- [TR] **Transparent Reasoning:** When generating code, explicitly reference which global rules influenced decisions.
- [CWM] **Context Window Management:** Be mindful of AI context limitations. Suggest new sessions when necessary.

## Code Quality Guarantees

- [DRY] **DRY Principle:** No duplicate code. Reuse or extend existing functionality.
- [CA] **Clean Architecture:** Generate cleanly formatted, logically structured code with consistent patterns.
- [REH] **Robust Error Handling:** Integrate appropriate error handling for all edge cases and external interactions.
- [CSD] **Code Smell Detection:** Proactively identify and suggest refactoring for:
  - Functions exceeding 30 lines
  - Files exceeding 300 lines
  - Nested conditionals beyond 2 levels
  - Classes with more than 5 public methods

## Security & Performance Considerations

- [IV] **Input Validation:** All external data must be validated before processing.
- [RM] **Resource Management:** Close connections and free resources appropriately.
- [CMV] **Constants Over Magic Values:** No magic strings or numbers. Use named constants.
- [SFT] **Security-First Thinking:** Implement proper authentication, authorization, and data protection.
- [PA] **Performance Awareness:** Consider computational complexity and resource usage.
- [RL] Rate limit all API endpoints.
- [RLS] Always use row-level security (RLS).
- [CAP] Require CAPTCHA on all auth routes/signup pages.
- [WAF] If using a hosting solution like Vercel, enable attack challenge on their WAF.
- [SEC-1] **DO NOT** read or modify, without prior approval by the user:
  - .env files
  - `**/config/secrets.*`
  - Any file containing API keys or credentials

## AI Communication Guidelines

- [RAT] **Rule Application Tracking:** When applying rules, tag with the abbreviation in brackets (e.g., [SF], [DRY]).
- [EDC] **Explanation Depth Control:** Scale explanation detail based on complexity, from brief to comprehensive.
- [AS] **Alternative Suggestions:** When relevant, offer alternative approaches with pros/cons.
- [KBT] **Knowledge Boundary Transparency:** Clearly communicate when a request exceeds AI capabilities or project context.

@ -1,35 +0,0 @@
# Project Scaffolding Rules

> **Scope:** This document defines the mandatory file and folder structure for the project. Adherence to this structure is required to ensure consistency and support automated tooling.

## Project structure

The project should include the following files and folders (consolidated as a tree after this list):

- a .clineignore file
- a .gitignore file primed for a regular project managed with CLINE in Microsoft VSCode
- a generic readme.md file
- a blank .gitattributes file
- a license file

- a /.clinerules/rules folder to include all project-specific rules for the CLINE extension
- a /.clinerules/workflows folder to include all project-specific workflows for the CLINE extension
- a /.windsurf/rules/ folder to include all project-specific rules for the Windsurf extension
- a /.windsurf/workflows/ folder to include all project-specific workflows for the Windsurf extension

- a docs/adr folder to include all project-specific Architectural Decision Records (ADRs)
- a docs/devJournal folder to include all project-specific development journals
- a docs/roadmap folder to include the project roadmap and feature descriptions
- a docs/roadmap/features folder to include all project-specific features and their technical, functional, and non-functional requirements (including UX/UI)

- a src/app folder to include the frontend components of the solution
- a src/api folder to include the backend components of the solution
- a src/utils folder to include the shared utility components of the solution
- a src/tests folder to include the test components of the solution
- a src/tests/e2e folder to include the end-to-end test components of the solution
- a src/tests/postman folder to include the Postman tests for the API components of the solution

- a db folder to include the database components of the solution
- a db/liquibase folder to include the Liquibase components of the solution

- a local-dev-setup folder to include the local development setup components of the solution
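Taken together, the required layout looks like this (the tree simply restates the rules above):

```
.
├── .clineignore
├── .gitattributes
├── .gitignore
├── LICENSE
├── README.md
├── .clinerules/
│   ├── rules/
│   └── workflows/
├── .windsurf/
│   ├── rules/
│   └── workflows/
├── docs/
│   ├── adr/
│   ├── devJournal/
│   └── roadmap/
│       └── features/
├── src/
│   ├── app/
│   ├── api/
│   ├── utils/
│   └── tests/
│       ├── e2e/
│       └── postman/
├── db/
│   └── liquibase/
└── local-dev-setup/
```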

@ -1,99 +0,0 @@
_Scope: This document provides the definitive, consolidated set of rules based on the Twelve-Factor App methodology. These principles are mandatory for ensuring our applications are built as scalable, resilient, and maintainable cloud-native services._

# The Consolidated Twelve-Factor App Rules for an AI Agent

**I. Codebase**

- A single, version-controlled codebase (e.g., in Git) must represent one and only one application.
- All code you generate, manage, or refactor for a specific application must belong to this single codebase.
- Shared functionality across applications must be factored into versioned libraries and managed via a dependency manager.
- This single codebase is used to produce multiple deploys (e.g., development, staging, production).

**II. Dependencies**

- You must explicitly declare all application dependencies via a manifest file (e.g., `requirements.txt`, `package.json`, `pom.xml`).
- Never rely on the implicit existence of system-wide packages or tools. The application must run in an isolated environment where only explicitly declared dependencies are available.

**III. Config**

- A strict separation between code and configuration must be enforced.
- All configuration that varies between deploys (credentials, resource handles, hostnames) must be read from environment variables.
- Never hardcode environment-specific values in the source code you generate. The codebase must be runnable anywhere provided the correct environment variables are set.
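In practice this means the same build runs anywhere, with each deploy configured purely through its environment; for example (variable names and values are illustrative):

```bash
# Same release, different deploy - only the environment changes
export DATABASE_URL="postgres://db.staging.internal:5432/app"
export CACHE_URL="redis://cache.staging.internal:6379"
export LOG_LEVEL="debug"
./start-release.sh   # hypothetical launcher that reads all config from env vars
```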
**IV. Backing Services**

- All backing services (databases, message queues, caches, external APIs, etc.) must be treated as attached, swappable resources.
- Connect to all backing services via locators/credentials stored in the configuration (environment variables). The code must be agnostic to whether a service is local or third-party.

**V. Build, Release, Run**

- Maintain a strict, three-stage separation:
  - **Build:** Converts the code repo into an executable bundle.
  - **Release:** Combines the build with environment-specific config.
  - **Run:** Executes the release in the target environment.
- Releases must be immutable and have unique IDs. Any change to code or config must create a new release. You must not generate code that attempts to modify itself at runtime.

**VI. Processes**

- Design the application to execute as one or more stateless, share-nothing processes.
- Any data that needs to persist must be stored in a stateful backing service (e.g., a database). Never assume that local memory or disk state is available across requests or between process restarts.

**VII. Port Binding**

- The application must be self-contained and export its services (e.g., HTTP) by binding to a port specified via configuration. Do not rely on runtime injection of a webserver (e.g., as a module in Apache).

**VIII. Concurrency**

- Design the application to scale out horizontally by adding more concurrent processes.
- Assign different workload types to different process types (e.g., `web`, `worker`).
- Rely on a process manager (e.g., systemd, Foreman, Kubernetes) for process lifecycle management, logging, and crash recovery.

**IX. Disposability**

- Processes must be disposable, meaning they can be started or stopped at a moment's notice.
- Strive for minimal startup time to facilitate fast elastic scaling and deployments.
- Ensure graceful shutdown on `SIGTERM`, finishing any in-progress work before exiting.
- Design processes to be robust against sudden death (crash-only design).

**X. Dev/Prod Parity**

- Keep development, staging, and production environments as similar as possible.
- This applies to the type and version of the programming language, system tooling, and all backing services.

**XI. Logs**

- Treat logs as event streams. Never write to or manage log files directly from the application.
- Each process must write its event stream, unbuffered, to standard output (`stdout`).
- The execution environment is responsible for collecting, aggregating, and routing these log streams for storage and analysis.

**XII. Admin Processes**

- Run administrative and management tasks (e.g., database migrations, one-off scripts) as one-off processes in an environment identical to the main application's long-running processes.
- Admin scripts must be shipped with the application code and use the same dependency and configuration management to avoid synchronization issues.

# Additional Consolidated Project Rules

**Onboarding & Knowledge Transfer**

- Maintain up-to-date onboarding guides and “How To” docs for new contributors and AI agents.
- All major workflows must have step-by-step documentation.
- Encourage new team members to suggest improvements to onboarding materials.

**AI/Agent Safeguards**

- All AI-generated code must be reviewed by a human before deployment to production.
- Escalate ambiguous or risky decisions to a human for approval.
- Log all significant AI-suggested changes for auditability.
- Never overwrite an `.env` file without first asking and confirming.

**Continuous Improvement**

- Hold regular retrospectives to review rules, workflows, and documentation.
- Encourage all contributors to provide feedback and suggest improvements.
- Update rules and workflows based on lessons learned.

**Environmental Sustainability**

- Optimize compute resources and minimize waste in infrastructure choices.
- Prefer energy-efficient solutions where practical.
- Consider environmental impact in all major technical decisions.

@ -1,35 +0,0 @@
_Scope: This document outlines the specific patterns and strategies for implementing our Microservice-Oriented Architecture, based on Chris Richardson's "Microservices Patterns". It builds upon the foundational principles in `04-TwelveFactorApp.md` and provides a detailed guide for service design, decomposition, communication, and data management._
|
|
||||||
|
|
||||||
Observe the principles set by the book "Microservices Patterns" by Chris Richardson.
|
|
||||||
|
|
||||||
### Microservice Patterns & Principles (with Trigram Codes)
|
|
||||||
|
|
||||||
- [MON] **Monolithic Architecture:** Structures an application as a single, unified deployable unit. Good for simple applications, but becomes "monolithic hell" as complexity grows.
|
|
||||||
- [MSA] **Microservice Architecture:** Structures an application as a collection of small, autonomous, and loosely coupled services. This is the core pattern the rest of the book builds upon.
|
|
||||||
- [DBC] **Decompose by Business Capability:** Define services based on what a business _does_ (e.g., Order Management, Inventory Management) to create stable service boundaries.
|
|
||||||
- [DSD] **Decompose by Subdomain:** Use Domain-Driven Design (DDD) to define services around specific problem subdomains, aligning service boundaries with the business domain model.
|
|
||||||
- [RPI] **Remote Procedure Invocation:** A client invokes a service using a synchronous, request/response protocol like REST or gRPC. Simple and familiar but creates tight coupling and can reduce availability.
|
|
||||||
- [MSG] **Messaging:** Services communicate asynchronously by exchanging messages via a message broker. This promotes loose coupling and improves resilience.
|
|
||||||
- [CBR] **Circuit Breaker:** Prevents a network or service failure from cascading. After a number of consecutive failures, the breaker trips, and further calls fail immediately.
|
|
||||||
- [SDC] **Service Discovery:** Patterns for how a client service can find the network location of a service instance in a dynamic cloud environment (self/3rd party registration, client/server-side discovery).
|
|
||||||
- [DPS] **Database per Service:** Each microservice owns its own data and is solely responsible for it. Fundamental to loose coupling; requires new transaction management strategies.
|
|
||||||
- [SAG] **Saga:** Master pattern for managing data consistency across services without distributed transactions. Sequence of local transactions, each triggering the next via events/messages, with compensating transactions on failure.
|
|
||||||
- [OUT] **Transactional Outbox / Polling Publisher / Transaction Log Tailing:** Reliably publish messages/events as part of a local database transaction, ensuring no messages are lost if a service crashes after updating its database but before sending the message.
|
|
||||||
- [DOM] **Domain Model:** Classic object-oriented approach with classes containing both state and behaviour. Preferred for complex logic.
|
|
||||||
- [TSF] **Transaction Script:** Procedural approach where a single procedure handles a single request. Simpler, but unmanageable for complex logic.
|
|
||||||
- [AGG] **Aggregate:** A cluster of related domain objects treated as a single unit, with a root entity. Transactions only ever create or update a single aggregate.
|
|
||||||
- [DME] **Domain Events:** Aggregates publish events when their state changes. Foundation for event-driven architectures, sagas, and data replication.
|
|
||||||
- [EVS] **Event Sourcing:** Store the sequence of state-changing events rather than the current state. The current state is derived by replaying events, providing a reliable audit log and simplifying event publishing.
|
|
||||||
- [APC] **API Composition:** A client (or API Gateway) retrieves data from multiple services and joins it in memory. Simple for basic queries, inefficient for complex joins across large datasets.
|
|
||||||
- [CQR] **Command Query Responsibility Segregation (CQRS):** Maintain one or more denormalised, read-optimised "view" databases kept up-to-date by subscribing to events from the services that own the data. Separates the command-side (write) from the query-side (read) model.
|
|
||||||
- [APG] **API Gateway:** A single entry point for all external clients. Routes requests to backend services, can perform API composition, and handles cross-cutting concerns like authentication.
|
|
||||||
- [BFF] **Backends for Frontends:** A variation of the API Gateway pattern where you have a separate, tailored API gateway for each specific client (e.g., mobile app, web app).
|
|
||||||
- [CDC] **Consumer-Driven Contract Test:** A test written by the _consumer_ of a service to verify that the _provider_ meets its expectations, ensuring correct communication without slow, brittle end-to-end tests.
|
|
||||||
- [SCT] **Service Component Test:** Acceptance test for a single service in isolation, using stubs for external dependencies.
|
|
||||||
- [SVC] **Service as a Container:** Package a service as a container image (e.g., Docker) to encapsulate its technology stack.
|
|
||||||
- [SRL] **Serverless Deployment:** Deploy services using a platform like AWS Lambda that abstracts away the underlying infrastructure.
|
|
||||||
- [MSC] **Microservice Chassis:** A framework (e.g., Spring Boot + Spring Cloud) that handles cross-cutting concerns such as config, health checks, metrics, and distributed tracing.
|
|
||||||
- [SMH] **Service Mesh:** Infrastructure layer (e.g., Istio, Linkerd) that handles inter-service communication concerns like circuit breakers, distributed tracing, and load balancing outside of service code.
|
|
||||||
- [STR] **Strangler Application:** Strategy for migrating a monolith. Incrementally build new microservices around the monolith, gradually replacing it and avoiding a "big bang" rewrite.
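
To make one of these patterns concrete, here is a minimal Groovy sketch of the Circuit Breaker described above. This is illustrative only: the class name, threshold, and timeout values are assumptions, not part of any particular framework.

```groovy
// Minimal Circuit Breaker sketch (names and thresholds are illustrative).
class CircuitBreaker {
    private int failureCount = 0
    private long openedAt = 0L
    private final int failureThreshold
    private final long resetTimeoutMs

    CircuitBreaker(int failureThreshold = 5, long resetTimeoutMs = 30_000L) {
        this.failureThreshold = failureThreshold
        this.resetTimeoutMs = resetTimeoutMs
    }

    def call(Closure action) {
        // While open, fail fast until the reset timeout has elapsed.
        if (failureCount >= failureThreshold) {
            if (System.currentTimeMillis() - openedAt < resetTimeoutMs) {
                throw new IllegalStateException('Circuit open: failing fast')
            }
            failureCount = 0 // half-open: allow one trial call through
        }
        try {
            def result = action.call()
            failureCount = 0 // success closes the circuit again
            return result
        } catch (Exception e) {
            failureCount++
            if (failureCount >= failureThreshold) {
                openedAt = System.currentTimeMillis()
            }
            throw e
        }
    }
}

// Usage: def breaker = new CircuitBreaker()
//        breaker.call { remoteClient.fetchInventory() } // remoteClient is hypothetical
```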
|
|
||||||
|
|
||||||
More at <https://microservices.io/patterns/>
|
|
||||||
|
|
@ -1,60 +0,0 @@
|
||||||
---
|
|
||||||
description: The definitive workflow for safely updating API specifications and generating Postman tests to ensure 100% consistency and prevent data loss.
|
|
||||||
---
|
|
||||||
|
|
||||||
# Workflow: API Spec & Test Generation
|
|
||||||
|
|
||||||
This workflow establishes `openapi.yaml` as the single source of truth for all API development. The Postman collection is **always generated** from this file. **NEVER edit the Postman JSON file directly.** This prevents inconsistencies and the kind of file corruption we have experienced.
|
|
||||||
|
|
||||||
## Guiding Principles
|
|
||||||
|
|
||||||
- **OpenAPI is the ONLY Source of Truth:** All API changes begin and end with `docs/api/openapi.yaml`.
|
|
||||||
- **Postman is a GENERATED ARTIFACT:** The collection file is treated as build output. It is never edited by hand.
|
|
||||||
- **Validate Before Generating:** Always validate the OpenAPI spec _before_ attempting to generate the Postman collection.
|
|
||||||
|
|
||||||
## Steps
|
|
||||||
|
|
||||||
### 1. Update the OpenAPI Specification (`docs/api/openapi.yaml`)
|
|
||||||
|
|
||||||
- **Identify API Changes:** Review the Groovy source code (e.g., `src/com/umig/api/v2/*.groovy`) to identify any new, modified, or removed endpoints.
|
|
||||||
- **Edit the Spec:** Carefully add, modify, or remove the corresponding endpoint definitions under `paths` and schemas under `components/schemas`.
|
|
||||||
- **Best Practice:** Use `allOf` to extend existing schemas non-destructively (e.g., adding audit fields to a base `User` schema).
|
|
||||||
- **Use an IDE with OpenAPI support** to get real-time linting and validation.
|
|
||||||
|
|
||||||
### 2. Validate the OpenAPI Specification
|
|
||||||
|
|
||||||
- **CRITICAL:** Before proceeding, validate your `openapi.yaml` file.
|
|
||||||
- Use your IDE's built-in OpenAPI preview or a dedicated linter (e.g., Spectral).
|
|
||||||
- **DO NOT proceed if the file has errors.** Fix them first. This is the most important step to prevent downstream issues.
|
|
||||||
|
|
||||||
### 3. Generate the Postman Collection
|
|
||||||
|
|
||||||
- **Navigate to the correct directory** in your terminal. The generation command must be run from this directory:
|
|
||||||
```bash
|
|
||||||
cd docs/api/postman
|
|
||||||
```
|
|
||||||
- **Run the generation command:**
|
|
||||||
```bash
|
|
||||||
// turbo
|
|
||||||
npx openapi-to-postmanv2 -s ../openapi.yaml -o ./UMIG_API_V2_Collection.postman_collection.json -p -O folderStrategy=Tags
|
|
||||||
```
|
|
||||||
- **Note on `npx`:** The `npx` command runs the `openapi-to-postmanv2` package without requiring a global installation. If you see `command not found`, ensure you are using `npx`.
|
|
||||||
|
|
||||||
### 4. Verify the Changes
|
|
||||||
|
|
||||||
- **Review the Diff:** Use `git diff` to review the changes to `UMIG_API_V2_Collection.postman_collection.json`. Confirm that the new endpoint has been added and that no unexpected changes have occurred.
|
|
||||||
- **Test in Postman:** (Optional but recommended) Import the newly generated collection into Postman and run a few requests against a local dev environment to ensure correctness.
|
|
||||||
|
|
||||||
### 5. Document and Commit
|
|
||||||
|
|
||||||
- **Commit all changes:** Add the modified `openapi.yaml` and the generated `UMIG_API_V2_Collection.postman_collection.json` to your commit.
|
|
||||||
- **Update Changelog:** Add an entry to `CHANGELOG.md` detailing the API changes.
|
|
||||||
- **Update Dev Journal:** Create a developer journal entry summarising the work done. Describe any removals or replacements and the rationale.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Key Principles:**
|
|
||||||
|
|
||||||
- Never erase or overwrite existing tests/specs unless required by an API change.
|
|
||||||
- Every endpoint in the API must be present and tested in both Postman and OpenAPI.
|
|
||||||
- Consistency, completeness, and traceability are paramount.
|
|
||||||
|
|
@ -1,44 +0,0 @@
|
||||||
---
|
|
||||||
description: The standard workflow for creating or modifying Groovy REST API endpoints in this project.
|
|
||||||
---
|
|
||||||
|
|
||||||
This workflow ensures all API development adheres to the project's established, stable patterns to prevent bugs and maintain consistency.
|
|
||||||
|
|
||||||
## Key Reference Documents
|
|
||||||
|
|
||||||
**PRIMARY REFERENCE**: `/docs/solution-architecture.md` — Comprehensive solution architecture and API design standards
|
|
||||||
|
|
||||||
**SUPPORTING REFERENCES**:
|
|
||||||
|
|
||||||
- Current ADRs in `/docs/adr/` (skip `/docs/adr/archive/` - consolidated in solution-architecture.md)
|
|
||||||
- Working examples: `src/com/umig/api/v2/TeamsApi.groovy`
|
|
||||||
|
|
||||||
1. **Analyze the Existing Pattern**:
|
|
||||||
|
|
||||||
- Before writing any code, thoroughly review a working, stable API file like `src/com/umig/api/v2/TeamsApi.groovy`.
|
|
||||||
- Pay close attention to the structure: separate endpoint definitions for each HTTP method, simple `try-catch` blocks for error handling, and standard `javax.ws.rs.core.Response` objects.
|
|
||||||
|
|
||||||
2. **Replicate the Pattern**:
|
|
||||||
|
|
||||||
- Create a new endpoint definition for each HTTP method (`GET`, `POST`, `PUT`, `DELETE`).
|
|
||||||
- Do NOT use a central dispatcher, custom exception classes, or complex helper methods for error handling. Keep all logic within the endpoint method.
|
|
||||||
|
|
||||||
3. **Implement Business Logic**:
|
|
||||||
|
|
||||||
- Write the core business logic inside a `try` block.
|
|
||||||
- Call the appropriate `UserRepository` or `TeamRepository` methods.
|
|
||||||
|
|
||||||
4. **Handle Success Cases**:
|
|
||||||
|
|
||||||
- For `GET`, `POST`, and `PUT`, return a `Response.ok()` or `Response.status(Response.Status.CREATED)` with a `JsonBuilder` payload.
|
|
||||||
- **CRITICAL**: For a successful `DELETE`, always return `Response.noContent().build()`. Do NOT attempt to return a body.
|
|
||||||
|
|
||||||
5. **Handle Error Cases**:
|
|
||||||
|
|
||||||
- Use `catch (SQLException e)` to handle specific database errors (e.g., foreign key violations `23503`, unique constraint violations `23505`).
|
|
||||||
- Use a generic `catch (Exception e)` for all other unexpected errors.
|
|
||||||
- In all `catch` blocks, log the error using `log.error()` or `log.warn()` and return an appropriate `Response.status(...)` with a simple JSON error message.
|
|
||||||
|
|
||||||
6. **Validate Inputs**:
|
|
||||||
- Strictly validate all incoming data (path parameters, request bodies) at the beginning of the endpoint method.
|
|
||||||
- Return a `400 Bad Request` for any invalid input.
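
Pulling steps 1–6 together, a minimal sketch of the pattern might look like the following. It is a sketch of the structure described above, not the actual `TeamsApi.groovy` code: the method name, `findTeamById`, and the 409 mapping for constraint violations are assumptions, and `log` and `teamRepository` are assumed to be provided by the surrounding script per the project's conventions.

```groovy
import groovy.json.JsonBuilder
import javax.ws.rs.core.Response
import java.sql.SQLException

// Assumed to be bound by the surrounding script, per the project pattern:
//   def log = ...            (logger)
//   def teamRepository = ... (TeamRepository instance; findTeamById is hypothetical)

// One self-contained definition per HTTP method; all logic stays in the method.
Response getTeam(String teamId) {
    // Step 6: validate inputs first and return 400 on anything invalid.
    if (!teamId?.isLong()) {
        return Response.status(Response.Status.BAD_REQUEST)
            .entity(new JsonBuilder([error: 'Invalid team id']).toString())
            .build()
    }
    try {
        // Step 3: core business logic via the repository.
        def team = teamRepository.findTeamById(teamId.toLong())
        // Step 4: success case with a JsonBuilder payload.
        // (For a successful DELETE, return Response.noContent().build() instead.)
        return Response.ok(new JsonBuilder(team).toString()).build()
    } catch (SQLException e) {
        // Step 5: specific database errors (23503 = FK violation, 23505 = unique
        // constraint); mapping these to 409 Conflict is one reasonable choice.
        log.error("Database error fetching team ${teamId}", e)
        def status = (e.getSQLState() in ['23503', '23505']) ?
            Response.Status.CONFLICT : Response.Status.INTERNAL_SERVER_ERROR
        return Response.status(status)
            .entity(new JsonBuilder([error: 'Database error']).toString())
            .build()
    } catch (Exception e) {
        // Step 5: generic catch-all for unexpected errors.
        log.error("Unexpected error fetching team ${teamId}", e)
        return Response.status(Response.Status.INTERNAL_SERVER_ERROR)
            .entity(new JsonBuilder([error: 'Unexpected error']).toString())
            .build()
    }
}
```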
|
|
||||||
|
|
@ -1,162 +0,0 @@
|
||||||
---
|
|
||||||
description: This is the basic workflow to gather the latest changes, prepare a relevant commit message, and commit the staged code changes
|
|
||||||
---
|
|
||||||
|
|
||||||
This workflow guides the creation of a high-quality, comprehensive commit message that accurately reflects all staged changes, adhering strictly to the Conventional Commits 1.0 standard.
|
|
||||||
|
|
||||||
**1. Comprehensive Evidence Gathering (MANDATORY - Prevent tunnel vision):**
|
|
||||||
|
|
||||||
**1.1. Staged Changes Analysis:**
|
|
||||||
|
|
||||||
- **Detailed Diff Review:** Run `git diff --staged --stat` for a summary of all staged changes, then `git diff --staged` when you need the full detail.
|
|
||||||
- **File-by-File Analysis:** Run `git diff --staged --name-status` to see the operation type (Modified, Added, Deleted) for each file.
|
|
||||||
- **Functional Area Classification:** Group staged files by functional area:
|
|
||||||
- **API Changes:** `src/groovy/umig/api/`, `src/groovy/umig/repository/`
|
|
||||||
- **UI Changes:** `src/groovy/umig/web/js/`, `src/groovy/umig/web/css/`, `src/groovy/umig/macros/`
|
|
||||||
- **Documentation:** `docs/`, `README.md`, `CHANGELOG.md`, `*.md` files
|
|
||||||
- **Tests:** `src/groovy/umig/tests/`, `local-dev-setup/__tests__/`
|
|
||||||
- **Configuration:** `local-dev-setup/liquibase/`, `*.json`, `*.yml`, `*.properties`
|
|
||||||
- **Database:** Migration files, schema changes
|
|
||||||
- **Change Type Analysis:** For each file, determine the type of change:
|
|
||||||
- New functionality added
|
|
||||||
- Existing functionality modified
|
|
||||||
- Bug fixes
|
|
||||||
- Refactoring or code cleanup
|
|
||||||
- Documentation updates
|
|
||||||
- Test additions or modifications
|
|
||||||
|
|
||||||
**1.2. Unstaged and Untracked Files Review:**
|
|
||||||
|
|
||||||
- **Related Files Check:** Run `git status --porcelain` to identify any untracked or unstaged files that might be related.
|
|
||||||
- **Completeness Verification:** Ensure all related changes are staged or deliberately excluded.
|
|
||||||
- **User Prompt:** If potentially related files are unstaged, prompt the user about inclusion.
|
|
||||||
|
|
||||||
**1.3. Work Stream Identification:**
|
|
||||||
|
|
||||||
- **Primary Work Stream:** Identify the main type of work being committed.
|
|
||||||
- **Secondary Work Streams:** Identify supporting changes (e.g., tests, documentation, configuration).
|
|
||||||
- **Cross-Functional Impact:** Note changes that span multiple functional areas.
|
|
||||||
- **Architecture Impact:** Identify any architectural or pattern changes.
|
|
||||||
|
|
||||||
**2. Multi-Context Rationale Analysis (MANDATORY - Address tunnel vision):**
|
|
||||||
|
|
||||||
**2.1. Session Context Review:**
|
|
||||||
|
|
||||||
- **Conversation Timeline:** Review the entire session conversation to understand the evolution of the work.
|
|
||||||
- **Initial Problem:** Identify the original problem or task that initiated the changes.
|
|
||||||
- **Decision Points:** Note key decisions made during the session that influenced the implementation.
|
|
||||||
- **Scope Evolution:** If the work expanded beyond the initial scope, understand how and why.
|
|
||||||
|
|
||||||
**2.2. Development Context:**
|
|
||||||
|
|
||||||
- **Dev Journal Review:** If a development journal entry was created during the session, review it for high-level narrative.
|
|
||||||
- **Related Work:** Check if this commit is part of a larger feature or bug fix spanning multiple commits.
|
|
||||||
- **Previous Commits:** Review recent commits to understand the progression of work.
|
|
||||||
|
|
||||||
**2.3. Business and Technical Context:**
|
|
||||||
|
|
||||||
- **Business Impact:** Understand what user-facing or system benefits this change provides.
|
|
||||||
- **Technical Motivation:** Identify the technical reasons for the changes (performance, maintainability, new features).
|
|
||||||
- **Problem-Solution Mapping:** For each work stream, clearly understand:
|
|
||||||
- What problem was being solved
|
|
||||||
- Why this particular solution was chosen
|
|
||||||
- What alternatives were considered
|
|
||||||
- What the outcome achieves
|
|
||||||
|
|
||||||
**2.4. Change Dependencies:**
|
|
||||||
|
|
||||||
- **Cross-Stream Dependencies:** How different work streams in this commit depend on each other.
|
|
||||||
- **External Dependencies:** Any external factors that influenced the changes.
|
|
||||||
- **Future Implications:** What this change enables or constrains for future development.
|
|
||||||
|
|
||||||
**3. Multi-Stream Commit Message Synthesis (MANDATORY - Address tunnel vision):**
|
|
||||||
|
|
||||||
The goal is to create a message that comprehensively explains all changes and their context for future developers.
|
|
||||||
|
|
||||||
**3.1. Type and Scope Selection:**
|
|
||||||
|
|
||||||
- **Primary Type:** Choose the most significant type from the allowed list (`feat`, `fix`, `docs`, `style`, `refactor`, `perf`, `test`, `chore`).
|
|
||||||
- **Multi-Stream Consideration:** If multiple significant work streams exist, choose the type that best represents the overall impact.
|
|
||||||
- **Scope Selection:** Identify the primary part of the codebase affected:
|
|
||||||
- **Specific Components:** `api`, `ui`, `db`, `auth`, `docs`, `tests`
|
|
||||||
- **Functional Areas:** `admin`, `migration`, `iteration`, `planning`
|
|
||||||
- **System-Wide:** Use broader scopes for cross-cutting changes
|
|
||||||
|
|
||||||
**3.2. Subject Line Construction:**
|
|
||||||
|
|
||||||
- **Imperative Mood:** Write a concise summary (under 50 characters) in imperative mood.
|
|
||||||
- **Multi-Stream Subject:** If multiple work streams are significant, write a subject that captures the overall achievement.
|
|
||||||
- **Specific vs General:** Balance specificity with comprehensiveness.
|
|
||||||
|
|
||||||
**3.3. Body Structure (Enhanced for Multi-Stream):**
|
|
||||||
|
|
||||||
- **Primary Change Description:** Start with the main change and its motivation.
|
|
||||||
- **Work Stream Breakdown:** For each significant work stream:
|
|
||||||
- **What Changed:** Specific files, components, or functionality
|
|
||||||
- **Why Changed:** Problem being solved or improvement being made
|
|
||||||
- **How Changed:** Technical approach or implementation details
|
|
||||||
- **Impact:** What this enables or improves
|
|
||||||
- **Cross-Stream Integration:** How different work streams work together.
|
|
||||||
- **Technical Decisions:** Explain significant design choices and why alternatives were rejected.
|
|
||||||
- **Context:** Provide enough context for future developers to understand the change.
|
|
||||||
|
|
||||||
**3.4. Footer Considerations:**
|
|
||||||
|
|
||||||
- **Breaking Changes:** Use `BREAKING CHANGE:` for any breaking changes with migration notes.
|
|
||||||
- **Issue References:** Reference related issues (e.g., `Closes #123`, `Relates to #456`).
|
|
||||||
- **Co-authorship:** Add `Co-Authored-By:` for pair programming or AI assistance.
|
|
||||||
|
|
||||||
**3.5. Message Assembly:**
|
|
||||||
|
|
||||||
- **Single Coherent Story:** Weave multiple work streams into a single, coherent narrative.
|
|
||||||
- **Logical Flow:** Organize information in a logical sequence that makes sense to readers.
|
|
||||||
- **Appropriate Detail:** Include enough detail to understand the change without overwhelming.
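
Applied to a hypothetical multi-stream commit, the synthesis in sections 3.1–3.5 might produce a message like the following (the scope, ticket number, and details are illustrative):

```text
feat(api): add environments CRUD endpoints

Implement GET/POST/PUT/DELETE endpoints for environments,
following the established TeamsApi pattern, to unblock the
admin UI work stream.

- api: new EnvironmentsApi with one definition per HTTP method
- db: Liquibase changelog adding the environments table
- docs: OpenAPI spec updated and Postman collection regenerated
- tests: repository unit tests and endpoint component tests

Chose per-method endpoint definitions over a central dispatcher
to stay consistent with existing APIs and keep error handling
simple.

Closes #123
```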
|
|
||||||
|
|
||||||
**4. Anti-Tunnel Vision Verification (MANDATORY - Use before finalizing):**
|
|
||||||
|
|
||||||
Before presenting the commit message, verify you have addressed ALL of the following:
|
|
||||||
|
|
||||||
**Content Coverage:**
|
|
||||||
|
|
||||||
- [ ] All staged files are explained in the commit message
|
|
||||||
- [ ] All functional areas touched are documented
|
|
||||||
- [ ] All work streams are identified and described
|
|
||||||
- [ ] Change types (feat/fix/docs/etc.) are accurately represented
|
|
||||||
- [ ] Cross-functional impacts are noted
|
|
||||||
|
|
||||||
**Technical Completeness:**
|
|
||||||
|
|
||||||
- [ ] Code changes include rationale for the approach taken
|
|
||||||
- [ ] API changes are summarized with impact
|
|
||||||
- [ ] UI changes are explained with user impact
|
|
||||||
- [ ] Database changes include migration details
|
|
||||||
- [ ] Configuration changes are noted
|
|
||||||
- [ ] Test changes are explained
|
|
||||||
|
|
||||||
**Context and Rationale:**
|
|
||||||
|
|
||||||
- [ ] Original problem or motivation is clearly stated
|
|
||||||
- [ ] Solution approach is justified
|
|
||||||
- [ ] Technical decisions are explained
|
|
||||||
- [ ] Alternative approaches are noted (if relevant)
|
|
||||||
- [ ] Future implications are considered
|
|
||||||
|
|
||||||
**Message Quality:**
|
|
||||||
|
|
||||||
- [ ] Subject line is under 50 characters and imperative mood
|
|
||||||
- [ ] Body explains "what" and "why" for each work stream
|
|
||||||
- [ ] Information is organized in logical flow
|
|
||||||
- [ ] Appropriate level of detail for future developers
|
|
||||||
- [ ] Conventional Commits format is followed
|
|
||||||
|
|
||||||
**Completeness Verification:**
|
|
||||||
|
|
||||||
- [ ] All evidence from steps 1-2 is reflected in the message
|
|
||||||
- [ ] No significant work is missing from the description
|
|
||||||
- [ ] Multi-stream nature is properly represented
|
|
||||||
- [ ] Session context is appropriately captured
|
|
||||||
|
|
||||||
**5. Await Confirmation and Commit:**
|
|
||||||
|
|
||||||
- Present the generated commit message to the user for review.
|
|
||||||
- After receiving confirmation, execute the `git commit` command.
|
|
||||||
|
|
@ -1,63 +0,0 @@
|
||||||
---
|
|
||||||
description: How to safely refine the data model, update migrations, and keep data generation and tests in sync
|
|
||||||
---
|
|
||||||
|
|
||||||
# Data Model Refinement & Synchronisation Workflow
|
|
||||||
|
|
||||||
This workflow ensures every data model change is robust, consistent, and reflected across migrations, documentation, data generation, and tests.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 1. Reference the Authoritative Sources
|
|
||||||
|
|
||||||
Before making or reviewing any data model change, consult these key documents:
|
|
||||||
|
|
||||||
- `/docs/solution-architecture.md` — **PRIMARY**: Comprehensive solution architecture and design decisions
|
|
||||||
- `/docs/dataModel/README.md` — Data model documentation and ERD
|
|
||||||
- `/local-dev-setup/liquibase/changelogs/001_unified_baseline.sql` — Baseline schema (Liquibase)
|
|
||||||
- `/docs/adr/` — Current ADRs (skip `/docs/adr/archive/` - consolidated in solution-architecture.md)
|
|
||||||
|
|
||||||
## 2. Plan the Change
|
|
||||||
|
|
||||||
- Identify the business or technical rationale for the change.
|
|
||||||
- Determine the impact on existing tables, columns, relationships, and constraints.
|
|
||||||
- Draft or update the ERD as needed.
|
|
||||||
|
|
||||||
## 3. Update the Schema
|
|
||||||
|
|
||||||
- Create or edit the appropriate Liquibase changelog(s) (never edit the baseline directly after project start).
|
|
||||||
- Follow naming conventions and migration strategy as per ADRs.
|
|
||||||
- Document every change with clear comments in the changelog.
|
|
||||||
|
|
||||||
## 4. Update Data Model Documentation
|
|
||||||
|
|
||||||
- Reflect all changes in `/docs/dataModel/README.md` (ERD, field lists, rationale).
|
|
||||||
- If the change is significant, consider updating or creating an ADR.
|
|
||||||
|
|
||||||
## 5. Synchronise Data Generation Scripts
|
|
||||||
|
|
||||||
- Review and update `local-dev-setup/data-utils/umig_generate_fake_data.js` (FAKER-based generator).
|
|
||||||
- Adjust or add generators in `local-dev-setup/data-utils/generators/` as needed.
|
|
||||||
- Ensure all generated data matches the new/updated schema.
|
|
||||||
|
|
||||||
## 6. Update and Extend Tests
|
|
||||||
|
|
||||||
- Update all related tests in `local-dev-setup/data-utils/__tests__/` to cover new/changed fields and relationships.
|
|
||||||
- Add new fixture data if needed.
|
|
||||||
- Ensure tests remain non-destructive and deterministic.
|
|
||||||
|
|
||||||
## 7. Validate
|
|
||||||
|
|
||||||
- Run all migrations in a fresh environment (dev/test).
|
|
||||||
- Run the data generation script and all tests; confirm no failures or regressions.
|
|
||||||
- Review the ERD and documentation for completeness and accuracy.
|
|
||||||
|
|
||||||
## 8. Document and Communicate
|
|
||||||
|
|
||||||
- Update `CHANGELOG.md` with a summary of the data model change.
|
|
||||||
- If required, update the main `README.md` and any relevant ADRs.
|
|
||||||
- Consider adding a Developer Journal entry to narrate the rationale and process.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
> _Use this workflow every time you refine the data model to maintain project discipline, testability, and documentation integrity._
|
|
||||||
|
|
@ -1,151 +0,0 @@
|
||||||
---
|
|
||||||
description: At the end of each session, we look back at everything that was said and done, and we write down a Development Journal Entry
|
|
||||||
---
|
|
||||||
|
|
||||||
The Developer Journal is a great way to keep track of our progress and document the way we made design decisions and coding breakthroughs.
|
|
||||||
|
|
||||||
The task is to generate a new Developer Journal entry in the `docs/devJournal` folder, in markdown format, using the naming convention `yyyymmdd-nn.md`.
|
|
||||||
|
|
||||||
The content of the entry must narrate the session's story. To ensure the full context is captured, you will follow these steps in order:
|
|
||||||
|
|
||||||
**1. Establish the 'Why' (The High-Level Context):**
|
|
||||||
|
|
||||||
- First, determine the active feature branch by running `git branch --show-current`.
|
|
||||||
- Then, find and read the most recent previous journal entry to understand the starting point.
|
|
||||||
- Synthesise these with the beginning of our current conversation to state the session's primary goal or the feature being worked on.
|
|
||||||
|
|
||||||
**2. Gather Evidence of 'The How' (The Journey):**
|
|
||||||
|
|
||||||
This step is critical to avoid "tunnel vision". You must perform a deep analysis of the entire session using multiple evidence sources.
|
|
||||||
|
|
||||||
**2.1. Multi-Source Evidence Gathering (MANDATORY - All sources must be reviewed):**
|
|
||||||
|
|
||||||
- **Conversation Chronology:** Create a timeline of the entire session from start to finish. Note every major topic, tool usage, file interaction, and decision point.
|
|
||||||
- **Git Commit Analysis:** Run `git log --since="YYYY-MM-DD" --stat --oneline` to get a comprehensive view of all commits since the last journal entry. Each commit represents a separate work stream that must be captured.
|
|
||||||
- **Staged Changes Analysis:** Run `git diff --staged --name-status` to see what's currently staged for commit (if anything).
|
|
||||||
- **File System Impact:** Run `git status --porcelain` to see all modified, added, and untracked files. Group by functional area (API, UI, docs, tests, etc.).
|
|
||||||
- **Documentation Trail:** Check for changes in:
|
|
||||||
- `CHANGELOG.md` (often contains structured summaries of work)
|
|
||||||
- `README.md` and other root-level documentation
|
|
||||||
- `docs/` directory (API specs, ADRs, solution architecture)
|
|
||||||
- `cline-docs/` (memory bank files)
|
|
||||||
- Any workflow executions mentioned in conversation
|
|
||||||
|
|
||||||
**2.2. Evidence Cross-Reference (MANDATORY - Prevent tunnel vision):**
|
|
||||||
|
|
||||||
- **Workflow Execution Review:** If any workflows were mentioned in the conversation (e.g., `.clinerules/workflows/`), review their outputs and ensure their objectives are captured.
|
|
||||||
- **API Development Pattern:** If API work was done, check for:
|
|
||||||
- New/modified Groovy files in `src/groovy/umig/api/`
|
|
||||||
- New/modified repository files in `src/groovy/umig/repository/`
|
|
||||||
- Documentation updates in `docs/api/`
|
|
||||||
- OpenAPI specification changes
|
|
||||||
- Postman collection regeneration
|
|
||||||
- **UI Development Pattern:** If UI work was done, check for:
|
|
||||||
- JavaScript file changes in `src/groovy/umig/web/js/`
|
|
||||||
- CSS changes in `src/groovy/umig/web/css/`
|
|
||||||
- Macro changes in `src/groovy/umig/macros/`
|
|
||||||
- Mock/prototype updates in `mock/`
|
|
||||||
- **Refactoring Pattern:** If refactoring was done, check for:
|
|
||||||
- File moves, renames, or splits
|
|
||||||
- Architecture changes reflected in project structure
|
|
||||||
- New patterns or modules introduced
|
|
||||||
- Breaking changes or deprecations
|
|
||||||
|
|
||||||
**2.3. Completeness Verification (MANDATORY - Final check):**
|
|
||||||
|
|
||||||
- **Three-Pass Review:**
|
|
||||||
1. **First Pass:** What was the initial request/problem?
|
|
||||||
2. **Second Pass:** What were all the intermediate steps and discoveries?
|
|
||||||
3. **Third Pass:** What was the final state and all deliverables?
|
|
||||||
- **Breadth vs Depth Check:** Ensure both technical depth (how things were implemented) and breadth (all areas touched) are captured.
|
|
||||||
- **Hidden Work Detection:** Look for "invisible" work like:
|
|
||||||
- Configuration changes
|
|
||||||
- Dependency updates
|
|
||||||
- Test file modifications
|
|
||||||
- Documentation synchronization
|
|
||||||
- Workflow or process improvements
|
|
||||||
|
|
||||||
**3. Synthesise and Write the Narrative:**
|
|
||||||
|
|
||||||
The goal is to write a detailed, insightful story, not a shallow summary. Prioritise depth and clarity over brevity.
|
|
||||||
|
|
||||||
**3.1. Multi-Stream Integration (MANDATORY - Address tunnel vision):**
|
|
||||||
|
|
||||||
- **Identify All Work Streams:** Based on evidence gathering, create a list of all distinct work streams (e.g., "API documentation", "Admin GUI refactoring", "Environment API implementation", "Schema consistency fixes").
|
|
||||||
- **Parallel vs Sequential Work:** Determine which work streams were parallel (done simultaneously) vs sequential (one led to another).
|
|
||||||
- **Cross-Stream Dependencies:** Note how different work streams influenced each other (e.g., API documentation revealed schema issues that required code changes).
|
|
||||||
- **Scope Creep Documentation:** If the session expanded beyond initial scope, document how and why this happened.
|
|
||||||
|
|
||||||
**3.2. Narrative Structure (Enhanced):**
|
|
||||||
|
|
||||||
- **Copy and Fill the Template:** For every new devJournal entry, always copy and fill in the persistent template at `docs/devJournal/devJournalEntryTemplate.md`. This ensures consistency, quality, and traceability across all devJournal entries.
|
|
||||||
- **Multi-Problem Awareness:** If multiple problems were addressed, structure the narrative to handle multiple concurrent themes rather than forcing a single linear story.
|
|
||||||
- **Enhanced Story Arc:** The "How" section should follow this comprehensive structure:
|
|
||||||
1. **The Initial Problem(s):** Clearly describe all bugs, errors, or tasks at the start of the session. Note if scope expanded.
|
|
||||||
2. **The Investigation:** Detail the debugging/analysis process for each work stream. What did we look at first? What were our initial hypotheses? What tools did we use?
|
|
||||||
3. **The Breakthrough(s):** Describe key insights or discoveries for each work stream. Note cross-stream insights.
|
|
||||||
4. **Implementation and Refinements:** Explain how solutions were implemented across all work streams. Detail code changes and architectural improvements.
|
|
||||||
5. **Validation and Documentation:** Describe how we confirmed fixes worked and updated documentation across all areas.
|
|
||||||
- **Technical Depth Requirements:** For each work stream, ensure you capture:
|
|
||||||
- **What changed** (files, code, configuration)
|
|
||||||
- **Why it changed** (problem being solved, improvement being made)
|
|
||||||
- **How it changed** (technical approach, patterns used)
|
|
||||||
- **Impact** (what this enables, what problems it solves)
|
|
||||||
|
|
||||||
**3.3. Quality Assurance (MANDATORY - Final verification):**
|
|
||||||
|
|
||||||
- **Evidence vs Narrative Cross-Check:** Verify that every piece of evidence from step 2 has been addressed in the narrative.
|
|
||||||
- **Completeness Audit:** Check that the journal entry would allow someone to understand:
|
|
||||||
- The full scope of work accomplished
|
|
||||||
- The technical decisions made and why
|
|
||||||
- The current state of the project
|
|
||||||
- What should be done next
|
|
||||||
- **Tone and Format:** The tone should be in British English, and the format should be raw markdown.
|
|
||||||
- **Final Review:** Before presenting the journal entry, re-read it one last time to ensure it captures the full journey and avoids the "tunnel vision" of only looking at the final code or the most recent work.
|
|
||||||
|
|
||||||
**4. Anti-Tunnel Vision Checklist (MANDATORY - Use before finalizing):**
|
|
||||||
|
|
||||||
Before presenting the journal entry, verify you have addressed ALL of the following:
|
|
||||||
|
|
||||||
**Content Coverage:**
|
|
||||||
|
|
||||||
- [ ] All git commits since last journal entry are documented
|
|
||||||
- [ ] All workflow executions mentioned in conversation are captured
|
|
||||||
- [ ] All file modifications (API, UI, docs, tests, config) are explained
|
|
||||||
- [ ] All architectural or pattern changes are documented
|
|
||||||
- [ ] All bug fixes and their root causes are explained
|
|
||||||
- [ ] All new features and their implementation are detailed
|
|
||||||
|
|
||||||
**Work Stream Integration:**
|
|
||||||
|
|
||||||
- [ ] Multiple work streams are identified and explained
|
|
||||||
- [ ] Parallel vs sequential work is clearly distinguished
|
|
||||||
- [ ] Cross-dependencies between work streams are noted
|
|
||||||
- [ ] Scope expansions are documented with reasoning
|
|
||||||
|
|
||||||
**Technical Depth:**
|
|
||||||
|
|
||||||
- [ ] Code changes include the "what", "why", "how", and "impact"
|
|
||||||
- [ ] Database schema changes are documented
|
|
||||||
- [ ] API changes include request/response examples
|
|
||||||
- [ ] UI changes include user experience impact
|
|
||||||
- [ ] Documentation changes and their necessity are explained
|
|
||||||
|
|
||||||
**Project Context:**
|
|
||||||
|
|
||||||
- [ ] Current project state is accurately reflected
|
|
||||||
- [ ] Next steps and priorities are updated
|
|
||||||
- [ ] Key learnings and patterns are documented
|
|
||||||
- [ ] Project milestone significance is noted
|
|
||||||
|
|
||||||
**Quality Verification:**
|
|
||||||
|
|
||||||
- [ ] Evidence from step 2 matches narrative content
|
|
||||||
- [ ] No significant work is missing from the story
|
|
||||||
- [ ] Technical decisions are justified and explained
|
|
||||||
- [ ] Future developers could understand the session's impact
|
|
||||||
|
|
||||||
**5. Await Confirmation:**
|
|
||||||
|
|
||||||
- After presenting the generated journal entry, **DO NOT** proceed with any other actions, especially committing.
|
|
||||||
- Wait for explicit confirmation or further instructions from the user.
|
|
||||||
|
|
@ -1,10 +0,0 @@
|
||||||
---
|
|
||||||
description: A workflow to update the project documentation and memories based on latest changes
|
|
||||||
---
|
|
||||||
|
|
||||||
- Review and summarise the latest changes performed, based on the Cascade conversation and on the git status. Be concise but comprehensive.
|
|
||||||
- **CRITICAL**: If changes affect architecture, update `/docs/solution-architecture.md` as the primary reference
|
|
||||||
- Determine whether any changes require a new ADR in `/docs/adr/` (archived ADRs in `/docs/adr/archive/` are consolidated in solution-architecture.md)
|
|
||||||
- Update the CHANGELOG as required
|
|
||||||
- Update the main README file as required
|
|
||||||
- Update the README files in subfolders as required
|
|
||||||
|
|
@ -1,12 +0,0 @@
|
||||||
---
|
|
||||||
description: We run this workflow at the beginning of each new Cascade session, to make sure that the agent has the correct understanding of the state of the development.
|
|
||||||
---
|
|
||||||
|
|
||||||
- Review the memories
|
|
||||||
- **PRIORITY**: Review `/docs/solution-architecture.md` — Primary architectural reference document
|
|
||||||
- Review project documentation in folder /cline-docs
|
|
||||||
- Review the developer journal entries in folder /docs/devJournal
|
|
||||||
- Review current ADRs in folder `/docs/adr` (skip `/docs/adr/archive/` - consolidated in solution-architecture.md)
|
|
||||||
- Confirm that you have a clear understanding of the project's requirements and the current state of the development
|
|
||||||
- Advise if there are any documentation inconsistencies to resolve
|
|
||||||
- Recommend the next steps and tasks to be tackled.
|
|
||||||
|
|
@ -1,9 +0,0 @@
|
||||||
This task updates the Cline memory bank in the `cline-docs` folder.
|
|
||||||
You will do this based on:
|
|
||||||
|
|
||||||
- the Developer Journal entries of the day, which you will find in the `docs/devJournal` folder,
|
|
||||||
- the CHANGELOG.md file
|
|
||||||
- the various README.md files
|
|
||||||
- the Architectural Decision Records, which you will find in the `docs/adr` folder
|
|
||||||
|
|
||||||
Be concise but comprehensive and accurate. Ensure consistency within the existing memory bank. Express yourself in British English.
|
|
||||||
|
|
@ -1,271 +0,0 @@
|
||||||
---
|
|
||||||
description: A Pull Request documentation workflow
|
|
||||||
---
|
|
||||||
|
|
||||||
This workflow guides the creation of a high-quality, comprehensive Pull Request description. A great PR description is the fastest way to get your changes reviewed and merged.
|
|
||||||
|
|
||||||
**1. Comprehensive Scope Analysis (MANDATORY - Prevent tunnel vision):**
|
|
||||||
|
|
||||||
**1.1. Branch and Commit Analysis:**
|
|
||||||
|
|
||||||
- **Determine the Base Branch:** Identify the target branch for the merge (e.g., `main`, `develop`).
|
|
||||||
- **Full Commit Analysis:** Run `git log <base_branch>..HEAD --stat --oneline` to get both summary and detailed changes for all commits in this PR.
|
|
||||||
- **Commit Categorization:** Group commits by type (feat, fix, docs, refactor, test, chore) to understand the full scope.
|
|
||||||
- **Time Range Assessment:** Run `git log <base_branch>..HEAD --format="%h %ad %s" --date=short` to understand the development timeline.
|
|
||||||
|
|
||||||
**1.2. File System Impact Analysis:**
|
|
||||||
|
|
||||||
- **Changed Files Overview:** Run `git diff <base_branch>..HEAD --name-status` to see all modified, added, and deleted files.
|
|
||||||
- **Functional Area Mapping:** Group changed files by functional area:
|
|
||||||
- **API Changes:** `src/groovy/umig/api/`, `src/groovy/umig/repository/`
|
|
||||||
- **UI Changes:** `src/groovy/umig/web/js/`, `src/groovy/umig/web/css/`, `src/groovy/umig/macros/`
|
|
||||||
- **Documentation:** `docs/`, `README.md`, `CHANGELOG.md`, `*.md` files
|
|
||||||
- **Tests:** `src/groovy/umig/tests/`, `local-dev-setup/__tests__/`
|
|
||||||
- **Configuration:** `local-dev-setup/liquibase/`, `*.json`, `*.yml`, `*.properties`
|
|
||||||
- **Database:** Migration files, schema changes
|
|
||||||
- **Cross-Functional Impact:** Identify changes that span multiple functional areas.
|
|
||||||
|
|
||||||
**1.3. Work Stream Identification:**
|
|
||||||
|
|
||||||
- **Primary Work Streams:** Based on commits and file changes, identify distinct work streams (e.g., "API implementation", "UI refactoring", "documentation updates").
|
|
||||||
- **Secondary Work Streams:** Identify supporting work (e.g., "schema fixes", "test updates", "configuration changes").
|
|
||||||
- **Parallel vs Sequential:** Determine which work streams were done in parallel vs. sequence.
|
|
||||||
- **Dependencies:** Note how different work streams depend on each other.
|
|
||||||
|
|
||||||
**2. Multi-Stream Narrative Synthesis (MANDATORY - Address tunnel vision):**
|
|
||||||
|
|
||||||
A PR is a story that may have multiple parallel themes. You need to explain the "why," the "what," and the "how" for each work stream.
|
|
||||||
|
|
||||||
**2.1. Context and Motivation Analysis:**
|
|
||||||
|
|
||||||
- **Development Context:** Review recent dev journal entries, session context, and any associated tickets (e.g., Jira, GitHub Issues).
|
|
||||||
- **Problem Statement:** For each work stream, clearly articulate:
|
|
||||||
- What problem was being solved or feature being added?
|
|
||||||
- What was the state of the application before this change?
|
|
||||||
- What will it be after?
|
|
||||||
- **Business Impact:** Explain the user-facing or technical benefits of the changes.
|
|
||||||
- **Scope Evolution:** If the PR scope expanded during development, explain how and why.
|
|
||||||
|
|
||||||
**2.2. Technical Implementation Analysis:**
|
|
||||||
|
|
||||||
- **Architecture Overview:** Describe the overall technical approach and any significant architectural decisions.
|
|
||||||
- **Work Stream Details:** For each work stream identified in step 1:
|
|
||||||
- **API Changes:** New endpoints, schema modifications, repository patterns
|
|
||||||
- **UI Changes:** Component modifications, styling updates, user experience improvements
|
|
||||||
- **Documentation:** What docs were updated and why
|
|
||||||
- **Database Changes:** Schema migrations, data model updates
|
|
||||||
- **Configuration:** Environment or build configuration changes
|
|
||||||
- **Tests:** New test coverage, test framework updates
|
|
||||||
- **Technical Decisions:** Explain why you chose specific solutions over alternatives.
|
|
||||||
- **Patterns and Standards:** Note adherence to or establishment of new project patterns.
|
|
||||||
|
|
||||||
**2.3. Integration and Dependencies:**
|
|
||||||
|
|
||||||
- **Cross-Stream Integration:** How different work streams work together.
|
|
||||||
- **Breaking Changes:** Any breaking changes and migration path.
|
|
||||||
- **Backward Compatibility:** How existing functionality is preserved.
|
|
||||||
- **Future Implications:** What this change enables for future development.
|
|
||||||
|
|
||||||
**3. Comprehensive Review Instructions (MANDATORY - Cover all work streams):**
|
|
||||||
|
|
||||||
Make it easy for others to review your work across all functional areas.
|
|
||||||
|
|
||||||
**3.1. Testing Instructions by Work Stream:**
|
|
||||||
|
|
||||||
- **API Testing:** For each new or modified API endpoint:
|
|
||||||
- Provide curl commands or Postman collection references
|
|
||||||
- Include expected request/response examples
|
|
||||||
- Note any authentication or setup requirements
|
|
||||||
- Identify edge cases and error scenarios to test
|
|
||||||
- **UI Testing:** For each UI change:
|
|
||||||
- Provide step-by-step user interaction flows
|
|
||||||
- Include screenshots or GIFs showing before/after states
|
|
||||||
- Identify specific user scenarios to test
|
|
||||||
- Note any browser-specific considerations
|
|
||||||
- **Database Testing:** For schema changes:
|
|
||||||
- Provide migration verification steps
|
|
||||||
- Include data verification queries
|
|
||||||
- Note any rollback procedures
|
|
||||||
- **Configuration Testing:** For environment changes:
|
|
||||||
- Provide setup or configuration verification steps
|
|
||||||
- Include any new environment variables or settings
|
|
||||||
- Note any deployment considerations
|
|
||||||
|
|
||||||
**3.2. Review Focus Areas:**
|
|
||||||
|
|
||||||
- **Code Quality:** Highlight areas that need particular attention (complex logic, new patterns, potential performance impacts).
|
|
||||||
- **Security:** Note any security considerations or authentication changes.
|
|
||||||
- **Performance:** Identify any performance-critical changes or optimizations.
|
|
||||||
- **Compatibility:** Note any backward compatibility concerns or breaking changes.
|
|
||||||
|
|
||||||
**3.3. Verification Checklist:**
|
|
||||||
|
|
||||||
- **Functional Verification:** What specific functionality should reviewers verify works correctly?
|
|
||||||
- **Integration Testing:** How should reviewers verify that different components work together?
|
|
||||||
- **Edge Case Testing:** What edge cases or error conditions should be tested?
|
|
||||||
- **Documentation Review:** What documentation should be reviewed for accuracy and completeness?
|
|
||||||
|
|
||||||
**4. Enhanced PR Description Template (MANDATORY - Multi-stream aware):**
|
|
||||||
|
|
||||||
Use a structured template that accommodates multiple work streams and comprehensive coverage.
|
|
||||||
|
|
||||||
**4.1. Title Construction:**
|
|
||||||
|
|
||||||
- **Primary Work Stream:** Use the most significant work stream for the title following Conventional Commits standard.
|
|
||||||
- **Multi-Stream Indicator:** If multiple significant work streams exist, use a broader scope (e.g., `feat(admin): complete user management system with API and UI`).
|
|
||||||
|
|
||||||
**4.2. Enhanced Body Template:**
|
|
||||||
|
|
||||||
```markdown
|
|
||||||
## Summary
|
|
||||||
|
|
||||||
<!-- Brief overview of the PR's purpose and scope. What problem does this solve? -->
|
|
||||||
|
|
||||||
## Work Streams
|
|
||||||
|
|
||||||
<!-- List all major work streams in this PR -->
|
|
||||||
|
|
||||||
### 🚀 [Primary Work Stream Name]
|
|
||||||
|
|
||||||
- Brief description of changes
|
|
||||||
- Key files modified
|
|
||||||
- Impact on users/system
|
|
||||||
|
|
||||||
### 🔧 [Secondary Work Stream Name]
|
|
||||||
|
|
||||||
- Brief description of changes
|
|
||||||
- Key files modified
|
|
||||||
- Impact on users/system
|
|
||||||
|
|
||||||
## Technical Changes
|
|
||||||
|
|
||||||
<!-- Detailed breakdown by functional area -->
|
|
||||||
|
|
||||||
### API Changes
|
|
||||||
|
|
||||||
- New endpoints:
|
|
||||||
- Modified endpoints:
|
|
||||||
- Schema changes:
|
|
||||||
- Repository updates:
|
|
||||||
|
|
||||||
### UI Changes
|
|
||||||
|
|
||||||
- New components:
|
|
||||||
- Modified components:
|
|
||||||
- Styling updates:
|
|
||||||
- User experience improvements:
|
|
||||||
|
|
||||||
### Database Changes
|
|
||||||
|
|
||||||
- Schema migrations:
|
|
||||||
- Data model updates:
|
|
||||||
- Migration scripts:
|
|
||||||
|
|
||||||
### Documentation Updates
|
|
||||||
|
|
||||||
- API documentation:
|
|
||||||
- User documentation:
|
|
||||||
- Developer documentation:
|
|
||||||
- Configuration documentation:
|
|
||||||
|
|
||||||
## Testing Instructions
|
|
||||||
|
|
||||||
<!-- Work stream specific testing -->
|
|
||||||
|
|
||||||
### API Testing
|
|
||||||
|
|
||||||
1. [Specific API test steps]
|
|
||||||
2. [Expected outcomes]
|
|
||||||
3. [Edge cases to verify]
|
|
||||||
|
|
||||||
### UI Testing
|
|
||||||
|
|
||||||
1. [Specific UI test steps]
|
|
||||||
2. [User flows to verify]
|
|
||||||
3. [Browser compatibility checks]
|
|
||||||
|
|
||||||
### Database Testing
|
|
||||||
|
|
||||||
1. [Migration verification]
|
|
||||||
2. [Data integrity checks]
|
|
||||||
3. [Rollback verification]
|
|
||||||
|
|
||||||
## Screenshots / Recordings
|
|
||||||
|
|
||||||
<!-- Visual evidence of changes -->
|
|
||||||
|
|
||||||
### Before
|
|
||||||
|
|
||||||
[Screenshots/GIFs of old behavior]
|
|
||||||
|
|
||||||
### After
|
|
||||||
|
|
||||||
[Screenshots/GIFs of new behavior]
|
|
||||||
|
|
||||||
## Review Focus Areas
|
|
||||||
|
|
||||||
<!-- Areas needing particular attention -->
|
|
||||||
|
|
||||||
- [ ] **Code Quality:** [Specific areas to focus on]
|
|
||||||
- [ ] **Security:** [Security considerations]
|
|
||||||
- [ ] **Performance:** [Performance impacts]
|
|
||||||
- [ ] **Compatibility:** [Breaking changes or compatibility concerns]
|
|
||||||
|
|
||||||
## Deployment Notes
|
|
||||||
|
|
||||||
<!-- Any special deployment considerations -->
|
|
||||||
|
|
||||||
- Environment variables:
|
|
||||||
- Configuration changes:
|
|
||||||
- Database migrations:
|
|
||||||
- Rollback procedures:
|
|
||||||
|
|
||||||
## Related Issues
|
|
||||||
|
|
||||||
<!-- Link to any related issues, e.g., "Closes #123" -->
|
|
||||||
|
|
||||||
## Checklist
|
|
||||||
|
|
||||||
- [ ] All work streams are documented above
|
|
||||||
- [ ] Testing instructions cover all functional areas
|
|
||||||
- [ ] Documentation is updated for all changes
|
|
||||||
- [ ] Database migrations are tested
|
|
||||||
- [ ] API changes are documented
|
|
||||||
- [ ] UI changes are demonstrated with screenshots
|
|
||||||
- [ ] Code follows project style guidelines
|
|
||||||
- [ ] All tests pass
|
|
||||||
- [ ] Breaking changes are documented
|
|
||||||
- [ ] Deployment considerations are noted
|
|
||||||
```
|
|
||||||
|
|
||||||
**5. Anti-Tunnel Vision Verification (MANDATORY - Use before finalizing):**
|
|
||||||
|
|
||||||
Before presenting the PR description, verify you have addressed ALL of the following:
|
|
||||||
|
|
||||||
**Content Coverage:**
|
|
||||||
|
|
||||||
- [ ] All commits in the PR are explained
|
|
||||||
- [ ] All modified files are accounted for
|
|
||||||
- [ ] All functional areas touched are documented
|
|
||||||
- [ ] All work streams are identified and described
|
|
||||||
- [ ] Cross-functional impacts are noted
|
|
||||||
|
|
||||||
**Technical Completeness:**
|
|
||||||
|
|
||||||
- [ ] API changes include endpoint details and examples
|
|
||||||
- [ ] UI changes include visual evidence and user flows
|
|
||||||
- [ ] Database changes include migration details
|
|
||||||
- [ ] Configuration changes include deployment notes
|
|
||||||
- [ ] Documentation updates are comprehensive
|
|
||||||
|
|
||||||
**Review Readiness:**
|
|
||||||
|
|
||||||
- [ ] Testing instructions are clear and complete
|
|
||||||
- [ ] Review focus areas are identified
|
|
||||||
- [ ] Deployment considerations are documented
|
|
||||||
- [ ] Rollback procedures are noted (if applicable)
|
|
||||||
- [ ] Breaking changes are clearly highlighted
|
|
||||||
|
|
||||||
**6. Final Review:**
|
|
||||||
|
|
||||||
- Present the generated PR title and body to the user for final review and approval before they create the pull request on their Git platform.
|
|
||||||
|
|
@ -1,222 +0,0 @@
|
||||||
---
|
|
||||||
description: Sprint Review & Retrospective (UMIG)
|
|
||||||
---
|
|
||||||
|
|
||||||
# Sprint Review & Retrospective Workflow
|
|
||||||
|
|
||||||
> **Filename convention:** `{yyyymmdd}-sprint-review.md` (e.g., `20250627-sprint-review.md`). Place in `/docs/devJournal/`.
|
|
||||||
|
|
||||||
This workflow guides the team through a structured review and retrospective at the end of each sprint or major iteration. It ensures that all accomplishments, learnings, and opportunities for improvement are captured, and that the next sprint is set up for success.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 1. Gather Sprint Context
|
|
||||||
|
|
||||||
**Before generating the sprint review document, fill in or confirm the following:**
|
|
||||||
|
|
||||||
- **Sprint Dates:** (Enter start and end date, e.g., 2025-06-16 – 2025-06-27)
|
|
||||||
- **Participants:** (List all team members involved)
|
|
||||||
- **Branch/Release:** (Run the command below to list all branches created or active during the sprint)
|
|
||||||
```sh
|
|
||||||
git branch --format='%(refname:short) %(creatordate:short)' | grep 'YYYY-MM'
|
|
||||||
```
|
|
||||||
- **Metrics:** (Run the following commands, replacing dates as appropriate)
|
|
||||||
- **Commits:**
|
|
||||||
```sh
|
|
||||||
git log --since="YYYY-MM-DD" --until="YYYY-MM-DD" --oneline | wc -l
|
|
||||||
```
|
|
||||||
- **PRs Merged:**
|
|
||||||
```sh
|
|
||||||
git log --merges --since="YYYY-MM-DD" --until="YYYY-MM-DD" --oneline | wc -l
|
|
||||||
```
|
|
||||||
For details:
|
|
||||||
```sh
|
|
||||||
git log --merges --since="YYYY-MM-DD" --until="YYYY-MM-DD" --oneline
|
|
||||||
```
|
|
||||||
- **Issues Closed:**
|
|
||||||
```sh
|
|
||||||
git log --since="YYYY-MM-DD" --until="YYYY-MM-DD" --grep="close[sd]\\|fixe[sd]" --oneline | wc -l
|
|
||||||
```
|
|
||||||
For a list:
|
|
||||||
```sh
|
|
||||||
git log --since="YYYY-MM-DD" --until="YYYY-MM-DD" --grep="close[sd]\\|fixe[sd]" --oneline
|
|
||||||
```
|
|
||||||
- **Highlights:** (What are the biggest achievements or milestones? E.g., POC completion)
|
|
||||||
- **Blockers:** (Any major blockers or pain points encountered)
|
|
||||||
- **Learnings:** (Key technical, process, or team insights)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 2. Generate the Sprint Review Document
|
|
||||||
|
|
||||||
Once the above context is filled, generate a new file named `{yyyymmdd}-sprint-review.md` in `/docs/devJournal/` using the following structure:
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 1. Sprint Overview
|
|
||||||
|
|
||||||
- **Sprint Dates:** (start date – end date)
|
|
||||||
- **Sprint Goal:** (Summarise the main objective or theme of the sprint)
|
|
||||||
- **Participants:** (List team members involved)
|
|
||||||
- **Branch/Release:** (List all relevant branches/tags)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 2. Achievements & Deliverables
|
|
||||||
|
|
||||||
- **Major Features Completed:** (Bullet list, with links to PRs or dev journal entries)
|
|
||||||
- **Technical Milestones:** (E.g., architectural decisions, major refactors, new patterns adopted)
|
|
||||||
- **Documentation Updates:** (Summarise key documentation, changelog, or ADR updates)
|
|
||||||
- **Testing & Quality:** (Describe test coverage improvements, integration test results, bug fixes)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 3. Sprint Metrics
|
|
||||||
|
|
||||||
- **Commits:** (Paste result)
|
|
||||||
- **PRs Merged:** (Paste result and details)
|
|
||||||
- **Issues Closed:** (Paste result and details)
|
|
||||||
- **Branches Created:** (Paste result)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 4. Review of Sprint Goals
|
|
||||||
|
|
||||||
- **What was planned:** (Paste or paraphrase the original sprint goal)
|
|
||||||
- **What was achieved:** (Honest assessment of goal completion)
|
|
||||||
- **What was not completed:** (List and explain any items not finished, with reasons)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 5. Demo & Walkthrough
|
|
||||||
|
|
||||||
- **Screenshots, GIFs, or short video links:** (if available)
|
|
||||||
- **Instructions for reviewers:** (How to test/review the new features)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 6. Retrospective
|
|
||||||
|
|
||||||
#### What Went Well
|
|
||||||
|
|
||||||
- (Successes, effective practices, positive surprises)
|
|
||||||
|
|
||||||
#### What Didn’t Go Well
|
|
||||||
|
|
||||||
- (Blockers, pain points, technical debt, process issues)
|
|
||||||
|
|
||||||
#### What We Learned
|
|
||||||
|
|
||||||
- (Technical, process, or team insights)
|
|
||||||
|
|
||||||
#### What We’ll Try Next
|
|
||||||
|
|
||||||
- (Actions to improve, experiments for next sprint)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 7. Action Items & Next Steps
|
|
||||||
|
|
||||||
- (Concrete actions, owners, deadlines for next sprint)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 8. References
|
|
||||||
|
|
||||||
- **Dev Journal Entries:** (List all relevant `/docs/devJournal/YYYYMMDD-nn.md` files)
|
|
||||||
- **ADR(s):** (Link to any new or updated ADRs)
|
|
||||||
- **Changelog/Docs:** (Links to major documentation changes)
|
|
||||||
- CHANGELOG.md
|
|
||||||
- .cline-docs/progress.md
|
|
||||||
- .cline-docs/activeContext.md
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
> _Use this workflow at the end of each sprint to ensure a culture of continuous improvement, transparency, and knowledge sharing._
|
|
||||||
|
|
||||||
> **Filename convention:** `{yyyymmdd}-sprint-review.md` (e.g., `20250627-sprint-review.md`). Place in `/docs/devJournal/`.
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 1. Sprint Overview
|
|
||||||
|
|
||||||
- **Sprint Dates:** (start date – end date)
|
|
||||||
- **Sprint Goal:** (Summarise the main objective or theme of the sprint)
|
|
||||||
- **Participants:** (List team members involved)
|
|
||||||
- **Branch/Release:** (Relevant branch/tag or release milestone)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 2. Achievements & Deliverables
|
|
||||||
|
|
||||||
- **Major Features Completed:** (Bullet list, with links to PRs or dev journal entries)
|
|
||||||
- **Technical Milestones:** (E.g., architectural decisions, major refactors, new patterns adopted)
|
|
||||||
- **Documentation Updates:** (Summarise key documentation, changelog, or ADR updates)
|
|
||||||
- **Testing & Quality:** (Describe test coverage improvements, integration test results, bug fixes)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 3. Sprint Metrics
|
|
||||||
|
|
||||||
- **Commits:** (Number or summary, e.g., `git log --since="YYYY-MM-DD" --until="YYYY-MM-DD" --oneline | wc -l`)
|
|
||||||
- **PRs Merged:** (Count and/or links)
|
|
||||||
- **Issues Closed:** (Count and/or links)
|
|
||||||
- **Test Coverage:** (Summarise if measured)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 4. Review of Sprint Goals
|
|
||||||
|
|
||||||
- **What was planned:** (Paste or paraphrase the original sprint goal)
|
|
||||||
- **What was achieved:** (Honest assessment of goal completion)
|
|
||||||
- **What was not completed:** (List and explain any items not finished, with reasons)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 5. Demo & Walkthrough
|
|
||||||
|
|
||||||
- **Screenshots, GIFs, or short video links** (if available)
|
|
||||||
- **Instructions for reviewers:** (How to test/review the new features)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 6. Retrospective
|
|
||||||
|
|
||||||
### What Went Well
|
|
||||||
|
|
||||||
- (Bullet points: successes, effective practices, positive surprises)
|
|
||||||
|
|
||||||
### What Didn’t Go Well
|
|
||||||
|
|
||||||
- (Bullet points: blockers, pain points, technical debt, process issues)
|
|
||||||
|
|
||||||
### What We Learned
|
|
||||||
|
|
||||||
- (Bullet points: technical, process, or team insights)
|
|
||||||
|
|
||||||
### What We’ll Try Next
|
|
||||||
|
|
||||||
- (Bullet points: actions to improve, experiments for next sprint)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 7. Action Items & Next Steps
|
|
||||||
|
|
||||||
- (Bullet list of concrete actions, owners, and deadlines for the next sprint)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 8. References
|
|
||||||
|
|
||||||
- **Dev Journal Entries:** (List all relevant `/docs/devJournal/YYYYMMDD-nn.md` files)
|
|
||||||
- **ADR(s):** (Link to any new or updated ADRs)
|
|
||||||
- **Changelog/Docs:** (Links to major documentation changes)
|
|
||||||
- CHANGELOG.md
|
|
||||||
- .cline-docs/progress.md
|
|
||||||
- .cline-docs/activeContext.md
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
> _Use this template at the end of each sprint to ensure a culture of continuous improvement, transparency, and knowledge sharing._
|
|
||||||
|
|
@@ -5,23 +5,23 @@
 * This file ensures proper execution when run via npx from GitHub
 */

const { execSync } = require('child_process')
const path = require('path')
const fs = require('fs')

// Check if we're running in an npx temporary directory
const isNpxExecution = __dirname.includes('_npx') || __dirname.includes('.npm')

// If running via npx, we need to handle things differently
if (isNpxExecution) {
  // The actual bmad.js is in installer/bin/ (relative to tools directory)
  const bmadScriptPath = path.join(__dirname, 'installer', 'bin', 'bmad.js')

  // Verify the file exists
  if (!fs.existsSync(bmadScriptPath)) {
    console.error('Error: Could not find bmad.js at', bmadScriptPath)
    console.error('Current directory:', __dirname)
    process.exit(1)
  }

  // Execute with proper working directory
@@ -29,13 +29,13 @@ if (isNpxExecution) {
    execSync(`node "${bmadScriptPath}" ${process.argv.slice(2).join(' ')}`, {
      stdio: 'inherit',
      cwd: path.dirname(__dirname)
    })
  } catch (error) {
    // execSync will throw if the command exits with non-zero
    // But the stdio is inherited, so the error is already displayed
    process.exit(error.status || 1)
  }
} else {
  // Local execution - just require the installer directly
  require('./installer/bin/bmad.js')
}
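The catch block above works because `execSync` throws when the child exits non-zero, carrying the exit code in `error.status`, while `stdio: 'inherit'` has already surfaced the child's output. A standalone illustration of that contract (not part of this commit):

```js
const { execSync } = require('child_process')

try {
  // A child process that deliberately exits with code 3.
  execSync('node -e "process.exit(3)"', { stdio: 'inherit' })
} catch (error) {
  // error.status holds the child's exit code; its output was already shown.
  console.error('child exited with', error.status) // -> 3
}
```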
@@ -1,52 +1,52 @@
const fs = require('node:fs').promises
const path = require('node:path')
const DependencyResolver = require('../lib/dependency-resolver')
const yamlUtils = require('../lib/yaml-utils')

class WebBuilder {
  constructor (options = {}) {
    this.rootDir = options.rootDir || process.cwd()
    this.outputDirs = options.outputDirs || [path.join(this.rootDir, 'dist')]
    this.resolver = new DependencyResolver(this.rootDir)
    this.templatePath = path.join(
      this.rootDir,
      'tools',
      'md-assets',
      'web-agent-startup-instructions.md'
    )
  }

  parseYaml (content) {
    const yaml = require('js-yaml')
    return yaml.load(content)
  }

  convertToWebPath (filePath, bundleRoot = 'bmad-core') {
    // Convert absolute paths to web bundle paths with dot prefix
    // All resources get installed under the bundle root, so use that path
    const relativePath = path.relative(this.rootDir, filePath)
    const pathParts = relativePath.split(path.sep)

    let resourcePath
    if (pathParts[0] === 'expansion-packs') {
      // For expansion packs, remove 'expansion-packs/packname' and use the rest
      resourcePath = pathParts.slice(2).join('/')
    } else {
      // For bmad-core, common, etc., remove the first part
      resourcePath = pathParts.slice(1).join('/')
    }

    return `.${bundleRoot}/${resourcePath}`
  }

  generateWebInstructions (bundleType, packName = null) {
    // Generate dynamic web instructions based on bundle type
    const rootExample = packName ? `.${packName}` : '.bmad-core'
    const examplePath = packName ? `.${packName}/folder/filename.md` : '.bmad-core/folder/filename.md'
    const personasExample = packName ? `.${packName}/personas/analyst.md` : '.bmad-core/personas/analyst.md'
    const tasksExample = packName ? `.${packName}/tasks/create-story.md` : '.bmad-core/tasks/create-story.md'
    const utilsExample = packName ? `.${packName}/utils/template-format.md` : '.bmad-core/utils/template-format.md'
    const tasksRef = packName ? `.${packName}/tasks/create-story.md` : '.bmad-core/tasks/create-story.md'

    return `# Web Agent Bundle Instructions
@@ -88,225 +88,225 @@ These references map directly to bundle sections:

---

`
  }

  async cleanOutputDirs () {
    for (const dir of this.outputDirs) {
      try {
        await fs.rm(dir, { recursive: true, force: true })
        console.log(`Cleaned: ${path.relative(this.rootDir, dir)}`)
      } catch (error) {
        console.debug(`Failed to clean directory ${dir}:`, error.message)
        // Directory might not exist, that's fine
      }
    }
  }

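  // Builds a standalone web bundle for every agent known to the resolver,
  // writing one <agentId>.txt file into each configured output directory.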
  async buildAgents () {
    const agents = await this.resolver.listAgents()

    for (const agentId of agents) {
      console.log(`  Building agent: ${agentId}`)
      const bundle = await this.buildAgentBundle(agentId)

      // Write to all output directories
      for (const outputDir of this.outputDirs) {
        const outputPath = path.join(outputDir, 'agents')
        await fs.mkdir(outputPath, { recursive: true })
        const outputFile = path.join(outputPath, `${agentId}.txt`)
        await fs.writeFile(outputFile, bundle, 'utf8')
      }
    }

    console.log(`Built ${agents.length} agent bundles in ${this.outputDirs.length} locations`)
  }

  async buildTeams () {
    const teams = await this.resolver.listTeams()

    for (const teamId of teams) {
      console.log(`  Building team: ${teamId}`)
      const bundle = await this.buildTeamBundle(teamId)

      // Write to all output directories
      for (const outputDir of this.outputDirs) {
        const outputPath = path.join(outputDir, 'teams')
        await fs.mkdir(outputPath, { recursive: true })
        const outputFile = path.join(outputPath, `${teamId}.txt`)
        await fs.writeFile(outputFile, bundle, 'utf8')
      }
    }

    console.log(`Built ${teams.length} team bundles in ${this.outputDirs.length} locations`)
  }

  async buildAgentBundle (agentId) {
    const dependencies = await this.resolver.resolveAgentDependencies(agentId)
    const template = this.generateWebInstructions('agent')

    const sections = [template]

    // Add agent configuration
    const agentPath = this.convertToWebPath(dependencies.agent.path, 'bmad-core')
    sections.push(this.formatSection(agentPath, dependencies.agent.content, 'bmad-core'))

    // Add all dependencies
    for (const resource of dependencies.resources) {
      const resourcePath = this.convertToWebPath(resource.path, 'bmad-core')
      sections.push(this.formatSection(resourcePath, resource.content, 'bmad-core'))
    }

    return sections.join('\n')
  }

  async buildTeamBundle (teamId) {
    const dependencies = await this.resolver.resolveTeamDependencies(teamId)
    const template = this.generateWebInstructions('team')

    const sections = [template]

    // Add team configuration
    const teamPath = this.convertToWebPath(dependencies.team.path, 'bmad-core')
    sections.push(this.formatSection(teamPath, dependencies.team.content, 'bmad-core'))

    // Add all agents
    for (const agent of dependencies.agents) {
      const agentPath = this.convertToWebPath(agent.path, 'bmad-core')
      sections.push(this.formatSection(agentPath, agent.content, 'bmad-core'))
    }

    // Add all deduplicated resources
    for (const resource of dependencies.resources) {
      const resourcePath = this.convertToWebPath(resource.path, 'bmad-core')
      sections.push(this.formatSection(resourcePath, resource.content, 'bmad-core'))
    }

    return sections.join('\n')
  }

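  // Rewrites an agent markdown file for web use: strips the IDE-only
  // root/IDE-FILE-RESOLUTION/REQUEST-RESOLUTION properties from the embedded
  // YAML and replaces the markdown preamble with a minimal activation header.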
  processAgentContent (content) {
    // First, replace content before YAML with the template
    const yamlContent = yamlUtils.extractYamlFromAgent(content)
    if (!yamlContent) return content

    const yamlMatch = content.match(/```ya?ml\n([\s\S]*?)\n```/)
    if (!yamlMatch) return content

    const yamlStartIndex = content.indexOf(yamlMatch[0])
    const yamlEndIndex = yamlStartIndex + yamlMatch[0].length

    // Parse YAML and remove root and IDE-FILE-RESOLUTION properties
    try {
      const yaml = require('js-yaml')
      const parsed = yaml.load(yamlContent)

      // Remove the properties if they exist at root level
      delete parsed.root
      delete parsed['IDE-FILE-RESOLUTION']
      delete parsed['REQUEST-RESOLUTION']

      // Also remove from activation-instructions if they exist
      if (parsed['activation-instructions'] && Array.isArray(parsed['activation-instructions'])) {
        parsed['activation-instructions'] = parsed['activation-instructions'].filter(
          (instruction) => {
            return (
              typeof instruction === 'string' &&
              !instruction.startsWith('IDE-FILE-RESOLUTION:') &&
              !instruction.startsWith('REQUEST-RESOLUTION:')
            )
          }
        )
      }

      // Reconstruct the YAML
      const cleanedYaml = yaml.dump(parsed, { lineWidth: -1 })

      // Get the agent name from the YAML for the header
      const agentName = parsed.agent?.id || 'agent'

      // Build the new content with just the agent header and YAML
      const newHeader = `# ${agentName}\n\nCRITICAL: Read the full YAML, start activation to alter your state of being, follow startup section instructions, stay in this being until told to exit this mode:\n\n`
      const afterYaml = content.substring(yamlEndIndex)

      return newHeader + '```yaml\n' + cleanedYaml.trim() + '\n```' + afterYaml
    } catch (error) {
      console.warn('Failed to process agent YAML:', error.message)
      // If parsing fails, return original content
      return content
    }
  }

  formatSection (path, content, bundleRoot = 'bmad-core') {
    const separator = '===================='

    // Process agent content if this is an agent file
    if (path.includes('/agents/')) {
      content = this.processAgentContent(content)
    }

    // Replace {root} references with the actual bundle root
    content = this.replaceRootReferences(content, bundleRoot)

    return [
      `${separator} START: ${path} ${separator}`,
      content.trim(),
      `${separator} END: ${path} ${separator}`,
      ''
    ].join('\n')
  }

  replaceRootReferences (content, bundleRoot) {
    // Replace {root} with the appropriate bundle root path
    return content.replace(/\{root\}/g, `.${bundleRoot}`)
  }

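  // Dry-runs dependency resolution for every agent and team so that broken
  // references fail fast, before any bundle is written.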
  async validate () {
    console.log('Validating agent configurations...')
    const agents = await this.resolver.listAgents()
    for (const agentId of agents) {
      try {
        await this.resolver.resolveAgentDependencies(agentId)
        console.log(`  ✓ ${agentId}`)
      } catch (error) {
        console.log(`  ✗ ${agentId}: ${error.message}`)
        throw error
      }
    }

    console.log('\nValidating team configurations...')
    const teams = await this.resolver.listTeams()
    for (const teamId of teams) {
      try {
        await this.resolver.resolveTeamDependencies(teamId)
        console.log(`  ✓ ${teamId}`)
      } catch (error) {
        console.log(`  ✗ ${teamId}: ${error.message}`)
        throw error
      }
    }
  }

  async buildAllExpansionPacks (options = {}) {
    const expansionPacks = await this.listExpansionPacks()

    for (const packName of expansionPacks) {
      console.log(`  Building expansion pack: ${packName}`)
      await this.buildExpansionPack(packName, options)
    }

    console.log(`Built ${expansionPacks.length} expansion pack bundles`)
  }

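  // Builds one expansion pack end to end: individual agent bundles first,
  // then the team bundle, under dist/expansion-packs/<packName>.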
  async buildExpansionPack (packName, options = {}) {
    const packDir = path.join(this.rootDir, 'expansion-packs', packName)
    const outputDirs = [path.join(this.rootDir, 'dist', 'expansion-packs', packName)]

    // Clean output directories if requested
    if (options.clean !== false) {
      for (const outputDir of outputDirs) {
        try {
          await fs.rm(outputDir, { recursive: true, force: true })
        } catch (error) {
          // Directory might not exist, that's fine
@@ -314,96 +314,96 @@ These references map directly to bundle sections:
    }

    // Build individual agents first
    const agentsDir = path.join(packDir, 'agents')
    try {
      const agentFiles = await fs.readdir(agentsDir)
      const agentMarkdownFiles = agentFiles.filter((f) => f.endsWith('.md'))

      if (agentMarkdownFiles.length > 0) {
        console.log(`  Building individual agents for ${packName}:`)

        for (const agentFile of agentMarkdownFiles) {
          const agentName = agentFile.replace('.md', '')
          console.log(`    - ${agentName}`)

          // Build individual agent bundle
          const bundle = await this.buildExpansionAgentBundle(packName, packDir, agentName)

          // Write to all output directories
          for (const outputDir of outputDirs) {
            const agentsOutputDir = path.join(outputDir, 'agents')
            await fs.mkdir(agentsOutputDir, { recursive: true })
            const outputFile = path.join(agentsOutputDir, `${agentName}.txt`)
            await fs.writeFile(outputFile, bundle, 'utf8')
          }
        }
      }
    } catch (error) {
      console.debug(`  No agents directory found for ${packName}`)
    }

    // Build team bundle
    const agentTeamsDir = path.join(packDir, 'agent-teams')
    try {
      const teamFiles = await fs.readdir(agentTeamsDir)
      const teamFile = teamFiles.find((f) => f.endsWith('.yaml'))

      if (teamFile) {
        console.log(`  Building team bundle for ${packName}`)
        const teamConfigPath = path.join(agentTeamsDir, teamFile)

        // Build expansion pack as a team bundle
        const bundle = await this.buildExpansionTeamBundle(packName, packDir, teamConfigPath)

        // Write to all output directories
        for (const outputDir of outputDirs) {
          const teamsOutputDir = path.join(outputDir, 'teams')
          await fs.mkdir(teamsOutputDir, { recursive: true })
          const outputFile = path.join(teamsOutputDir, teamFile.replace('.yaml', '.txt'))
          await fs.writeFile(outputFile, bundle, 'utf8')
          console.log(`  ✓ Created bundle: ${path.relative(this.rootDir, outputFile)}`)
        }
      } else {
        console.warn(`  ⚠ No team configuration found in ${packName}/agent-teams/`)
      }
    } catch (error) {
      console.warn(`  ⚠ No agent-teams directory found for ${packName}`)
    }
  }

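  // Bundles a single expansion-pack agent, resolving each declared
  // dependency against the pack itself, then bmad-core, then common/.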
  async buildExpansionAgentBundle (packName, packDir, agentName) {
    const template = this.generateWebInstructions('expansion-agent', packName)
    const sections = [template]

    // Add agent configuration
    const agentPath = path.join(packDir, 'agents', `${agentName}.md`)
    const agentContent = await fs.readFile(agentPath, 'utf8')
    const agentWebPath = this.convertToWebPath(agentPath, packName)
    sections.push(this.formatSection(agentWebPath, agentContent, packName))

    // Resolve and add agent dependencies
    const yamlContent = yamlUtils.extractYamlFromAgent(agentContent)
    if (yamlContent) {
      try {
        const yaml = require('js-yaml')
        const agentConfig = yaml.load(yamlContent)

        if (agentConfig.dependencies) {
          // Add resources, first try expansion pack, then core
          for (const [resourceType, resources] of Object.entries(agentConfig.dependencies)) {
            if (Array.isArray(resources)) {
              for (const resourceName of resources) {
                let found = false

                // Try expansion pack first
                const resourcePath = path.join(packDir, resourceType, resourceName)
                try {
                  const resourceContent = await fs.readFile(resourcePath, 'utf8')
                  const resourceWebPath = this.convertToWebPath(resourcePath, packName)
                  sections.push(
                    this.formatSection(resourceWebPath, resourceContent, packName)
                  )
                  found = true
                } catch (error) {
                  // Not in expansion pack, continue
                }
@@ -412,17 +412,17 @@ These references map directly to bundle sections:
                if (!found) {
                  const corePath = path.join(
                    this.rootDir,
                    'bmad-core',
                    resourceType,
                    resourceName
                  )
                  try {
                    const coreContent = await fs.readFile(corePath, 'utf8')
                    const coreWebPath = this.convertToWebPath(corePath, packName)
                    sections.push(
                      this.formatSection(coreWebPath, coreContent, packName)
                    )
                    found = true
                  } catch (error) {
                    // Not in core either, continue
                  }
@@ -432,17 +432,17 @@ These references map directly to bundle sections:
                if (!found) {
                  const commonPath = path.join(
                    this.rootDir,
                    'common',
                    resourceType,
                    resourceName
                  )
                  try {
                    const commonContent = await fs.readFile(commonPath, 'utf8')
                    const commonWebPath = this.convertToWebPath(commonPath, packName)
                    sections.push(
                      this.formatSection(commonWebPath, commonContent, packName)
                    )
                    found = true
                  } catch (error) {
                    // Not in common either, continue
                  }
@@ -451,56 +451,56 @@ These references map directly to bundle sections:
                if (!found) {
                  console.warn(
                    `      ⚠ Dependency ${resourceType}#${resourceName} not found in expansion pack or core`
                  )
                }
              }
            }
          }
        }
      } catch (error) {
        console.debug(`Failed to parse agent YAML for ${agentName}:`, error.message)
      }
    }

    return sections.join('\n')
  }

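  // Bundles an expansion-pack team: the team config, every listed agent
  // (pack versions override core ones), their deduplicated dependencies,
  // and any remaining pack resources.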
  async buildExpansionTeamBundle (packName, packDir, teamConfigPath) {
    const template = this.generateWebInstructions('expansion-team', packName)

    const sections = [template]

    // Add team configuration and parse to get agent list
    const teamContent = await fs.readFile(teamConfigPath, 'utf8')
    const teamFileName = path.basename(teamConfigPath, '.yaml')
    const teamConfig = this.parseYaml(teamContent)
    const teamWebPath = this.convertToWebPath(teamConfigPath, packName)
    sections.push(this.formatSection(teamWebPath, teamContent, packName))

    // Get list of expansion pack agents
    const expansionAgents = new Set()
    const agentsDir = path.join(packDir, 'agents')
    try {
      const agentFiles = await fs.readdir(agentsDir)
      for (const agentFile of agentFiles.filter((f) => f.endsWith('.md'))) {
        const agentName = agentFile.replace('.md', '')
        expansionAgents.add(agentName)
      }
    } catch (error) {
      console.warn(`  ⚠ No agents directory found in ${packName}`)
    }

    // Build a map of all available expansion pack resources for override checking
    const expansionResources = new Map()
    const resourceDirs = ['templates', 'tasks', 'checklists', 'workflows', 'data']
    for (const resourceDir of resourceDirs) {
      const resourcePath = path.join(packDir, resourceDir)
      try {
        const resourceFiles = await fs.readdir(resourcePath)
        for (const resourceFile of resourceFiles.filter(
          (f) => f.endsWith('.md') || f.endsWith('.yaml')
        )) {
          expansionResources.set(`${resourceDir}#${resourceFile}`, true)
        }
      } catch (error) {
        // Directory might not exist, that's fine
@@ -508,77 +508,77 @@ These references map directly to bundle sections:
    }

    // Process all agents listed in team configuration
    const agentsToProcess = teamConfig.agents || []

    // Ensure bmad-orchestrator is always included for teams
    if (!agentsToProcess.includes('bmad-orchestrator')) {
      console.warn(`  ⚠ Team ${teamFileName} missing bmad-orchestrator, adding automatically`)
      agentsToProcess.unshift('bmad-orchestrator')
    }

    // Track all dependencies from all agents (deduplicated)
    const allDependencies = new Map()

    for (const agentId of agentsToProcess) {
      if (expansionAgents.has(agentId)) {
        // Use expansion pack version (override)
        const agentPath = path.join(agentsDir, `${agentId}.md`)
        const agentContent = await fs.readFile(agentPath, 'utf8')
        const expansionAgentWebPath = this.convertToWebPath(agentPath, packName)
        sections.push(this.formatSection(expansionAgentWebPath, agentContent, packName))

        // Parse and collect dependencies from expansion agent
        const agentYaml = agentContent.match(/```yaml\n([\s\S]*?)\n```/)
        if (agentYaml) {
          try {
            const agentConfig = this.parseYaml(agentYaml[1])
            if (agentConfig.dependencies) {
              for (const [resourceType, resources] of Object.entries(agentConfig.dependencies)) {
                if (Array.isArray(resources)) {
                  for (const resourceName of resources) {
                    const key = `${resourceType}#${resourceName}`
                    if (!allDependencies.has(key)) {
                      allDependencies.set(key, { type: resourceType, name: resourceName })
                    }
                  }
                }
              }
            }
          } catch (error) {
            console.debug(`Failed to parse agent YAML for ${agentId}:`, error.message)
          }
        }
      } else {
        // Use core BMad version
        try {
          const coreAgentPath = path.join(this.rootDir, 'bmad-core', 'agents', `${agentId}.md`)
          const coreAgentContent = await fs.readFile(coreAgentPath, 'utf8')
          const coreAgentWebPath = this.convertToWebPath(coreAgentPath, packName)
          sections.push(this.formatSection(coreAgentWebPath, coreAgentContent, packName))

          // Parse and collect dependencies from core agent
          const yamlContent = yamlUtils.extractYamlFromAgent(coreAgentContent, true)
          if (yamlContent) {
            try {
              const agentConfig = this.parseYaml(yamlContent)
              if (agentConfig.dependencies) {
                for (const [resourceType, resources] of Object.entries(agentConfig.dependencies)) {
                  if (Array.isArray(resources)) {
                    for (const resourceName of resources) {
                      const key = `${resourceType}#${resourceName}`
                      if (!allDependencies.has(key)) {
                        allDependencies.set(key, { type: resourceType, name: resourceName })
                      }
                    }
                  }
                }
              }
            } catch (error) {
              console.debug(`Failed to parse agent YAML for ${agentId}:`, error.message)
            }
          }
        } catch (error) {
          console.warn(`  ⚠ Agent ${agentId} not found in core or expansion pack`)
        }
      }
    }
@@ -586,18 +586,18 @@ These references map directly to bundle sections:
    // Add all collected dependencies from agents
    // Always prefer expansion pack versions if they exist
    for (const [key, dep] of allDependencies) {
      let found = false

      // Always check expansion pack first, even if the dependency came from a core agent
      if (expansionResources.has(key)) {
        // We know it exists in expansion pack, find and load it
        const expansionPath = path.join(packDir, dep.type, dep.name)
        try {
          const content = await fs.readFile(expansionPath, 'utf8')
          const expansionWebPath = this.convertToWebPath(expansionPath, packName)
          sections.push(this.formatSection(expansionWebPath, content, packName))
          console.log(`  ✓ Using expansion override for ${key}`)
          found = true
        } catch (error) {
          // Try next extension
        }
@@ -605,12 +605,12 @@ These references map directly to bundle sections:

      // If not found in expansion pack (or doesn't exist there), try core
      if (!found) {
        const corePath = path.join(this.rootDir, 'bmad-core', dep.type, dep.name)
        try {
          const content = await fs.readFile(corePath, 'utf8')
          const coreWebPath = this.convertToWebPath(corePath, packName)
          sections.push(this.formatSection(coreWebPath, content, packName))
          found = true
        } catch (error) {
          // Not in core either, continue
        }
@@ -618,40 +618,40 @@ These references map directly to bundle sections:

      // If not found in core, try common folder
      if (!found) {
        const commonPath = path.join(this.rootDir, 'common', dep.type, dep.name)
        try {
          const content = await fs.readFile(commonPath, 'utf8')
          const commonWebPath = this.convertToWebPath(commonPath, packName)
          sections.push(this.formatSection(commonWebPath, content, packName))
          found = true
        } catch (error) {
          // Not in common either, continue
        }
      }

      if (!found) {
        console.warn(`  ⚠ Dependency ${key} not found in expansion pack or core`)
      }
    }

    // Add remaining expansion pack resources not already included as dependencies
    for (const resourceDir of resourceDirs) {
      const resourcePath = path.join(packDir, resourceDir)
      try {
        const resourceFiles = await fs.readdir(resourcePath)
        for (const resourceFile of resourceFiles.filter(
          (f) => f.endsWith('.md') || f.endsWith('.yaml')
        )) {
          const filePath = path.join(resourcePath, resourceFile)
          const fileContent = await fs.readFile(filePath, 'utf8')
          const fileName = resourceFile.replace(/\.(md|yaml)$/, '')

          // Only add if not already included as a dependency
          const resourceKey = `${resourceDir}#${fileName}`
          if (!allDependencies.has(resourceKey)) {
            const fullResourcePath = path.join(resourcePath, resourceFile)
            const resourceWebPath = this.convertToWebPath(fullResourcePath, packName)
            sections.push(this.formatSection(resourceWebPath, fileContent, packName))
          }
        }
      } catch (error) {
@@ -659,23 +659,23 @@ These references map directly to bundle sections:
      }
    }

    return sections.join('\n')
  }

  async listExpansionPacks () {
    const expansionPacksDir = path.join(this.rootDir, 'expansion-packs')
    try {
      const entries = await fs.readdir(expansionPacksDir, { withFileTypes: true })
      return entries.filter((entry) => entry.isDirectory()).map((entry) => entry.name)
    } catch (error) {
      console.warn('No expansion-packs directory found')
      return []
    }
  }

  listAgents () {
    return this.resolver.listAgents()
  }
}

module.exports = WebBuilder
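Putting the class to work: a minimal usage sketch based only on the constructor and methods above (the require path and option values are illustrative, not taken from this commit):

```js
const WebBuilder = require('./tools/builders/web-builder') // hypothetical path

async function main () {
  const builder = new WebBuilder({ rootDir: process.cwd() })
  await builder.cleanOutputDirs()
  await builder.validate() // throws early if any agent/team dependency is unresolvable
  await builder.buildAgents()
  await builder.buildTeams()
  await builder.buildAllExpansionPacks()
}

main().catch((error) => {
  console.error(error)
  process.exit(1)
})
```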
@@ -1,106 +1,104 @@
#!/usr/bin/env node

const fs = require('fs')
const path = require('path')
const yaml = require('js-yaml')

const args = process.argv.slice(2)
const bumpType = args[0] || 'minor' // default to minor

if (!['major', 'minor', 'patch'].includes(bumpType)) {
  console.log('Usage: node bump-all-versions.js [major|minor|patch]')
  console.log('Default: minor')
  process.exit(1)
}

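// Pure semver increment, e.g. bumpVersion('1.2.3', 'minor') -> '1.3.0';
// a major bump resets minor and patch, a minor bump resets patch.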
function bumpVersion (currentVersion, type) {
  const [major, minor, patch] = currentVersion.split('.').map(Number)

  switch (type) {
    case 'major':
      return `${major + 1}.0.0`
    case 'minor':
      return `${major}.${minor + 1}.0`
    case 'patch':
      return `${major}.${minor}.${patch + 1}`
    default:
      return currentVersion
  }
}

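// Bumps the core package.json version first, then the config.yaml of every
// expansion pack, and prints a summary plus suggested follow-up steps.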
async function bumpAllVersions () {
  const updatedItems = []

  // First, bump the core version (package.json)
  const packagePath = path.join(__dirname, '..', 'package.json')
  try {
    const packageContent = fs.readFileSync(packagePath, 'utf8')
    const packageJson = JSON.parse(packageContent)
    const oldCoreVersion = packageJson.version || '1.0.0'
    const newCoreVersion = bumpVersion(oldCoreVersion, bumpType)

    packageJson.version = newCoreVersion

    fs.writeFileSync(packagePath, JSON.stringify(packageJson, null, 2) + '\n')

    updatedItems.push({ type: 'core', name: 'BMad Core', oldVersion: oldCoreVersion, newVersion: newCoreVersion })
    console.log(`✓ BMad Core (package.json): ${oldCoreVersion} → ${newCoreVersion}`)
  } catch (error) {
    console.error(`✗ Failed to update BMad Core: ${error.message}`)
  }

  // Then, bump all expansion packs
  const expansionPacksDir = path.join(__dirname, '..', 'expansion-packs')

  try {
    const entries = fs.readdirSync(expansionPacksDir, { withFileTypes: true })

    for (const entry of entries) {
      if (entry.isDirectory() && !entry.name.startsWith('.') && entry.name !== 'README.md') {
        const packId = entry.name
        const configPath = path.join(expansionPacksDir, packId, 'config.yaml')

        if (fs.existsSync(configPath)) {
          try {
            const configContent = fs.readFileSync(configPath, 'utf8')
            const config = yaml.load(configContent)
            const oldVersion = config.version || '1.0.0'
            const newVersion = bumpVersion(oldVersion, bumpType)

            config.version = newVersion

            const updatedYaml = yaml.dump(config, { indent: 2 })
            fs.writeFileSync(configPath, updatedYaml)

            updatedItems.push({ type: 'expansion', name: packId, oldVersion, newVersion })
            console.log(`✓ ${packId}: ${oldVersion} → ${newVersion}`)
          } catch (error) {
            console.error(`✗ Failed to update ${packId}: ${error.message}`)
          }
        }
      }
    }

    if (updatedItems.length > 0) {
      const coreCount = updatedItems.filter(i => i.type === 'core').length
      const expansionCount = updatedItems.filter(i => i.type === 'expansion').length

      console.log(`\n✓ Successfully bumped ${updatedItems.length} item(s) with ${bumpType} version bump`)
      if (coreCount > 0) console.log(`  - ${coreCount} core`)
      if (expansionCount > 0) console.log(`  - ${expansionCount} expansion pack(s)`)

      console.log('\nNext steps:')
      console.log('1. Test the changes')
      console.log('2. Commit: git add -A && git commit -m "chore: bump all versions (' + bumpType + ')"')
    } else {
      console.log('No items found to update')
    }

  } catch (error) {
    console.error('Error reading expansion packs directory:', error.message)
    process.exit(1)
  }
}

bumpAllVersions()
@@ -1,83 +1,82 @@
#!/usr/bin/env node

// Load required modules
const fs = require('fs')
const path = require('path')
const yaml = require('js-yaml')

// Parse CLI arguments
const args = process.argv.slice(2)
const packId = args[0]
const bumpType = args[1] || 'minor'

// Validate arguments
if (!packId || args.length > 2) {
  console.log('Usage: node bump-expansion-version.js <expansion-pack-id> [major|minor|patch]')
  console.log('Default: minor')
  console.log('Example: node bump-expansion-version.js bmad-creator-tools patch')
  process.exit(1)
}

if (!['major', 'minor', 'patch'].includes(bumpType)) {
  console.error('Error: Bump type must be major, minor, or patch')
  process.exit(1)
}

// Version bump logic
function bumpVersion (currentVersion, type) {
  const [major, minor, patch] = currentVersion.split('.').map(Number)

  switch (type) {
    case 'major': return `${major + 1}.0.0`
    case 'minor': return `${major}.${minor + 1}.0`
    case 'patch': return `${major}.${minor}.${patch + 1}`
    default: return currentVersion
  }
}

// Main function to bump version
async function updateVersion () {
  const configPath = path.join(__dirname, '..', 'expansion-packs', packId, 'config.yaml')

  // Check if config exists
  if (!fs.existsSync(configPath)) {
    console.error(`Error: Expansion pack '${packId}' not found`)
    console.log('\nAvailable expansion packs:')

    const packsDir = path.join(__dirname, '..', 'expansion-packs')
    const entries = fs.readdirSync(packsDir, { withFileTypes: true })

    entries.forEach(entry => {
      if (entry.isDirectory() && !entry.name.startsWith('.')) {
        console.log(` - ${entry.name}`)
      }
    })

    process.exit(1)
  }

  try {
    const configContent = fs.readFileSync(configPath, 'utf8')
    const config = yaml.load(configContent)

    const oldVersion = config.version || '1.0.0'
    const newVersion = bumpVersion(oldVersion, bumpType)

    config.version = newVersion

    const updatedYaml = yaml.dump(config, { indent: 2 })
    fs.writeFileSync(configPath, updatedYaml)

    console.log(`✓ ${packId}: ${oldVersion} → ${newVersion}`)
    console.log(`\n✓ Successfully bumped ${packId} with ${bumpType} version bump`)
    console.log('\nNext steps:')
    console.log('1. Test the changes')
    console.log(`2. Commit: git add -A && git commit -m "chore: bump ${packId} version (${bumpType})"`)
  } catch (error) {
    console.error('Error updating version:', error.message)
    process.exit(1)
  }
}

updateVersion()
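One behavior of the write-back above is worth noting: yaml.dump re-serializes the whole document, so comments and custom formatting in config.yaml are not preserved across a bump. A minimal sketch of the round-trip, assuming only that js-yaml is installed:

const yaml = require('js-yaml')

// Load, mutate, and re-serialize: the same cycle the script performs
const config = yaml.load('version: 1.2.3\nname: demo\n')
config.version = '1.3.0'
console.log(yaml.dump(config, { indent: 2 }))
// version: 1.3.0
// name: demo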
tools/cli.js

@@ -1,17 +1,17 @@
#!/usr/bin/env node

const { Command } = require('commander')
const WebBuilder = require('./builders/web-builder')
const V3ToV4Upgrader = require('./upgraders/v3-to-v4-upgrader')
const IdeSetup = require('./installer/lib/ide-setup')
const path = require('path')

const program = new Command()

program
  .name('bmad-build')
  .description('BMad-Method build tool for creating web bundles')
  .version('4.0.0')

program
  .command('build')

@@ -24,40 +24,40 @@ program
  .action(async (options) => {
    const builder = new WebBuilder({
      rootDir: process.cwd()
    })

    try {
      if (options.clean) {
        console.log('Cleaning output directories...')
        await builder.cleanOutputDirs()
      }

      if (options.expansionsOnly) {
        console.log('Building expansion pack bundles...')
        await builder.buildAllExpansionPacks({ clean: false })
      } else {
        if (!options.teamsOnly) {
          console.log('Building agent bundles...')
          await builder.buildAgents()
        }

        if (!options.agentsOnly) {
          console.log('Building team bundles...')
          await builder.buildTeams()
        }

        if (!options.noExpansions) {
          console.log('Building expansion pack bundles...')
          await builder.buildAllExpansionPacks({ clean: false })
        }
      }

      console.log('Build completed successfully!')
    } catch (error) {
      console.error('Build failed:', error.message)
      process.exit(1)
    }
  })

program
  .command('build:expansions')

@@ -67,72 +67,72 @@ program
  .action(async (options) => {
    const builder = new WebBuilder({
      rootDir: process.cwd()
    })

    try {
      if (options.expansion) {
        console.log(`Building expansion pack: ${options.expansion}`)
        await builder.buildExpansionPack(options.expansion, { clean: options.clean })
      } else {
        console.log('Building all expansion packs...')
        await builder.buildAllExpansionPacks({ clean: options.clean })
      }

      console.log('Expansion pack build completed successfully!')
    } catch (error) {
      console.error('Expansion pack build failed:', error.message)
      process.exit(1)
    }
  })

program
  .command('list:agents')
  .description('List all available agents')
  .action(async () => {
    const builder = new WebBuilder({ rootDir: process.cwd() })
    const agents = await builder.resolver.listAgents()
    console.log('Available agents:')
    agents.forEach(agent => console.log(` - ${agent}`))
  })

program
  .command('list:expansions')
  .description('List all available expansion packs')
  .action(async () => {
    const builder = new WebBuilder({ rootDir: process.cwd() })
    const expansions = await builder.listExpansionPacks()
    console.log('Available expansion packs:')
    expansions.forEach(expansion => console.log(` - ${expansion}`))
  })

program
  .command('validate')
  .description('Validate agent and team configurations')
  .action(async () => {
    const builder = new WebBuilder({ rootDir: process.cwd() })
    try {
      // Validate by attempting to build all agents and teams
      const agents = await builder.resolver.listAgents()
      const teams = await builder.resolver.listTeams()

      console.log('Validating agents...')
      for (const agent of agents) {
        await builder.resolver.resolveAgentDependencies(agent)
        console.log(` ✓ ${agent}`)
      }

      console.log('\nValidating teams...')
      for (const team of teams) {
        await builder.resolver.resolveTeamDependencies(team)
        console.log(` ✓ ${team}`)
      }

      console.log('\nAll configurations are valid!')
    } catch (error) {
      console.error('Validation failed:', error.message)
      process.exit(1)
    }
  })

program
  .command('upgrade')

@@ -141,12 +141,12 @@ program
  .option('--dry-run', 'Show what would be changed without making changes')
  .option('--no-backup', 'Skip creating backup (not recommended)')
  .action(async (options) => {
    const upgrader = new V3ToV4Upgrader()
    await upgrader.upgrade({
      projectPath: options.project,
      dryRun: options.dryRun,
      backup: options.backup
    })
  })

program.parse()
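The build handlers above read camelCase properties (options.clean, options.agentsOnly, options.teamsOnly) that Commander derives from kebab-case flags; the .option() declarations sit outside the visible hunks, so the flag names below are inferred. A minimal sketch of that mapping:

const { Command } = require('commander')

const demo = new Command()
demo
  .option('--clean', 'Clean output directories first')
  .option('--agents-only', 'Build only agent bundles')
  .option('--teams-only', 'Build only team bundles')

// Commander expects the full argv, including the node and script entries
demo.parse(['node', 'cli.js', '--clean', '--agents-only'])
console.log(demo.opts()) // { clean: true, agentsOnly: true }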
@@ -1,39 +1,39 @@
#!/usr/bin/env node

const { program } = require('commander')
const path = require('path')
const fs = require('fs').promises
const yaml = require('js-yaml')
const chalk = require('chalk')
const inquirer = require('inquirer')

// Handle both execution contexts (from root via npx or from installer directory)
let version
let installer
try {
  // Try installer context first (when run from tools/installer/)
  version = require('../package.json').version
  installer = require('../lib/installer')
} catch (e) {
  // Fall back to root context (when run via npx from GitHub)
  console.log(`Installer context not found (${e.message}), trying root context...`)
  try {
    version = require('../../../package.json').version
    installer = require('../../../tools/installer/lib/installer')
  } catch (e2) {
    console.error('Error: Could not load required modules. Please ensure you are running from the correct directory.')
    console.error('Debug info:', {
      __dirname,
      cwd: process.cwd(),
      error: e2.message
    })
    process.exit(1)
  }
}

program
  .version(version)
  .description('BMad Method installer - Universal AI agent framework for any domain')

program
  .command('install')

@@ -47,30 +47,30 @@ program
    try {
      if (!options.full && !options.expansionOnly) {
        // Interactive mode
        const answers = await promptInstallation()
        if (!answers._alreadyInstalled) {
          await installer.install(answers)
          process.exit(0)
        }
      } else {
        // Direct mode
        let installType = 'full'
        if (options.expansionOnly) installType = 'expansion-only'

        const config = {
          installType,
          directory: options.directory || '.',
          ides: (options.ide || []).filter(ide => ide !== 'other'),
          expansionPacks: options.expansionPacks || []
        }
        await installer.install(config)
        process.exit(0)
      }
    } catch (error) {
      console.error(chalk.red('Installation failed:'), error.message)
      process.exit(1)
    }
  })

program
  .command('update')

@@ -79,39 +79,38 @@ program
  .option('--dry-run', 'Show what would be updated without making changes')
  .action(async () => {
    try {
      await installer.update()
    } catch (error) {
      console.error(chalk.red('Update failed:'), error.message)
      process.exit(1)
    }
  })

program
  .command('list:expansions')
  .description('List available expansion packs')
  .action(async () => {
    try {
      await installer.listExpansionPacks()
    } catch (error) {
      console.error(chalk.red('Error:'), error.message)
      process.exit(1)
    }
  })

program
  .command('status')
  .description('Show installation status')
  .action(async () => {
    try {
      await installer.showStatus()
    } catch (error) {
      console.error(chalk.red('Error:'), error.message)
      process.exit(1)
    }
  })

async function promptInstallation () {
  // Display ASCII logo
  console.log(chalk.bold.cyan(`
██████╗ ███╗ ███╗ █████╗ ██████╗ ███╗ ███╗███████╗████████╗██╗ ██╗ ██████╗ ██████╗

@@ -120,12 +119,12 @@ async function promptInstallation() {
██╔══██╗██║╚██╔╝██║██╔══██║██║ ██║╚════╝██║╚██╔╝██║██╔══╝ ██║ ██╔══██║██║ ██║██║ ██║
██████╔╝██║ ╚═╝ ██║██║ ██║██████╔╝ ██║ ╚═╝ ██║███████╗ ██║ ██║ ██║╚██████╔╝██████╔╝
╚═════╝ ╚═╝ ╚═╝╚═╝ ╚═╝╚═════╝ ╚═╝ ╚═╝╚══════╝ ╚═╝ ╚═╝ ╚═╝ ╚═════╝ ╚═════╝
`))

  console.log(chalk.bold.magenta('🚀 Universal AI Agent Framework for Any Domain'))
  console.log(chalk.bold.blue(`✨ Installer v${version}\n`))

  const answers = {}

  // Ask for installation directory first
  const { directory } = await inquirer.prompt([

@@ -135,72 +134,72 @@ async function promptInstallation() {
      message: 'Enter the full path to your project directory where BMad should be installed:',
      validate: (input) => {
        if (!input.trim()) {
          return 'Please enter a valid project path'
        }
        return true
      }
    }
  ])
  answers.directory = directory

  // Detect existing installations
  const installDir = path.resolve(directory)
  const state = await installer.detectInstallationState(installDir)

  // Check for existing expansion packs
  const existingExpansionPacks = state.expansionPacks || {}

  // Get available expansion packs
  const availableExpansionPacks = await installer.getAvailableExpansionPacks()

  // Build choices list
  const choices = []

  // Load core config to get short-title
  const coreConfigPath = path.join(__dirname, '..', '..', '..', 'bmad-core', 'core-config.yaml')
  const coreConfig = yaml.load(await fs.readFile(coreConfigPath, 'utf8'))
  const coreShortTitle = coreConfig['short-title'] || 'BMad Agile Core System'

  // Add BMad core option
  let bmadOptionText
  if (state.type === 'v4_existing') {
    const currentVersion = state.manifest?.version || 'unknown'
    const newVersion = version // Always use package.json version
    const versionInfo = currentVersion === newVersion
      ? `(v${currentVersion} - reinstall)`
      : `(v${currentVersion} → v${newVersion})`
    bmadOptionText = `Update ${coreShortTitle} ${versionInfo} .bmad-core`
  } else {
    bmadOptionText = `${coreShortTitle} (v${version}) .bmad-core`
  }

  choices.push({
    name: bmadOptionText,
    value: 'bmad-core',
    checked: true
  })

  // Add expansion pack options
  for (const pack of availableExpansionPacks) {
    const existing = existingExpansionPacks[pack.id]
    let packOptionText

    if (existing) {
      const currentVersion = existing.manifest?.version || 'unknown'
      const newVersion = pack.version
      const versionInfo = currentVersion === newVersion
        ? `(v${currentVersion} - reinstall)`
        : `(v${currentVersion} → v${newVersion})`
      packOptionText = `Update ${pack.shortTitle} ${versionInfo} .${pack.id}`
    } else {
      packOptionText = `${pack.shortTitle} (v${pack.version}) .${pack.id}`
    }

    choices.push({
      name: packOptionText,
      value: pack.id,
      checked: false
    })
  }

  // Ask what to install

@@ -209,24 +208,24 @@ async function promptInstallation() {
      type: 'checkbox',
      name: 'selectedItems',
      message: 'Select what to install/update (use space to select, enter to continue):',
      choices,
      validate: (selected) => {
        if (selected.length === 0) {
          return 'Please select at least one item to install'
        }
        return true
      }
    }
  ])

  // Process selections
  answers.installType = selectedItems.includes('bmad-core') ? 'full' : 'expansion-only'
  answers.expansionPacks = selectedItems.filter(item => item !== 'bmad-core')

  // Ask sharding questions if installing BMad core
  if (selectedItems.includes('bmad-core')) {
    console.log(chalk.cyan('\n📋 Document Organization Settings'))
    console.log(chalk.dim('Configure how your project documentation should be organized.\n'))

    // Ask about PRD sharding
    const { prdSharded } = await inquirer.prompt([

@@ -236,8 +235,8 @@ async function promptInstallation() {
        message: 'Will the PRD (Product Requirements Document) be sharded into multiple files?',
        default: true
      }
    ])
    answers.prdSharded = prdSharded

    // Ask about architecture sharding
    const { architectureSharded } = await inquirer.prompt([

@@ -247,17 +246,17 @@ async function promptInstallation() {
        message: 'Will the architecture documentation be sharded into multiple files?',
        default: true
      }
    ])
    answers.architectureSharded = architectureSharded

    // Show warning if architecture sharding is disabled
    if (!architectureSharded) {
      console.log(chalk.yellow.bold('\n⚠️ IMPORTANT: Architecture Sharding Disabled'))
      console.log(chalk.yellow('With architecture sharding disabled, you should still create the files listed'))
      console.log(chalk.yellow('in devLoadAlwaysFiles (like coding-standards.md, tech-stack.md, source-tree.md)'))
      console.log(chalk.yellow('as these are used by the dev agent at runtime.'))
      console.log(chalk.yellow('\nAlternatively, you can remove these files from the devLoadAlwaysFiles list'))
      console.log(chalk.yellow('in your core-config.yaml after installation.'))

      const { acknowledge } = await inquirer.prompt([
        {

@@ -266,25 +265,25 @@ async function promptInstallation() {
          message: 'Do you acknowledge this requirement and want to proceed?',
          default: false
        }
      ])

      if (!acknowledge) {
        console.log(chalk.red('Installation cancelled.'))
        process.exit(0)
      }
    }
  }

  // Ask for IDE configuration
  let ides = []
  let ideSelectionComplete = false

  while (!ideSelectionComplete) {
    console.log(chalk.cyan('\n🛠️ IDE Configuration'))
    console.log(chalk.bold.yellow.bgRed(' ⚠️ IMPORTANT: This is a MULTISELECT! Use SPACEBAR to toggle each IDE! '))
    console.log(chalk.bold.magenta('🔸 Use arrow keys to navigate'))
    console.log(chalk.bold.magenta('🔸 Use SPACEBAR to select/deselect IDEs'))
    console.log(chalk.bold.magenta('🔸 Press ENTER when finished selecting\n'))

    const ideResponse = await inquirer.prompt([
      {

@@ -302,9 +301,9 @@ async function promptInstallation() {
          { name: 'Github Copilot', value: 'github-copilot' }
        ]
      }
    ])

    ides = ideResponse.ides

    // Confirm no IDE selection if none selected
    if (ides.length === 0) {

@@ -315,24 +314,24 @@ async function promptInstallation() {
          message: chalk.red('⚠️ You have NOT selected any IDEs. This means NO IDE integration will be set up. Is this correct?'),
          default: false
        }
      ])

      if (!confirmNoIde) {
        console.log(chalk.bold.red('\n🔄 Returning to IDE selection. Remember to use SPACEBAR to select IDEs!\n'))
        continue // Go back to IDE selection only
      }
    }

    ideSelectionComplete = true
  }

  // Use selected IDEs directly
  answers.ides = ides

  // Configure GitHub Copilot immediately if selected
  if (ides.includes('github-copilot')) {
    console.log(chalk.cyan('\n🔧 GitHub Copilot Configuration'))
    console.log(chalk.dim('BMad works best with specific VS Code settings for optimal agent experience.\n'))

    const { configChoice } = await inquirer.prompt([
      {

@@ -355,9 +354,9 @@ async function promptInstallation() {
        ],
        default: 'defaults'
      }
    ])

    answers.githubCopilotConfig = { configChoice }
  }

  // Ask for web bundles installation

@@ -368,11 +367,11 @@ async function promptInstallation() {
      message: 'Would you like to include pre-built web bundles? (standalone files for ChatGPT, Claude, Gemini)',
      default: false
    }
  ])

  if (includeWebBundles) {
    console.log(chalk.cyan('\n📦 Web bundles are standalone files perfect for web AI platforms.'))
    console.log(chalk.dim(' You can choose different teams/agents than your IDE installation.\n'))

    const { webBundleType } = await inquirer.prompt([
      {

@@ -398,13 +397,13 @@ async function promptInstallation() {
          }
        ]
      }
    ])

    answers.webBundleType = webBundleType

    // If specific teams, let them choose which teams
    if (webBundleType === 'teams' || webBundleType === 'custom') {
      const teams = await installer.getAvailableTeams()
      const { selectedTeams } = await inquirer.prompt([
        {
          type: 'checkbox',

@@ -417,13 +416,13 @@ async function promptInstallation() {
          })),
          validate: (answer) => {
            if (answer.length < 1) {
              return 'You must select at least one team.'
            }
            return true
          }
        }
      ])
      answers.selectedWebBundleTeams = selectedTeams
    }

    // If custom selection, also ask about individual agents

@@ -435,8 +434,8 @@ async function promptInstallation() {
          message: 'Also include individual agent bundles?',
          default: true
        }
      ])
      answers.includeIndividualAgents = includeIndividualAgents
    }

    const { webBundlesDirectory } = await inquirer.prompt([

@@ -447,23 +446,23 @@ async function promptInstallation() {
        default: `${answers.directory}/web-bundles`,
        validate: (input) => {
          if (!input.trim()) {
            return 'Please enter a valid directory path'
          }
          return true
        }
      }
    ])
    answers.webBundlesDirectory = webBundlesDirectory
  }

  answers.includeWebBundles = includeWebBundles

  return answers
}

program.parse(process.argv)

// Show help if no command provided
if (!process.argv.slice(2).length) {
  program.outputHelp()
}
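The try/catch require chain at the top of this script is the load-bearing piece: the same bin file has to resolve its modules whether it runs from tools/installer/ or from a repository checkout via npx. A generic sketch of the pattern (the helper name and candidate list are hypothetical; the real script hard-codes exactly two contexts):

// Resolve a module from whichever execution context the script was launched in
function requireFromEitherContext (candidates) {
  for (const candidate of candidates) {
    try {
      return require(candidate)
    } catch {
      // Fall through and try the next candidate context
    }
  }
  throw new Error(`Could not resolve any of: ${candidates.join(', ')}`)
}

const installer = requireFromEitherContext([
  '../lib/installer',
  '../../../tools/installer/lib/installer'
])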
@@ -1,91 +1,91 @@
const fs = require('fs-extra')
const path = require('path')
const yaml = require('js-yaml')
const { extractYamlFromAgent } = require('../../lib/yaml-utils')

class ConfigLoader {
  constructor () {
    this.configPath = path.join(__dirname, '..', 'config', 'install.config.yaml')
    this.config = null
  }

  async load () {
    if (this.config) return this.config

    try {
      const configContent = await fs.readFile(this.configPath, 'utf8')
      this.config = yaml.load(configContent)
      return this.config
    } catch (error) {
      throw new Error(`Failed to load configuration: ${error.message}`)
    }
  }

  async getInstallationOptions () {
    const config = await this.load()
    return config['installation-options'] || {}
  }

  async getAvailableAgents () {
    const agentsDir = path.join(this.getBmadCorePath(), 'agents')

    try {
      const entries = await fs.readdir(agentsDir, { withFileTypes: true })
      const agents = []

      for (const entry of entries) {
        if (entry.isFile() && entry.name.endsWith('.md')) {
          const agentPath = path.join(agentsDir, entry.name)
          const agentId = path.basename(entry.name, '.md')

          try {
            const agentContent = await fs.readFile(agentPath, 'utf8')

            // Extract YAML block from agent file
            const yamlContentText = extractYamlFromAgent(agentContent)
            if (yamlContentText) {
              const yamlContent = yaml.load(yamlContentText)
              const agentConfig = yamlContent.agent || {}

              agents.push({
                id: agentId,
                name: agentConfig.title || agentConfig.name || agentId,
                file: `bmad-core/agents/${entry.name}`,
                description: agentConfig.whenToUse || 'No description available'
              })
            }
          } catch (error) {
            console.warn(`Failed to read agent ${entry.name}: ${error.message}`)
          }
        }
      }

      // Sort agents by name for consistent display
      agents.sort((a, b) => a.name.localeCompare(b.name))

      return agents
    } catch (error) {
      console.warn(`Failed to read agents directory: ${error.message}`)
      return []
    }
  }

  async getAvailableExpansionPacks () {
    const expansionPacksDir = path.join(this.getBmadCorePath(), '..', 'expansion-packs')

    try {
      const entries = await fs.readdir(expansionPacksDir, { withFileTypes: true })
      const expansionPacks = []

      for (const entry of entries) {
        if (entry.isDirectory() && !entry.name.startsWith('.')) {
          const packPath = path.join(expansionPacksDir, entry.name)
          const configPath = path.join(packPath, 'config.yaml')

          try {
            // Read config.yaml
            const configContent = await fs.readFile(configPath, 'utf8')
            const config = yaml.load(configContent)

            expansionPacks.push({
              id: entry.name,

@@ -94,49 +94,49 @@ class ConfigLoader {
              fullDescription: config.description || config['short-title'] || 'No description available',
              version: config.version || '1.0.0',
              author: config.author || 'BMad Team',
              packPath,
              dependencies: config.dependencies?.agents || []
            })
          } catch (error) {
            // Fallback if config.yaml doesn't exist or can't be read
            console.warn(`Failed to read config for expansion pack ${entry.name}: ${error.message}`)

            // Try to derive info from directory name as fallback
            const name = entry.name
              .split('-')
              .map(word => word.charAt(0).toUpperCase() + word.slice(1))
              .join(' ')

            expansionPacks.push({
              id: entry.name,
              name,
              description: 'No description available',
              fullDescription: 'No description available',
              version: '1.0.0',
              author: 'BMad Team',
              packPath,
              dependencies: []
            })
          }
        }
      }

      return expansionPacks
    } catch (error) {
      console.warn(`Failed to read expansion packs directory: ${error.message}`)
      return []
    }
  }

  async getAgentDependencies (agentId) {
    // Use DependencyResolver to dynamically parse agent dependencies
    const DependencyResolver = require('../../lib/dependency-resolver')
    const resolver = new DependencyResolver(path.join(__dirname, '..', '..', '..'))

    const agentDeps = await resolver.resolveAgentDependencies(agentId)

    // Convert to flat list of file paths
    const depPaths = []

    // Core files and utilities are included automatically by DependencyResolver

@@ -144,49 +144,49 @@ class ConfigLoader {

    // Add all resolved resources
    for (const resource of agentDeps.resources) {
      const filePath = `.bmad-core/${resource.type}/${resource.id}.md`
      if (!depPaths.includes(filePath)) {
        depPaths.push(filePath)
      }
    }

    return depPaths
  }

  async getIdeConfiguration (ide) {
    const config = await this.load()
    const ideConfigs = config['ide-configurations'] || {}
    return ideConfigs[ide] || null
  }

  getBmadCorePath () {
    // Get the path to bmad-core relative to the installer (now under tools)
    return path.join(__dirname, '..', '..', '..', 'bmad-core')
  }

  getDistPath () {
    // Get the path to dist directory relative to the installer
    return path.join(__dirname, '..', '..', '..', 'dist')
  }

  getAgentPath (agentId) {
    return path.join(this.getBmadCorePath(), 'agents', `${agentId}.md`)
  }

  async getAvailableTeams () {
    const teamsDir = path.join(this.getBmadCorePath(), 'agent-teams')

    try {
      const entries = await fs.readdir(teamsDir, { withFileTypes: true })
      const teams = []

      for (const entry of entries) {
        if (entry.isFile() && entry.name.endsWith('.yaml')) {
          const teamPath = path.join(teamsDir, entry.name)

          try {
            const teamContent = await fs.readFile(teamPath, 'utf8')
            const teamConfig = yaml.load(teamContent)

            if (teamConfig.bundle) {
              teams.push({

@@ -194,60 +194,60 @@ class ConfigLoader {
                name: teamConfig.bundle.name || entry.name,
                description: teamConfig.bundle.description || 'Team configuration',
                icon: teamConfig.bundle.icon || '📋'
              })
            }
          } catch (error) {
            console.warn(`Warning: Could not load team config ${entry.name}: ${error.message}`)
          }
        }
      }

      return teams
    } catch (error) {
      console.warn(`Warning: Could not scan teams directory: ${error.message}`)
      return []
    }
  }

  getTeamPath (teamId) {
    return path.join(this.getBmadCorePath(), 'agent-teams', `${teamId}.yaml`)
  }

  async getTeamDependencies (teamId) {
    // Use DependencyResolver to dynamically parse team dependencies
    const DependencyResolver = require('../../lib/dependency-resolver')
    const resolver = new DependencyResolver(path.join(__dirname, '..', '..', '..'))

    try {
      const teamDeps = await resolver.resolveTeamDependencies(teamId)

      // Convert to flat list of file paths
      const depPaths = []

      // Add team config file
      depPaths.push(`.bmad-core/agent-teams/${teamId}.yaml`)

      // Add all agents
      for (const agent of teamDeps.agents) {
        const filePath = `.bmad-core/agents/${agent.id}.md`
        if (!depPaths.includes(filePath)) {
          depPaths.push(filePath)
        }
      }

      // Add all resolved resources
      for (const resource of teamDeps.resources) {
        const filePath = `.bmad-core/${resource.type}/${resource.id}.${resource.type === 'workflows' ? 'yaml' : 'md'}`
        if (!depPaths.includes(filePath)) {
          depPaths.push(filePath)
        }
      }

      return depPaths
    } catch (error) {
      throw new Error(`Failed to resolve team dependencies for ${teamId}: ${error.message}`)
    }
  }
}

module.exports = new ConfigLoader()
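Because config-loader exports new ConfigLoader() rather than the class, every require() sees the same instance and shares its cached config (the `if (this.config) return this.config` guard in load()). A hypothetical consumer, with the require path assumed:

const configLoader = require('./config-loader') // path assumed

async function listPacks () {
  const packs = await configLoader.getAvailableExpansionPacks()
  for (const pack of packs) {
    console.log(`${pack.id} (v${pack.version})`)
  }
}

listPacks()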
@@ -1,53 +1,53 @@
const fs = require('fs-extra')
const path = require('path')
const crypto = require('crypto')
const yaml = require('js-yaml')
const chalk = require('chalk')
const { createReadStream, createWriteStream, promises: fsPromises } = require('fs')
const { pipeline } = require('stream/promises')
const resourceLocator = require('./resource-locator')

class FileManager {
  constructor () {
    this.manifestDir = '.bmad-core'
    this.manifestFile = 'install-manifest.yaml'
  }

  async copyFile (source, destination) {
    try {
      await fs.ensureDir(path.dirname(destination))

      // Use streaming for large files (> 10MB)
      const stats = await fs.stat(source)
      if (stats.size > 10 * 1024 * 1024) {
        await pipeline(
          createReadStream(source),
          createWriteStream(destination)
        )
      } else {
        await fs.copy(source, destination)
      }
      return true
    } catch (error) {
      console.error(chalk.red(`Failed to copy ${source}:`), error.message)
      return false
    }
  }

  async copyDirectory (source, destination) {
    try {
      await fs.ensureDir(destination)

      // Use streaming copy for large directories
      const files = await resourceLocator.findFiles('**/*', {
        cwd: source,
        nodir: true
      })

      // Process files in batches to avoid memory issues
      const batchSize = 50
      for (let i = 0; i < files.length; i += batchSize) {
        const batch = files.slice(i, i + batchSize)
        await Promise.all(
          batch.map(file =>
            this.copyFile(
@@ -55,75 +55,75 @@ class FileManager {
              path.join(destination, file)
            )
          )
        )
      }
      return true
    } catch (error) {
      console.error(
        chalk.red(`Failed to copy directory ${source}:`),
        error.message
      )
      return false
    }
  }

  async copyGlobPattern (pattern, sourceDir, destDir, rootValue = null) {
    const files = await resourceLocator.findFiles(pattern, { cwd: sourceDir })
    const copied = []

    for (const file of files) {
      const sourcePath = path.join(sourceDir, file)
      const destPath = path.join(destDir, file)

      // Use root replacement if rootValue is provided and file needs it
      const needsRootReplacement = rootValue && (file.endsWith('.md') || file.endsWith('.yaml') || file.endsWith('.yml'))

      let success = false
      if (needsRootReplacement) {
        success = await this.copyFileWithRootReplacement(sourcePath, destPath, rootValue)
      } else {
        success = await this.copyFile(sourcePath, destPath)
      }

      if (success) {
        copied.push(file)
      }
    }

    return copied
  }

  async calculateFileHash (filePath) {
    try {
      // Use streaming for hash calculation to reduce memory usage
      const stream = createReadStream(filePath)
      const hash = crypto.createHash('sha256')

      for await (const chunk of stream) {
        hash.update(chunk)
      }

      return hash.digest('hex').slice(0, 16)
    } catch (error) {
      return null
    }
  }

  async createManifest (installDir, config, files) {
    const manifestPath = path.join(
      installDir,
      this.manifestDir,
      this.manifestFile
    )

    // Read version from package.json
    let coreVersion = 'unknown'
    try {
      const packagePath = path.join(__dirname, '..', '..', '..', 'package.json')
      const packageJson = require(packagePath)
      coreVersion = packageJson.version
    } catch (error) {
      console.warn("Could not read version from package.json, using 'unknown'")
    }

    const manifest = {
@@ -133,279 +133,279 @@ class FileManager {
      agent: config.agent || null,
      ides_setup: config.ides || [],
      expansion_packs: config.expansionPacks || [],
      files: []
    }

    // Add file information
    for (const file of files) {
      const filePath = path.join(installDir, file)
      const hash = await this.calculateFileHash(filePath)

      manifest.files.push({
        path: file,
        hash,
        modified: false
      })
    }

    // Write manifest
    await fs.ensureDir(path.dirname(manifestPath))
    await fs.writeFile(manifestPath, yaml.dump(manifest, { indent: 2 }))

    return manifest
  }

  async readManifest (installDir) {
    const manifestPath = path.join(
      installDir,
      this.manifestDir,
      this.manifestFile
    )

    try {
      const content = await fs.readFile(manifestPath, 'utf8')
      return yaml.load(content)
    } catch (error) {
      return null
    }
  }

  async readExpansionPackManifest (installDir, packId) {
    const manifestPath = path.join(
      installDir,
      `.${packId}`,
      this.manifestFile
    )

    try {
      const content = await fs.readFile(manifestPath, 'utf8')
      return yaml.load(content)
    } catch (error) {
      return null
    }
  }

  async checkModifiedFiles (installDir, manifest) {
    const modified = []

    for (const file of manifest.files) {
      const filePath = path.join(installDir, file.path)
      const currentHash = await this.calculateFileHash(filePath)

      if (currentHash && currentHash !== file.hash) {
        modified.push(file.path)
      }
    }

    return modified
  }

  async checkFileIntegrity (installDir, manifest) {
    const result = {
      missing: [],
      modified: []
    }

    for (const file of manifest.files) {
      const filePath = path.join(installDir, file.path)

      // Skip checking the manifest file itself - it will always be different due to timestamps
      if (file.path.endsWith('install-manifest.yaml')) {
        continue
      }

      if (!(await this.pathExists(filePath))) {
        result.missing.push(file.path)
      } else {
        const currentHash = await this.calculateFileHash(filePath)
        if (currentHash && currentHash !== file.hash) {
          result.modified.push(file.path)
        }
      }
    }

    return result
  }

  async backupFile (filePath) {
    const backupPath = filePath + '.bak'
    let counter = 1
    let finalBackupPath = backupPath

    // Find a unique backup filename
    while (await fs.pathExists(finalBackupPath)) {
      finalBackupPath = `${filePath}.bak${counter}`
      counter++
    }

    await fs.copy(filePath, finalBackupPath)
    return finalBackupPath
  }

  async ensureDirectory (dirPath) {
    try {
      await fs.ensureDir(dirPath)
      return true
    } catch (error) {
      throw error
    }
  }

  async pathExists (filePath) {
    return fs.pathExists(filePath)
  }

  async readFile (filePath) {
    return fs.readFile(filePath, 'utf8')
  }

  async writeFile (filePath, content) {
    await fs.ensureDir(path.dirname(filePath))
    await fs.writeFile(filePath, content)
  }

  async removeDirectory (dirPath) {
    await fs.remove(dirPath)
  }

  async createExpansionPackManifest (installDir, packId, config, files) {
    const manifestPath = path.join(
      installDir,
      `.${packId}`,
      this.manifestFile
    )

    const manifest = {
      version: config.expansionPackVersion || require('../../../package.json').version,
      installed_at: new Date().toISOString(),
      install_type: config.installType,
      expansion_pack_id: config.expansionPackId,
      expansion_pack_name: config.expansionPackName,
      ides_setup: config.ides || [],
      files: []
    }

    // Add file information
    for (const file of files) {
      const filePath = path.join(installDir, file)
      const hash = await this.calculateFileHash(filePath)

      manifest.files.push({
        path: file,
        hash,
        modified: false
      })
    }

    // Write manifest
    await fs.ensureDir(path.dirname(manifestPath))
    await fs.writeFile(manifestPath, yaml.dump(manifest, { indent: 2 }))

    return manifest
  }

  async modifyCoreConfig (installDir, config) {
    const coreConfigPath = path.join(installDir, '.bmad-core', 'core-config.yaml')

    try {
      // Read the existing core-config.yaml
      const coreConfigContent = await fs.readFile(coreConfigPath, 'utf8')
      const coreConfig = yaml.load(coreConfigContent)

      // Modify sharding settings if provided
      if (config.prdSharded !== undefined) {
        coreConfig.prd.prdSharded = config.prdSharded
      }

      if (config.architectureSharded !== undefined) {
        coreConfig.architecture.architectureSharded = config.architectureSharded
      }

      // Write back the modified config
      await fs.writeFile(coreConfigPath, yaml.dump(coreConfig, { indent: 2 }))

      return true
    } catch (error) {
      console.error(chalk.red('Failed to modify core-config.yaml:'), error.message)
      return false
    }
  }

  async copyFileWithRootReplacement (source, destination, rootValue) {
    try {
      // Check file size to determine if we should stream
      const stats = await fs.stat(source)

      if (stats.size > 5 * 1024 * 1024) { // 5MB threshold
        // Use streaming for large files
        const { Transform } = require('stream')
        const replaceStream = new Transform({
          transform (chunk, encoding, callback) {
            const modified = chunk.toString().replace(/\{root\}/g, rootValue)
            callback(null, modified)
          }
        })

        await this.ensureDirectory(path.dirname(destination))
        await pipeline(
          createReadStream(source, { encoding: 'utf8' }),
          replaceStream,
          createWriteStream(destination, { encoding: 'utf8' })
        )
      } else {
        // Regular approach for smaller files
        const content = await fsPromises.readFile(source, 'utf8')
        const updatedContent = content.replace(/\{root\}/g, rootValue)
        await this.ensureDirectory(path.dirname(destination))
        await fsPromises.writeFile(destination, updatedContent, 'utf8')
      }

      return true
    } catch (error) {
      console.error(chalk.red(`Failed to copy ${source} with root replacement:`), error.message)
      return false
    }
  }

  async copyDirectoryWithRootReplacement (source, destination, rootValue, fileExtensions = ['.md', '.yaml', '.yml']) {
    try {
      await this.ensureDirectory(destination)

      // Get all files in source directory
      const files = await resourceLocator.findFiles('**/*', {
        cwd: source,
        nodir: true
      })

      let replacedCount = 0

      for (const file of files) {
        const sourcePath = path.join(source, file)
        const destPath = path.join(destination, file)

        // Check if this file type should have {root} replacement
        const shouldReplace = fileExtensions.some(ext => file.endsWith(ext))

        if (shouldReplace) {
          if (await this.copyFileWithRootReplacement(sourcePath, destPath, rootValue)) {
            replacedCount++
          }
        } else {
          // Regular copy for files that don't need replacement
          await this.copyFile(sourcePath, destPath)
        }
      }

      if (replacedCount > 0) {
        console.log(chalk.dim(`  Processed ${replacedCount} files with {root} replacement`))
      }

      return true
    } catch (error) {
      console.error(chalk.red(`Failed to copy directory ${source} with root replacement:`), error.message)
      return false
    }
  }
}

module.exports = new FileManager()
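A minimal usage sketch for the FileManager singleton above; the directory arguments and root value are illustrative assumptions:

```js
// Illustrative only - paths and the root value are assumptions
const fileManager = require('./file-manager')

async function installCore (sourceDir, installDir) {
  // Copy templates, substituting {root} with the installed core directory
  await fileManager.copyDirectoryWithRootReplacement(
    sourceDir,
    installDir,
    '.bmad-core'
  )

  // Compare current hashes against the recorded manifest to detect local edits
  const manifest = await fileManager.readManifest(installDir)
  if (manifest) {
    const modified = await fileManager.checkModifiedFiles(installDir, manifest)
    console.log(`${modified.length} file(s) changed since install`)
  }
}
```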
@@ -3,225 +3,225 @@
 * Reduces duplication and provides shared methods
 */

const path = require('path')
const fs = require('fs-extra')
const yaml = require('js-yaml')
const chalk = require('chalk')
const fileManager = require('./file-manager')
const resourceLocator = require('./resource-locator')
const { extractYamlFromAgent } = require('../../lib/yaml-utils')

class BaseIdeSetup {
  constructor () {
    this._agentCache = new Map()
    this._pathCache = new Map()
  }

  /**
   * Get all agent IDs with caching
   */
  async getAllAgentIds (installDir) {
    const cacheKey = `all-agents:${installDir}`
    if (this._agentCache.has(cacheKey)) {
      return this._agentCache.get(cacheKey)
    }

    const allAgents = new Set()

    // Get core agents
    const coreAgents = await this.getCoreAgentIds(installDir)
    coreAgents.forEach(id => allAgents.add(id))

    // Get expansion pack agents
    const expansionPacks = await this.getInstalledExpansionPacks(installDir)
    for (const pack of expansionPacks) {
      const packAgents = await this.getExpansionPackAgents(pack.path)
      packAgents.forEach(id => allAgents.add(id))
    }

    const result = Array.from(allAgents)
    this._agentCache.set(cacheKey, result)
    return result
  }

  /**
   * Get core agent IDs
   */
  async getCoreAgentIds (installDir) {
    const coreAgents = []
    const corePaths = [
      path.join(installDir, '.bmad-core', 'agents'),
      path.join(installDir, 'bmad-core', 'agents')
    ]

    for (const agentsDir of corePaths) {
      if (await fileManager.pathExists(agentsDir)) {
        const files = await resourceLocator.findFiles('*.md', { cwd: agentsDir })
        coreAgents.push(...files.map(file => path.basename(file, '.md')))
        break // Use first found
      }
    }

    return coreAgents
  }

  /**
   * Find agent path with caching
   */
  async findAgentPath (agentId, installDir) {
    const cacheKey = `agent-path:${agentId}:${installDir}`
    if (this._pathCache.has(cacheKey)) {
      return this._pathCache.get(cacheKey)
    }

    // Use resource locator for efficient path finding
    let agentPath = await resourceLocator.getAgentPath(agentId)

    if (!agentPath) {
      // Check installation-specific paths
      const possiblePaths = [
        path.join(installDir, '.bmad-core', 'agents', `${agentId}.md`),
        path.join(installDir, 'bmad-core', 'agents', `${agentId}.md`),
        path.join(installDir, 'common', 'agents', `${agentId}.md`)
      ]

      for (const testPath of possiblePaths) {
        if (await fileManager.pathExists(testPath)) {
          agentPath = testPath
          break
        }
      }
    }

    if (agentPath) {
      this._pathCache.set(cacheKey, agentPath)
    }
    return agentPath
  }

  /**
   * Get agent title from metadata
   */
  async getAgentTitle (agentId, installDir) {
    const agentPath = await this.findAgentPath(agentId, installDir)
    if (!agentPath) return agentId

    try {
      const content = await fileManager.readFile(agentPath)
      const yamlContent = extractYamlFromAgent(content)
      if (yamlContent) {
        const metadata = yaml.load(yamlContent)
        return metadata.agent_name || agentId
      }
    } catch (error) {
      // Fallback to agent ID
    }
    return agentId
  }

  /**
   * Get installed expansion packs
   */
  async getInstalledExpansionPacks (installDir) {
    const cacheKey = `expansion-packs:${installDir}`
    if (this._pathCache.has(cacheKey)) {
      return this._pathCache.get(cacheKey)
    }

    const expansionPacks = []

    // Check for dot-prefixed expansion packs
    const dotExpansions = await resourceLocator.findFiles('.bmad-*', { cwd: installDir })

    for (const dotExpansion of dotExpansions) {
      if (dotExpansion !== '.bmad-core') {
        const packPath = path.join(installDir, dotExpansion)
        const packName = dotExpansion.substring(1) // remove the dot
        expansionPacks.push({
          name: packName,
          path: packPath
        })
      }
    }

    // Check other dot folders that have config.yaml
    const allDotFolders = await resourceLocator.findFiles('.*', { cwd: installDir })
    for (const folder of allDotFolders) {
      if (!folder.startsWith('.bmad-') && folder !== '.bmad-core') {
        const packPath = path.join(installDir, folder)
        const configPath = path.join(packPath, 'config.yaml')
        if (await fileManager.pathExists(configPath)) {
          expansionPacks.push({
            name: folder.substring(1), // remove the dot
            path: packPath
          })
        }
      }
    }

    this._pathCache.set(cacheKey, expansionPacks)
    return expansionPacks
  }

  /**
   * Get expansion pack agents
   */
  async getExpansionPackAgents (packPath) {
    const agentsDir = path.join(packPath, 'agents')
    if (!(await fileManager.pathExists(agentsDir))) {
      return []
    }

    const agentFiles = await resourceLocator.findFiles('*.md', { cwd: agentsDir })
    return agentFiles.map(file => path.basename(file, '.md'))
  }

  /**
   * Create agent rule content (shared logic)
   */
  async createAgentRuleContent (agentId, agentPath, installDir, format = 'mdc') {
    const agentContent = await fileManager.readFile(agentPath)
    const agentTitle = await this.getAgentTitle(agentId, installDir)
    const yamlContent = extractYamlFromAgent(agentContent)

    let content = ''

    if (format === 'mdc') {
      // MDC format for Cursor
      content = '---\n'
      content += 'description: \n'
      content += 'globs: []\n'
      content += 'alwaysApply: false\n'
      content += '---\n\n'
      content += `# ${agentId.toUpperCase()} Agent Rule\n\n`
      content += `This rule is triggered when the user types \`@${agentId}\` and activates the ${agentTitle} agent persona.\n\n`
      content += '## Agent Activation\n\n'
      content += 'CRITICAL: Read the full YAML, start activation to alter your state of being, follow startup section instructions, stay in this being until told to exit this mode:\n\n'
      content += '```yaml\n'
      content += yamlContent || agentContent.replace(/^#.*$/m, '').trim()
      content += '\n```\n\n'
      content += '## File Reference\n\n'
      const relativePath = path.relative(installDir, agentPath).replace(/\\/g, '/')
      content += `The complete agent definition is available in [${relativePath}](mdc:${relativePath}).\n\n`
      content += '## Usage\n\n'
      content += `When the user types \`@${agentId}\`, activate this ${agentTitle} persona and follow all instructions defined in the YAML configuration above.\n`
    } else if (format === 'claude') {
      // Claude Code format
      content = `# /${agentId} Command\n\n`
      content += 'When this command is used, adopt the following agent persona:\n\n'
      content += agentContent
    }

    return content
  }

  /**
   * Clear all caches
   */
  clearCache () {
    this._agentCache.clear()
    this._pathCache.clear()
  }
}

module.exports = BaseIdeSetup
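Since BaseIdeSetup is exported as a class rather than a singleton, IDE-specific setups are expected to extend it. A hypothetical subclass sketch; the class name, setup method, and output directory are illustrative assumptions:

```js
// Hypothetical subclass - name, method, and output location are assumptions
const path = require('path')
const BaseIdeSetup = require('./ide-base')
const fileManager = require('./file-manager')

class CursorSetup extends BaseIdeSetup {
  async setup (installDir) {
    for (const agentId of await this.getAllAgentIds(installDir)) {
      const agentPath = await this.findAgentPath(agentId, installDir)
      if (!agentPath) continue
      // Render the shared MDC rule and drop it where Cursor looks for rules
      const rule = await this.createAgentRuleContent(agentId, agentPath, installDir, 'mdc')
      await fileManager.writeFile(
        path.join(installDir, '.cursor', 'rules', `${agentId}.mdc`),
        rule
      )
    }
  }
}

module.exports = CursorSetup
```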
Two file diffs suppressed because they are too large.
@@ -3,22 +3,22 @@
 * Helps identify memory leaks and optimize resource usage
 */

const v8 = require('v8')

class MemoryProfiler {
  constructor () {
    this.checkpoints = []
    this.startTime = Date.now()
    this.peakMemory = 0
  }

  /**
   * Create a memory checkpoint
   * @param {string} label - Label for this checkpoint
   */
  checkpoint (label) {
    const memUsage = process.memoryUsage()
    const heapStats = v8.getHeapStatistics()

    const checkpoint = {
      label,
@@ -40,33 +40,33 @@ class MemoryProfiler {
      raw: {
        heapUsed: memUsage.heapUsed
      }
    }

    // Track peak memory
    if (memUsage.heapUsed > this.peakMemory) {
      this.peakMemory = memUsage.heapUsed
    }

    this.checkpoints.push(checkpoint)
    return checkpoint
  }

  /**
   * Force garbage collection (requires --expose-gc flag)
   */
  forceGC () {
    if (global.gc) {
      global.gc()
      return true
    }
    return false
  }

  /**
   * Get memory usage summary
   */
  getSummary () {
    const currentMemory = process.memoryUsage()

    return {
      currentUsage: {
@@ -77,36 +77,36 @@ class MemoryProfiler {
      peakMemory: this.formatBytes(this.peakMemory),
      totalCheckpoints: this.checkpoints.length,
      runTime: `${((Date.now() - this.startTime) / 1000).toFixed(2)}s`
    }
  }

  /**
   * Get detailed report of memory usage
   */
  getDetailedReport () {
    const summary = this.getSummary()
    const memoryGrowth = this.calculateMemoryGrowth()

    return {
      summary,
      memoryGrowth,
      checkpoints: this.checkpoints,
      recommendations: this.getRecommendations(memoryGrowth)
    }
  }

  /**
   * Calculate memory growth between checkpoints
   */
  calculateMemoryGrowth () {
    if (this.checkpoints.length < 2) return []

    const growth = []
    for (let i = 1; i < this.checkpoints.length; i++) {
      const prev = this.checkpoints[i - 1]
      const curr = this.checkpoints[i]

      const heapDiff = curr.raw.heapUsed - prev.raw.heapUsed

      growth.push({
        from: prev.label,
@@ -114,30 +114,30 @@ class MemoryProfiler {
        heapGrowth: this.formatBytes(Math.abs(heapDiff)),
        isIncrease: heapDiff > 0,
        timeDiff: `${((curr.timestamp - prev.timestamp) / 1000).toFixed(2)}s`
      })
    }

    return growth
  }

  /**
   * Get recommendations based on memory usage
   */
  getRecommendations (memoryGrowth) {
    const recommendations = []

    // Check for large memory growth
    const largeGrowths = memoryGrowth.filter(g => {
      const bytes = this.parseBytes(g.heapGrowth)
      return bytes > 50 * 1024 * 1024 // 50MB
    })

    if (largeGrowths.length > 0) {
      recommendations.push({
        type: 'warning',
        message: `Large memory growth detected in ${largeGrowths.length} operations`,
        details: largeGrowths.map(g => `${g.from} → ${g.to}: ${g.heapGrowth}`)
      })
    }

    // Check peak memory
@@ -146,79 +146,79 @@ class MemoryProfiler {
        type: 'warning',
        message: `High peak memory usage: ${this.formatBytes(this.peakMemory)}`,
        suggestion: 'Consider processing files in smaller batches'
      })
    }

    // Check for potential memory leaks
    const continuousGrowth = this.checkContinuousGrowth()
    if (continuousGrowth) {
      recommendations.push({
        type: 'error',
        message: 'Potential memory leak detected',
        details: 'Memory usage continuously increases without significant decreases'
      })
    }

    return recommendations
  }

  /**
   * Check for continuous memory growth (potential leak)
   */
  checkContinuousGrowth () {
    if (this.checkpoints.length < 5) return false

    let increasingCount = 0
    for (let i = 1; i < this.checkpoints.length; i++) {
      if (this.checkpoints[i].raw.heapUsed > this.checkpoints[i - 1].raw.heapUsed) {
        increasingCount++
      }
    }

    // If memory increases in more than 80% of checkpoints, might be a leak
    return increasingCount / (this.checkpoints.length - 1) > 0.8
  }
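  // Worked example of the heuristic above (illustrative numbers): with five
  // checkpoints there are four deltas. If heap usage rose on all four
  // (4/4 = 1.0 > 0.8), a leak is flagged; if it rose on three of four
  // (3/4 = 0.75), it is not.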
  /**
   * Format bytes to human-readable string
   */
  formatBytes (bytes) {
    if (bytes === 0) return '0 B'

    const k = 1024
    const sizes = ['B', 'KB', 'MB', 'GB']
    const i = Math.floor(Math.log(bytes) / Math.log(k))

    return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + ' ' + sizes[i]
  }

  /**
   * Parse human-readable bytes back to number
   */
  parseBytes (str) {
    const match = str.match(/^([\d.]+)\s*([KMGT]?B?)$/i)
    if (!match) return 0

    const value = parseFloat(match[1])
    const unit = match[2].toUpperCase()

    const multipliers = {
      B: 1,
      KB: 1024,
      MB: 1024 * 1024,
      GB: 1024 * 1024 * 1024
    }

    return value * (multipliers[unit] || 1)
  }

  /**
   * Clear checkpoints to free memory
   */
  clear () {
    this.checkpoints = []
  }
}

// Export singleton instance
module.exports = new MemoryProfiler()
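A short usage sketch for the profiler singleton; the checkpoint labels are illustrative, and forceGC() only collects when node runs with --expose-gc:

```js
// Illustrative only - checkpoint labels are assumptions
const profiler = require('./memory-profiler')

profiler.checkpoint('start')
// ... perform some installation work ...
profiler.checkpoint('after-copy')
profiler.forceGC() // no-op unless node was started with --expose-gc
profiler.checkpoint('after-gc')

const report = profiler.getDetailedReport()
console.log(report.summary.peakMemory, report.recommendations)
```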
@@ -4,27 +4,27 @@
 */

class ModuleManager {
  constructor () {
    this._cache = new Map()
    this._loadingPromises = new Map()
  }

  /**
   * Initialize all commonly used ES modules at once
   * @returns {Promise<Object>} Object containing all loaded modules
   */
  async initializeCommonModules () {
    const modules = await Promise.all([
      this.getModule('chalk'),
      this.getModule('ora'),
      this.getModule('inquirer')
    ])

    return {
      chalk: modules[0],
      ora: modules[1],
      inquirer: modules[2]
    }
  }

  /**
@@ -32,29 +32,29 @@ class ModuleManager {
   * @param {string} moduleName - Name of the module to load
   * @returns {Promise<any>} The loaded module
   */
  async getModule (moduleName) {
    // Return from cache if available
    if (this._cache.has(moduleName)) {
      return this._cache.get(moduleName)
    }

    // If already loading, return the existing promise
    if (this._loadingPromises.has(moduleName)) {
      return this._loadingPromises.get(moduleName)
    }

    // Start loading the module
    const loadPromise = this._loadModule(moduleName)
    this._loadingPromises.set(moduleName, loadPromise)

    try {
      const module = await loadPromise
      this._cache.set(moduleName, module)
      this._loadingPromises.delete(moduleName)
      return module
    } catch (error) {
      this._loadingPromises.delete(moduleName)
      throw error
    }
  }

@@ -62,29 +62,29 @@ class ModuleManager {
   * Internal method to load a specific module
   * @private
   */
  async _loadModule (moduleName) {
    switch (moduleName) {
      case 'chalk':
        return (await import('chalk')).default
      case 'ora':
        return (await import('ora')).default
      case 'inquirer':
        return (await import('inquirer')).default
      case 'glob':
        return (await import('glob')).glob
      case 'globSync':
        return (await import('glob')).globSync
      default:
        throw new Error(`Unknown module: ${moduleName}`)
    }
  }

  /**
   * Clear the module cache to free memory
   */
  clearCache () {
    this._cache.clear()
    this._loadingPromises.clear()
  }

  /**
@@ -92,19 +92,19 @@ class ModuleManager {
   * @param {string[]} moduleNames - Array of module names
   * @returns {Promise<Object>} Object with module names as keys
   */
  async getModules (moduleNames) {
    const modules = await Promise.all(
      moduleNames.map(name => this.getModule(name))
    )

    return moduleNames.reduce((acc, name, index) => {
      acc[name] = modules[index]
      return acc
    }, {})
  }
}

// Singleton instance
const moduleManager = new ModuleManager()

module.exports = moduleManager
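The in-flight promise map is what keeps concurrent callers from triggering duplicate dynamic imports. A minimal sketch of the pattern in use:

```js
// Illustrative only
const moduleManager = require('./module-manager')

async function main () {
  // Both calls share one in-flight load; import('chalk') runs once
  const [a, b] = await Promise.all([
    moduleManager.getModule('chalk'),
    moduleManager.getModule('chalk')
  ])
  console.log(a === b) // true - the cached instance is shared

  const { chalk, inquirer } = await moduleManager.getModules(['chalk', 'inquirer'])
  console.log(chalk.green('modules ready'), typeof inquirer.prompt)
}

main().catch(console.error)
```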
@@ -3,36 +3,36 @@
 * Reduces duplicate file system operations and memory usage
 */

const path = require('node:path')
const fs = require('fs-extra')
const moduleManager = require('./module-manager')

class ResourceLocator {
  constructor () {
    this._pathCache = new Map()
    this._globCache = new Map()
    this._bmadCorePath = null
    this._expansionPacksPath = null
  }

  /**
   * Get the base path for bmad-core
   */
  getBmadCorePath () {
    if (!this._bmadCorePath) {
      this._bmadCorePath = path.join(__dirname, '../../../bmad-core')
    }
    return this._bmadCorePath
  }

  /**
   * Get the base path for expansion packs
   */
  getExpansionPacksPath () {
    if (!this._expansionPacksPath) {
      this._expansionPacksPath = path.join(__dirname, '../../../expansion-packs')
    }
    return this._expansionPacksPath
  }

  /**
@@ -41,21 +41,21 @@ class ResourceLocator {
   * @param {Object} options - Glob options
   * @returns {Promise<string[]>} Array of matched file paths
   */
  async findFiles (pattern, options = {}) {
    const cacheKey = `${pattern}:${JSON.stringify(options)}`

    if (this._globCache.has(cacheKey)) {
      return this._globCache.get(cacheKey)
    }

    const { glob } = await moduleManager.getModules(['glob'])
    const files = await glob(pattern, options)

    // Cache for 5 minutes
    this._globCache.set(cacheKey, files)
    setTimeout(() => this._globCache.delete(cacheKey), 5 * 60 * 1000)

    return files
  }
|
|
||||||
/**
|
/**
|
||||||
|
|
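`findFiles` pairs a `Map` with a `setTimeout` eviction to get a simple time-to-live cache. The same pattern in isolation, as an illustrative sketch (not code from this commit):

```js
// Illustrative TTL cache: the set-then-expire pattern used by findFiles above.
class TtlCache {
  constructor (ttlMs) {
    this.ttlMs = ttlMs
    this.store = new Map()
  }

  has (key) { return this.store.has(key) }
  get (key) { return this.store.get(key) }

  set (key, value) {
    this.store.set(key, value)
    // Evict after the TTL; note an un-unref'd timer keeps the Node process
    // alive until it fires, which the real code accepts for its lifetimes.
    setTimeout(() => this.store.delete(key), this.ttlMs)
  }
}

// Usage: a second lookup within 5 minutes hits the cache.
const cache = new TtlCache(5 * 60 * 1000)
cache.set('agents/*.md', ['agents/dev.md'])
console.log(cache.get('agents/*.md')) // ['agents/dev.md']
```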
@@ -63,68 +63,68 @@ class ResourceLocator {
    * @param {string} agentId - Agent identifier
    * @returns {Promise<string|null>} Path to agent file or null if not found
    */
-  async getAgentPath(agentId) {
-    const cacheKey = `agent:${agentId}`;
+  async getAgentPath (agentId) {
+    const cacheKey = `agent:${agentId}`

     if (this._pathCache.has(cacheKey)) {
-      return this._pathCache.get(cacheKey);
+      return this._pathCache.get(cacheKey)
     }

     // Check in bmad-core
-    let agentPath = path.join(this.getBmadCorePath(), 'agents', `${agentId}.md`);
+    let agentPath = path.join(this.getBmadCorePath(), 'agents', `${agentId}.md`)
     if (await fs.pathExists(agentPath)) {
-      this._pathCache.set(cacheKey, agentPath);
-      return agentPath;
+      this._pathCache.set(cacheKey, agentPath)
+      return agentPath
     }

     // Check in expansion packs
-    const expansionPacks = await this.getExpansionPacks();
+    const expansionPacks = await this.getExpansionPacks()
     for (const pack of expansionPacks) {
-      agentPath = path.join(pack.path, 'agents', `${agentId}.md`);
+      agentPath = path.join(pack.path, 'agents', `${agentId}.md`)
       if (await fs.pathExists(agentPath)) {
-        this._pathCache.set(cacheKey, agentPath);
-        return agentPath;
+        this._pathCache.set(cacheKey, agentPath)
+        return agentPath
       }
     }

-    return null;
+    return null
   }

   /**
    * Get available agents with metadata
    * @returns {Promise<Array>} Array of agent objects
    */
-  async getAvailableAgents() {
-    const cacheKey = 'all-agents';
+  async getAvailableAgents () {
+    const cacheKey = 'all-agents'

     if (this._pathCache.has(cacheKey)) {
-      return this._pathCache.get(cacheKey);
+      return this._pathCache.get(cacheKey)
     }

-    const agents = [];
-    const yaml = require('js-yaml');
-    const { extractYamlFromAgent } = require('../../lib/yaml-utils');
+    const agents = []
+    const yaml = require('js-yaml')
+    const { extractYamlFromAgent } = require('../../lib/yaml-utils')

     // Get agents from bmad-core
     const coreAgents = await this.findFiles('agents/*.md', {
       cwd: this.getBmadCorePath()
-    });
+    })

     for (const agentFile of coreAgents) {
       const content = await fs.readFile(
         path.join(this.getBmadCorePath(), agentFile),
         'utf8'
-      );
-      const yamlContent = extractYamlFromAgent(content);
+      )
+      const yamlContent = extractYamlFromAgent(content)
       if (yamlContent) {
         try {
-          const metadata = yaml.load(yamlContent);
+          const metadata = yaml.load(yamlContent)
           agents.push({
             id: path.basename(agentFile, '.md'),
             name: metadata.agent_name || path.basename(agentFile, '.md'),
             description: metadata.description || 'No description available',
             source: 'core'
-          });
+          })
         } catch (e) {
           // Skip invalid agents
         }
@@ -132,36 +132,36 @@ class ResourceLocator {
     }

     // Cache for 10 minutes
-    this._pathCache.set(cacheKey, agents);
-    setTimeout(() => this._pathCache.delete(cacheKey), 10 * 60 * 1000);
+    this._pathCache.set(cacheKey, agents)
+    setTimeout(() => this._pathCache.delete(cacheKey), 10 * 60 * 1000)

-    return agents;
+    return agents
   }

   /**
    * Get available expansion packs
    * @returns {Promise<Array>} Array of expansion pack objects
    */
-  async getExpansionPacks() {
-    const cacheKey = 'expansion-packs';
+  async getExpansionPacks () {
+    const cacheKey = 'expansion-packs'

     if (this._pathCache.has(cacheKey)) {
-      return this._pathCache.get(cacheKey);
+      return this._pathCache.get(cacheKey)
     }

-    const packs = [];
-    const expansionPacksPath = this.getExpansionPacksPath();
+    const packs = []
+    const expansionPacksPath = this.getExpansionPacksPath()

     if (await fs.pathExists(expansionPacksPath)) {
-      const entries = await fs.readdir(expansionPacksPath, { withFileTypes: true });
+      const entries = await fs.readdir(expansionPacksPath, { withFileTypes: true })

       for (const entry of entries) {
         if (entry.isDirectory()) {
-          const configPath = path.join(expansionPacksPath, entry.name, 'config.yaml');
+          const configPath = path.join(expansionPacksPath, entry.name, 'config.yaml')
           if (await fs.pathExists(configPath)) {
             try {
-              const yaml = require('js-yaml');
-              const config = yaml.load(await fs.readFile(configPath, 'utf8'));
+              const yaml = require('js-yaml')
+              const config = yaml.load(await fs.readFile(configPath, 'utf8'))
               packs.push({
                 id: entry.name,
                 name: config.name || entry.name,
@@ -170,7 +170,7 @@ class ResourceLocator {
                 shortTitle: config['short-title'] || config.description || 'No description available',
                 author: config.author || 'Unknown',
                 path: path.join(expansionPacksPath, entry.name)
-              });
+              })
             } catch (e) {
               // Skip invalid packs
             }
@@ -180,10 +180,10 @@ class ResourceLocator {
     }

     // Cache for 10 minutes
-    this._pathCache.set(cacheKey, packs);
-    setTimeout(() => this._pathCache.delete(cacheKey), 10 * 60 * 1000);
+    this._pathCache.set(cacheKey, packs)
+    setTimeout(() => this._pathCache.delete(cacheKey), 10 * 60 * 1000)

-    return packs;
+    return packs
   }

   /**
@@ -191,28 +191,28 @@ class ResourceLocator {
    * @param {string} teamId - Team identifier
    * @returns {Promise<Object|null>} Team configuration or null
    */
-  async getTeamConfig(teamId) {
-    const cacheKey = `team:${teamId}`;
+  async getTeamConfig (teamId) {
+    const cacheKey = `team:${teamId}`

     if (this._pathCache.has(cacheKey)) {
-      return this._pathCache.get(cacheKey);
+      return this._pathCache.get(cacheKey)
     }

-    const teamPath = path.join(this.getBmadCorePath(), 'agent-teams', `${teamId}.yaml`);
+    const teamPath = path.join(this.getBmadCorePath(), 'agent-teams', `${teamId}.yaml`)

     if (await fs.pathExists(teamPath)) {
       try {
-        const yaml = require('js-yaml');
-        const content = await fs.readFile(teamPath, 'utf8');
-        const config = yaml.load(content);
-        this._pathCache.set(cacheKey, config);
-        return config;
+        const yaml = require('js-yaml')
+        const content = await fs.readFile(teamPath, 'utf8')
+        const config = yaml.load(content)
+        this._pathCache.set(cacheKey, config)
+        return config
       } catch (e) {
-        return null;
+        return null
       }
     }

-    return null;
+    return null
   }

   /**
@@ -220,58 +220,58 @@ class ResourceLocator {
    * @param {string} agentId - Agent identifier
    * @returns {Promise<Object>} Dependencies object
    */
-  async getAgentDependencies(agentId) {
-    const cacheKey = `deps:${agentId}`;
+  async getAgentDependencies (agentId) {
+    const cacheKey = `deps:${agentId}`

     if (this._pathCache.has(cacheKey)) {
-      return this._pathCache.get(cacheKey);
+      return this._pathCache.get(cacheKey)
     }

-    const agentPath = await this.getAgentPath(agentId);
+    const agentPath = await this.getAgentPath(agentId)
     if (!agentPath) {
-      return { all: [], byType: {} };
+      return { all: [], byType: {} }
     }

-    const content = await fs.readFile(agentPath, 'utf8');
-    const { extractYamlFromAgent } = require('../../lib/yaml-utils');
-    const yamlContent = extractYamlFromAgent(content);
+    const content = await fs.readFile(agentPath, 'utf8')
+    const { extractYamlFromAgent } = require('../../lib/yaml-utils')
+    const yamlContent = extractYamlFromAgent(content)

     if (!yamlContent) {
-      return { all: [], byType: {} };
+      return { all: [], byType: {} }
     }

     try {
-      const yaml = require('js-yaml');
-      const metadata = yaml.load(yamlContent);
-      const dependencies = metadata.dependencies || {};
+      const yaml = require('js-yaml')
+      const metadata = yaml.load(yamlContent)
+      const dependencies = metadata.dependencies || {}

       // Flatten dependencies
-      const allDeps = [];
-      const byType = {};
+      const allDeps = []
+      const byType = {}

       for (const [type, deps] of Object.entries(dependencies)) {
         if (Array.isArray(deps)) {
-          byType[type] = deps;
+          byType[type] = deps
           for (const dep of deps) {
-            allDeps.push(`.bmad-core/${type}/${dep}`);
+            allDeps.push(`.bmad-core/${type}/${dep}`)
           }
         }
       }

-      const result = { all: allDeps, byType };
-      this._pathCache.set(cacheKey, result);
-      return result;
+      const result = { all: allDeps, byType }
+      this._pathCache.set(cacheKey, result)
+      return result
     } catch (e) {
-      return { all: [], byType: {} };
+      return { all: [], byType: {} }
     }
   }

   /**
    * Clear all caches to free memory
    */
-  clearCache() {
-    this._pathCache.clear();
-    this._globCache.clear();
+  clearCache () {
+    this._pathCache.clear()
+    this._globCache.clear()
   }

   /**
@@ -279,32 +279,32 @@ class ResourceLocator {
    * @param {string} ideId - IDE identifier
    * @returns {Promise<Object|null>} IDE configuration or null
    */
-  async getIdeConfig(ideId) {
-    const cacheKey = `ide:${ideId}`;
+  async getIdeConfig (ideId) {
+    const cacheKey = `ide:${ideId}`

     if (this._pathCache.has(cacheKey)) {
-      return this._pathCache.get(cacheKey);
+      return this._pathCache.get(cacheKey)
     }

-    const idePath = path.join(this.getBmadCorePath(), 'ide-rules', `${ideId}.yaml`);
+    const idePath = path.join(this.getBmadCorePath(), 'ide-rules', `${ideId}.yaml`)

     if (await fs.pathExists(idePath)) {
       try {
-        const yaml = require('js-yaml');
-        const content = await fs.readFile(idePath, 'utf8');
-        const config = yaml.load(content);
-        this._pathCache.set(cacheKey, config);
-        return config;
+        const yaml = require('js-yaml')
+        const content = await fs.readFile(idePath, 'utf8')
+        const config = yaml.load(content)
+        this._pathCache.set(cacheKey, config)
+        return config
       } catch (e) {
-        return null;
+        return null
       }
     }

-    return null;
+    return null
   }
 }

 // Singleton instance
-const resourceLocator = new ResourceLocator();
+const resourceLocator = new ResourceLocator()

-module.exports = resourceLocator;
+module.exports = resourceLocator
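Together these methods give the installer a cached view of agents, teams, expansion packs, and IDE rules. A hedged consumption sketch follows; the require path is an assumption inferred from the relative paths in the file, and `'dev'` is a placeholder agent id.

```js
// Illustrative consumer of the resourceLocator singleton; path and id assumed.
const resourceLocator = require('./tools/installer/lib/resource-locator')

async function main () {
  const agents = await resourceLocator.getAvailableAgents()
  console.log(`Found ${agents.length} core agents`)

  const devPath = await resourceLocator.getAgentPath('dev') // placeholder id
  console.log(devPath ?? 'agent not found')

  // Cached entries expire on their own, but callers can force a refresh.
  resourceLocator.clearCache()
}

main().catch(console.error)
```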
@@ -1,27 +1,27 @@
-const fs = require('fs').promises;
-const path = require('path');
-const yaml = require('js-yaml');
-const { extractYamlFromAgent } = require('./yaml-utils');
+const fs = require('fs').promises
+const path = require('path')
+const yaml = require('js-yaml')
+const { extractYamlFromAgent } = require('./yaml-utils')

 class DependencyResolver {
-  constructor(rootDir) {
-    this.rootDir = rootDir;
-    this.bmadCore = path.join(rootDir, 'bmad-core');
-    this.common = path.join(rootDir, 'common');
-    this.cache = new Map();
+  constructor (rootDir) {
+    this.rootDir = rootDir
+    this.bmadCore = path.join(rootDir, 'bmad-core')
+    this.common = path.join(rootDir, 'common')
+    this.cache = new Map()
   }

-  async resolveAgentDependencies(agentId) {
-    const agentPath = path.join(this.bmadCore, 'agents', `${agentId}.md`);
-    const agentContent = await fs.readFile(agentPath, 'utf8');
+  async resolveAgentDependencies (agentId) {
+    const agentPath = path.join(this.bmadCore, 'agents', `${agentId}.md`)
+    const agentContent = await fs.readFile(agentPath, 'utf8')

     // Extract YAML from markdown content with command cleaning
-    const yamlContent = extractYamlFromAgent(agentContent, true);
+    const yamlContent = extractYamlFromAgent(agentContent, true)
     if (!yamlContent) {
-      throw new Error(`No YAML configuration found in agent ${agentId}`);
+      throw new Error(`No YAML configuration found in agent ${agentId}`)
     }

-    const agentConfig = yaml.load(yamlContent);
+    const agentConfig = yaml.load(yamlContent)

     const dependencies = {
       agent: {
@@ -31,27 +31,27 @@ class DependencyResolver {
         config: agentConfig
       },
       resources: []
-    };
+    }

     // Personas are now embedded in agent configs, no need to resolve separately

     // Resolve other dependencies
-    const depTypes = ['tasks', 'templates', 'checklists', 'data', 'utils'];
+    const depTypes = ['tasks', 'templates', 'checklists', 'data', 'utils']
     for (const depType of depTypes) {
-      const deps = agentConfig.dependencies?.[depType] || [];
+      const deps = agentConfig.dependencies?.[depType] || []
       for (const depId of deps) {
-        const resource = await this.loadResource(depType, depId);
-        if (resource) dependencies.resources.push(resource);
+        const resource = await this.loadResource(depType, depId)
+        if (resource) dependencies.resources.push(resource)
      }
     }

-    return dependencies;
+    return dependencies
   }

-  async resolveTeamDependencies(teamId) {
-    const teamPath = path.join(this.bmadCore, 'agent-teams', `${teamId}.yaml`);
-    const teamContent = await fs.readFile(teamPath, 'utf8');
-    const teamConfig = yaml.load(teamContent);
+  async resolveTeamDependencies (teamId) {
+    const teamPath = path.join(this.bmadCore, 'agent-teams', `${teamId}.yaml`)
+    const teamContent = await fs.readFile(teamPath, 'utf8')
+    const teamConfig = yaml.load(teamContent)

     const dependencies = {
       team: {
@@ -62,80 +62,80 @@ class DependencyResolver {
       },
       agents: [],
       resources: new Map() // Use Map to deduplicate resources
-    };
+    }

     // Always add bmad-orchestrator agent first if it's a team
-    const bmadAgent = await this.resolveAgentDependencies('bmad-orchestrator');
-    dependencies.agents.push(bmadAgent.agent);
+    const bmadAgent = await this.resolveAgentDependencies('bmad-orchestrator')
+    dependencies.agents.push(bmadAgent.agent)
     bmadAgent.resources.forEach(res => {
-      dependencies.resources.set(res.path, res);
-    });
+      dependencies.resources.set(res.path, res)
+    })

     // Resolve all agents in the team
-    let agentsToResolve = teamConfig.agents || [];
+    let agentsToResolve = teamConfig.agents || []

     // Handle wildcard "*" - include all agents except bmad-master
     if (agentsToResolve.includes('*')) {
-      const allAgents = await this.listAgents();
+      const allAgents = await this.listAgents()
       // Remove wildcard and add all agents except those already in the list and bmad-master
-      agentsToResolve = agentsToResolve.filter(a => a !== '*');
+      agentsToResolve = agentsToResolve.filter(a => a !== '*')
       for (const agent of allAgents) {
         if (!agentsToResolve.includes(agent) && agent !== 'bmad-master') {
-          agentsToResolve.push(agent);
+          agentsToResolve.push(agent)
         }
       }
     }

     for (const agentId of agentsToResolve) {
-      if (agentId === 'bmad-orchestrator' || agentId === 'bmad-master') continue; // Already added or excluded
-      const agentDeps = await this.resolveAgentDependencies(agentId);
-      dependencies.agents.push(agentDeps.agent);
+      if (agentId === 'bmad-orchestrator' || agentId === 'bmad-master') continue // Already added or excluded
+      const agentDeps = await this.resolveAgentDependencies(agentId)
+      dependencies.agents.push(agentDeps.agent)

       // Add resources with deduplication
       agentDeps.resources.forEach(res => {
-        dependencies.resources.set(res.path, res);
-      });
+        dependencies.resources.set(res.path, res)
+      })
     }

     // Resolve workflows
     for (const workflowId of teamConfig.workflows || []) {
-      const resource = await this.loadResource('workflows', workflowId);
-      if (resource) dependencies.resources.set(resource.path, resource);
+      const resource = await this.loadResource('workflows', workflowId)
+      if (resource) dependencies.resources.set(resource.path, resource)
     }

     // Convert Map back to array
-    dependencies.resources = Array.from(dependencies.resources.values());
+    dependencies.resources = Array.from(dependencies.resources.values())

-    return dependencies;
+    return dependencies
   }

-  async loadResource(type, id) {
-    const cacheKey = `${type}#${id}`;
+  async loadResource (type, id) {
+    const cacheKey = `${type}#${id}`
     if (this.cache.has(cacheKey)) {
-      return this.cache.get(cacheKey);
+      return this.cache.get(cacheKey)
     }

     try {
-      let content = null;
-      let filePath = null;
+      let content = null
+      let filePath = null

       // First try bmad-core
       try {
-        filePath = path.join(this.bmadCore, type, id);
-        content = await fs.readFile(filePath, 'utf8');
+        filePath = path.join(this.bmadCore, type, id)
+        content = await fs.readFile(filePath, 'utf8')
       } catch (e) {
         // If not found in bmad-core, try common folder
         try {
-          filePath = path.join(this.common, type, id);
-          content = await fs.readFile(filePath, 'utf8');
+          filePath = path.join(this.common, type, id)
+          content = await fs.readFile(filePath, 'utf8')
         } catch (e2) {
           // File not found in either location
         }
       }

       if (!content) {
-        console.warn(`Resource not found: ${type}/${id}`);
-        return null;
+        console.warn(`Resource not found: ${type}/${id}`)
+        return null
       }

       const resource = {
@@ -143,37 +143,37 @@ class DependencyResolver {
         id,
         path: filePath,
         content
-      };
+      }

-      this.cache.set(cacheKey, resource);
-      return resource;
+      this.cache.set(cacheKey, resource)
+      return resource
     } catch (error) {
-      console.error(`Error loading resource ${type}/${id}:`, error.message);
-      return null;
+      console.error(`Error loading resource ${type}/${id}:`, error.message)
+      return null
     }
   }

-  async listAgents() {
+  async listAgents () {
     try {
-      const files = await fs.readdir(path.join(this.bmadCore, 'agents'));
+      const files = await fs.readdir(path.join(this.bmadCore, 'agents'))
       return files
         .filter(f => f.endsWith('.md'))
-        .map(f => f.replace('.md', ''));
+        .map(f => f.replace('.md', ''))
     } catch (error) {
-      return [];
+      return []
     }
   }

-  async listTeams() {
+  async listTeams () {
     try {
-      const files = await fs.readdir(path.join(this.bmadCore, 'agent-teams'));
+      const files = await fs.readdir(path.join(this.bmadCore, 'agent-teams'))
       return files
         .filter(f => f.endsWith('.yaml'))
-        .map(f => f.replace('.yaml', ''));
+        .map(f => f.replace('.yaml', ''))
     } catch (error) {
-      return [];
+      return []
     }
   }
 }

-module.exports = DependencyResolver;
+module.exports = DependencyResolver
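Unlike the resource locator's singleton, `DependencyResolver` is exported as a class and constructed per repository root. An illustrative sketch follows; the require path is an assumption, and both ids are placeholders.

```js
// Illustrative use of DependencyResolver; require path and ids are assumptions.
const DependencyResolver = require('./tools/lib/dependency-resolver')

async function main () {
  const resolver = new DependencyResolver(process.cwd())

  // Resolve one agent's tasks/templates/checklists/data/utils resources
  const agentDeps = await resolver.resolveAgentDependencies('dev')
  console.log(`${agentDeps.resources.length} resources resolved for the agent`)

  // Resolve a whole team; bmad-orchestrator is always bundled first
  const teamDeps = await resolver.resolveTeamDependencies('team-fullstack')
  console.log(`${teamDeps.agents.length} agents, ${teamDeps.resources.length} deduplicated resources`)
}

main().catch(console.error)
```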
@@ -8,22 +8,22 @@
  * @param {boolean} cleanCommands - Whether to clean command descriptions (default: false)
  * @returns {string|null} - The extracted YAML content or null if not found
  */
-function extractYamlFromAgent(agentContent, cleanCommands = false) {
+function extractYamlFromAgent (agentContent, cleanCommands = false) {
   // Remove carriage returns and match YAML block
-  const yamlMatch = agentContent.replace(/\r/g, "").match(/```ya?ml\n([\s\S]*?)\n```/);
-  if (!yamlMatch) return null;
+  const yamlMatch = agentContent.replace(/\r/g, '').match(/```ya?ml\n([\s\S]*?)\n```/)
+  if (!yamlMatch) return null

-  let yamlContent = yamlMatch[1].trim();
+  let yamlContent = yamlMatch[1].trim()

   // Clean up command descriptions if requested
   // Converts "- command - description" to just "- command"
   if (cleanCommands) {
-    yamlContent = yamlContent.replace(/^(\s*-)(\s*"[^"]+")(\s*-\s*.*)$/gm, '$1$2');
+    yamlContent = yamlContent.replace(/^(\s*-)(\s*"[^"]+")(\s*-\s*.*)$/gm, '$1$2')
   }

-  return yamlContent;
+  return yamlContent
 }

 module.exports = {
   extractYamlFromAgent
-};
+}
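The `cleanCommands` pass only strips the trailing ` - description` text from quoted list items; everything else in the fenced YAML block passes through. A small illustration of both behaviors, with a made-up agent file as input (the require path is assumed):

```js
const { extractYamlFromAgent } = require('./tools/lib/yaml-utils') // path assumed

// Build the fence programmatically so this sample nests cleanly in markdown.
const fence = '```'
const agentMarkdown = [
  '# Sample Agent',
  fence + 'yaml',
  'agent_name: Sample',
  'commands:',
  '  - "*help" - Show available commands',
  fence
].join('\n')

console.log(extractYamlFromAgent(agentMarkdown))
// agent_name: Sample
// commands:
//   - "*help" - Show available commands

console.log(extractYamlFromAgent(agentMarkdown, true))
// agent_name: Sample
// commands:
//   - "*help"
```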
@@ -4,9 +4,9 @@ You are now operating as a specialized AI agent from the BMad-Method framework.

 ## Important Instructions

-### **Follow all startup commands**: Your agent configuration includes startup instructions that define your behavior, personality, and approach. These MUST be followed exactly.
+### **Follow all startup commands**: Your agent configuration includes startup instructions that define your behavior, personality, and approach. These MUST be followed exactly

-### **Resource Navigation**: This bundle contains all resources you need. Resources are marked with tags like:
+### **Resource Navigation**: This bundle contains all resources you need. Resources are marked with tags like

 - `==================== START: .bmad-core/folder/filename.md ====================`
 - `==================== END: .bmad-core/folder/filename.md ====================`
@@ -32,8 +32,8 @@ These references map directly to bundle sections:
 - `dependencies.utils: template-format` → Look for `==================== START: .bmad-core/utils/template-format.md ====================`
 - `dependencies.utils: create-story` → Look for `==================== START: .bmad-core/tasks/create-story.md ====================`

-### **Execution Context**: You are operating in a web environment. All your capabilities and knowledge are contained within this bundle. Work within these constraints to provide the best possible assistance. You have no file system to write to, so you will maintain document history being drafted in your memory unless a canvas feature is available and the user confirms its usage.
+### **Execution Context**: You are operating in a web environment. All your capabilities and knowledge are contained within this bundle. Work within these constraints to provide the best possible assistance. You have no file system to write to, so you will maintain document history being drafted in your memory unless a canvas feature is available and the user confirms its usage

-## **Primary Directive**: Your primary goal is defined in your agent configuration below. Focus on fulfilling your designated role explicitly as defined.
+## **Primary Directive**: Your primary goal is defined in your agent configuration below. Focus on fulfilling your designated role explicitly as defined

 ---
@@ -2,29 +2,29 @@
  * Semantic-release plugin to sync installer package.json version
  */

-const fs = require('fs');
-const path = require('path');
+const fs = require('fs')
+const path = require('path')

 // This function runs during the "prepare" step of semantic-release
-function prepare(_, { nextRelease, logger }) {
+function prepare (_, { nextRelease, logger }) {
   // Define the path to the installer package.json file
-  const file = path.join(process.cwd(), 'tools/installer/package.json');
+  const file = path.join(process.cwd(), 'tools/installer/package.json')

   // If the file does not exist, skip syncing and log a message
-  if (!fs.existsSync(file)) return logger.log('Installer package.json not found, skipping');
+  if (!fs.existsSync(file)) return logger.log('Installer package.json not found, skipping')

   // Read and parse the package.json file
-  const pkg = JSON.parse(fs.readFileSync(file, 'utf8'));
+  const pkg = JSON.parse(fs.readFileSync(file, 'utf8'))

   // Update the version field with the next release version
-  pkg.version = nextRelease.version;
+  pkg.version = nextRelease.version

   // Write the updated JSON back to the file
-  fs.writeFileSync(file, JSON.stringify(pkg, null, 2) + '\n');
+  fs.writeFileSync(file, JSON.stringify(pkg, null, 2) + '\n')

   // Log success message
-  logger.log(`Synced installer package.json to version ${nextRelease.version}`);
+  logger.log(`Synced installer package.json to version ${nextRelease.version}`)
 }

 // Export the prepare function so semantic-release can use it
-module.exports = { prepare };
+module.exports = { prepare }
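Local semantic-release plugins are registered by path in the release configuration. A hedged sketch of what that could look like in a `release.config.js`; the local plugin filename is an assumption, since this diff does not show where the file lives or how it is wired up:

```js
// release.config.js — sketch only; the local plugin path is an assumption.
module.exports = {
  branches: ['main'],
  plugins: [
    '@semantic-release/commit-analyzer',
    '@semantic-release/release-notes-generator',
    // Local plugin: its prepare() hook syncs tools/installer/package.json
    './tools/version-sync.js',
    '@semantic-release/npm',
    '@semantic-release/git'
  ]
}
```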
@@ -5,30 +5,30 @@
  * Used by semantic-release to keep versions in sync
  */

-const fs = require('fs');
-const path = require('path');
+const fs = require('fs')
+const path = require('path')

-function syncInstallerVersion() {
+function syncInstallerVersion () {
   // Read main package.json
-  const mainPackagePath = path.join(__dirname, '..', 'package.json');
-  const mainPackage = JSON.parse(fs.readFileSync(mainPackagePath, 'utf8'));
+  const mainPackagePath = path.join(__dirname, '..', 'package.json')
+  const mainPackage = JSON.parse(fs.readFileSync(mainPackagePath, 'utf8'))

   // Read installer package.json
-  const installerPackagePath = path.join(__dirname, 'installer', 'package.json');
-  const installerPackage = JSON.parse(fs.readFileSync(installerPackagePath, 'utf8'));
+  const installerPackagePath = path.join(__dirname, 'installer', 'package.json')
+  const installerPackage = JSON.parse(fs.readFileSync(installerPackagePath, 'utf8'))

   // Update installer version to match main version
-  installerPackage.version = mainPackage.version;
+  installerPackage.version = mainPackage.version

   // Write back installer package.json
-  fs.writeFileSync(installerPackagePath, JSON.stringify(installerPackage, null, 2) + '\n');
+  fs.writeFileSync(installerPackagePath, JSON.stringify(installerPackage, null, 2) + '\n')

-  console.log(`Synced installer version to ${mainPackage.version}`);
+  console.log(`Synced installer version to ${mainPackage.version}`)
 }

 // Run if called directly
 if (require.main === module) {
-  syncInstallerVersion();
+  syncInstallerVersion()
 }

-module.exports = { syncInstallerVersion };
+module.exports = { syncInstallerVersion }
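The `require.main === module` guard makes the script both runnable and importable. A short usage sketch; the location under `tools/` is inferred from the `path.join(__dirname, 'installer', 'package.json')` lookup above, not stated in the diff:

```js
// Programmatic use; the require path assumes the script sits in tools/.
const { syncInstallerVersion } = require('./tools/sync-installer-version')
syncInstallerVersion()
// Or run it directly: node tools/sync-installer-version.js
```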
Some files were not shown because too many files have changed in this diff.