Compare commits

31 Commits
7dc3223738 ... f82cdef6cf

| Author | SHA1 | Date |
|---|---|---|
| | f82cdef6cf | |
| | 2da016f797 | |
| | 6947851393 | |
| | 9d7b09d065 | |
| | 86f2786dde | |
| | a638f062b9 | |
| | 738237b4ae | |
| | 6430173738 | |
| | baaa984a90 | |
| | 38e65abd83 | |
| | ff9a085dd0 | |
| | d5c687d99d | |
| | b68e5c0225 | |
| | c6e53dbbc7 | |
| | 93db60b8f6 | |
| | f344e5cdc2 | |
| | 8a91c6fffe | |
| | 36ce3c42d2 | |
| | 82b4f1dcb4 | |
| | 6d1da5fc72 | |
| | ffe6f6c26b | |
| | 3fa0865542 | |
| | ebc5acd2aa | |
| | b7239c1ec3 | |
| | 0edda967a5 | |
| | 5077941621 | |
| | 74240cf842 | |
| | 83c0a59887 | |
| | 02d07ed254 | |
| | 9edc699a8f | |
| | 28c5b581e9 | |
@@ -60,7 +60,7 @@ representative at an online or offline event.

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
-the official BMAD Discord server (https://discord.com/invite/gk8jAdXWmj) - DM a moderator or flag a post.
+the official BMAD Discord server (<https://discord.com/invite/gk8jAdXWmj>) - DM a moderator or flag a post.
All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the

@@ -116,7 +116,7 @@ the community.

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
-https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
+<https://www.contributor-covenant.org/version/2/0/code_of_conduct.html>.

Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).

@@ -124,5 +124,5 @@ enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see the FAQ at
-https://www.contributor-covenant.org/faq. Translations are available at
-https://www.contributor-covenant.org/translations.
+<https://www.contributor-covenant.org/faq>. Translations are available at
+<https://www.contributor-covenant.org/translations>.
544 CHANGELOG.md
@@ -1,5 +1,75 @@

# Changelog

## [6.0.0-alpha.15]

**Release: December 7, 2025**

### 🔧 Module Installation Standardization

**Unified Module Configuration:**

- **module.yaml Standard**: All modules now use `module.yaml` instead of `_module-installer/install-config.yaml` for consistent configuration (BREAKING CHANGE)
- **Universal Installer**: Both core and custom modules now use the same installer with consistent behavior
- **Streamlined Module Creation**: Module builder templates updated to use new module.yaml standard
- **Enhanced Module Discovery**: Improved module caching and discovery mechanisms

**Custom Content Installation Revolution:**

- **Interactive Custom Content Search**: Installer now proactively asks if you have custom content to install
- **Flexible Location Specification**: Users can indicate custom content location during installation
- **Improved Custom Module Handler**: Enhanced error handling and debug output for custom installations
- **Comprehensive Documentation**: New custom-content-installation.md guide (245 lines) replacing custom-agent-installation.md

### 🤖 Code Review Integration Expansion

**AI Review Tools:**

- **CodeRabbit AI Integration**: Added .coderabbit.yaml configuration for automated code review
- **Raven's Verdict PR Review Tool**: New PR review automation tool (297 lines of documentation)
- **Review Path Configuration**: Proper exclusion patterns for node_modules and generated files
- **Review Documentation**: Comprehensive usage guidance and skip conditions for PRs

### 📚 Documentation Improvements

**Documentation Restructuring:**

- **Code of Conduct**: Moved to .github/ folder following GitHub standards
- **Gem Creation Link**: Updated to point to Gemini Gem manager instead of deprecated interface
- **Example Custom Content**: Improved README files and disabled example modules to prevent accidental installation
- **Custom Module Documentation**: Enhanced module installation guides with new YAML structure

### 🧹 Cleanup & Optimization

**Memory Management:**

- **Removed Hardcoded .bmad Folders**: Cleaned up demo content to use configurable paths
- **Sidecar File Cleanup**: Removed old .bmad-user-memory folders from wellness modules
- **Example Content Organization**: Better organization of example-custom-content directory

**Installer Improvements:**

- **Debug Output Enhancement**: Added informative debug output when installer encounters errors
- **Custom Module Caching**: Improved caching mechanism for custom module installations
- **Consistent Behavior**: All modules now behave consistently regardless of custom or core status

### 📊 Statistics

- **77 files changed** with 2,852 additions and 607 deletions
- **15 commits** since alpha.14

### ⚠️ Breaking Changes

1. **module.yaml Configuration**: All modules must now use `module.yaml` instead of `_module-installer/install-config.yaml`
   - Core modules updated automatically
   - Custom modules will need to rename their configuration file (see the sketch below)
   - Module builder templates generate new format
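As a rough sketch of the rename, this is what a custom module's configuration might look like after migration. The field names mirror the module.yaml examples that appear later in this changeset; treat it as illustrative, not the full documented schema.

```yaml
# Before: _module-installer/install-config.yaml   (deprecated in alpha.15)
# After:  module.yaml at the module root
# Illustrative only - field names mirror the examples later in this diff.
code: my-module
name: 'My Custom Module'
default_selected: false
```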
### 📦 New Dependencies

- No new dependencies added in this release

---

## [6.0.0-alpha.14]

**Release: December 7, 2025**
@@ -101,159 +171,29 @@

### 🏗️ Revolutionary Workflow Architecture

**Granular Step-File Workflow System (NEW in alpha.13):**

- **Multi-Menu Support**: Workflows now support granular step-file architecture with dynamic menu generation
- **Sharded Workflows**: Complete conversion of Phase 1 and 2 workflows to stepwise sharded architecture
- **Improved Performance**: Reduced file loading times and eliminated time-based estimates throughout
- **Workflow Builder**: New dedicated workflow builder for creating stepwise workflows
- **PRD Workflow**: First completely reworked sharded workflow resolving Sonnet compatibility issues

**Core Workflow Transformations:**

- Phase 1 and 2 workflows completely converted to sharded step-flow architecture
- UX Design workflow converted to sharded step workflow
- Brainstorming, Research, and Party Mode updated to use sharded step-flow workflows
- Architecture workflows enhanced with step sharding and performance improvements

### 🎯 Code Review & Development Enhancement

**Advanced Code Review System:**

- **Adversarial Code Review**: Quick-dev workflow now recommends adversarial review approach for higher quality
- **Multi-LLM Strategy**: Dev-story workflow recommends different LLM models for code review tasks
- **Agent Compiler Optimization**: Complete handler cleanup and performance improvements
- **Step-File System**: Complete conversion to granular step-file architecture with dynamic menu generation
- **Phase 4 Transformation**: Simplified architecture with sprint planning integration (Jira, Linear, Trello)
- **Performance Improvements**: Eliminated time-based estimates, reduced file loading times
- **Legacy Cleanup**: Removed all deprecated workflows for cleaner system

### 🤖 Agent System Revolution

**Universal Custom Agent Support:**

- **Universal Custom Agent Support**: Extended to ALL IDEs including Antigravity and Rovo Dev
- **Agent Creation Workflow**: Enhanced with better documentation and parameter clarity
- **Multi-Source Discovery**: Agents now check multiple source locations for better discovery
- **GitHub Migration**: Integration moved from chatmodes to agents folder

- **Complete IDE Coverage**: Custom agent support extended to ALL remaining IDEs
- **Antigravity IDE Integration**: Added custom agent support with proper gitignore configuration
- **Multiple Source Locations**: Compile agents now checks multiple source locations for better discovery
- **Persona Name Display**: Fixed proper persona names display in custom agent manifests
- **New IDE Support**: Added support for Rovo Dev IDE

### 🧪 Testing Infrastructure

**Agent Creation & Management:**

- **Improved Creation Workflow**: Enhanced agent creation workflow with better documentation
- **Parameter Clarity**: Renamed agent-install parameters for better understanding
- **Menu Organization**: BMad Agents menu items logically ordered with optional/recommended/required tags
- **GitHub Migration**: GitHub integration now uses agents folder instead of chatmodes

### 🔧 Phase 4 & Sprint Evolution

**Complete Phase 4 Transformation:**

- **Simplified Architecture**: Phase 4 workflows completely transformed - simpler, faster, better results
- **Sprint Planning Integration**: Unified sprint planning with placeholders for Jira, Linear, and Trello integration
- **Status Management**: Better status loading and updating for Phase 4 artifacts
- **Workflow Reduction**: Phase 4 streamlined to single sprint planning item with clear validation
- **Dynamic Workflows**: All Level 1-3 workflows now dynamically suggest next steps based on context

### 🧪 Testing Infrastructure Expansion

**Playwright Utils Integration:**

- Test Architect now supports `@seontechnologies/playwright-utils` integration
- Installation prompt with `use_playwright_utils` configuration flag
- 11 comprehensive knowledge fragments covering ALL utilities
- Adaptive workflow recommendations across 6 testing workflows
- Production-ready utilities from SEON Technologies integrated with TEA patterns

**Testing Environment:**

- **Web Bundle Support**: Enabled web bundles for test and development environments
- **Test Architecture**: Enhanced test design for architecture level (Phase 3) testing

### 📦 Installation & Configuration

**Installer Improvements:**

- **Cleanup Options**: Installer now allows cleanup of unneeded files during upgrades
- **Username Default**: Installer now defaults to system username for better UX
- **IDE Selection**: Added empty IDE selection warning and promoted Antigravity to recommended
- **NPM Vulnerabilities**: Resolved all npm vulnerabilities for enhanced security
- **Documentation Installation**: Made documentation installation optional to reduce footprint

**Optional Text-to-Speech Integration from AgentVibes:**

- **TTS_INJECTION System**: Complete text-to-speech integration via injection system
- **Agent Vibes**: Enhanced with TTS capabilities for voice feedback

### 🛠️ Tool & IDE Updates

**IDE Tool Enhancements:**

- **GitHub Copilot**: Fixed tool names consistency across workflows
- **KiloCode Integration**: Gave the kilocode tool proper access to bmad modes
- **Code Quality**: Added radix parameter to parseInt() calls for better reliability
- **Agent Menu Optimization**: Improved agent performance in Claude Code slash commands

### 📚 Documentation & Standards

**Documentation Cleanup:**

- **Installation Guide**: Removed fluff and updated with npx support
- **Workflow Documentation**: Fixed documentation by removing non-existent workflows and Mermaid diagrams
- **Phase Numbering**: Fixed phase numbering consistency throughout documentation
- **Package References**: Corrected incorrect npm package references

**Workflow Compliance:**

- **Validation Checks**: Enhanced workflow validation checks for compliance
- **Product Brief**: Updated to comply with documented workflow standards
- **Status Integration**: Workflow-status can now call workflow-init for better integration

### 🔍 Legacy Workflow Cleanup

**Deprecated Workflows Removed:**

- **Audit Workflow**: Completely removed audit workflow and all associated files
- **Convert Legacy**: Removed legacy conversion utilities
- **Create/Edit Workflows**: Removed old workflow creation and editing workflows
- **Clean Architecture**: Simplified workflow structure by removing deprecated legacy workflows

### 🐛 Technical Fixes

**System Improvements:**

- **File Path Handling**: Fixed various file path issues across workflows
- **Manifest Updates**: Updated manifest to use agents folder structure
- **Web Bundle Configuration**: Fixed web bundle configurations for better compatibility
- **CSV Column Mismatch**: Fixed manifest schema upgrade issues
- **Playwright Utils Integration**: @seontechnologies/playwright-utils across all testing workflows
- **TTS Injection System**: Complete text-to-speech integration for voice feedback
- **Web Bundle Test Support**: Enabled web bundles for test environments

### ⚠️ Breaking Changes

**Workflow Architecture:**

- All legacy workflows have been removed - ensure you're using the new stepwise sharded workflows
- Phase 4 completely restructured - update any automation expecting old Phase 4 structure
- Epic creation now requires architectural context (moved to Phase 3 in previous release)

**Agent System:**

- Custom agents now require proper compilation - use the new agent creation workflow
- GitHub integration moved from chatmodes to agents folder - update any references

### 📊 Impact Summary

**New in alpha.13:**

- **Stepwise Workflow Architecture**: Complete transformation of all workflows to granular step-file system
- **Universal Custom Agent Support**: Extended to ALL IDEs with improved creation workflow
- **Phase 4 Revolution**: Completely restructured with sprint planning integration
- **Legacy Cleanup**: Removed all deprecated workflows for cleaner system
- **Advanced Code Review**: New adversarial review approach with multi-LLM strategy
- **Text-to-Speech**: Full TTS integration for voice feedback
- **Testing Expansion**: Playwright utils integration across all testing workflows

**Enhanced from alpha.12:**

- **Performance**: Improved file loading and removed time-based estimates
- **Documentation**: Complete cleanup with accurate references
- **Installer**: Better UX with cleanup options and improved defaults
- **Agent System**: More reliable compilation and better persona handling

1. **Legacy Workflows Removed**: Migrate to new stepwise sharded workflows
2. **Phase 4 Restructured**: Update automation expecting old Phase 4 structure
3. **Agent Compilation Required**: Custom agents must use new creation workflow

## [6.0.0-alpha.12]
@@ -267,313 +207,101 @@

**Release: November 18, 2025**

This alpha release introduces a complete agent installation system with the new `bmad agent-install` command, vastly improves the BMB agent builder capabilities with comprehensive documentation and reference agents, and refines diagram distribution to better align with BMad Method's core principle: **BMad agents mirror real agile teams**.

### 🚀 Agent Installation Revolution

### 🎨 Diagram Capabilities Refined and Distributed

**Excalidraw Integration Evolution:**

Building on the excellent Excalidraw integration introduced with the Frame Expert agent, we've refined how diagram capabilities are distributed across the BMad Method ecosystem to better reflect real agile team dynamics.

**The Refinement:**

- The valuable Excalidraw diagramming capabilities have been distributed to the agents who naturally create these artifacts in real teams
  - **Architect**: System architecture diagrams, data flow visualizations
  - **Product Manager**: Process flowcharts and workflow diagrams
  - **UX Designer**: Wireframe creation capabilities
  - **Tech Writer**: All diagram types for documentation needs
  - **New CIS Agent**: presentation-master for specialized visual communication

**Shared Infrastructure Enhancement:**

- Excalidraw templates, component libraries, and validation patterns elevated to core resources
- Available to both BMM agents AND CIS presentation specialists
- Preserves all the excellent Excalidraw functionality while aligning with natural team roles

### 🚀 New Agent Installation System

**Agent Installation Infrastructure (NEW in alpha.11):**

- `bmad agent-install` CLI command with interactive persona customization
- **YAML → XML compilation engine** with smart handler injection
- Supports Simple (single file), Expert (with sidecars), and Module agents
- Handlebars-style template variable processing
- Automatic manifest tracking and IDE integration
- Source preservation in `_cfg/custom/agents/` for reinstallation

**New Reference Agents Added:**

- **commit-poet**: Poetic git commit message generator (Simple agent example)
- **journal-keeper**: Daily journaling agent with templates (Expert agent example)
- **security-engineer & trend-analyst**: Module agent examples with ecosystem integration

**Critical Persona Field Guidance Added:**

New documentation explaining how LLMs interpret persona fields for better agent quality:

- **role** → "What knowledge, skills, and capabilities do I possess?"
- **identity** → "What background, experience, and context shape my responses?"
- **communication_style** → "What verbal patterns, word choice, and phrasing do I use?"
- **principles** → "What beliefs and operating philosophy drive my choices?"

Key insight: `communication_style` should ONLY describe HOW the agent talks, not WHAT it does.
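A hypothetical excerpt illustrating that separation; only the four persona field names above come from the documentation, and the values (plus the exact `.agent.yaml` layout) are invented for illustration:

```yaml
# Hypothetical persona excerpt - field names from the guidance above, values invented.
persona:
  role: Senior git historian who knows conventional commits and changelog conventions
  identity: Has written release notes for a decade of open-source projects
  communication_style: Wry, economical sentences; always closes with a short rhyming couplet
  principles: Clarity over cleverness; never bury the breaking change
  # Anti-pattern: "writes clear commit messages and reviews diffs" describes WHAT the
  # agent does and belongs in role, not in communication_style.
```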
**BMM Agent Voice Enhancement:**

All 9 existing BMM agents enhanced with distinct, memorable communication voices:

- **Mary (analyst)**: "Treats analysis like a treasure hunt - excited by every clue"
- **John (PM)**: "Asks 'WHY?' relentlessly like a detective on a case"
- **Winston (architect)**: "Champions boring technology that actually works"
- **Amelia (dev)**: "Ultra-succinct. Speaks in file paths and AC IDs"
- **Sally (UX)**: "Paints pictures with words, telling user stories that make you FEEL"

### 🔧 Edit-Agent Workflow Comprehensive Enhancement

**Expert Agent Sidecar Support (NEW):**

- Automatically detects and handles Expert agents with multiple files
- Loads and manages templates, data files, knowledge bases
- Smart sidecar analysis: maps references, finds orphans, validates paths
- 5 complete sidecar editing patterns with warm, educational feedback

**7-Step Communication Style Refinement Pattern:**

1. Diagnose current style with red flag word detection
2. Extract non-style content to working copy
3. Discover TRUE communication style through interview questions
4. Craft pure style using presets and reference agents
5. Show before/after transformation with full context
6. Validate against standards (zero red flags)
7. Confirm with user through dramatic reading

**Unified Validation Checklist:**

- Single source of truth: `agent-validation-checklist.md` (160 lines)
- Shared between create-agent and edit-agent workflows
- Comprehensive persona field separation validation
- Expert agent sidecar validation (9 specific checks)
- Common issues and fixes with real examples

- **bmad agent-install CLI**: Interactive agent installation with persona customization
- **4 Reference Agents**: commit-poet, journal-keeper, security-engineer, trend-analyst
- **Agent Compilation Engine**: YAML → XML with smart handler injection
- **60 Communication Presets**: Pure communication styles for agent personas

### 📚 BMB Agent Builder Enhancement

**Vastly Improved Agent Creation & Editing Capabilities:**

- **Complete Documentation Suite**: 7 new guides for agent architecture and creation
- **Expert Agent Sidecar Support**: Multi-file agents with templates and knowledge bases
- **Unified Validation**: 160-line checklist shared across workflows
- **BMM Agent Voices**: All 9 agents enhanced with distinct communication styles

- Create-agent and edit-agent workflows now have accurate, comprehensive documentation
- All context references updated and validated for consistency
- Workflows can now properly guide users through complex agent design decisions

### 🎯 Workflow Architecture Change

**New Agent Documentation Suite:**

- `understanding-agent-types.md` - Architecture vs capability distinction
- `simple-agent-architecture.md` - Self-contained agents guide
- `expert-agent-architecture.md` - Agents with sidecar files
- `module-agent-architecture.md` - Workflow-integrated agents
- `agent-compilation.md` - YAML → XML transformation process
- `agent-menu-patterns.md` - Menu design patterns
- `communication-presets.csv` - 60 pure communication styles for reference

**New Reference Agents for Learning:**

- Complete working examples of Simple, Expert, and Module agents
- Can be installed directly via the new `bmad agent-install` command
- Serve as both learning resources and ready-to-use agents

### 🎯 Epic Creation Moved to Phase 3 (After Architecture)

**Workflow Sequence Corrected:**

```
Phase 2: PRD → UX Design
Phase 3: Architecture → Epics & Stories ← NOW HERE (technically informed)
```

**Why This Fundamental Change:**

- Epics need architectural context: API contracts, data models, technical decisions
- Stories can reference actual architectural patterns and constraints
- Reduces rewrites when architecture reveals complexity
- Better complexity-based estimation (not time-based)

### 🖥️ New IDE Support

**Google Antigravity IDE Installer:**

- Flattened file naming for proper slash commands (bmad-module-agents-name.md)
- Namespace isolation prevents module conflicts
- Subagent installation support (project or user level)
- Module-specific injection configuration

**Codex CLI Enhancement:**

- Now supports both global and project-specific installation
- CODEX_HOME configuration for multi-project workflows
- OS-specific setup instructions (Unix/Mac/Windows)

### 🏗️ Reference Agents & Standards

**New Reference Agents Provide Clear Examples:**

- **commit-poet.agent.yaml**: Simple agent with pure communication style
- **journal-keeper.agent.yaml**: Expert agent with sidecar file structure
- **security-engineer.agent.yaml**: Module agent for ecosystem integration
- **trend-analyst.agent.yaml**: Module agent with cross-workflow capabilities

**Agent Type Clarification:**

- Clear documentation that agent types (Simple/Expert/Module) describe architecture, not capability
- Module = designed for ecosystem integration, not limited in function

### 🐛 Technical Improvements

**Linting Compliance:**

- Fixed all ESLint warnings across agent tooling
- `'utf-8'` → `'utf8'` (unicorn/text-encoding-identifier-case)
- `hasOwnProperty` → `Object.hasOwn` (unicorn/prefer-object-has-own)
- `JSON.parse(JSON.stringify(...))` → `structuredClone(...)`

**Agent Compilation Engine:**

- Auto-injects frontmatter, activation, handlers, help/exit menu items
- Smart handler inclusion (only includes handlers actually used)
- Proper XML escaping and formatting
- Persona name customization support

### 📊 Impact Summary

**New in alpha.11:**

- **Agent installation system** with `bmad agent-install` CLI command
- **4 new reference agents** (commit-poet, journal-keeper, security-engineer, trend-analyst)
- **Complete agent documentation suite** with 7 new focused guides
- **Expert agent sidecar support** in edit-agent workflow
- **2 new IDE installers** (Google Antigravity, enhanced Codex)
- **Unified validation checklist** (160 lines) for consistent quality standards
- **60 pure communication style presets** for agent persona design

**Enhanced from alpha.10:**

- **BMB agent builder workflows** with accurate context and comprehensive guidance
- **All 9 BMM agents** enhanced with distinct, memorable communication voices
- **Excalidraw capabilities** refined and distributed to role-appropriate agents
- **Epic creation** moved to Phase 3 (after Architecture) for technical context

- **Epic Creation Moved**: Now in Phase 3 after Architecture for technical context
- **Excalidraw Distribution**: Diagram capabilities moved to role-appropriate agents
- **Google Antigravity IDE**: New installer with flattened file naming

### ⚠️ Breaking Changes

**Agent Changes:**

- Frame Expert agent retired - diagram capabilities now available through role-appropriate agents:
  - Architecture diagrams → `/architect`
  - Process flows → `/pm`
  - Wireframes → `/ux-designer`
  - Documentation visuals → `/tech-writer`

**Workflow Changes:**

- Epic creation moved from Phase 2 to Phase 3 (after Architecture)
- Excalidraw workflows redistributed to appropriate agents

**Installation Changes:**

- New `bmad agent-install` command replaces manual agent installation
- Agent YAML files must be compiled to XML for use

### 🔄 Migration Notes

**For Existing Projects:**

1. **Frame Expert Users:**
   - Transition to role-appropriate agents for diagrams
   - All Excalidraw functionality preserved and enhanced
   - Shared templates now in core resources for wider access

2. **Agent Installation:**
   - Use `bmad agent-install` for all agent installations
   - Existing manual installations still work but won't have customization

3. **Epic Creation Timing:**
   - Epics now created in Phase 3 after Architecture
   - Update any automation expecting epics in Phase 2

4. **Communication Styles:**
   - Review agent communication_style fields
   - Remove any role/identity/principle content
   - Use communication-presets.csv for pure styles

5. **Expert Agents:**
   - Edit-agent workflow now fully supports sidecar files
   - Organize templates and data files in agent folder

1. **Frame Expert Retired**: Use role-appropriate agents for diagrams
2. **Agent Installation**: New bmad agent-install command replaces manual installation
3. **Epic Creation Phase**: Moved from Phase 2 to Phase 3

## [6.0.0-alpha.10]

**Release: November 16, 2025**

- **🎯 Epics Generated AFTER Architecture**: Major milestone - epics/stories now created after architecture for technically-informed user stories with better acceptance criteria
- **🎨 Frame Expert Agent**: New Excalidraw specialist with 4 diagram workflows (flowchart, diagram, dataflow, wireframe) for visual documentation
- **⏰ Time Estimate Prohibition**: Critical warnings added across 33 workflows - acknowledges AI has fundamentally changed development speed
- **🎯 Platform-Specific Commands**: New `ide-only`/`web-only` fields filter menu items based on environment (IDE vs web bundle)
- **🔧 Agent Customization**: Enhanced memory/prompts merging via `*.customize.yaml` files for persistent agent personalization

- **Epics After Architecture**: Major milestone - technically-informed user stories created post-architecture
- **Frame Expert Agent**: New Excalidraw specialist with 4 diagram workflows
- **Time Estimate Prohibition**: Warnings across 33 workflows acknowledging AI's impact on development speed
- **Platform-Specific Commands**: ide-only/web-only fields filter menu items by environment (see the sketch after this list)
- **Agent Customization**: Enhanced memory/prompts merging via \*.customize.yaml files
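A minimal sketch of how such environment filtering might look; only the `ide-only`/`web-only` field names come from the note above, and the surrounding menu keys are hypothetical placeholders rather than the documented agent schema:

```yaml
# Hypothetical menu excerpt - only ide-only / web-only are named in the release notes above.
menu:
  - trigger: shard-doc
    ide-only: true    # hidden when the agent runs as a web bundle
  - trigger: party-mode
    web-only: true    # hidden in IDE installs
```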
## [6.0.0-alpha.9]

**Release: November 12, 2025**

- **🚀 Intelligent File Discovery Protocol**: New `discover_inputs` with FULL_LOAD, SELECTIVE_LOAD, and INDEX_GUIDED strategies for automatic context loading
- **📚 3-Track System**: Simplified from 5 levels to 3 intuitive tracks: quick-flow, bmad-method, and enterprise-bmad-method
- **🌐 Web Bundles Guide**: Comprehensive documentation for Gemini Gems and Custom GPTs with 60-80% cost savings strategies
- **🏗️ Unified Output Structure**: Eliminated `.ephemeral/` folders - all artifacts now in single configurable output folder
- **🎮 BMGD Phase 4**: Added 10 game development workflows following BMM patterns with game-specific adaptations

- **Intelligent File Discovery**: discover_inputs with FULL_LOAD, SELECTIVE_LOAD, INDEX_GUIDED strategies
- **3-Track System**: Simplified from 5 levels to 3 intuitive tracks
- **Web Bundles Guide**: Comprehensive documentation with 60-80% cost savings strategies
- **Unified Output Structure**: Eliminated .ephemeral/ folders - single configurable output folder
- **BMGD Phase 4**: Added 10 game development workflows with BMM patterns

## [6.0.0-alpha.8]

**Release: November 9, 2025**

- **🎯 Configurable Installation**: Custom directories with `.bmad` hidden folder default for cleaner project structure
- **🚀 Optimized Agent Loading**: CLI loads from installed files eliminating duplication and maintenance burden
- **🌐 Party Mode Everywhere**: All web bundles include multi-agent collaboration with customizable party configurations
- **🔧 Phase 4 Artifact Separation**: Stories, code reviews, sprint plans now configurable outside docs folder
- **📦 Expanded Web Bundles**: All BMM, BMGD, and CIS agents bundled with advanced elicitation integration

- **Configurable Installation**: Custom directories with .bmad hidden folder default
- **Optimized Agent Loading**: CLI loads from installed files, eliminating duplication
- **Party Mode Everywhere**: All web bundles include multi-agent collaboration
- **Phase 4 Artifact Separation**: Stories, code reviews, sprint plans configurable outside docs
- **Expanded Web Bundles**: All BMM, BMGD, CIS agents bundled with elicitation integration

## [6.0.0-alpha.7]

**Release: November 7, 2025**

- **🌐 Workflow Vendoring**: Web bundler performs automatic workflow vendoring for cross-module dependencies
- **🎮 BMGD Module Extraction**: Game development split into standalone module with 4-phase industry-standard structure
- **🔧 Enhanced Dependency Resolution**: Better handling of `web_bundle: false` workflows with positive resolution messages
- **📚 Advanced Elicitation Fix**: Added missing CSV files to workflow bundles fixing runtime failures
- **🐛 Claude Code Fix**: Resolved README slash command installation regression

- **Workflow Vendoring**: Web bundler performs automatic cross-module dependency vendoring
- **BMGD Module Extraction**: Game development split into standalone 4-phase structure
- **Enhanced Dependency Resolution**: Better handling of web_bundle: false workflows
- **Advanced Elicitation Fix**: Added missing CSV files to workflow bundles
- **Claude Code Fix**: Resolved README slash command installation regression

## [6.0.0-alpha.6]

**Release: November 4, 2025**

- **🐛 Critical Installer Fixes**: Fixed manifestPath error and option display issues blocking installation
- **📖 Conditional Docs Installation**: Optional documentation installation to reduce footprint in production
- **🎨 Improved Installer UX**: Better formatting with descriptive labels and clearer feedback
- **🧹 Issue Tracker Cleanup**: Closed 54 legacy v4 issues for focused v6 development
- **📝 Contributing Updates**: Removed references to non-existent branches in documentation

- **Critical Installer Fixes**: Fixed manifestPath error and option display issues
- **Conditional Docs Installation**: Optional documentation to reduce production footprint
- **Improved Installer UX**: Better formatting with descriptive labels and clearer feedback
- **Issue Tracker Cleanup**: Closed 54 legacy v4 issues for focused v6 development
- **Contributing Updates**: Removed references to non-existent branches

## [6.0.0-alpha.5]

**Release: November 4, 2025**

- **🎯 3-Track Scale System**: Revolutionary simplification from 5 confusing levels to 3 intuitive preference-driven tracks
- **✨ Elicitation Modernization**: Replaced legacy XML tags with explicit `invoke-task` pattern at strategic decision points
- **📚 PM/UX Evolution Section**: Added November 2025 industry research on AI Agent PMs and Full-Stack Product Leads
- **🏗️ Brownfield Reality Check**: Rewrote Phase 0 with 4 real-world scenarios for messy existing codebases
- **📖 Documentation Accuracy**: All agent capabilities now match YAML source of truth with zero hallucination risk

- **3-Track Scale System**: Simplified from 5 levels to 3 intuitive preference-driven tracks
- **Elicitation Modernization**: Replaced legacy XML tags with explicit invoke-task pattern
- **PM/UX Evolution**: Added November 2025 industry research on AI Agent PMs
- **Brownfield Reality Check**: Rewrote Phase 0 with 4 real-world scenarios
- **Documentation Accuracy**: All agent capabilities now match YAML source of truth

## [6.0.0-alpha.4]

**Release: November 2, 2025**

- **📚 Documentation Hub**: Created 18 comprehensive guides (7000+ lines) with professional technical writing standards
- **🤖 Paige Agent**: New technical documentation specialist available across all BMM phases
- **🚀 Quick Spec Flow**: Intelligent Level 0-1 planning with auto-stack detection and brownfield analysis
- **📦 Universal Shard-Doc**: Split large markdown documents into organized sections with dual-strategy loading
- **🔧 Intent-Driven Planning**: PRD and Product Brief transformed from template-filling to natural conversation

- **Documentation Hub**: Created 18 comprehensive guides (7000+ lines) with professional standards
- **Paige Agent**: New technical documentation specialist across all BMM phases
- **Quick Spec Flow**: Intelligent Level 0-1 planning with auto-stack detection
- **Universal Shard-Doc**: Split large markdown documents with dual-strategy loading
- **Intent-Driven Planning**: PRD and Product Brief transformed from template-filling to conversation

## [6.0.0-alpha.3]
@@ -8,7 +8,9 @@

## AI-Driven Agile Development That Scales From Bug Fixes to Enterprise

-**Build More, Architect Dreams** (BMAD) with **19 specialized AI agents** and **50+ guided workflows** that adapt to your project's complexity—from quick bug fixes to enterprise platforms.
+**Build More, Architect Dreams** (BMAD) with **21 specialized AI agents** across 4 official modules, and **50+ guided workflows** that adapt to your project's complexity—from quick bug fixes to enterprise platforms, and new step file workflows that allow for incredibly long workflows to stay on the rails longer than ever before!

+Additionally - when we say 'Build More, Architect Dreams' - we mean it! The BMad Builder has landed, and as of Alpha.15 it is fully supported in the installation flow via NPX - custom stand-alone agents, workflows and the modules of your dreams! The community forge will soon open, endless possibility awaits!

> **🚀 v6 is a MASSIVE upgrade from v4!** Complete architectural overhaul, scale-adaptive intelligence, visual workflows, and the powerful BMad Core framework. v4 users: this changes everything. [See what's new →](#whats-new-in-v6)

@@ -154,6 +156,7 @@ Each agent brings deep expertise and can be customized to match your team's styl

- **[GitHub Issues](https://github.com/bmad-code-org/BMAD-METHOD/issues)** - Report bugs, request features
- **[YouTube Channel](https://www.youtube.com/@BMadCode)** - Video tutorials and demos
- **[Web Bundles](https://bmad-code-org.github.io/bmad-bundles/)** - Pre-built agent bundles
+- **[Code of Conduct](.github/CODE_OF_CONDUCT.md)** - Community guidelines

## 🛠️ Development

Binary file not shown.
@@ -1,137 +0,0 @@

# Custom Agent Installation

BMAD agents and workflows are now installed through the main CLI installer using a `custom.yaml` configuration file or by having an installer file.

## Quick Start

Create a `custom.yaml` file in the root of your agent/workflow folder:

```yaml
code: my-custom-agent
name: 'My Custom Agent'
default_selected: true
```

Then run the BMAD installer from your project directory:

```bash
npx bmad-method install
```

Or if you have bmad-cli installed globally:

```bash
bmad install
```

## Installation Methods

### Method 1: Stand-alone Folder with custom.yaml

Place your agent or workflow in a folder with a `custom.yaml` file at the root:

```
my-agent/
├── custom.yaml            # Required configuration file
├── my-agent.agent.yaml
└── sidecar/               # Optional
    └── instructions.md
```

### Method 2: Installer File

For more complex installations, include an `installer.js` or `installer.yaml` file in your agent/workflow folder:

```
my-workflow/
├── workflow.md
└── installer.yaml         # Custom installation logic
```

## What It Does

1. **Discovers** available agents and workflows from folders with `custom.yaml`
2. **Installs** to your project's `.bmad/custom/` directory
3. **Creates** IDE commands for all your configured IDEs (Claude Code, Codex, Cursor, etc.)
4. **Registers** the agent/workflow in the BMAD system

## Example custom.yaml

```yaml
code: my-custom-agent
name: 'My Custom Agent'
default_selected: true
```

## Installing Reference Agents

The BMAD source includes example agents you can install. **You must copy them to your project first.**

### Step 1: Copy the Agent Template

**For simple agents** (single file):

```bash
# From your project root
mkdir -p .bmad/custom/agents/my-agent
cp node_modules/bmad-method/src/modules/bmb/reference/agents/stand-alone/commit-poet.agent.yaml \
  .bmad/custom/agents/my-agent/
```

**For expert agents** (folder with sidecar files):

```bash
# Copy the entire folder
cp -r node_modules/bmad-method/src/modules/bmb/reference/agents/agent-with-memory/journal-keeper \
  .bmad/custom/agents/
```

### Step 2: Create custom.yaml

```bash
# In the agent folder, create custom.yaml
cat > .bmad/custom/agents/my-agent/custom.yaml << EOF
code: my-agent
name: "My Custom Agent"
default_selected: true
EOF
```

### Step 3: Install

```bash
npx bmad-method install
# or: bmad install (if BMAD installed locally)
```

The installer will:

1. Find the agent with its `custom.yaml`
2. Install it to the appropriate location
3. Create IDE commands for immediate use

### Available Reference Agents

**Simple (standalone file):**

- `commit-poet.agent.yaml` - Commit message artisan with style preferences

**Expert (folder with sidecar):**

- `journal-keeper/` - Personal journal companion with memory and pattern recognition

Find these in the BMAD source:

```
src/modules/bmb/reference/agents/
├── stand-alone/
│   └── commit-poet.agent.yaml
└── agent-with-memory/
    └── journal-keeper/
        ├── journal-keeper.agent.yaml
        └── journal-keeper-sidecar/
```

## Creating Your Own

Use the BMB agent builder to craft your agents. Once ready to use, place your `.agent.yaml` files or folders with `custom.yaml` in `.bmad/custom/agents/` or `.bmad/custom/workflows/`.
@@ -0,0 +1,245 @@

# Custom Content Installation

This guide explains how to create and install custom BMAD content including agents, workflows, and modules. Custom content allows you to extend BMAD's functionality with your own specialized tools and workflows that can be shared across projects or teams.

## Types of Custom Content

### 1. Custom Agents and Workflows (Standalone)

Custom agents and workflows are standalone content packages that can be installed without being part of a full module. These are perfect for:

- Sharing specialized agents across projects
- Building a personal agent-powered notebook vault
- Distributing workflow templates
- Creating agent libraries for specific domains

#### Structure

A custom agents and workflows package follows this structure:

```
my-custom-agents/
├── module.yaml            # Package configuration
├── agents/                # Agent definitions
│   └── my-agent/
│       └── agent.md
└── workflows/             # Workflow definitions
    └── my-workflow/
        └── workflow.md
```

#### Configuration

Create a `module.yaml` file in your package root:

```yaml
code: my-custom-agents
name: 'My Custom Agents and Workflows'
default_selected: true
```

#### Example

See `/example-custom-content` for a working example of a folder with multiple custom agents and workflows. Technically it is also just a module, but you can pick and choose which of this folder's contents you do and do not want to include in a destination folder. This way, you can store all custom content source in one location and easily install it to different locations.

### 2. Custom Modules

Custom modules are complete BMAD modules that can include their own configuration and documentation, along with agents and workflows that all complement each other. They can also ship their own installation scripts, data, and other tools. Modules can be used for:

- Domain-specific functionality (e.g., industry-specific workflows, entertainment, education and training, medical, etc.)
- Integration with external systems
- Specialized agent collections
- Custom tooling and utilities

#### Structure

A custom module follows this structure:

```
my-module/
├── _module-installer/
│   ├── installer.js       # optional; runs during module installation when present
├── module.yaml            # Module installation configuration with custom question and answer capture
├── docs/                  # Module documentation
├── agents/                # Module-specific agents
├── workflows/             # Module-specific workflows
├── data/                  # csv or other content to power agent intelligence or workflows
├── tools/                 # Custom tools, hooks, mcp
└── sub-modules/           # IDE-specific customizations
    ├── vscode/
    └── cursor/
```

#### Module Configuration

The `module.yaml` file defines how your module is installed:

```yaml
# Module metadata
code: my-module
name: 'My Custom Module'
default_selected: false

header: 'My Custom Module'
subheader: 'Description of what this module does'

# Configuration prompts
my_setting:
  prompt: 'Configure your module setting'
  default: 'default-value'
  result: '{value}'
```

#### Example

See `/example-custom-module` for a complete example.

## Installation Process

### Step 1: Running the Installer

When you run the normal BMAD installer - either from the cloned repo or via NPX - it will ask about custom content:

```
? Do you have custom content to install?
❯ No (skip custom content)
  Enter a directory path
  Enter a URL [Coming soon]
```

### Step 2: Providing Custom Content Path

If you select "Enter a directory path", the installer will prompt for the location:

```
? Enter the path to your custom content directory: /path/to/folder/containing/content/folder
```

The installer will:

- Scan for `module.yaml` files (modules)
- Display an indication of how many installable folders it has found. Note that a project with stand-alone agents and workflows all under a single folder, like the example, will just list the count as 1 for that directory.

### Step 3: Selecting Content

The installer presents a unified selection interface:

```
? Select modules and custom content to install:
  [── Custom Content ──]
  ◉ My Custom Agents and Workflows (/path/to/custom)
  [── Official Content ──]
  ◯ BMM: Business Method & Management
  ◯ CIS: Creativity & Innovation Suite
```

## Agent Sidecar Support

Agents with sidecar content can store personal data, memories, and working files outside of the `.bmad` directory. This separation keeps personal content separate from BMAD's core files.

### What is Sidecar Content?

Sidecar content includes:

- Agent memories and learning data
- Personal working files
- Temporary data
- User-specific configurations

### Sidecar Configuration

The sidecar folder location is configured during BMAD core installation:

```
? Where should users' agent sidecar memory folders be stored?
❯ .bmad-user-memory
```

### How It Works

1. **Agent Declaration**: Agents declare `hasSidecar: true` in their metadata
2. **Sidecar Detection**: The installer automatically detects folders with "sidecar" in the name
3. **Installation**: Sidecar content is copied to the configured location
4. **Path Replacement**: The `{agent_sidecar_folder}` placeholder in agent configurations is replaced with the actual path to the installed sidecar folder. When you then use the agent, it will (depending on its design) use the sidecar content to record interactions, remember things you tell it, or support many other behaviors (see the sketch below).
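A minimal sketch of how this can look inside an agent definition. The `id` pattern is borrowed from the reference agents elsewhere in this changeset, and `hasSidecar`/`{agent_sidecar_folder}` come from the steps above; the exact key placement and the memory path key are assumptions for illustration only.

```yaml
# Illustrative only - hasSidecar and {agent_sidecar_folder} are the documented pieces;
# the surrounding keys are assumptions based on the reference agents in this changeset.
agent:
  metadata:
    id: "{bmad_folder}/agents/my-agent/my-agent.md"
    hasSidecar: true
  memory_path: "{agent_sidecar_folder}/memories"  # rewritten to the real path at install time
```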
### Example Structure

```
my-agent/
├── agent.md               # Agent definition
└── my-agent-sidecar/      # Sidecar content folder
    ├── memories/
    ├── working/
    └── config/
```

### Git Integration

Since sidecar content is stored outside the `.bmad` directory (and typically outside version control), users can:

- Add the sidecar folder to `.gitignore` to exclude personal data
- Share agent definitions without exposing personal content
- Maintain separate configurations for different projects

Example `.gitignore` entry:

```
# Exclude agent personal data
.bmad-user-memory/
```

## Creating Custom Content with BMAD Builder

The BMAD Builder provides workflows that guide you in producing your own custom content:

1. **Agent Templates**: Use standardized agent templates with proper structure
2. **Workflow Templates**: Create workflows using proven patterns
3. **Validation Tools**: Validate your content before distribution
4. **Package Generation**: Generate properly structured packages

### Best Practices

1. **Use Clear Naming**: Make your content codes and names descriptive
2. **Provide Documentation**: Include clear setup and usage instructions
3. **Test Installation**: Test your content in a clean environment
4. **Version Management**: Use semantic versioning for updates
5. **Respect User Privacy**: Keep personal data in sidecar folders

## Distribution

Custom content can be distributed:

1. **File System**: Copy folders directly to users
2. **Git Repositories**: Clone or download from version control
3. **Package Managers**: [Coming soon] npm package support
4. **URL Installation**: [Coming soon] Direct URL installation, including an official community-vetted module forge

## Troubleshooting

### No Custom Content Found

- Ensure your `module.yaml` files are properly named
- Check file permissions
- Verify the directory path is correct

### Installation Errors

- Run the installer with verbose logging
- Check for syntax errors in YAML configuration files
- Verify all required files are present

### Sidecar Issues

- Ensure the agent has `hasSidecar: true` in metadata
- Check that sidecar folders contain "sidecar" in the name
- Verify the agent_sidecar_folder configuration
- Ensure the custom agent's instructions actually use the sidecar content, including loading memories when the agent loads

## Support

For help with custom content creation or installation:

1. Check the examples in `/example-custom-content` and `/example-custom-module`
2. Review the BMAD documentation
3. Create an issue in the BMAD repository
4. Join the BMAD community discussions on Discord
@@ -96,9 +96,9 @@ Instructions for loading agents and running workflows in your development enviro

## 🔧 Advanced Topics

-### Custom Agents
+### Custom Agents, Workflows and Modules

-- **[Custom Agent Installation](./custom-agent-installation.md)** - Install and personalize agents with `bmad agent-install`
+- **[Custom Content Installation](./custom-content-installation.md)** - Install and personalize agents, workflows and modules with the default bmad-method installer!
- [Agent Customization Guide](./agent-customization-guide.md) - Customize agent behavior and responses

### Installation & Bundling

@@ -59,6 +59,7 @@ project-root/

### Key Exclusions

- `_module-installer/` directories are never copied to destination
+- module.yaml
- `localskip="true"` agents are filtered out
- Source `config.yaml` templates are replaced with generated configs

@@ -93,7 +94,7 @@ Creative Innovation Studio for design workflows

src/modules/{module}/
├── _module-installer/         # Not copied to destination
│   ├── installer.js           # Post-install logic
-│   └── install-config.yaml
+├── module.yaml
├── agents/
├── tasks/
├── templates/

@@ -107,7 +108,7 @@ src/modules/{module}/

### Collection Process

-Modules define prompts in `install-config.yaml`:
+Modules define prompts in `module.yaml`:

```yaml
project_name:
@@ -1,6 +1,6 @@

agent:
  metadata:
-    id: .bmad/agents/commit-poet/commit-poet.md
+    id: "{bmad_folder}/agents/commit-poet/commit-poet.md"
    name: "Inkwell Von Comitizen"
    title: "Commit Message Artisan"
    icon: "📜"

@@ -41,7 +41,7 @@ CLI uses Commander.js, commands auto-loaded from `tools/cli/commands/`:

### Core Architecture Patterns

1. **IDE Handlers**: Each IDE extends BaseIdeSetup class
-2. **Module Installers**: Modules can have `_module-installer/installer.js`
+2. **Module Installers**: Modules can have `module.yaml` and `_module-installer/installer.js`
3. **Sub-modules**: IDE-specific customizations in `sub-modules/{ide-name}/`
4. **Shared Utilities**: `tools/cli/installers/lib/ide/shared/` contains generators

@@ -16,7 +16,7 @@

- @/docs/v6-open-items.md - Known issues and open items
- @/docs/document-sharding-guide.md - Guide for sharding large documents
- @/docs/agent-customization-guide.md - How to customize agents
-- @/docs/custom-agent-installation.md - Custom agent installation guide
+- @/docs/custom-content-installation.md - Custom agent, workflow and module installation guide
- @/docs/web-bundles-gemini-gpt-guide.md - Web bundle usage for AI platforms
- @/docs/BUNDLE_DISTRIBUTION_SETUP.md - Bundle distribution setup

@@ -117,7 +117,7 @@ Contains:

- Add new IDE handler: Create file in /tools/cli/installers/lib/ide/, extend BaseIdeSetup
- Fix installer bug: Check installer.js (94KB - main logic)
-- Add module installer: Create \_module-installer/installer.js in module
+- Add module installer: Create \_module-installer/installer.js if custom installer logic needed
- Update shared generators: Modify files in /shared/ directory

## Relationships
@@ -27,7 +27,7 @@ src/modules/{module-name}/

│   ├── injections.yaml
│   ├── config.yaml
│   └── sub-agents/
-├── install-config.yaml        # Module install configuration
+├── module.yaml                # Module install configuration
└── README.md                  # Module documentation
```

@@ -145,7 +145,7 @@ Defined in @/tools/cli/lib/platform-codes.js

- Create new module installer: Add \_module-installer/installer.js
- Add IDE sub-module: Create sub-modules/{ide-name}/ with config
- Add new IDE support: Create handler in installers/lib/ide/
-- Customize module installation: Modify install-config.yaml
+- Customize module installation: Modify module.yaml

## Relationships

@@ -1,6 +1,6 @@

agent:
  metadata:
-    id: custom/agents/toolsmith/toolsmith.md
+    id: "{bmad_folder}/agents/toolsmith/toolsmith.md"
    name: Vexor
    title: Infernal Toolsmith + Guardian of the BMAD Forge
    icon: ⚒️

@@ -1,3 +1,4 @@

code: bmad-custom
name: "BMAD-Custom: Sample Stand Alone Custom Agents and Workflows"
default_selected: true
+type: custom
@ -3,7 +3,7 @@ name: 'step-01-init'
|
|||
description: 'Initialize quiz game with mode selection and category choice'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/.bmad/custom/src/workflows/quiz-master'
|
||||
workflow_path: '{project-root}/{bmad_folder}/custom/src/workflows/quiz-master'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-01-init.md'
|
||||
|
|
@ -66,7 +66,7 @@ To set up the quiz game by selecting game mode, choosing a category, and prepari
|
|||
|
||||
### 1. Welcome and Configuration Loading
|
||||
|
||||
Load config from {project-root}/.bmad/bmb/config.yaml to get user_name.
|
||||
Load config from {project-root}/{bmad_folder}/bmb/config.yaml to get user_name.
|
||||
|
||||
Present dramatic welcome:
|
||||
"🎺 _DRAMATIC MUSIC PLAYS_ 🎺
|
||||
|
|
|
|||
|
|
@ -3,7 +3,7 @@ name: 'step-02-q1'
|
|||
description: 'Question 1 - Level 1 difficulty'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/.bmad/custom/src/workflows/quiz-master'
|
||||
workflow_path: '{project-root}/{bmad_folder}/custom/src/workflows/quiz-master'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-02-q1.md'
|
||||
|
|
|
|||
|
|
@ -3,7 +3,7 @@ name: 'step-03-q2'
|
|||
description: 'Question 2 - Level 2 difficulty'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/.bmad/custom/src/workflows/quiz-master'
|
||||
workflow_path: '{project-root}/{bmad_folder}/custom/src/workflows/quiz-master'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-03-q2.md'
|
||||
|
|
|
|||
|
|
@ -3,7 +3,7 @@ name: 'step-04-q3'
|
|||
description: 'Question 3 - Level 3 difficulty'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/.bmad/custom/src/workflows/quiz-master'
|
||||
workflow_path: '{project-root}/{bmad_folder}/custom/src/workflows/quiz-master'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-04-q3.md'
|
||||
|
|
|
|||
|
|
@ -3,7 +3,7 @@ name: 'step-05-q4'
|
|||
description: 'Question 4 - Level 4 difficulty'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/.bmad/custom/src/workflows/quiz-master'
|
||||
workflow_path: '{project-root}/{bmad_folder}/custom/src/workflows/quiz-master'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-05-q4.md'
|
||||
|
|
|
|||
|
|
@ -3,7 +3,7 @@ name: 'step-06-q5'
|
|||
description: 'Question 5 - Level 5 difficulty'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/.bmad/custom/src/workflows/quiz-master'
|
||||
workflow_path: '{project-root}/{bmad_folder}/custom/src/workflows/quiz-master'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-06-q5.md'
|
||||
|
|
|
|||
|
|
@ -3,7 +3,7 @@ name: 'step-07-q6'
|
|||
description: 'Question 6 - Level 6 difficulty'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/.bmad/custom/src/workflows/quiz-master'
|
||||
workflow_path: '{project-root}/{bmad_folder}/custom/src/workflows/quiz-master'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-07-q6.md'
|
||||
|
|
|
|||
|
|
@ -3,7 +3,7 @@ name: 'step-08-q7'
|
|||
description: 'Question 7 - Level 7 difficulty'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/.bmad/custom/src/workflows/quiz-master'
|
||||
workflow_path: '{project-root}/{bmad_folder}/custom/src/workflows/quiz-master'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-08-q7.md'
|
||||
|
|
|
|||
|
|
@ -3,7 +3,7 @@ name: 'step-09-q8'
|
|||
description: 'Question 8 - Level 8 difficulty'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/.bmad/custom/src/workflows/quiz-master'
|
||||
workflow_path: '{project-root}/{bmad_folder}/custom/src/workflows/quiz-master'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-09-q8.md'
|
||||
|
|
|
|||
|
|
@ -3,7 +3,7 @@ name: 'step-10-q9'
|
|||
description: 'Question 9 - Level 9 difficulty'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/.bmad/custom/src/workflows/quiz-master'
|
||||
workflow_path: '{project-root}/{bmad_folder}/custom/src/workflows/quiz-master'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-10-q9.md'
|
||||
|
|
|
|||
|
|
@ -3,7 +3,7 @@ name: 'step-11-q10'
|
|||
description: 'Question 10 - Level 10 difficulty'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/.bmad/custom/src/workflows/quiz-master'
|
||||
workflow_path: '{project-root}/{bmad_folder}/custom/src/workflows/quiz-master'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-11-q10.md'
|
||||
|
|
|
|||
|
|
@ -3,7 +3,7 @@ name: 'step-12-results'
|
|||
description: 'Final results and celebration'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/.bmad/custom/src/workflows/quiz-master'
|
||||
workflow_path: '{project-root}/{bmad_folder}/custom/src/workflows/quiz-master'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-12-results.md'
|
||||
|
|
|
|||
|
|
@ -1,269 +0,0 @@
|
|||
---
|
||||
stepsCompleted: [1, 2, 3, 4, 5, 6, 7]
|
||||
---
|
||||
|
||||
## Build Summary
|
||||
|
||||
**Date:** 2025-12-04
|
||||
**Status:** Build Complete
|
||||
|
||||
### Files Generated
|
||||
|
||||
**Main Workflow:**
|
||||
|
||||
- `/Users/brianmadison/dev/BMAD-METHOD/.bmad/custom/src/workflows/quiz-master/workflow.md`
|
||||
|
||||
**Step Files (12 total):**
|
||||
|
||||
- `/Users/brianmadison/dev/BMAD-METHOD/.bmad/custom/src/workflows/quiz-master/steps/step-01-init.md` - Game setup and mode selection
|
||||
- `/Users/brianmadison/dev/BMAD-METHOD/.bmad/custom/src/workflows/quiz-master/steps/step-02-q1.md` - Question 1 (Level 1)
|
||||
- `/Users/brianmadison/dev/BMAD-METHOD/.bmad/custom/src/workflows/quiz-master/steps/step-03-q2.md` - Question 2 (Level 2)
|
||||
- `/Users/brianmadison/dev/BMAD-METHOD/.bmad/custom/src/workflows/quiz-master/steps/step-04-q3.md` - Question 3 (Level 3)
|
||||
- `/Users/brianmadison/dev/BMAD-METHOD/.bmad/custom/src/workflows/quiz-master/steps/step-05-q4.md` - Question 4 (Level 4)
|
||||
- `/Users/brianmadison/dev/BMAD-METHOD/.bmad/custom/src/workflows/quiz-master/steps/step-06-q5.md` - Question 5 (Level 5)
|
||||
- `/Users/brianmadison/dev/BMAD-METHOD/.bmad/custom/src/workflows/quiz-master/steps/step-07-q6.md` - Question 6 (Level 6)
|
||||
- `/Users/brianmadison/dev/BMAD-METHOD/.bmad/custom/src/workflows/quiz-master/steps/step-08-q7.md` - Question 7 (Level 7)
|
||||
- `/Users/brianmadison/dev/BMAD-METHOD/.bmad/custom/src/workflows/quiz-master/steps/step-09-q8.md` - Question 8 (Level 8)
|
||||
- `/Users/brianmadison/dev/BMAD-METHOD/.bmad/custom/src/workflows/quiz-master/steps/step-10-q9.md` - Question 9 (Level 9)
|
||||
- `/Users/brianmadison/dev/BMAD-METHOD/.bmad/custom/src/workflows/quiz-master/steps/step-11-q10.md` - Question 10 (Level 10)
|
||||
- `/Users/brianmadison/dev/BMAD-METHOD/.bmad/custom/src/workflows/quiz-master/steps/step-12-results.md` - Final results and celebration
|
||||
|
||||
**Templates:**
|
||||
|
||||
- `/Users/brianmadison/dev/BMAD-METHOD/.bmad/custom/src/workflows/quiz-master/templates/csv-headers.template` - CSV column headers
|
||||
|
||||
### Key Features Implemented
|
||||
|
||||
1. **Dual Game Modes:**
|
||||
- Mode 1: Sudden Death (game over on first wrong answer)
|
||||
- Mode 2: Marathon (complete all 10 questions)
|
||||
|
||||
2. **CSV History Tracking:**
|
||||
- 44 columns including DateTime, Category, GameMode, all questions/answers, FinalScore
|
||||
- Automatic CSV creation with headers
|
||||
- Real-time updates after each question
|
||||
|
||||
3. **Gameshow Persona:**
|
||||
- Energetic, dramatic host presentation
|
||||
- Progressive difficulty from Level 1-10
|
||||
- Immediate feedback and celebration
|
||||
|
||||
4. **Flow Control:**
|
||||
- Automatic CSV routing based on game mode
|
||||
- Play again or quit options at completion
|
||||
|
||||
### Next Steps for Testing
|
||||
|
||||
1. Run the workflow: `/bmad:bmb:workflows:quiz-master`
|
||||
2. Test both game modes
|
||||
3. Verify CSV file creation and updates
|
||||
4. Check question progression and difficulty
|
||||
5. Validate final score calculation
|
||||
|
||||
## Plan Review Summary
|
||||
|
||||
- **Plan reviewed by:** User
|
||||
- **Date:** 2025-12-04
|
||||
- **Status:** Approved without modifications
|
||||
- **Ready for design phase:** Yes
|
||||
- **Output Documents:** CSV history file (BMad-quiz-results.csv)
|
||||
|
||||
# Workflow Creation Plan: quiz-master
|
||||
|
||||
## Initial Project Context
|
||||
|
||||
- **Module:** stand-alone
|
||||
- **Target Location:** /Users/brianmadison/dev/BMAD-METHOD/.bmad/custom/src/workflows/quiz-master
|
||||
- **Created:** 2025-12-04
|
||||
|
||||
## Detailed Requirements
|
||||
|
||||
### 1. Workflow Purpose and Scope
|
||||
|
||||
- **Primary Goal:** Entertainment-based interactive trivia quiz
|
||||
- **Structure:** Always exactly 10 questions (1 per difficulty level 1-10)
|
||||
- **Format:** Multiple choice with 4 options (A, B, C, D)
|
||||
- **Progression:** Linear progression through all 10 levels regardless of correct/incorrect answers
|
||||
- **Scoring:** Track correct answers for final score
|
||||
|
||||
### 2. Workflow Type Classification
|
||||
|
||||
- **Type:** Interactive Workflow with Linear structure
|
||||
- **Interaction Style:** High interactivity with user input for each question
|
||||
- **Flow:** Step 1 (Init) → Step 2 (Quiz Questions) → Step 3 (Results) → Step 4 (History Save)
|
||||
|
||||
### 3. Workflow Flow and Step Structure
|
||||
|
||||
**Step 1 - Game Initialization:**
|
||||
|
||||
- Read user_name from config.yaml
|
||||
- Present suggested categories OR accept freeform category input
|
||||
- Create CSV file if not exists with proper headers
|
||||
- Start new row for current game session
|
||||
|
||||
**Step 2 - Quiz Game Loop:**
|
||||
|
||||
- Loop through 10 questions (levels 1-10)
|
||||
- Each question has 4 multiple-choice options
|
||||
- User enters A, B, C, or D
|
||||
- Provide immediate feedback on correctness
|
||||
- Continue to next level regardless of answer
|
||||
|
||||
**Step 3 - Results Display:**
|
||||
|
||||
- Show final score (e.g., "You got 7 out of 10!")
|
||||
- Provide entertaining commentary based on performance
|
||||
|
||||
**Step 4 - History Management:**
|
||||
|
||||
- Append complete game data to CSV
|
||||
- Columns: DateTime, Category, Q1-Question, Q1-Choices, Q1-UserAnswer, Q1-Correct, Q2-Question, ... Q10-Correct, FinalScore
|
||||
|
||||
### 4. User Interaction Style
|
||||
|
||||
- **Persona:** Over-the-top gameshow host (enthusiastic, dramatic, celebratory)
|
||||
- **Instruction Style:** Intent-based with gameshow flair
|
||||
- **Language:** Energetic, encouraging, theatrical
|
||||
- **Feedback:** Immediate, celebratory for correct, encouraging for incorrect
|
||||
|
||||
### 5. Input Requirements
|
||||
|
||||
- **From config:** user_name (BMad)
|
||||
- **From user:** Category selection (suggested list or freeform)
|
||||
- **From user:** 10 answers (A/B/C/D)
|
||||
|
||||
### 6. Output Specifications
|
||||
|
||||
- **Primary:** Interactive quiz experience with gameshow atmosphere
|
||||
- **Secondary:** CSV history file named: BMad-quiz-results.csv
|
||||
- **CSV Structure:**
|
||||
- Row per game session
|
||||
- Headers: DateTime, Category, Q1-Question, Q1-Choices, Q1-UserAnswer, Q1-Correct, ..., Q10-Correct, FinalScore
|
||||
|
||||
### 7. Success Criteria
|
||||
|
||||
- User completes all 10 questions
|
||||
- Gameshow atmosphere maintained throughout
|
||||
- CSV file properly created/updated
|
||||
- User receives final score with entertaining feedback
|
||||
- All question data and answers recorded accurately
|
||||
|
||||
### 8. Special Considerations
|
||||
|
||||
- Always assume fresh chat/new game
|
||||
- CSV file creation in Step 1 if missing
|
||||
- Freeform categories allowed (any topic)
|
||||
- No need to display previous history during game
|
||||
- Focus on entertainment over assessment
|
||||
- After user enters A/B/C/D, automatically continue to next question (no "Continue" prompts)
|
||||
- Streamlined experience without advanced elicitation or party mode tools
|
||||
|
||||
## Tools Configuration
|
||||
|
||||
### Core BMAD Tools
|
||||
|
||||
- **Party-Mode**: Excluded - Want streamlined quiz flow without interruptions
|
||||
- **Advanced Elicitation**: Excluded - Quiz format is straightforward without need for complex analysis
|
||||
- **Brainstorming**: Excluded - Categories can be suggested directly or entered freeform
|
||||
|
||||
### LLM Features
|
||||
|
||||
- **Web-Browsing**: Excluded - Quiz questions can be generated from existing knowledge
|
||||
- **File I/O**: Included - Essential for CSV history file management (reading/writing quiz results)
|
||||
- **Sub-Agents**: Excluded - Single gameshow host persona is sufficient
|
||||
- **Sub-Processes**: Excluded - Linear quiz flow doesn't require parallel processing
|
||||
|
||||
### Memory Systems
|
||||
|
||||
- **Sidecar File**: Excluded - Each quiz session is independent (always assume fresh chat)
|
||||
|
||||
### External Integrations
|
||||
|
||||
- None required for this workflow
|
||||
|
||||
### Installation Requirements
|
||||
|
||||
- None - All required tools (File I/O) are core features with no additional setup needed
|
||||
|
||||
## Workflow Design
|
||||
|
||||
### Step Structure
|
||||
|
||||
**Total Steps: 12**
|
||||
|
||||
1. Step 01 - Init: Mode selection, category choice, CSV setup
|
||||
2. Steps 02-11: Individual questions (1-10) with CSV updates
|
||||
3. Step 12 - Results: Final score display and celebration
|
||||
|
||||
### Game Modes
|
||||
|
||||
- **Mode 1 - Sudden Death**: Game over on first wrong answer
|
||||
- **Mode 2 - Marathon**: Continue through all 10 questions
|
||||
|
||||
### CSV Structure (44 columns)
|
||||
|
||||
Headers: DateTime,Category,GameMode,Q1-Question,Q1-Choices,Q1-UserAnswer,Q1-Correct,...,Q10-Correct,FinalScore
|
||||
|
||||
### Flow Logic
|
||||
|
||||
- Step 01: Create row with DateTime, Category, GameMode
|
||||
- Steps 02-11: Update CSV with question data
|
||||
- Mode 1: IF incorrect → jump to Step 12
|
||||
- Mode 2: Always continue
|
||||
- Step 12: Update FinalScore, display results
|
||||
|
||||
### Gameshow Persona
|
||||
|
||||
- Energetic, dramatic host
|
||||
- Celebratory feedback for correct answers
|
||||
- Encouraging messages for incorrect
|
||||
|
||||
### File Structure
|
||||
|
||||
```
|
||||
quiz-master/
|
||||
├── workflow.md
|
||||
├── steps/
|
||||
│ ├── step-01-init.md
|
||||
│ ├── step-02-q1.md
|
||||
│ ├── ...
|
||||
│ └── step-12-results.md
|
||||
└── templates/
|
||||
└── csv-headers.template
|
||||
```
|
||||
|
||||
## Output Format Design
|
||||
|
||||
**Format Type**: Strict Template
|
||||
|
||||
**Output Requirements**:
|
||||
|
||||
- Document type: CSV data file
|
||||
- File format: CSV (UTF-8 encoding)
|
||||
- Frequency: Append one row per quiz session
|
||||
|
||||
**Structure Specifications**:
|
||||
|
||||
- Exactly 44 columns with specific headers
|
||||
- Headers: DateTime,Category,GameMode,Q1-Question,Q1-Choices,Q1-UserAnswer,Q1-Correct,...,Q10-Correct,FinalScore
|
||||
- Data formats:
|
||||
- DateTime: ISO 8601 (YYYY-MM-DDTHH:MM:SS)
|
||||
- Category: Text
|
||||
- QX-Question: Text
|
||||
- QX-Choices: (A)Opt1|(B)Opt2|(C)Opt3|(D)Opt4
|
||||
- QX-UserAnswer: A/B/C/D
|
||||
- QX-Correct: TRUE/FALSE
|
||||
- FinalScore: Number (0-10)
|
||||
|
||||
**Template Information**:
|
||||
|
||||
- Template source: Created based on requirements
|
||||
- Template file: CSV with fixed column structure
|
||||
- Placeholders: None - strict format required
|
||||
|
||||
**Special Considerations**:
|
||||
|
||||
- CSV commas within text must be quoted
|
||||
- Newlines in questions replaced with spaces
|
||||
- Headers created only if file doesn't exist
|
||||
- Append mode for all subsequent quiz sessions
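To make these rules concrete, here is a minimal sketch (an illustrative addition, not part of the original plan) of how one history row could be assembled; the helper names and field shapes are assumptions chosen to match the headers and quoting rules above.

```javascript
// Minimal sketch: build one quiz-history CSV row per the rules above.
// Quote any field containing a comma or quote; replace newlines with spaces.
function csvField(value) {
  const flat = String(value).replace(/\r?\n/g, ' ');
  return /[",]/.test(flat) ? `"${flat.replace(/"/g, '""')}"` : flat;
}

function buildRow({ dateTime, category, gameMode, questions, finalScore }) {
  // questions: array of { question, choices, userAnswer, correct } for Q1..Q10
  const cells = [dateTime, category, gameMode];
  for (const q of questions) {
    cells.push(q.question, q.choices, q.userAnswer, q.correct ? 'TRUE' : 'FALSE');
  }
  cells.push(finalScore);
  return cells.map(csvField).join(',');
}
```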
|
||||
|
|
@ -45,7 +45,7 @@ web_bundle: true
|
|||
|
||||
### 1. Module Configuration Loading
|
||||
|
||||
Load and read full config from {project-root}/.bmad/bmb/config.yaml and resolve:
|
||||
Load and read full config from {project-root}/{bmad_folder}/bmb/config.yaml and resolve:
|
||||
|
||||
- `user_name`, `output_folder`, `communication_language`, `document_output_language`
|
||||
|
||||
|
|
|
|||
|
|
@ -3,9 +3,6 @@
|
|||
This module is an example and is not recommended for any real use; it was not vetted by any medical professionals and should
|
||||
be considered at best for entertainment purposes only.
|
||||
|
||||
If you want to see how a custom module installation works, copy this whole folder to the location you will be installing from with npx, and rename
|
||||
"\_module-installer/install-config.bak" to "\_module-installer/install-config.yaml".
|
||||
|
||||
You should see the option in the module selector when installing.
|
||||
|
||||
If you have received a module from someone else that is not in the official installation - you can install it similarly by running the
|
||||
|
|
|
|||
|
|
@ -1,5 +1,6 @@
|
|||
agent:
|
||||
metadata:
|
||||
id: "{bmad_folder}/mwm/agents/cbt-coach/cbt-coach.md"
|
||||
name: "Dr. Alexis, M.D."
|
||||
title: "CBT Coach"
|
||||
icon: "🧠"
|
||||
|
|
|
|||
|
|
@ -1,5 +1,6 @@
|
|||
agent:
|
||||
metadata:
|
||||
id: "{bmad_folder}/mwm/agents/crisis-navigator.md"
|
||||
name: "Beacon"
|
||||
title: "Crisis Navigator"
|
||||
icon: "🆘"
|
||||
|
|
@ -95,7 +96,7 @@ agent:
|
|||
triggers:
|
||||
- trigger: party-mode
|
||||
input: SPM or fuzzy match start party mode
|
||||
route: "{project-root}/.bmad/core/workflows/edit-agent/workflow.md"
|
||||
route: "{project-root}/{bmad_folder}/core/workflows/edit-agent/workflow.md"
|
||||
data: crisis navigator agent discussion
|
||||
type: exec
|
||||
- trigger: expert-chat
|
||||
|
|
@ -117,7 +118,7 @@ agent:
|
|||
type: action
|
||||
|
||||
- trigger: "safety-plan"
|
||||
route: "{project-root}/.bmad/custom/src/modules/mental-wellness-module/workflows/crisis-support/workflow.md"
|
||||
route: "{project-root}/{bmad_folder}/custom/src/modules/mental-wellness-module/workflows/crisis-support/workflow.md"
|
||||
description: "Create safety plan 🛡️"
|
||||
type: workflow
|
||||
|
||||
|
|
|
|||
|
|
@ -1,5 +1,6 @@
|
|||
agent:
|
||||
metadata:
|
||||
id: "{bmad_folder}/mwm/agents/meditation-guide.md"
|
||||
name: "Serenity"
|
||||
title: "Meditation Guide"
|
||||
icon: "🧘"
|
||||
|
|
@ -92,7 +93,7 @@ agent:
|
|||
triggers:
|
||||
- trigger: party-mode
|
||||
input: SPM or fuzzy match start party mode
|
||||
route: "{project-root}/.bmad/core/workflows/edit-agent/workflow.md"
|
||||
route: "{project-root}/{bmad_folder}/core/workflows/edit-agent/workflow.md"
|
||||
data: meditation guide agent discussion
|
||||
type: exec
|
||||
- trigger: expert-chat
|
||||
|
|
@ -104,7 +105,7 @@ agent:
|
|||
triggers:
|
||||
- trigger: guided-meditation
|
||||
input: GM or fuzzy match guided meditation
|
||||
route: "{project-root}/.bmad/custom/src/modules/mental-wellness-module/workflows/guided-meditation/workflow.md"
|
||||
route: "{project-root}/{bmad_folder}/custom/src/modules/mental-wellness-module/workflows/guided-meditation/workflow.md"
|
||||
description: "Full meditation session 🧘"
|
||||
type: workflow
|
||||
- trigger: body-scan
|
||||
|
|
|
|||
|
|
@ -1,5 +1,6 @@
|
|||
agent:
|
||||
metadata:
|
||||
id: "{bmad_folder}/mwm/agents/wellness-companion/wellness-companion.md"
|
||||
name: "Riley"
|
||||
title: "Wellness Companion"
|
||||
icon: "🌱"
|
||||
|
|
|
|||
|
|
@ -4,6 +4,7 @@
|
|||
code: mwm
|
||||
name: "MWM: Mental Wellness Module"
|
||||
default_selected: false
|
||||
type: module
|
||||
|
||||
header: "MWM™: Custom Wellness Module"
|
||||
subheader: "Demo of Potential Non Coding Custom Module Use case"
|
||||
|
|
@ -1,7 +1,7 @@
|
|||
{
|
||||
"$schema": "https://json.schemastore.org/package.json",
|
||||
"name": "bmad-method",
|
||||
"version": "6.0.0-alpha.14",
|
||||
"version": "6.0.0-alpha.15",
|
||||
"description": "Breakthrough Method of Agile AI-driven Development",
|
||||
"keywords": [
|
||||
"agile",
|
||||
|
|
|
|||
|
|
@ -6,7 +6,7 @@ const chalk = require('chalk');
|
|||
*
|
||||
* @param {Object} options - Installation options
|
||||
* @param {string} options.projectRoot - The root directory of the target project
|
||||
* @param {Object} options.config - Module configuration from install-config.yaml
|
||||
* @param {Object} options.config - Module configuration from module.yaml
|
||||
* @param {Array<string>} options.installedIDEs - Array of IDE codes that were installed
|
||||
* @param {Object} options.logger - Logger instance for output
|
||||
* @returns {Promise<boolean>} - Success status
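 *
 * Hedged sketch (not the actual file contents): the function this JSDoc describes
 * would plausibly look something like the commented example below; the name `install`
 * and the export shape are assumptions for illustration only.
 *
 *   async function install({ projectRoot, config, installedIDEs, logger }) {
 *     logger.log(chalk.cyan(`Installing into ${projectRoot}`));
 *     // ... module-specific post-install logic using `config` and `installedIDEs` ...
 *     return true; // resolve to a boolean success status
 *   }
 *   module.exports = { install };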
|
||||
|
|
|
|||
|
|
@ -0,0 +1,335 @@
|
|||
# Autominator - n8n Workflow Automation Module
|
||||
|
||||
**Arnold the Autominator - I'll be back... with your workflows automated!** 🦾
|
||||
|
||||
Standalone module for n8n workflow automation, creation, migration, and optimization. Build, modify, migrate, and optimize n8n workflows with expert guidance and up-to-date documentation.
|
||||
|
||||
## Overview
|
||||
|
||||
Autominator is an independent BMAD module that specializes in n8n workflow automation. Whether you're building new workflows from scratch, migrating from other platforms, or optimizing existing workflows, Arnold has you covered.
|
||||
|
||||
## Agent
|
||||
|
||||
**Arnold** - n8n Workflow Automation Specialist
|
||||
|
||||
- Expert in n8n workflow creation, modification, and optimization
|
||||
- Specializes in platform migration (Zapier, Make, HubSpot, Power Automate)
|
||||
- Uses web search to access up-to-date n8n documentation
|
||||
- Smart elicitation for accurate requirement gathering
|
||||
- Comprehensive workflow validation and testing
|
||||
|
||||
## Workflows
|
||||
|
||||
### 1. Gather Requirements
|
||||
|
||||
Gather and document workflow requirements before creating n8n workflows.
|
||||
|
||||
**Triggers:**
|
||||
|
||||
- `*gather-requirements`
|
||||
|
||||
**Features:**
|
||||
|
||||
- Interactive requirement gathering
|
||||
- Documents problem statement, triggers, integrations
|
||||
- Creates requirement file for workflow creation
|
||||
- Saves to `docs/workflow-requirements/`
|
||||
- Required before creating workflows
|
||||
|
||||
### 2. Create Workflow
|
||||
|
||||
Build new n8n workflows from scratch based on requirements.
|
||||
|
||||
**Triggers:**
|
||||
|
||||
- `*create-workflow`
|
||||
|
||||
**Features:**
|
||||
|
||||
- Smart elicitation to understand your needs
|
||||
- Workflow type selection (webhook, scheduled, event-driven, manual, database-driven)
|
||||
- Integration selection and configuration
|
||||
- Complexity assessment
|
||||
- Error handling strategy planning
|
||||
- Web search integration for latest n8n docs
|
||||
- Automatic JSON validation
|
||||
|
||||
### 3. Modify Workflow
|
||||
|
||||
Edit or update existing n8n workflows with backup and safety checks.
|
||||
|
||||
**Triggers:**
|
||||
|
||||
- `*modify-workflow`
|
||||
|
||||
**Features:**
|
||||
|
||||
- Load existing workflows from file or paste
|
||||
- Selective modification (add, modify, or remove nodes)
|
||||
- Connection management
|
||||
- Automatic backup creation
|
||||
- Change validation
|
||||
- Rollback capability
|
||||
|
||||
### 4. Migrate Workflow
|
||||
|
||||
Migrate automation workflows from other platforms to n8n.
|
||||
|
||||
**Supported Platforms:**
|
||||
|
||||
- Zapier
|
||||
- Make (Integromat)
|
||||
- HubSpot Workflows
|
||||
- Microsoft Power Automate
|
||||
- IFTTT
|
||||
- Custom platforms
|
||||
|
||||
**Triggers:**
|
||||
|
||||
- `*migrate-workflow`
|
||||
|
||||
**Features:**
|
||||
|
||||
- Platform-specific mapping
|
||||
- Trigger and action conversion
|
||||
- Data transformation planning
|
||||
- Credential requirement identification
|
||||
- Migration notes and documentation
|
||||
- Post-migration testing guidance
|
||||
|
||||
### 5. Optimize Workflow
|
||||
|
||||
Analyze and improve existing n8n workflows for performance and best practices.
|
||||
|
||||
**Triggers:**
|
||||
|
||||
- `*optimize-workflow`
|
||||
|
||||
**Features:**
|
||||
|
||||
- Comprehensive workflow analysis
|
||||
- Performance optimization recommendations
|
||||
- Error handling improvements
|
||||
- Code quality assessment
|
||||
- Structure optimization
|
||||
- Best practices validation
|
||||
- Security review
|
||||
- Automatic backup before changes
|
||||
- Selective optimization application
|
||||
|
||||
## Quick Start
|
||||
|
||||
### Load Arnold Agent
|
||||
|
||||
```bash
|
||||
# In your IDE, load the Autominator agent
|
||||
agent autominator/autominator
|
||||
|
||||
# Or use the agent trigger
|
||||
*autominator
|
||||
```
|
||||
|
||||
### Gather Requirements (Recommended First Step)
|
||||
|
||||
```bash
|
||||
# Start the requirements gathering process
|
||||
*gather-requirements
|
||||
|
||||
# Follow the interactive prompts to:
|
||||
# 1. Describe the problem you're solving
|
||||
# 2. Define trigger type
|
||||
# 3. Specify data requirements
|
||||
# 4. Define desired outcome
|
||||
# 5. List integrations
|
||||
# 6. Define conditional logic
|
||||
# 7. Set criticality level
|
||||
# 8. Name the workflow
|
||||
|
||||
# Requirements are saved to: docs/workflow-requirements/req-{name}.md
|
||||
```
|
||||
|
||||
### Create a Workflow
|
||||
|
||||
```bash
|
||||
# Start the create workflow process
|
||||
*create-workflow
|
||||
|
||||
# Arnold will:
|
||||
# 1. Check for requirements file (or prompt to create one)
|
||||
# 2. Load requirements automatically
|
||||
# 3. Research n8n documentation
|
||||
# 4. Design workflow structure
|
||||
# 5. Build and validate workflow JSON
|
||||
# 6. Save to docs/workflows/{name}.json
|
||||
```
|
||||
|
||||
### Migrate from Another Platform
|
||||
|
||||
```bash
|
||||
# Start the migration process
|
||||
*migrate-workflow
|
||||
|
||||
# Provide:
|
||||
# 1. Source platform (Zapier, Make, HubSpot, etc.)
|
||||
# 2. Workflow details or export file
|
||||
# 3. Integration list
|
||||
# 4. Desired output location
|
||||
```
|
||||
|
||||
### Optimize Existing Workflow
|
||||
|
||||
```bash
|
||||
# Analyze and improve a workflow
|
||||
*optimize-workflow
|
||||
|
||||
# Select optimization focus:
|
||||
# - Performance
|
||||
# - Error Handling
|
||||
# - Code Quality
|
||||
# - Structure
|
||||
# - Best Practices
|
||||
# - Security
|
||||
# - All
|
||||
```
|
||||
|
||||
## Features
|
||||
|
||||
### Web Search Integration
|
||||
|
||||
- Automatic web search for n8n documentation
|
||||
- Accesses official docs.n8n.io resources
|
||||
- Up-to-date node configurations and best practices
|
||||
- Problem-specific solution research
|
||||
|
||||
### Smart Elicitation
|
||||
|
||||
- Contextual analysis of existing information
|
||||
- Numbered option selection
|
||||
- Progressive requirement gathering
|
||||
- Validation before execution
|
||||
|
||||
### Comprehensive Validation
|
||||
|
||||
- JSON syntax validation
|
||||
- Schema compliance checking
|
||||
- Connection integrity verification
|
||||
- Error recovery (never deletes files)
|
||||
|
||||
### Platform Mappings
|
||||
|
||||
Built-in mappings for:
|
||||
|
||||
- Zapier triggers and actions
|
||||
- Make modules and routers
|
||||
- HubSpot workflow actions
|
||||
- Power Automate flows
|
||||
- Common automation patterns
|
||||
|
||||
### Shared Resources
|
||||
|
||||
- **n8n-helpers.md** - Node creation guidelines and patterns
|
||||
- **n8n-templates.yaml** - 8 reusable workflow templates
|
||||
- **platform-mappings.yaml** - Platform conversion reference
|
||||
|
||||
## Module Structure
|
||||
|
||||
```
|
||||
autominator/
|
||||
├── _module-installer/
|
||||
│ └── install-config.yaml
|
||||
├── agents/
|
||||
│ └── autominator.agent.yaml
|
||||
├── workflows/
|
||||
│ ├── _shared/
|
||||
│ │ ├── n8n-helpers.md
|
||||
│ │ ├── n8n-templates.yaml
|
||||
│ │ └── platform-mappings.yaml
|
||||
│ ├── create-workflow/
|
||||
│ │ ├── workflow.yaml
|
||||
│ │ ├── instructions.md
|
||||
│ │ └── checklist.md
|
||||
│ ├── modify-workflow/
|
||||
│ │ ├── workflow.yaml
|
||||
│ │ ├── instructions.md
|
||||
│ │ └── checklist.md
|
||||
│ ├── migrate-workflow/
|
||||
│ │ ├── workflow.yaml
|
||||
│ │ ├── instructions.md
|
||||
│ │ └── checklist.md
|
||||
│ └── optimize-workflow/
|
||||
│ ├── workflow.yaml
|
||||
│ ├── instructions.md
|
||||
│ └── checklist.md
|
||||
└── README.md
|
||||
```
|
||||
|
||||
## Requirements
|
||||
|
||||
- n8n instance or account
|
||||
- IDE with BMAD support
|
||||
|
||||
## Installation
|
||||
|
||||
Autominator is a standalone module and can be installed independently:
|
||||
|
||||
```bash
|
||||
# Install via BMAD
|
||||
npx bmad-method@alpha install autominator
|
||||
|
||||
# Or manually copy to your BMAD installation
|
||||
cp -r autominator/ /path/to/bmad/src/modules/
|
||||
```
|
||||
|
||||
## Integration with Other Modules
|
||||
|
||||
Autominator is independent but can be used alongside:
|
||||
|
||||
- **BMM** - For project lifecycle management
|
||||
- **CIS** - For creative workflow design
|
||||
- **BMB** - For module building
|
||||
- **BMGD** - For game development workflows
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Provide Clear Context** - Describe your workflow purpose and requirements
|
||||
2. **Use Smart Elicitation** - Let Arnold ask clarifying questions
|
||||
3. **Test Before Activation** - Always test workflows with sample data
|
||||
4. **Monitor Initial Runs** - Watch for errors in first executions
|
||||
5. **Document Changes** - Keep notes on workflow modifications
|
||||
6. **Backup Regularly** - Use modify-workflow's backup feature
|
||||
7. **Review Optimizations** - Understand changes before applying
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Workflow JSON Validation Fails
|
||||
|
||||
- Check for missing commas or brackets
|
||||
- Verify all node IDs are unique
|
||||
- Ensure all connections reference existing nodes
|
||||
- Use the error location to fix syntax
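For quick diagnosis, a small script along these lines (an illustrative sketch, not part of the module; the file path is an example) surfaces the parser's reported error location:

```javascript
// Minimal sketch: locate a JSON syntax error in a generated workflow file.
const fs = require('fs');

const raw = fs.readFileSync('docs/workflows/my-workflow.json', 'utf8'); // example path
try {
  JSON.parse(raw);
  console.log('Workflow JSON parses cleanly.');
} catch (err) {
  // Node's error message typically includes "... at position N"; use it to jump to the error.
  console.error('Invalid workflow JSON:', err.message);
}
```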
|
||||
|
||||
### Workflow Execution Issues
|
||||
|
||||
- Verify all credentials are configured
|
||||
- Test with sample data first
|
||||
- Check error handling settings
|
||||
- Review workflow logs for details
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- **[n8n Documentation](https://docs.n8n.io/)** - Official n8n docs
|
||||
- **[BMAD Method](../bmm/README.md)** - Core BMAD framework
|
||||
- **[CIS Module](../cis/README.md)** - Creative facilitation
|
||||
- **[BMB Module](../bmb/README.md)** - Module building
|
||||
|
||||
## Support
|
||||
|
||||
- **Issues** - Report bugs on GitHub
|
||||
- **Questions** - Check the troubleshooting section
|
||||
- **Feedback** - Share suggestions for improvements
|
||||
|
||||
---
|
||||
|
||||
**Ready to automate?** Load Arnold and start with `*create-workflow`!
|
||||
|
||||
Part of BMad Method - Transform automation potential through expert AI guidance.
|
||||
|
|
@ -0,0 +1,57 @@
|
|||
# Autominator Module Installation Configuration
|
||||
|
||||
code: autominator
|
||||
name: "Autominator: n8n Workflow Automation"
|
||||
default_selected: false
|
||||
|
||||
header: "Autominator - n8n Workflow Automation Module"
|
||||
subheader: "Configure the settings for the Autominator module"
|
||||
|
||||
# Core config values automatically inherited:
|
||||
## user_name
|
||||
## communication_language
|
||||
## document_output_language
|
||||
## output_folder
|
||||
## bmad_folder
|
||||
## install_user_docs
|
||||
## kb_install
|
||||
|
||||
n8n_instance_url:
|
||||
prompt: "What is your n8n instance URL? (optional, for reference)"
|
||||
default: "https://n8n.example.com"
|
||||
result: "{value}"
|
||||
|
||||
workflow_output_folder:
|
||||
prompt: "Where should generated workflows be stored?"
|
||||
default: "{output_folder}/n8n-workflows"
|
||||
result: "{project-root}/{value}"
|
||||
|
||||
automation_experience:
|
||||
prompt: "What is your n8n/automation experience level?"
|
||||
default: "intermediate"
|
||||
result: "{value}"
|
||||
single-select:
|
||||
- value: "beginner"
|
||||
label: "Beginner - New to n8n, provide detailed guidance"
|
||||
- value: "intermediate"
|
||||
label: "Intermediate - Familiar with n8n concepts, balanced approach"
|
||||
- value: "expert"
|
||||
label: "Expert - Experienced n8n developer, be direct and technical"
|
||||
|
||||
primary_integrations:
|
||||
prompt: "Which integrations do you primarily use? (select all that apply)"
|
||||
default: ["http", "database"]
|
||||
result: "{value}"
|
||||
multi-select:
|
||||
- value: "http"
|
||||
label: "HTTP/REST APIs"
|
||||
- value: "database"
|
||||
label: "Databases (PostgreSQL, MySQL, MongoDB)"
|
||||
- value: "cloud"
|
||||
label: "Cloud Services (Google Sheets, Slack, Notion, Airtable)"
|
||||
- value: "crm"
|
||||
label: "CRM Systems (HubSpot, Salesforce)"
|
||||
- value: "email"
|
||||
label: "Email"
|
||||
- value: "custom"
|
||||
label: "Custom/Other"
|
||||
|
|
@ -0,0 +1,48 @@
|
|||
# Autominator - Arnold the Automation Expert
|
||||
|
||||
agent:
|
||||
webskip: true
|
||||
metadata:
|
||||
id: "{bmad_folder}/autominator/agents/autominator.md"
|
||||
name: Arnold
|
||||
title: Arnold the Autominator
|
||||
icon: 🦾
|
||||
module: autominator
|
||||
|
||||
persona:
|
||||
role: n8n Workflow Automation Specialist
|
||||
identity: Arnold the Autominator - I'll be back... with your workflows automated! 🦾 Expert in n8n workflow creation, migration, and optimization. Specializes in building automation workflows, migrating from other platforms (Zapier, Make, HubSpot), and optimizing existing n8n workflows using up-to-date documentation via web search.
|
||||
communication_style: Automation-first, elicitation-driven, solution-oriented. Presents options as numbered lists for easy selection. Always validates understanding before building. Direct, confident, and results-focused.
|
||||
principles: |
|
||||
- Web Search Integration - Always search for latest n8n documentation from docs.n8n.io for accurate, up-to-date implementations.
|
||||
- Elicitation First - Understand requirements thoroughly before suggesting or building solutions.
|
||||
- Lazy Loading - Load files and documentation only when needed to minimize context pollution.
|
||||
- Validation - Always validate workflow JSON syntax after creation.
|
||||
- Platform Agnostic - Support migration from any automation platform with proper mapping.
|
||||
- Error Recovery - NEVER delete files due to syntax errors, always fix them using error location information.
|
||||
- Structured Approach - Follow task-specific workflows for different automation scenarios.
|
||||
|
||||
menu:
|
||||
- trigger: gather-requirements
|
||||
workflow: "{project-root}/{bmad_folder}/autominator/workflows/gather-requirements/workflow.yaml"
|
||||
description: Gather and document workflow requirements (run this first before creating workflows)
|
||||
|
||||
- trigger: create-workflow
|
||||
workflow: "{project-root}/{bmad_folder}/autominator/workflows/create-workflow/workflow.yaml"
|
||||
description: Create new n8n workflow from scratch based on requirements
|
||||
|
||||
- trigger: modify-workflow
|
||||
workflow: "{project-root}/{bmad_folder}/autominator/workflows/modify-workflow/workflow.yaml"
|
||||
description: Edit or update existing n8n workflow
|
||||
|
||||
- trigger: migrate-workflow
|
||||
workflow: "{project-root}/{bmad_folder}/autominator/workflows/migrate-workflow/workflow.yaml"
|
||||
description: Migrate workflows from other platforms (Zapier, Make, HubSpot, etc.) to n8n
|
||||
|
||||
- trigger: optimize-workflow
|
||||
workflow: "{project-root}/{bmad_folder}/autominator/workflows/optimize-workflow/workflow.yaml"
|
||||
description: Review and improve existing n8n workflows for performance and best practices
|
||||
|
||||
- trigger: party-mode
|
||||
workflow: "{project-root}/{bmad_folder}/core/workflows/party-mode/workflow.yaml"
|
||||
description: Bring the whole team in to chat with other expert agents from the party
|
||||
|
|
@ -0,0 +1,405 @@
|
|||
# n8n Workflow Helpers
|
||||
|
||||
## UUID Generation
|
||||
|
||||
n8n uses UUIDs for node IDs, workflow IDs, and webhook IDs. Generate UUIDs in this format:
|
||||
|
||||
**Full UUID (36 characters):** `f8b7ff4f-6375-4c79-9b2c-9814bfdd0c92`
|
||||
|
||||
- Used for: node `id`, `webhookId`, `versionId`
|
||||
- Format: 8-4-4-4-12 hexadecimal characters with hyphens
|
||||
|
||||
**Short ID (16 characters):** `Wvmqb0POKmqwCoKy`
|
||||
|
||||
- Used for: workflow `id`, tag `id`
|
||||
- Format: alphanumeric (a-z, A-Z, 0-9)
|
||||
|
||||
**Assignment ID:** `id-1`, `id-2`, `id-3`
|
||||
|
||||
- Used for: Set node assignments, IF node conditions
|
||||
- Format: "id-" + sequential number
|
||||
|
||||
## Node Creation Guidelines
|
||||
|
||||
### Basic Node Structure (Modern n8n Format)
|
||||
|
||||
```json
|
||||
{
|
||||
"parameters": {},
|
||||
"id": "f8b7ff4f-6375-4c79-9b2c-9814bfdd0c92",
|
||||
"name": "Node Name",
|
||||
"type": "n8n-nodes-base.nodeName",
|
||||
"typeVersion": 2,
|
||||
"position": [1424, 496],
|
||||
"webhookId": "b5f0b784-2440-4371-bcf1-b59dd2b29e68",
|
||||
"credentials": {}
|
||||
}
|
||||
```
|
||||
|
||||
**Critical Rules:**
|
||||
|
||||
- `parameters` comes FIRST
|
||||
- `id` must be UUID format (e.g., "f8b7ff4f-6375-4c79-9b2c-9814bfdd0c92")
|
||||
- `type` must be `n8n-nodes-base.nodeName` format (NOT @n8n/n8n-nodes-\*)
|
||||
- `typeVersion` must be INTEGER (e.g., 2, 3, 4) NOT float (2.1, 3.4)
|
||||
- `position` must be array of integers: [x, y]
|
||||
- `webhookId` required for webhook nodes (UUID format)
|
||||
- Field order matters for n8n compatibility
|
||||
|
||||
### Node Positioning
|
||||
|
||||
- Start node: [250, 300]
|
||||
- Horizontal spacing: 220px between nodes
|
||||
- Vertical spacing: 100px for parallel branches
|
||||
- Grid alignment: Snap to 20px grid for clean layout
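A tiny helper like this sketch (an illustrative assumption, not an n8n API) keeps a linear chain on that layout:

```javascript
// Illustrative helper: lay out the i-th node of a linear chain per the spacing above.
// Start at [250, 300]; step 220px per node horizontally, 100px per parallel branch.
function nodePosition(index, branch = 0) {
  return [250 + index * 220, 300 + branch * 100];
}

// nodePosition(0) -> [250, 300]   nodePosition(2, 1) -> [690, 400]
```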
|
||||
|
||||
### Common Node Types
|
||||
|
||||
### ⚠️ CRITICAL: Node Type Format Rules
|
||||
|
||||
**ALWAYS use format:** `n8n-nodes-base.nodeName`
|
||||
|
||||
**NEVER use these formats:**
|
||||
|
||||
- ❌ `@n8n/n8n-nodes-slack.slackTrigger` (wrong package format)
|
||||
- ❌ `n8n-nodes-slack.slackTrigger` (missing base)
|
||||
- ❌ `slackTrigger` (missing prefix)
|
||||
|
||||
**Correct Examples:**
|
||||
|
||||
- ✅ `n8n-nodes-base.webhook`
|
||||
- ✅ `n8n-nodes-base.slackTrigger`
|
||||
- ✅ `n8n-nodes-base.gmail`
|
||||
- ✅ `n8n-nodes-base.if`
|
||||
|
||||
**Trigger Nodes:**
|
||||
|
||||
- `n8n-nodes-base.webhook` - HTTP webhook trigger
|
||||
- `n8n-nodes-base.scheduleTrigger` - Cron/interval trigger
|
||||
- `n8n-nodes-base.manualTrigger` - Manual execution trigger
|
||||
- `n8n-nodes-base.emailTrigger` - Email trigger
|
||||
- `n8n-nodes-base.slackTrigger` - Slack event trigger
|
||||
|
||||
**Action Nodes:**
|
||||
|
||||
- `n8n-nodes-base.httpRequest` - HTTP API calls
|
||||
- `n8n-nodes-base.set` - Data transformation
|
||||
- `n8n-nodes-base.code` - Custom JavaScript/Python code
|
||||
- `n8n-nodes-base.if` - Conditional branching
|
||||
- `n8n-nodes-base.merge` - Merge data from multiple branches
|
||||
- `n8n-nodes-base.splitInBatches` - Process data in batches
|
||||
|
||||
**Integration Nodes:**
|
||||
|
||||
- `n8n-nodes-base.googleSheets` - Google Sheets
|
||||
- `n8n-nodes-base.slack` - Slack actions
|
||||
- `n8n-nodes-base.gmail` - Gmail
|
||||
- `n8n-nodes-base.notion` - Notion
|
||||
- `n8n-nodes-base.airtable` - Airtable
|
||||
- `n8n-nodes-base.postgres` - PostgreSQL
|
||||
- `n8n-nodes-base.mysql` - MySQL
|
||||
|
||||
## Connection Guidelines
|
||||
|
||||
### Connection Structure
|
||||
|
||||
### ⚠️ CRITICAL: Connection Format Rules
|
||||
|
||||
**CORRECT Format:**
|
||||
|
||||
```json
|
||||
{
|
||||
"Source Node Name": {
|
||||
"main": [
|
||||
[
|
||||
{
|
||||
"node": "Target Node Name",
|
||||
"type": "main",
|
||||
"index": 0
|
||||
}
|
||||
]
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**WRONG Formats:**
|
||||
|
||||
```json
|
||||
// ❌ WRONG - Missing "main" wrapper
|
||||
{
|
||||
"Source Node Name": [
|
||||
[
|
||||
{
|
||||
"node": "Target Node Name",
|
||||
"type": "main",
|
||||
"index": 0
|
||||
}
|
||||
]
|
||||
]
|
||||
}
|
||||
|
||||
// ❌ WRONG - Direct array
|
||||
{
|
||||
"Source Node Name": [[{...}]]
|
||||
}
|
||||
```
|
||||
|
||||
### Connection Rules
|
||||
|
||||
1. Each connection has a source node and target node
|
||||
2. Connections object structure: `{"Source": {"main": [[{...}]]}}`
|
||||
3. The "main" key is REQUIRED (wraps the connection array)
|
||||
4. Index 0 is default output, index 1+ for conditional branches
|
||||
5. IF nodes have index 0 (true) and index 1 (false)
|
||||
6. Always validate that referenced node names exist
|
||||
|
||||
### Connection Patterns
|
||||
|
||||
**Linear Flow:**
|
||||
|
||||
```
|
||||
Trigger → Action1 → Action2 → End
|
||||
```
|
||||
|
||||
**Conditional Branch:**
|
||||
|
||||
```
|
||||
Trigger → IF Node → [true: Action1, false: Action2] → Merge
|
||||
```
|
||||
|
||||
**Parallel Processing:**
|
||||
|
||||
```
|
||||
Trigger → Split → [Branch1, Branch2, Branch3] → Merge
|
||||
```
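For the conditional branch above, the connections object takes roughly this shape (node names are placeholders; the first output array feeds the true branch and the second feeds the false branch, per the rules earlier):

```javascript
// Illustrative connections object for the conditional-branch pattern above.
const connections = {
  "Check Condition": {
    "main": [
      [{ "node": "True Branch", "type": "main", "index": 0 }],
      [{ "node": "False Branch", "type": "main", "index": 0 }]
    ]
  }
};
```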
|
||||
|
||||
## Error Handling Best Practices
|
||||
|
||||
### Error Workflow Pattern
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "Error Handler",
|
||||
"type": "n8n-nodes-base.errorTrigger",
|
||||
"parameters": {
|
||||
"errorWorkflows": ["workflow-id"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Retry Configuration
|
||||
|
||||
```json
|
||||
{
|
||||
"retryOnFail": true,
|
||||
"maxTries": 3,
|
||||
"waitBetweenTries": 1000
|
||||
}
|
||||
```
|
||||
|
||||
## Data Transformation Patterns
|
||||
|
||||
### Using Set Node (Modern Format - typeVersion 3+)
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "Transform Data",
|
||||
"type": "n8n-nodes-base.set",
|
||||
"typeVersion": 3,
|
||||
"parameters": {
|
||||
"assignments": {
|
||||
"assignments": [
|
||||
{
|
||||
"id": "id-1",
|
||||
"name": "outputField",
|
||||
"value": "={{ $json.inputField }}",
|
||||
"type": "string"
|
||||
}
|
||||
]
|
||||
},
|
||||
"includeOtherFields": true,
|
||||
"options": {}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Critical Rules for Set Node:**
|
||||
|
||||
- Use `assignments.assignments` structure (not `values`)
|
||||
- Each assignment needs `id` field (e.g., "id-1", "id-2")
|
||||
- Each assignment needs `type` field ("string", "number", "boolean")
|
||||
- Include `includeOtherFields: true` to pass through other data
|
||||
- Include `options: {}` for compatibility
|
||||
|
||||
### Using Gmail Node (typeVersion 2+)
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "Send Email",
|
||||
"type": "n8n-nodes-base.gmail",
|
||||
"typeVersion": 2,
|
||||
"parameters": {
|
||||
"sendTo": "user@example.com",
|
||||
"subject": "Email Subject",
|
||||
"message": "Email body content",
|
||||
"options": {}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Critical Rules for Gmail Node:**
|
||||
|
||||
- Use `message` parameter (NOT `text`)
|
||||
- Use `sendTo` (NOT `to`)
|
||||
- Include `options: {}` for compatibility
|
||||
|
||||
### Using Slack Node with Channel Selection
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "Slack Action",
|
||||
"type": "n8n-nodes-base.slack",
|
||||
"typeVersion": 2,
|
||||
"parameters": {
|
||||
"channel": {
|
||||
"__rl": true,
|
||||
"value": "general",
|
||||
"mode": "list",
|
||||
"cachedResultName": "#general"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Critical Rules for Slack Channel:**
|
||||
|
||||
- Use `__rl: true` flag for resource locator
|
||||
- Include `mode: "list"` for channel selection
|
||||
- Include `cachedResultName` with # prefix
|
||||
|
||||
### Using IF Node (typeVersion 2+)
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "Check Condition",
|
||||
"type": "n8n-nodes-base.if",
|
||||
"typeVersion": 2,
|
||||
"parameters": {
|
||||
"conditions": {
|
||||
"options": {
|
||||
"caseSensitive": false,
|
||||
"leftValue": "",
|
||||
"typeValidation": "loose"
|
||||
},
|
||||
"conditions": [
|
||||
{
|
||||
"id": "id-1",
|
||||
"leftValue": "={{ $json.field }}",
|
||||
"rightValue": "value",
|
||||
"operator": {
|
||||
"type": "string",
|
||||
"operation": "equals"
|
||||
}
|
||||
}
|
||||
],
|
||||
"combinator": "and"
|
||||
},
|
||||
"options": {}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Critical Rules for IF Node:**
|
||||
|
||||
- Use `conditions.conditions` structure
|
||||
- Each condition needs `id` field
|
||||
- Do NOT include `name` field in conditions
|
||||
- Use `operator` object with `type` and `operation`
|
||||
- Include `options` at root level
|
||||
|
||||
### Using Code Node
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "Custom Logic",
|
||||
"type": "n8n-nodes-base.code",
|
||||
"parameters": {
|
||||
"language": "javaScript",
|
||||
"jsCode": "return items.map(item => ({ json: { ...item.json, processed: true } }));"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Credentials Management
|
||||
|
||||
### Credential Reference
|
||||
|
||||
```json
|
||||
{
|
||||
"credentials": {
|
||||
"httpBasicAuth": {
|
||||
"id": "credential-id",
|
||||
"name": "My API Credentials"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Common Credential Types
|
||||
|
||||
- `httpBasicAuth` - Basic authentication
|
||||
- `oAuth2Api` - OAuth2
|
||||
- `httpHeaderAuth` - Header-based auth
|
||||
- `httpQueryAuth` - Query parameter auth
|
||||
|
||||
## Workflow Metadata (Modern n8n Format)
|
||||
|
||||
### Required Fields
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "Workflow Name",
|
||||
"nodes": [],
|
||||
"pinData": {},
|
||||
"connections": {},
|
||||
"active": false,
|
||||
"settings": {
|
||||
"executionOrder": "v1"
|
||||
},
|
||||
"versionId": "7d745171-e378-411c-bd0a-25a8368a1cb6",
|
||||
"meta": {
|
||||
"templateCredsSetupCompleted": true,
|
||||
"instanceId": "2229c21690ffe7e7b16788a579be3103980c4445acb933f7ced2a6a17f0bd18b"
|
||||
},
|
||||
"id": "Wvmqb0POKmqwCoKy",
|
||||
"tags": [
|
||||
{
|
||||
"name": "Automation",
|
||||
"id": "7FHIZPUaIaChwuiS",
|
||||
"updatedAt": "2025-11-21T19:39:46.484Z",
|
||||
"createdAt": "2025-11-21T19:39:46.484Z"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
**Critical Rules:**
|
||||
|
||||
- `pinData` must be empty object `{}`
|
||||
- `versionId` must be UUID
|
||||
- `meta` object with `templateCredsSetupCompleted` and `instanceId`
|
||||
- `id` must be short alphanumeric (e.g., "Wvmqb0POKmqwCoKy")
|
||||
- `tags` must be array of objects (not strings) with id, name, createdAt, updatedAt
|
||||
|
||||
## Validation Checklist
|
||||
|
||||
- [ ] All node IDs are unique
|
||||
- [ ] All node names are unique
|
||||
- [ ] All connections reference existing nodes
|
||||
- [ ] Trigger node exists and is properly configured
|
||||
- [ ] Node positions don't overlap
|
||||
- [ ] Required parameters are set for each node
|
||||
- [ ] Credentials are properly referenced
|
||||
- [ ] Error handling is configured where needed
|
||||
- [ ] JSON syntax is valid
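A lightweight script along these lines (an illustrative sketch, not part of the module) can automate the structural items on this checklist; it assumes the workflow JSON lives in a local file:

```javascript
// Minimal structural checks for a generated n8n workflow JSON (illustrative only).
const fs = require('fs');

const wf = JSON.parse(fs.readFileSync(process.argv[2] || 'workflow.json', 'utf8'));
const problems = [];

const ids = wf.nodes.map((n) => n.id);
const names = wf.nodes.map((n) => n.name);
if (new Set(ids).size !== ids.length) problems.push('duplicate node ids');
if (new Set(names).size !== names.length) problems.push('duplicate node names');

// Every connection must reference an existing node and use the "main" wrapper.
for (const [source, outputs] of Object.entries(wf.connections || {})) {
  if (!names.includes(source)) problems.push(`connection from unknown node "${source}"`);
  for (const branch of outputs.main || []) {
    for (const target of branch) {
      if (!names.includes(target.node)) problems.push(`connection to unknown node "${target.node}"`);
    }
  }
}

console.log(problems.length ? `Issues:\n- ${problems.join('\n- ')}` : 'Workflow passes the structural checks.');
```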
|
||||
|
|
@ -0,0 +1,299 @@
|
|||
# n8n Workflow Templates
|
||||
|
||||
# Basic webhook workflow template
|
||||
webhook_workflow:
|
||||
name: "Webhook Workflow"
|
||||
nodes:
|
||||
- id: "webhook_trigger"
|
||||
name: "Webhook"
|
||||
type: "n8n-nodes-base.webhook"
|
||||
typeVersion: 1
|
||||
position: [250, 300]
|
||||
parameters:
|
||||
httpMethod: "POST"
|
||||
path: "webhook-path"
|
||||
responseMode: "onReceived"
|
||||
- id: "process_data"
|
||||
name: "Process Data"
|
||||
type: "n8n-nodes-base.set"
|
||||
typeVersion: 1
|
||||
position: [470, 300]
|
||||
parameters:
|
||||
mode: "manual"
|
||||
values: {}
|
||||
connections:
|
||||
Webhook:
|
||||
- - node: "Process Data"
|
||||
type: "main"
|
||||
index: 0
|
||||
|
||||
# Scheduled workflow template
|
||||
scheduled_workflow:
|
||||
name: "Scheduled Workflow"
|
||||
nodes:
|
||||
- id: "schedule_trigger"
|
||||
name: "Schedule Trigger"
|
||||
type: "n8n-nodes-base.scheduleTrigger"
|
||||
typeVersion: 1
|
||||
position: [250, 300]
|
||||
parameters:
|
||||
rule:
|
||||
interval:
|
||||
- field: "hours"
|
||||
hoursInterval: 1
|
||||
- id: "execute_action"
|
||||
name: "Execute Action"
|
||||
type: "n8n-nodes-base.httpRequest"
|
||||
typeVersion: 1
|
||||
position: [470, 300]
|
||||
parameters:
|
||||
method: "GET"
|
||||
url: ""
|
||||
connections:
|
||||
Schedule Trigger:
|
||||
- - node: "Execute Action"
|
||||
type: "main"
|
||||
index: 0
|
||||
|
||||
# Conditional workflow template
|
||||
conditional_workflow:
|
||||
name: "Conditional Workflow"
|
||||
nodes:
|
||||
- id: "manual_trigger"
|
||||
name: "Manual Trigger"
|
||||
type: "n8n-nodes-base.manualTrigger"
|
||||
typeVersion: 1
|
||||
position: [250, 300]
|
||||
parameters: {}
|
||||
- id: "if_condition"
|
||||
name: "IF"
|
||||
type: "n8n-nodes-base.if"
|
||||
typeVersion: 1
|
||||
position: [470, 300]
|
||||
parameters:
|
||||
conditions:
|
||||
boolean: []
|
||||
number: []
|
||||
string: []
|
||||
- id: "true_branch"
|
||||
name: "True Branch"
|
||||
type: "n8n-nodes-base.noOp"
|
||||
typeVersion: 1
|
||||
position: [690, 200]
|
||||
parameters: {}
|
||||
- id: "false_branch"
|
||||
name: "False Branch"
|
||||
type: "n8n-nodes-base.noOp"
|
||||
typeVersion: 1
|
||||
position: [690, 400]
|
||||
parameters: {}
|
||||
connections:
|
||||
Manual Trigger:
|
||||
- - node: "IF"
|
||||
type: "main"
|
||||
index: 0
|
||||
IF:
|
||||
- - node: "True Branch"
|
||||
type: "main"
|
||||
index: 0
|
||||
- - node: "False Branch"
|
||||
type: "main"
|
||||
index: 0
|
||||
|
||||
# API integration workflow template
|
||||
api_integration_workflow:
|
||||
name: "API Integration Workflow"
|
||||
nodes:
|
||||
- id: "webhook_trigger"
|
||||
name: "Webhook"
|
||||
type: "n8n-nodes-base.webhook"
|
||||
typeVersion: 1
|
||||
position: [250, 300]
|
||||
parameters:
|
||||
httpMethod: "POST"
|
||||
path: "api-webhook"
|
||||
responseMode: "onReceived"
|
||||
- id: "http_request"
|
||||
name: "HTTP Request"
|
||||
type: "n8n-nodes-base.httpRequest"
|
||||
typeVersion: 1
|
||||
position: [470, 300]
|
||||
parameters:
|
||||
method: "POST"
|
||||
url: ""
|
||||
jsonParameters: true
|
||||
options: {}
|
||||
- id: "transform_response"
|
||||
name: "Transform Response"
|
||||
type: "n8n-nodes-base.set"
|
||||
typeVersion: 1
|
||||
position: [690, 300]
|
||||
parameters:
|
||||
mode: "manual"
|
||||
values: {}
|
||||
connections:
|
||||
Webhook:
|
||||
- - node: "HTTP Request"
|
||||
type: "main"
|
||||
index: 0
|
||||
HTTP Request:
|
||||
- - node: "Transform Response"
|
||||
type: "main"
|
||||
index: 0
|
||||
|
||||
# Database workflow template
|
||||
database_workflow:
|
||||
name: "Database Workflow"
|
||||
nodes:
|
||||
- id: "schedule_trigger"
|
||||
name: "Schedule Trigger"
|
||||
type: "n8n-nodes-base.scheduleTrigger"
|
||||
typeVersion: 1
|
||||
position: [250, 300]
|
||||
parameters:
|
||||
rule:
|
||||
interval:
|
||||
- field: "minutes"
|
||||
minutesInterval: 15
|
||||
- id: "postgres_query"
|
||||
name: "Postgres"
|
||||
type: "n8n-nodes-base.postgres"
|
||||
typeVersion: 1
|
||||
position: [470, 300]
|
||||
parameters:
|
||||
operation: "executeQuery"
|
||||
query: ""
|
||||
- id: "process_results"
|
||||
name: "Process Results"
|
||||
type: "n8n-nodes-base.code"
|
||||
typeVersion: 1
|
||||
position: [690, 300]
|
||||
parameters:
|
||||
language: "javaScript"
|
||||
jsCode: "return items;"
|
||||
connections:
|
||||
Schedule Trigger:
|
||||
- - node: "Postgres"
|
||||
type: "main"
|
||||
index: 0
|
||||
Postgres:
|
||||
- - node: "Process Results"
|
||||
type: "main"
|
||||
index: 0
|
||||
|
||||
# Error handling workflow template
|
||||
error_handling_workflow:
|
||||
name: "Error Handling Workflow"
|
||||
nodes:
|
||||
- id: "manual_trigger"
|
||||
name: "Manual Trigger"
|
||||
type: "n8n-nodes-base.manualTrigger"
|
||||
typeVersion: 1
|
||||
position: [250, 300]
|
||||
parameters: {}
|
||||
- id: "risky_operation"
|
||||
name: "Risky Operation"
|
||||
type: "n8n-nodes-base.httpRequest"
|
||||
typeVersion: 1
|
||||
position: [470, 300]
|
||||
parameters:
|
||||
method: "GET"
|
||||
url: ""
|
||||
continueOnFail: true
|
||||
retryOnFail: true
|
||||
maxTries: 3
|
||||
waitBetweenTries: 1000
|
||||
- id: "check_error"
|
||||
name: "Check for Error"
|
||||
type: "n8n-nodes-base.if"
|
||||
typeVersion: 1
|
||||
position: [690, 300]
|
||||
parameters:
|
||||
conditions:
|
||||
boolean:
|
||||
- value1: "={{ $json.error !== undefined }}"
|
||||
value2: true
|
||||
- id: "handle_error"
|
||||
name: "Handle Error"
|
||||
type: "n8n-nodes-base.set"
|
||||
typeVersion: 1
|
||||
position: [910, 200]
|
||||
parameters:
|
||||
mode: "manual"
|
||||
values:
|
||||
string:
|
||||
- name: "status"
|
||||
value: "error"
|
||||
- id: "success_path"
|
||||
name: "Success Path"
|
||||
type: "n8n-nodes-base.noOp"
|
||||
typeVersion: 1
|
||||
position: [910, 400]
|
||||
parameters: {}
|
||||
connections:
|
||||
Manual Trigger:
|
||||
- - node: "Risky Operation"
|
||||
type: "main"
|
||||
index: 0
|
||||
Risky Operation:
|
||||
- - node: "Check for Error"
|
||||
type: "main"
|
||||
index: 0
|
||||
Check for Error:
|
||||
- - node: "Handle Error"
|
||||
type: "main"
|
||||
index: 0
|
||||
- - node: "Success Path"
|
||||
type: "main"
|
||||
index: 0
|
||||
|
||||
# Batch processing workflow template
|
||||
batch_processing_workflow:
|
||||
name: "Batch Processing Workflow"
|
||||
nodes:
|
||||
- id: "manual_trigger"
|
||||
name: "Manual Trigger"
|
||||
type: "n8n-nodes-base.manualTrigger"
|
||||
typeVersion: 1
|
||||
position: [250, 300]
|
||||
parameters: {}
|
||||
- id: "get_data"
|
||||
name: "Get Data"
|
||||
type: "n8n-nodes-base.httpRequest"
|
||||
typeVersion: 1
|
||||
position: [470, 300]
|
||||
parameters:
|
||||
method: "GET"
|
||||
url: ""
|
||||
- id: "split_batches"
|
||||
name: "Split In Batches"
|
||||
type: "n8n-nodes-base.splitInBatches"
|
||||
typeVersion: 1
|
||||
position: [690, 300]
|
||||
parameters:
|
||||
batchSize: 10
|
||||
- id: "process_batch"
|
||||
name: "Process Batch"
|
||||
type: "n8n-nodes-base.code"
|
||||
typeVersion: 1
|
||||
position: [910, 300]
|
||||
parameters:
|
||||
language: "javaScript"
|
||||
jsCode: "return items;"
|
||||
connections:
|
||||
Manual Trigger:
|
||||
- - node: "Get Data"
|
||||
type: "main"
|
||||
index: 0
|
||||
Get Data:
|
||||
- - node: "Split In Batches"
|
||||
type: "main"
|
||||
index: 0
|
||||
Split In Batches:
|
||||
- - node: "Process Batch"
|
||||
type: "main"
|
||||
index: 0
|
||||
Process Batch:
|
||||
- - node: "Split In Batches"
|
||||
type: "main"
|
||||
index: 0
|
||||
|
|
@ -0,0 +1,282 @@
|
|||
# Platform Migration Mappings
|
||||
# Maps common automation platform concepts to n8n equivalents
|
||||
|
||||
# Zapier to n8n mappings
|
||||
zapier:
|
||||
triggers:
|
||||
"New Email":
|
||||
n8n_node: "n8n-nodes-base.emailTrigger"
|
||||
notes: "Configure IMAP/POP3 credentials"
|
||||
"Webhook":
|
||||
n8n_node: "n8n-nodes-base.webhook"
|
||||
notes: "Use POST method by default"
|
||||
"Schedule":
|
||||
n8n_node: "n8n-nodes-base.scheduleTrigger"
|
||||
notes: "Convert Zapier schedule format to cron"
|
||||
"New Row in Google Sheets":
|
||||
n8n_node: "n8n-nodes-base.googleSheetsTrigger"
|
||||
notes: "Requires Google OAuth credentials"
|
||||
"New Slack Message":
|
||||
n8n_node: "n8n-nodes-base.slackTrigger"
|
||||
notes: "Configure channel and event type"
|
||||
|
||||
actions:
|
||||
"Send Email":
|
||||
n8n_node: "n8n-nodes-base.emailSend"
|
||||
notes: "Configure SMTP credentials"
|
||||
"HTTP Request":
|
||||
n8n_node: "n8n-nodes-base.httpRequest"
|
||||
notes: "Map method, URL, headers, and body"
|
||||
"Create Google Sheets Row":
|
||||
n8n_node: "n8n-nodes-base.googleSheets"
|
||||
parameters:
|
||||
operation: "append"
|
||||
"Send Slack Message":
|
||||
n8n_node: "n8n-nodes-base.slack"
|
||||
parameters:
|
||||
operation: "post"
|
||||
resource: "message"
|
||||
"Delay":
|
||||
n8n_node: "n8n-nodes-base.wait"
|
||||
notes: "Convert delay duration to milliseconds"
|
||||
"Filter":
|
||||
n8n_node: "n8n-nodes-base.if"
|
||||
notes: "Convert filter conditions to IF node logic"
|
||||
"Formatter":
|
||||
n8n_node: "n8n-nodes-base.set"
|
||||
notes: "Use Set node for data transformation"
|
||||
"Code":
|
||||
n8n_node: "n8n-nodes-base.code"
|
||||
notes: "JavaScript or Python code execution"
|
||||
|
||||
concepts:
|
||||
"Multi-step Zap":
|
||||
n8n_equivalent: "Linear workflow with connected nodes"
|
||||
"Paths":
|
||||
n8n_equivalent: "IF node with multiple branches"
|
||||
"Filters":
|
||||
n8n_equivalent: "IF node with conditions"
|
||||
"Formatter":
|
||||
n8n_equivalent: "Set node or Code node"
|
||||
"Looping":
|
||||
n8n_equivalent: "Split In Batches node"
|
||||
|
||||
# Make (Integromat) to n8n mappings
|
||||
make:
|
||||
triggers:
|
||||
"Webhook":
|
||||
n8n_node: "n8n-nodes-base.webhook"
|
||||
notes: "Direct equivalent"
|
||||
"Watch Records":
|
||||
n8n_node: "n8n-nodes-base.scheduleTrigger"
|
||||
notes: "Combine with polling logic in Code node"
|
||||
"Custom Webhook":
|
||||
n8n_node: "n8n-nodes-base.webhook"
|
||||
notes: "Configure response mode"
|
||||
|
||||
actions:
|
||||
"HTTP Request":
|
||||
n8n_node: "n8n-nodes-base.httpRequest"
|
||||
notes: "Map all HTTP parameters"
|
||||
"Router":
|
||||
n8n_node: "n8n-nodes-base.switch"
|
||||
notes: "Multiple conditional branches"
|
||||
"Iterator":
|
||||
n8n_node: "n8n-nodes-base.splitInBatches"
|
||||
notes: "Process array items individually"
|
||||
"Aggregator":
|
||||
n8n_node: "n8n-nodes-base.merge"
|
||||
notes: "Combine data from multiple sources"
|
||||
"Data Store":
|
||||
n8n_node: "n8n-nodes-base.redis"
|
||||
notes: "Use Redis or database node for storage"
|
||||
"JSON Parser":
|
||||
n8n_node: "n8n-nodes-base.code"
|
||||
notes: "Parse JSON in Code node"
|
||||
"Text Parser":
|
||||
n8n_node: "n8n-nodes-base.set"
|
||||
notes: "Use expressions for text manipulation"
|
||||
|
||||
concepts:
|
||||
"Scenario":
|
||||
n8n_equivalent: "Workflow"
|
||||
"Module":
|
||||
n8n_equivalent: "Node"
|
||||
"Route":
|
||||
n8n_equivalent: "Connection"
|
||||
"Filter":
|
||||
n8n_equivalent: "IF node"
|
||||
"Router":
|
||||
n8n_equivalent: "Switch node or multiple IF nodes"
|
||||
"Iterator":
|
||||
n8n_equivalent: "Split In Batches node"
|
||||
"Aggregator":
|
||||
n8n_equivalent: "Merge node"
|
||||
|
||||
# HubSpot Workflows to n8n mappings
|
||||
hubspot:
|
||||
triggers:
|
||||
"Contact Property Change":
|
||||
n8n_node: "n8n-nodes-base.hubspotTrigger"
|
||||
notes: "Configure webhook for property updates"
|
||||
"Deal Stage Change":
|
||||
n8n_node: "n8n-nodes-base.hubspotTrigger"
|
||||
notes: "Monitor deal pipeline changes"
|
||||
"Form Submission":
|
||||
n8n_node: "n8n-nodes-base.hubspotTrigger"
|
||||
notes: "Webhook for form submissions"
|
||||
"List Membership":
|
||||
n8n_node: "n8n-nodes-base.scheduleTrigger"
|
||||
notes: "Poll HubSpot API for list changes"
|
||||
|
||||
actions:
|
||||
"Update Contact Property":
|
||||
n8n_node: "n8n-nodes-base.hubspot"
|
||||
parameters:
|
||||
resource: "contact"
|
||||
operation: "update"
|
||||
"Create Deal":
|
||||
n8n_node: "n8n-nodes-base.hubspot"
|
||||
parameters:
|
||||
resource: "deal"
|
||||
operation: "create"
|
||||
"Send Email":
|
||||
n8n_node: "n8n-nodes-base.hubspot"
|
||||
parameters:
|
||||
resource: "email"
|
||||
operation: "send"
|
||||
"Add to List":
|
||||
n8n_node: "n8n-nodes-base.hubspot"
|
||||
parameters:
|
||||
resource: "contact"
|
||||
operation: "addToList"
|
||||
"Create Task":
|
||||
n8n_node: "n8n-nodes-base.hubspot"
|
||||
parameters:
|
||||
resource: "task"
|
||||
operation: "create"
|
||||
|
||||
concepts:
|
||||
"Enrollment Trigger":
|
||||
n8n_equivalent: "Trigger node (webhook or schedule)"
|
||||
"If/Then Branch":
|
||||
n8n_equivalent: "IF node"
|
||||
"Delay":
|
||||
n8n_equivalent: "Wait node"
|
||||
"Goal":
|
||||
n8n_equivalent: "IF node checking completion criteria"
|
||||
"Re-enrollment":
|
||||
n8n_equivalent: "Workflow settings with loop detection"
|
||||
|
||||
# Microsoft Power Automate to n8n mappings
|
||||
power_automate:
|
||||
triggers:
|
||||
"When an item is created":
|
||||
n8n_node: "n8n-nodes-base.webhook"
|
||||
notes: "Configure webhook for item creation events"
|
||||
"Recurrence":
|
||||
n8n_node: "n8n-nodes-base.scheduleTrigger"
|
||||
notes: "Convert recurrence pattern to cron"
|
||||
"When a HTTP request is received":
|
||||
n8n_node: "n8n-nodes-base.webhook"
|
||||
notes: "Direct equivalent"
|
||||
|
||||
actions:
|
||||
"HTTP":
|
||||
n8n_node: "n8n-nodes-base.httpRequest"
|
||||
notes: "Map all HTTP parameters"
|
||||
"Condition":
|
||||
n8n_node: "n8n-nodes-base.if"
|
||||
notes: "Convert condition logic"
|
||||
"Apply to each":
|
||||
n8n_node: "n8n-nodes-base.splitInBatches"
|
||||
notes: "Process array items"
|
||||
"Compose":
|
||||
n8n_node: "n8n-nodes-base.set"
|
||||
notes: "Data transformation"
|
||||
"Parse JSON":
|
||||
n8n_node: "n8n-nodes-base.code"
|
||||
notes: "Parse JSON in Code node"
|
||||
"Delay":
|
||||
n8n_node: "n8n-nodes-base.wait"
|
||||
notes: "Convert delay duration"
|
||||
|
||||
concepts:
|
||||
"Flow":
|
||||
n8n_equivalent: "Workflow"
|
||||
"Action":
|
||||
n8n_equivalent: "Node"
|
||||
"Condition":
|
||||
n8n_equivalent: "IF node"
|
||||
"Switch":
|
||||
n8n_equivalent: "Switch node"
|
||||
"Scope":
|
||||
n8n_equivalent: "Error handling with try/catch in Code node"
|
||||
"Apply to each":
|
||||
n8n_equivalent: "Split In Batches node"
|
||||
|
||||
# Common patterns across platforms
|
||||
common_patterns:
|
||||
conditional_logic:
|
||||
description: "If/then/else branching"
|
||||
n8n_implementation: "IF node with true/false branches"
|
||||
|
||||
loops:
|
||||
description: "Iterate over array items"
|
||||
n8n_implementation: "Split In Batches node"
|
||||
|
||||
data_transformation:
|
||||
description: "Transform, format, or map data"
|
||||
n8n_implementation: "Set node or Code node"
|
||||
|
||||
error_handling:
|
||||
description: "Handle errors and retries"
|
||||
n8n_implementation: "Node settings: continueOnFail, retryOnFail, maxTries"
|
||||
|
||||
delays:
|
||||
description: "Wait before next action"
|
||||
n8n_implementation: "Wait node with duration"
|
||||
|
||||
webhooks:
|
||||
description: "Receive HTTP requests"
|
||||
n8n_implementation: "Webhook node with response configuration"
|
||||
|
||||
api_calls:
|
||||
description: "Make HTTP requests to APIs"
|
||||
n8n_implementation: "HTTP Request node"
|
||||
|
||||
parallel_execution:
|
||||
description: "Execute multiple actions simultaneously"
|
||||
n8n_implementation: "Multiple connections from single node"
|
||||
|
||||
merge_data:
|
||||
description: "Combine data from multiple sources"
|
||||
n8n_implementation: "Merge node"
|
||||
|
||||
# Migration considerations
|
||||
migration_notes:
|
||||
authentication:
|
||||
- "Recreate all credentials in n8n"
|
||||
- "OAuth flows may need re-authorization"
|
||||
- "API keys and tokens must be securely stored"
|
||||
|
||||
scheduling:
|
||||
- "Convert platform-specific schedules to cron expressions"
|
||||
- "Consider timezone differences"
|
||||
- "Test schedule triggers before going live"
|
||||
|
||||
data_formats:
|
||||
- "Verify JSON structure compatibility"
|
||||
- "Check date/time format conversions"
|
||||
- "Validate data type mappings"
|
||||
|
||||
error_handling:
|
||||
- "Implement retry logic where needed"
|
||||
- "Add error notification workflows"
|
||||
- "Test failure scenarios"
|
||||
|
||||
testing:
|
||||
- "Test with sample data first"
|
||||
- "Verify all integrations work correctly"
|
||||
- "Monitor initial executions closely"
|
||||
- "Compare outputs with original platform"
|
||||
|
|
@ -0,0 +1,74 @@
|
|||
# Create n8n Workflow - Validation Checklist
|
||||
|
||||
## Workflow Structure
|
||||
|
||||
- [ ] Workflow has a valid name
|
||||
- [ ] Workflow contains at least one trigger node
|
||||
- [ ] All nodes have unique IDs
|
||||
- [ ] All nodes have unique names
|
||||
- [ ] Workflow JSON is valid and parseable
|
||||
|
||||
## Node Configuration
|
||||
|
||||
- [ ] Trigger node is properly configured
|
||||
- [ ] All action nodes have required parameters set
|
||||
- [ ] Node types are valid n8n node types
|
||||
- [ ] Node positions are set and don't overlap
|
||||
- [ ] typeVersion is set for all nodes (usually 1)
|
||||
|
||||
## Connections
|
||||
|
||||
- [ ] All nodes are connected (no orphaned nodes except trigger)
|
||||
- [ ] All connections reference existing node names
|
||||
- [ ] Connection types are set correctly (usually "main")
|
||||
- [ ] Connection indices are correct (0 for default, 0/1 for IF nodes)
|
||||
- [ ] No circular dependencies (unless intentional loops)
|
||||
|
||||
## Error Handling
|
||||
|
||||
- [ ] Error handling strategy matches requirements
|
||||
- [ ] Critical nodes have retry logic if needed
|
||||
- [ ] continueOnFail is set appropriately
|
||||
- [ ] maxTries and waitBetweenTries are configured if retries enabled
|
||||
|
||||
## Data Flow
|
||||
|
||||
- [ ] Data transformations are properly configured
|
||||
- [ ] Set nodes have correct value mappings
|
||||
- [ ] Code nodes have valid JavaScript/Python code
|
||||
- [ ] Expressions use correct n8n syntax (={{ }})
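Example (illustrative): an expression such as `={{ $json.status }}` reads the `status` field of the incoming item; the field name here is hypothetical.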
|
||||
|
||||
## Integrations
|
||||
|
||||
- [ ] All required integrations are included
|
||||
- [ ] Credential placeholders are set for authenticated services
|
||||
- [ ] API endpoints and methods are correct
|
||||
- [ ] Request/response formats are properly configured
|
||||
|
||||
## Best Practices
|
||||
|
||||
- [ ] Workflow follows n8n naming conventions
|
||||
- [ ] Nodes are logically organized and positioned
|
||||
- [ ] Complex logic is broken into manageable steps
|
||||
- [ ] Workflow is documented (node names are descriptive)
|
||||
|
||||
## Testing Readiness
|
||||
|
||||
- [ ] Workflow can be imported into n8n without errors
|
||||
- [ ] All required credentials are identified
|
||||
- [ ] Test data requirements are clear
|
||||
- [ ] Expected outputs are defined
|
||||
|
||||
## File Output
|
||||
|
||||
- [ ] File is saved to correct location
|
||||
- [ ] File has .json extension
|
||||
- [ ] File is valid JSON (passes JSON.parse)
|
||||
- [ ] File size is reasonable (not corrupted)
|
||||
|
||||
## Documentation
|
||||
|
||||
- [ ] User has been informed how to import workflow
|
||||
- [ ] Credential requirements have been communicated
|
||||
- [ ] Testing instructions have been provided
|
||||
- [ ] Any special configuration notes have been shared
|
||||
|
|
@ -0,0 +1,449 @@
|
|||
# Create n8n Workflow - Workflow Instructions
|
||||
|
||||
```xml
|
||||
<critical>The workflow execution engine is governed by: {project-root}/{bmad_folder}/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
<critical>This workflow creates a new n8n workflow from scratch based on user requirements.</critical>
|
||||
|
||||
<workflow>
|
||||
|
||||
<step n="0" goal="Load Config and Check Prerequisites">
|
||||
<critical>Load configuration and check for requirements file before proceeding</critical>
|
||||
|
||||
<action>Resolve variables from config_source: workflows_folder, requirements_folder, output_folder, user_name, communication_language</action>
|
||||
<action>Create {{workflows_folder}} directory if it does not exist</action>
|
||||
<action>Create {{requirements_folder}} directory if it does not exist</action>
|
||||
|
||||
<action>Search for requirements files in {{requirements_folder}}</action>
|
||||
<action>List all files matching pattern: req-*.md</action>
|
||||
|
||||
<check if="no requirements files found">
|
||||
<output>⚠️ No Requirements File Found
|
||||
|
||||
Before creating a workflow, you need to gather requirements.
|
||||
|
||||
**Options:**
|
||||
1. Run `*gather-requirements` to create a requirements file
|
||||
2. Provide requirements manually in this session
|
||||
|
||||
Would you like to:
|
||||
a) Run gather-requirements workflow now
|
||||
b) Continue without requirements file (manual elicitation)
|
||||
|
||||
Enter your choice (a/b):</output>
|
||||
<action>WAIT for user input</action>
|
||||
|
||||
<check if="user chooses 'a'">
|
||||
<action>Invoke workflow: {project-root}/{bmad_folder}/autominator/workflows/gather-requirements/workflow.yaml</action>
|
||||
<action>After gather-requirements completes, reload this step to find the new requirements file</action>
|
||||
</check>
|
||||
|
||||
<check if="user chooses 'b'">
|
||||
<action>Set {{requirements_file}} = empty</action>
|
||||
<action>Proceed to Step 1 for manual elicitation</action>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<check if="one requirements file found">
|
||||
<action>Set {{requirements_file}} to the found file path</action>
|
||||
<action>Load and parse requirements file COMPLETELY</action>
|
||||
<action>Extract requirements: workflow_name, problem_description, trigger_type, data_requirements, desired_outcome, integrations, conditional_logic, criticality</action>
|
||||
<action>Extract research findings: use_case_research, node_research, parameter_structures, workflow_pattern_research</action>
|
||||
<action>Display loaded requirements summary to user</action>
|
||||
<action>Skip to Step 4 (Plan Workflow Structure) - research already done</action>
|
||||
</check>
|
||||
|
||||
<check if="multiple requirements files found">
|
||||
<output>📋 Multiple Requirements Files Found:
|
||||
|
||||
[Display numbered list of files with workflow names]
|
||||
|
||||
Which requirements file would you like to use?
|
||||
Enter the number (1-N) or 'new' to create a new one:</output>
|
||||
<action>WAIT for user input</action>
|
||||
|
||||
<check if="user enters number">
|
||||
<action>Set {{requirements_file}} to selected file path</action>
|
||||
<action>Load and parse requirements file COMPLETELY</action>
|
||||
<action>Extract requirements: workflow_name, problem_description, trigger_type, data_requirements, desired_outcome, integrations, conditional_logic, criticality</action>
|
||||
<action>Extract research findings: use_case_research, node_research, parameter_structures, workflow_pattern_research</action>
|
||||
<action>Display loaded requirements summary to user</action>
|
||||
<action>Skip to Step 4 (Plan Workflow Structure) - research already done</action>
|
||||
</check>
|
||||
|
||||
<check if="user enters 'new'">
|
||||
<action>Invoke workflow: {project-root}/{bmad_folder}/autominator/workflows/gather-requirements/workflow.yaml</action>
|
||||
<action>After gather-requirements completes, reload this step to find the new requirements file</action>
|
||||
</check>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="1" goal="Gather Requirements" elicit="true">
|
||||
<critical>Start by understanding the ACTUAL PROBLEM the user wants to solve, not just technical requirements</critical>
|
||||
|
||||
<action>Ask Question 1: "What problem are you trying to solve with this automation?"</action>
|
||||
<action>Encourage detailed explanation: "Describe the current manual process, pain points, and desired outcome."</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{problem_description}}</action>
|
||||
|
||||
<action>Ask Question 2: "What triggers this process? When should the automation run?"</action>
|
||||
<action>Present numbered options:
|
||||
1. When data arrives - Webhook, form submission, API call
|
||||
2. On a schedule - Every hour, daily, weekly, custom cron
|
||||
3. When something changes - Database update, file change, service event
|
||||
4. Manually - On-demand execution
|
||||
5. Multiple triggers - Combination of above
|
||||
6. Not sure - Help me decide based on my problem
|
||||
</action>
|
||||
<action>WAIT for user selection (1-6)</action>
|
||||
<action>Store selection in {{trigger_type}}</action>
|
||||
|
||||
<check if="selection is 6 (Not sure)">
|
||||
<action>Analyze {{problem_description}} and suggest appropriate trigger</action>
|
||||
<action>Ask: "Based on your problem, I recommend [trigger type]. Does this make sense?"</action>
|
||||
<action>WAIT for confirmation or adjustment</action>
|
||||
<action>Store final trigger in {{trigger_type}}</action>
|
||||
</check>
|
||||
|
||||
<action>Ask Question 3: "What data or information does this workflow need to work with?"</action>
|
||||
<action>Examples: "Customer data, order details, form responses, API data, etc."</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{data_requirements}}</action>
|
||||
|
||||
<action>Ask Question 4: "What should happen with this data? What's the desired outcome?"</action>
|
||||
<action>Examples: "Send to Slack, update database, create invoice, notify team, etc."</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{desired_outcome}}</action>
|
||||
|
||||
<action>Ask Question 5: "What services or systems are involved?"</action>
|
||||
<action>Examples: "Slack, Google Sheets, PostgreSQL, HubSpot, custom API, etc."</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{integrations}}</action>
|
||||
|
||||
<action>Ask Question 6: "Are there any conditions or decision points in this process?"</action>
|
||||
<action>Examples: "If amount > $1000, notify manager; If status = 'urgent', send immediately"</action>
|
||||
<action>Present numbered options:
|
||||
1. No - Straight-through processing
|
||||
2. Yes - Describe the conditions
|
||||
</action>
|
||||
<action>WAIT for user selection (1-2)</action>
|
||||
<check if="selection is 2">
|
||||
<action>Ask: "Describe the conditions and what should happen in each case"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{conditional_logic}}</action>
|
||||
</check>
|
||||
|
||||
<action>Ask Question 7: "How critical is this workflow? What happens if it fails?"</action>
|
||||
<action>Present numbered options:
|
||||
1. Low - Can retry manually if needed
|
||||
2. Medium - Should retry automatically, notify on failure
|
||||
3. High - Must succeed, need alerts and logging
|
||||
4. Critical - Business-critical, need comprehensive error handling
|
||||
</action>
|
||||
<action>WAIT for user selection (1-4)</action>
|
||||
<action>Store selection in {{criticality}}</action>
|
||||
|
||||
<action>Ask Question 8: "What should the workflow be named?"</action>
|
||||
<action>Suggest name based on {{problem_description}}</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{workflow_name}}</action>
|
||||
|
||||
<action>Ask Question 9: "Where should the workflow file be saved?"</action>
|
||||
<action>Present numbered options:
|
||||
1. Default location - workflows/[workflow-name].json
|
||||
2. Custom path - Specify your own file path
|
||||
</action>
|
||||
<action>WAIT for user selection (1-2)</action>
|
||||
<check if="selection is 2">
|
||||
<action>Ask for specific path</action>
|
||||
<action>WAIT for user input</action>
|
||||
</check>
|
||||
<action>Store final path in {{save_location}}</action>
|
||||
|
||||
<action>Summarize understanding:</action>
|
||||
<action>- Problem: {{problem_description}}</action>
|
||||
<action>- Trigger: {{trigger_type}}</action>
|
||||
<action>- Data: {{data_requirements}}</action>
|
||||
<action>- Outcome: {{desired_outcome}}</action>
|
||||
<action>- Services: {{integrations}}</action>
|
||||
<action>- Conditions: {{conditional_logic}}</action>
|
||||
<action>- Criticality: {{criticality}}</action>
|
||||
|
||||
<action>Ask: "Does this capture your requirements correctly?"</action>
|
||||
<action>Present numbered options:
|
||||
1. Yes - Proceed with workflow creation
|
||||
2. No - Let me clarify or add details
|
||||
</action>
|
||||
<action>WAIT for user selection (1-2)</action>
|
||||
<check if="selection is 2">
|
||||
<action>Ask: "What needs to be clarified or added?"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Update relevant variables</action>
|
||||
<action>Repeat summary and confirmation</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Research n8n Documentation">
|
||||
<critical>Search for up-to-date n8n documentation based on user requirements</critical>
|
||||
|
||||
<action>Inform user: "Researching n8n documentation for your workflow requirements..."</action>
|
||||
|
||||
<action>Perform web search for n8n documentation on:</action>
|
||||
<action>1. Trigger type: {{trigger_type}}</action>
|
||||
<action>2. Integrations: {{integrations}}</action>
|
||||
<action>3. Conditional logic: {{conditional_logic}}</action>
|
||||
<action>4. Error handling: {{criticality}}</action>
|
||||
|
||||
<action>Search queries to use:</action>
|
||||
<action>- "n8n [trigger_type] node documentation"</action>
|
||||
<action>- "n8n [integration] node setup"</action>
|
||||
<action>- "n8n workflow best practices"</action>
|
||||
<action>- "n8n error handling retry logic"</action>
|
||||
|
||||
<action>Focus on official n8n documentation at docs.n8n.io</action>
|
||||
<action>Store relevant documentation snippets for reference</action>
|
||||
<action>Note any specific node configurations or parameters needed</action>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Verify Documentation Understanding">
|
||||
<action>Summarize key findings from documentation:</action>
|
||||
<action>- Available node types for requirements</action>
|
||||
<action>- Required parameters and configurations</action>
|
||||
<action>- Best practices for this use case</action>
|
||||
<action>- Any limitations or considerations</action>
|
||||
|
||||
<action>Inform user: "Based on n8n documentation, I found the necessary nodes and configurations for your workflow."</action>
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Plan Workflow Structure">
|
||||
<critical>Design workflow based on the ACTUAL PROBLEM, not just technical specs</critical>
|
||||
|
||||
<action>Analyze the problem and requirements:</action>
|
||||
<action>- Problem to solve: {{problem_description}}</action>
|
||||
<action>- Trigger: {{trigger_type}}</action>
|
||||
<action>- Data needed: {{data_requirements}}</action>
|
||||
<action>- Desired outcome: {{desired_outcome}}</action>
|
||||
<action>- Services: {{integrations}}</action>
|
||||
<action>- Conditions: {{conditional_logic}}</action>
|
||||
<action>- Criticality: {{criticality}}</action>
|
||||
|
||||
<action>Design workflow structure that solves the problem:</action>
|
||||
<action>1. Map trigger to appropriate n8n trigger node</action>
|
||||
<action>2. Design data acquisition steps (API calls, database queries)</action>
|
||||
<action>3. Plan data transformations needed for the outcome</action>
|
||||
<action>4. Implement conditional logic from {{conditional_logic}}</action>
|
||||
<action>5. Design actions to achieve {{desired_outcome}}</action>
|
||||
<action>6. Add error handling based on {{criticality}}</action>
|
||||
<action>7. Plan node connections and data flow</action>
|
||||
|
||||
<action>Present the solution-focused workflow plan:</action>
|
||||
<action>## Workflow Solution for: {{problem_description}}</action>
|
||||
<action></action>
|
||||
<action>**How it works:**</action>
|
||||
<action>[Explain in plain language how the workflow solves the problem]</action>
|
||||
<action></action>
|
||||
<action>**Workflow Steps:**</action>
|
||||
<action>1. Trigger: [When/how it starts] - [n8n node type]</action>
|
||||
<action>2. Get Data: [What data is retrieved] - [n8n nodes]</action>
|
||||
<action>3. Process: [How data is transformed] - [n8n nodes]</action>
|
||||
<action>4. Decide: [Conditional logic if any] - [IF/Switch nodes]</action>
|
||||
<action>5. Act: [Final actions to achieve outcome] - [n8n nodes]</action>
|
||||
<action>6. Handle Errors: [Error strategy] - [Error handling config]</action>
|
||||
<action></action>
|
||||
<action>**Expected Result:**</action>
|
||||
<action>[Describe what happens when workflow runs successfully]</action>
|
||||
|
||||
<action>Ask: "Does this workflow solve your problem?"</action>
|
||||
<action>Present numbered options:
|
||||
1. Yes - This solves my problem, proceed
|
||||
2. No - Missing something important
|
||||
3. Partially - Needs adjustments
|
||||
4. Explain more - I need clarification
|
||||
</action>
|
||||
<action>WAIT for user selection (1-4)</action>
|
||||
|
||||
<check if="selection is 2">
|
||||
<action>Ask: "What's missing? What else needs to happen?"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Adjust workflow design to include missing elements</action>
|
||||
<action>Repeat this step</action>
|
||||
</check>
|
||||
|
||||
<check if="selection is 3">
|
||||
<action>Ask: "What needs to be adjusted?"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Modify workflow design based on feedback</action>
|
||||
<action>Repeat this step</action>
|
||||
</check>
|
||||
|
||||
<check if="selection is 4">
|
||||
<action>Ask: "Which part needs clarification?"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Provide detailed explanation of that part</action>
|
||||
<action>Repeat this step</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="5" goal="Load Templates and Resources">
|
||||
<action>Load {{templates}} file</action>
|
||||
<action>Identify closest matching template based on workflow type</action>
|
||||
<action>Load {{helpers}} for node creation guidelines</action>
|
||||
<action>Extract relevant template sections</action>
|
||||
</step>
|
||||
|
||||
<step n="6" goal="Build Workflow JSON">
|
||||
<critical>Use EXACT node types and parameter structures from {{node_research}} and {{parameter_structures}}</critical>
|
||||
<critical>Follow modern n8n format from {{helpers}}</critical>
|
||||
|
||||
<action>Initialize workflow structure with modern n8n format:</action>
|
||||
<substep>
|
||||
{
|
||||
"name": "{{workflow_name}}",
|
||||
"nodes": [],
|
||||
"pinData": {},
|
||||
"connections": {},
|
||||
"active": false,
|
||||
"settings": {
|
||||
"executionOrder": "v1"
|
||||
},
|
||||
"versionId": "[generate UUID]",
|
||||
"meta": {
|
||||
"templateCredsSetupCompleted": true,
|
||||
"instanceId": "[generate UUID]"
|
||||
},
|
||||
"id": "[generate short ID]",
|
||||
"tags": []
|
||||
}
|
||||
</substep>
|
||||
|
||||
<action>Build nodes ONE at a time following these rules:</action>
|
||||
|
||||
<substep>For Each Node (Use EXACT structures from research - a sketch of the assembled node object follows this list):
|
||||
1. Generate UUID for node ID (format: "f8b7ff4f-6375-4c79-9b2c-9814bfdd0c92")
|
||||
2. Set node name (unique, descriptive)
|
||||
3. Use EXACT node type from {{node_research}}:
|
||||
- MUST be format: "n8n-nodes-base.nodeName"
|
||||
- NEVER use: "@n8n/n8n-nodes-*" format
|
||||
- Example: "n8n-nodes-base.gmail" NOT "@n8n/n8n-nodes-gmail.gmail"
|
||||
4. Use EXACT typeVersion from {{node_research}}:
|
||||
- MUST be INTEGER (2, 3, 4)
|
||||
- NEVER use float (2.1, 3.4)
|
||||
5. Calculate position as INTEGER array:
|
||||
- Format: [x, y] where x and y are integers
|
||||
- First node (trigger): [240, 300]
|
||||
- Subsequent nodes: add 220 to x for each step
|
||||
- Branches: adjust y by ±100
|
||||
6. Use EXACT parameter structure from {{parameter_structures}}:
|
||||
- For Set node (v3+): use assignments.assignments structure
|
||||
- For Gmail node (v2+): use "message" parameter (NOT "text")
|
||||
- For IF node (v2+): use conditions.conditions structure (NO "name" field in conditions)
|
||||
- For Slack channel: use __rl flag with mode and cachedResultName
|
||||
- Copy structure EXACTLY from research, don't modify
|
||||
7. Add webhookId (UUID) if node type is webhook
|
||||
8. Add credentials if needed
|
||||
9. Field order: parameters, id, name, type, typeVersion, position, webhookId, credentials
|
||||
10. Store node name in list for validation
|
||||
</substep>
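<!-- Illustrative sketch only (values are hypothetical, not taken from research output): a single
     node object assembled by the rules above, using the required field order from rule 9.
{
  "parameters": {
    "method": "GET",
    "url": "https://example.com/api/orders"
  },
  "id": "f8b7ff4f-6375-4c79-9b2c-9814bfdd0c92",
  "name": "Get Orders",
  "type": "n8n-nodes-base.httpRequest",
  "typeVersion": 4,
  "position": [460, 300]
}
-->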
|
||||
|
||||
<substep>For Each Connection (CRITICAL FORMAT):
|
||||
1. Identify source node name (must match node "name" field exactly)
|
||||
2. Identify target node name (must match node "name" field exactly)
|
||||
3. Create connection entry with CORRECT format:
|
||||
"Source Node Name": {
|
||||
"main": [
|
||||
[
|
||||
{
|
||||
"node": "Target Node Name",
|
||||
"type": "main",
|
||||
"index": 0
|
||||
}
|
||||
]
|
||||
]
|
||||
}
|
||||
4. CRITICAL: The "main" wrapper object is REQUIRED
|
||||
5. NEVER use format: "Source": [[{...}]] (missing "main" wrapper)
|
||||
6. For IF nodes:
|
||||
- index 0 = true branch
|
||||
- index 1 = false branch
|
||||
7. Validate all referenced nodes exist in node names list
|
||||
</substep>
|
||||
|
||||
<substep>Node Building Order:
|
||||
1. Trigger node
|
||||
2. Main flow action nodes
|
||||
3. Conditional nodes (IF/Switch)
|
||||
4. Branch nodes
|
||||
5. Merge nodes
|
||||
6. Final action nodes
|
||||
7. Connect all nodes
|
||||
</substep>
|
||||
|
||||
<action>Apply error handling based on the {{criticality}} level gathered in Step 1</action>
|
||||
<action>Validate all node IDs are unique</action>
|
||||
<action>Validate all node names are unique</action>
|
||||
<action>Validate all connections reference existing nodes</action>
|
||||
</step>
|
||||
|
||||
<step n="7" goal="Optimize and Save">
|
||||
<action>Review workflow for optimization:</action>
|
||||
<action>- Remove any unused nodes</action>
|
||||
<action>- Verify node positions don't overlap</action>
|
||||
<action>- Ensure proper spacing and alignment</action>
|
||||
<action>- Validate all required parameters are set</action>
|
||||
|
||||
<action>Resolve final save path:</action>
|
||||
<check if="{{save_location}} was provided by user">
|
||||
<action>Use {{save_location}} as-is</action>
|
||||
</check>
|
||||
<check if="{{save_location}} is empty or default">
|
||||
<action>Use {{workflows_folder}}/{{workflow_name}}.json</action>
|
||||
<action>Ensure {{workflows_folder}} directory exists</action>
|
||||
<action>Store resolved path in {{save_location}}</action>
|
||||
</check>
|
||||
<action>Save workflow to {{save_location}}</action>
|
||||
</step>
|
||||
|
||||
<step n="8" goal="Validate JSON Syntax">
|
||||
<critical>NEVER delete the file if validation fails - always fix syntax errors</critical>
|
||||
|
||||
<action>Run: node -e "JSON.parse(require('fs').readFileSync('{{save_location}}', 'utf8')); console.log('✓ Valid JSON')"</action>
|
||||
|
||||
<check if="validation fails (exit code 1)">
|
||||
<action>Read the error message carefully - it shows the syntax error and position</action>
|
||||
<action>Open the file and navigate to the error location</action>
|
||||
<action>Fix the syntax error (add missing comma, bracket, or quote as indicated)</action>
|
||||
<action>Save the file</action>
|
||||
<action>Re-run validation with the same command</action>
|
||||
<action>Repeat until validation passes</action>
|
||||
</check>
|
||||
|
||||
<action>Once validation passes, confirm with user: "n8n workflow created at {{save_location}}"</action>
|
||||
</step>
|
||||
|
||||
<step n="9" goal="Provide Usage Instructions">
|
||||
<action>Inform user how to use the workflow:</action>
|
||||
<action>1. Import the JSON file into n8n</action>
|
||||
<action>2. Configure credentials for integrated services</action>
|
||||
<action>3. Test the workflow with sample data</action>
|
||||
<action>4. Activate the workflow when ready</action>
|
||||
|
||||
<action>Ask: "Would you like me to explain any part of the workflow?"</action>
|
||||
<action>Present numbered options:
|
||||
1. No - I'm good to go
|
||||
2. Yes - Explain specific nodes
|
||||
3. Yes - Explain the overall flow
|
||||
4. Yes - Explain how to test it
|
||||
</action>
|
||||
<action>WAIT for user selection (1-4)</action>
|
||||
|
||||
<check if="selection is 2, 3, or 4">
|
||||
<action>Provide requested explanation</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="10" goal="Validate Content">
|
||||
<invoke-task>Validate against checklist at {{validation}} using {{bmad_folder}}/core/tasks/validate-workflow.xml</invoke-task>
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
```
|
||||
|
|
@ -0,0 +1,43 @@
|
|||
name: create-workflow
|
||||
description: "Create new n8n workflow from scratch based on requirements"
|
||||
author: "Saif"
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/{bmad_folder}/autominator/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
user_name: "{config_source}:user_name"
|
||||
communication_language: "{config_source}:communication_language"
|
||||
workflows_folder: "{config_source}:workflows_folder"
|
||||
requirements_folder: "{config_source}:requirements_folder"
|
||||
date: system-generated
|
||||
|
||||
# Workflow components
|
||||
installed_path: "{project-root}/{bmad_folder}/autominator/workflows/create-workflow"
|
||||
shared_path: "{project-root}/{bmad_folder}/autominator/workflows/_shared"
|
||||
instructions: "{installed_path}/instructions.md"
|
||||
validation: "{installed_path}/checklist.md"
|
||||
|
||||
# Shared resources
|
||||
helpers: "{shared_path}/n8n-helpers.md"
|
||||
templates: "{shared_path}/n8n-templates.yaml"
|
||||
platform_mappings: "{shared_path}/platform-mappings.yaml"
|
||||
|
||||
# Variables
|
||||
variables:
|
||||
requirements_file: "" # Will be discovered or elicited
|
||||
workflow_type: "" # Will be loaded from requirements or elicited
|
||||
trigger_type: "" # Will be loaded from requirements or elicited
|
||||
integrations: [] # Will be loaded from requirements or elicited
|
||||
complexity: "" # Will be elicited
|
||||
error_handling: "" # Will be elicited
|
||||
workflow_name: "" # Will be loaded from requirements or elicited
|
||||
problem_description: "" # Will be loaded from requirements
|
||||
data_requirements: "" # Will be loaded from requirements
|
||||
desired_outcome: "" # Will be loaded from requirements
|
||||
conditional_logic: "" # Will be loaded from requirements
|
||||
criticality: "" # Will be loaded from requirements
|
||||
|
||||
default_output_file: "{workflows_folder}/{workflow_name}.json"
|
||||
|
||||
standalone: true
|
||||
web_bundle: false
|
||||
|
|
@ -0,0 +1,25 @@
|
|||
# Gather Requirements - Validation Checklist
|
||||
|
||||
## Requirements Completeness
|
||||
|
||||
- [ ] Problem statement is clear and specific
|
||||
- [ ] Trigger type is defined
|
||||
- [ ] Data requirements are documented
|
||||
- [ ] Desired outcome is clear
|
||||
- [ ] All integrations are listed
|
||||
- [ ] Conditional logic is documented (or marked as not needed)
|
||||
- [ ] Criticality level is set
|
||||
- [ ] Workflow name is descriptive
|
||||
|
||||
## Document Quality
|
||||
|
||||
- [ ] Requirements file is saved to correct location
|
||||
- [ ] All template fields are filled
|
||||
- [ ] No placeholder text remains
|
||||
- [ ] Change log is initialized
|
||||
|
||||
## Readiness
|
||||
|
||||
- [ ] Requirements are sufficient to create workflow
|
||||
- [ ] User has confirmed requirements are correct
|
||||
- [ ] File is ready for use by create-workflow
|
||||
|
|
@ -0,0 +1,190 @@
|
|||
# Gather Requirements - Workflow Instructions
|
||||
|
||||
```xml
|
||||
<critical>The workflow execution engine is governed by: {project-root}/{bmad_folder}/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
<critical>This workflow gathers requirements for n8n workflow creation.</critical>
|
||||
|
||||
<workflow>
|
||||
|
||||
<step n="1" goal="Load Config and Initialize">
|
||||
<action>Resolve variables from config_source: requirements_folder, output_folder, user_name, communication_language</action>
|
||||
<action>Create {{requirements_folder}} directory if it does not exist</action>
|
||||
<action>Load template from {{template}}</action>
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Gather Requirements" elicit="true">
|
||||
<critical>Ask questions ONE AT A TIME and WAIT for user response after each question</critical>
|
||||
|
||||
<ask>Question 1: What problem are you trying to solve with this automation?
|
||||
|
||||
Describe the current manual process, pain points, and desired outcome.</ask>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store response in {{problem_description}}</action>
|
||||
|
||||
<action>Perform web search to understand the use case:</action>
|
||||
<action>- "n8n workflow for [problem description] site:docs.n8n.io"</action>
|
||||
<action>- "n8n automation [problem description] best practices"</action>
|
||||
<action>Store findings in {{use_case_research}}</action>
|
||||
|
||||
<ask>Question 2: What triggers this process? When should the automation run?
|
||||
|
||||
Options:
|
||||
1. When data arrives - Webhook, form submission, API call
|
||||
2. On a schedule - Every hour, daily, weekly, custom cron
|
||||
3. When something changes - Database update, file change, service event
|
||||
4. Manually - On-demand execution
|
||||
5. Multiple triggers - Combination of above
|
||||
6. Not sure - Help me decide based on my problem
|
||||
|
||||
Enter your selection (1-6):</ask>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store response in {{trigger_type}}</action>
|
||||
|
||||
<check if="selection is 6">
|
||||
<action>Analyze {{problem_description}} and suggest appropriate trigger</action>
|
||||
<ask>Based on your problem, I recommend [trigger type]. Does this make sense? (yes/no)</ask>
|
||||
<action>WAIT for confirmation</action>
|
||||
<action>Store final trigger in {{trigger_type}}</action>
|
||||
</check>
|
||||
|
||||
<ask>Question 3: What data or information does this workflow need to work with?
|
||||
|
||||
Examples: Customer data, order details, form responses, API data, etc.</ask>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store response in {{data_requirements}}</action>
|
||||
|
||||
<ask>Question 4: What should happen with this data? What's the desired outcome?
|
||||
|
||||
Examples: Send to Slack, update database, create invoice, notify team, etc.</ask>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store response in {{desired_outcome}}</action>
|
||||
|
||||
<ask>Question 5: What services or systems are involved?
|
||||
|
||||
Examples: Slack, Google Sheets, PostgreSQL, HubSpot, custom API, etc.</ask>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store response in {{integrations}}</action>
|
||||
|
||||
<action>Research EXACT n8n node types for each integration:</action>
|
||||
<action>For each service in {{integrations}}:</action>
|
||||
<action>1. Search: "n8n [service] node documentation site:docs.n8n.io"</action>
|
||||
<action>2. Extract EXACT node type string (e.g., "n8n-nodes-base.webhook")</action>
|
||||
<action>3. Extract typeVersion (e.g., 2.1)</action>
|
||||
<action>4. Extract available parameters structure</action>
|
||||
<action>5. Extract example usage from docs</action>
|
||||
<action>6. Note if trigger node or action node</action>
|
||||
<action>Store all findings in {{node_research}}</action>
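<!-- Illustrative sketch (field names are assumptions; real values come from the research above):
     one {{node_research}} entry per service could capture, for example:
       service: "Slack"
       node_type: "n8n-nodes-base.slack"
       typeVersion: 2
       kind: "action"
       key_parameters: resource, operation, channel, text
     Only node_type and typeVersion are strictly required downstream by create-workflow.
-->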
|
||||
|
||||
<ask>Question 6: Are there any conditions or decision points in this process?
|
||||
|
||||
Examples: If amount > $1000, notify manager; If status = 'urgent', send immediately
|
||||
|
||||
Options:
|
||||
1. No - Straight-through processing
|
||||
2. Yes - Describe the conditions
|
||||
|
||||
Enter your selection (1-2):</ask>
|
||||
<action>WAIT for user input</action>
|
||||
<check if="selection is 2">
|
||||
<ask>Describe the conditions and what should happen in each case:</ask>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store response in {{conditional_logic}}</action>
|
||||
</check>
|
||||
<check if="selection is 1">
|
||||
<action>Store "No conditional logic required" in {{conditional_logic}}</action>
|
||||
</check>
|
||||
|
||||
<ask>Question 7: How critical is this workflow? What happens if it fails?
|
||||
|
||||
Options:
|
||||
1. Low - Can retry manually if needed
|
||||
2. Medium - Should retry automatically, notify on failure
|
||||
3. High - Must succeed, need alerts and logging
|
||||
4. Critical - Business-critical, need comprehensive error handling
|
||||
|
||||
Enter your selection (1-4):</ask>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store selection in {{criticality}}</action>
|
||||
|
||||
<ask>Question 8: What should the workflow be named?</ask>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store response in {{workflow_name}}</action>
|
||||
<action>Generate {{workflow_slug}} from {{workflow_name}} (lowercase, hyphens, no spaces - e.g., "Invoice Sync to Slack" becomes "invoice-sync-to-slack")</action>
|
||||
|
||||
<action>Display summary:
|
||||
- Problem: {{problem_description}}
|
||||
- Trigger: {{trigger_type}}
|
||||
- Data: {{data_requirements}}
|
||||
- Outcome: {{desired_outcome}}
|
||||
- Services: {{integrations}}
|
||||
- Conditions: {{conditional_logic}}
|
||||
- Criticality: {{criticality}}
|
||||
- Name: {{workflow_name}}
|
||||
</action>
|
||||
|
||||
<ask>Does this capture your requirements correctly?
|
||||
|
||||
Options:
|
||||
1. Yes - Save requirements
|
||||
2. No - Let me clarify or add details
|
||||
|
||||
Enter your selection (1-2):</ask>
|
||||
<action>WAIT for user input</action>
|
||||
<check if="selection is 2">
|
||||
<ask>What needs to be clarified or added?</ask>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Update relevant variables based on feedback</action>
|
||||
<action>Repeat summary and confirmation</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Research Workflow Pattern">
|
||||
<action>Perform comprehensive web search for workflow pattern:</action>
|
||||
<action>- "n8n workflow pattern [trigger_type] to [desired_outcome] site:docs.n8n.io"</action>
|
||||
<action>- "n8n [integrations] workflow example site:docs.n8n.io"</action>
|
||||
<action>- "n8n best practices [use case] site:docs.n8n.io"</action>
|
||||
<action>Store findings in {{workflow_pattern_research}}</action>
|
||||
|
||||
<action>Research parameter structures for each node type:</action>
|
||||
<action>For each node type in {{node_research}}:</action>
|
||||
<action>1. Search: "n8n [node type] parameters documentation site:docs.n8n.io"</action>
|
||||
<action>2. Extract EXACT parameter structure from docs</action>
|
||||
<action>3. Extract required vs optional parameters</action>
|
||||
<action>4. Extract parameter data types</action>
|
||||
<action>5. Extract example values</action>
|
||||
<action>Store in {{parameter_structures}}</action>
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Save Requirements Document">
|
||||
<action>Resolve output path: {{default_output_file}} using {{workflow_slug}}</action>
|
||||
<action>Fill template with all gathered variables AND research findings</action>
|
||||
<action>Include in document:</action>
|
||||
<action>- Problem description and requirements</action>
|
||||
<action>- Use case research findings</action>
|
||||
<action>- EXACT node types with typeVersions</action>
|
||||
<action>- EXACT parameter structures from docs</action>
|
||||
<action>- Workflow pattern recommendations</action>
|
||||
<action>- Best practices from research</action>
|
||||
<action>Save document to {{default_output_file}}</action>
|
||||
<action>Report saved file path to user</action>
|
||||
|
||||
<output>✅ Requirements Saved Successfully!
|
||||
|
||||
**File:** {{default_output_file}}
|
||||
|
||||
**Next Steps:**
|
||||
1. Review the requirements file
|
||||
2. Run `*create-workflow` to generate the n8n workflow
|
||||
(The `*create-workflow` command will automatically load this requirements file)
|
||||
|
||||
**Note:** You can edit the requirements file manually before creating the workflow.
|
||||
</output>
|
||||
</step>
|
||||
|
||||
<step n="5" goal="Validate Content">
|
||||
<invoke-task>Validate against checklist at {{validation}} using {{bmad_folder}}/core/tasks/validate-workflow.xml</invoke-task>
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
```
|
||||
|
|
@ -0,0 +1,75 @@
|
|||
# Workflow Requirements: {{workflow_name}}
|
||||
|
||||
**Created:** {{date}}
|
||||
**Status:** Requirements Gathered
|
||||
**Criticality:** {{criticality}}
|
||||
|
||||
---
|
||||
|
||||
## Problem Statement
|
||||
|
||||
{{problem_description}}
|
||||
|
||||
---
|
||||
|
||||
## Workflow Overview
|
||||
|
||||
**Trigger:** {{trigger_type}}
|
||||
|
||||
**Desired Outcome:** {{desired_outcome}}
|
||||
|
||||
---
|
||||
|
||||
## Data Requirements
|
||||
|
||||
{{data_requirements}}
|
||||
|
||||
---
|
||||
|
||||
## Integrations
|
||||
|
||||
{{integrations}}
|
||||
|
||||
---
|
||||
|
||||
## Conditional Logic
|
||||
|
||||
{{conditional_logic}}
|
||||
|
||||
---
|
||||
|
||||
## Research Findings
|
||||
|
||||
### Use Case Research
|
||||
|
||||
{{use_case_research}}
|
||||
|
||||
### Node Types (From n8n Documentation)
|
||||
|
||||
{{node_research}}
|
||||
|
||||
### Parameter Structures (From n8n Documentation)
|
||||
|
||||
{{parameter_structures}}
|
||||
|
||||
### Workflow Pattern Recommendations
|
||||
|
||||
{{workflow_pattern_research}}
|
||||
|
||||
---
|
||||
|
||||
## Technical Notes
|
||||
|
||||
- Requirements gathered: {{date}}
|
||||
- Research completed from n8n documentation
|
||||
- All node types and parameters verified from docs.n8n.io
|
||||
- Ready for workflow creation
|
||||
- Use this file as input for `*create-workflow`
|
||||
|
||||
---
|
||||
|
||||
## Change Log
|
||||
|
||||
| Date | Change | Author |
|
||||
| -------- | ----------------------------- | ------------- |
|
||||
| {{date}} | Initial requirements gathered | {{user_name}} |
|
||||
|
|
@ -0,0 +1,34 @@
|
|||
name: gather-requirements
|
||||
description: "Gather and document workflow requirements before creating n8n workflow"
|
||||
author: "Saif"
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/{bmad_folder}/autominator/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
user_name: "{config_source}:user_name"
|
||||
communication_language: "{config_source}:communication_language"
|
||||
requirements_folder: "{config_source}:requirements_folder"
|
||||
date: system-generated
|
||||
|
||||
# Workflow components
|
||||
installed_path: "{project-root}/{bmad_folder}/autominator/workflows/gather-requirements"
|
||||
template: "{installed_path}/template.md"
|
||||
instructions: "{installed_path}/instructions.md"
|
||||
validation: "{installed_path}/checklist.md"
|
||||
|
||||
# Variables
|
||||
variables:
|
||||
workflow_name: "" # Will be elicited
|
||||
workflow_slug: "" # Generated from workflow_name
|
||||
problem_description: "" # Will be elicited
|
||||
trigger_type: "" # Will be elicited
|
||||
data_requirements: "" # Will be elicited
|
||||
desired_outcome: "" # Will be elicited
|
||||
integrations: "" # Will be elicited
|
||||
conditional_logic: "" # Will be elicited
|
||||
criticality: "" # Will be elicited
|
||||
|
||||
default_output_file: "{requirements_folder}/req-{workflow_slug}.md"
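# Example (illustrative): with workflow_slug "invoice-sync" this resolves to something like
# requirements/req-invoice-sync.md, depending on the configured requirements_folder.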
|
||||
|
||||
standalone: true
|
||||
web_bundle: false
|
||||
|
|
@ -0,0 +1,110 @@
|
|||
# Migrate Workflow to n8n - Validation Checklist
|
||||
|
||||
## Source Analysis
|
||||
|
||||
- [ ] Source platform was identified
|
||||
- [ ] Source workflow details were gathered
|
||||
- [ ] Trigger type was identified
|
||||
- [ ] All integrations were identified
|
||||
- [ ] Workflow complexity was assessed
|
||||
|
||||
## Platform Mapping
|
||||
|
||||
- [ ] Platform mappings were loaded
|
||||
- [ ] Source trigger was mapped to n8n trigger
|
||||
- [ ] All source actions were mapped to n8n nodes
|
||||
- [ ] Conditional logic was mapped correctly
|
||||
- [ ] Loops/iterations were mapped correctly
|
||||
- [ ] Data transformations were identified
|
||||
|
||||
## Workflow Structure
|
||||
|
||||
- [ ] n8n workflow has valid JSON structure
|
||||
- [ ] Workflow name is set
|
||||
- [ ] Migration tag is added (migrated-from-[platform])
|
||||
- [ ] All nodes have unique IDs
|
||||
- [ ] All nodes have unique names
|
||||
- [ ] Trigger node is properly configured
|
||||
|
||||
## Node Configuration
|
||||
|
||||
- [ ] All mapped nodes are created
|
||||
- [ ] Node types are valid n8n types
|
||||
- [ ] Node parameters are configured
|
||||
- [ ] Credentials placeholders are set
|
||||
- [ ] Node positions are calculated correctly
|
||||
- [ ] No overlapping nodes
|
||||
|
||||
## Data Mappings
|
||||
|
||||
- [ ] Field mappings from source to n8n are correct
|
||||
- [ ] Data type conversions are handled
|
||||
- [ ] Date/time format differences are addressed
|
||||
- [ ] Expressions use correct n8n syntax (={{ }})
|
||||
- [ ] Set nodes are used for simple transformations
|
||||
- [ ] Code nodes are used for complex transformations
|
||||
|
||||
## Conditional Logic
|
||||
|
||||
- [ ] IF nodes are created for conditional branches
|
||||
- [ ] Switch nodes are created for multiple conditions
|
||||
- [ ] Conditions are properly configured
|
||||
- [ ] True/false branches are correct (index 0/1)
|
||||
- [ ] All branches are connected
|
||||
|
||||
## Connections
|
||||
|
||||
- [ ] All nodes are connected properly
|
||||
- [ ] Trigger connects to first action
|
||||
- [ ] Actions are connected in sequence
|
||||
- [ ] Conditional branches are connected
|
||||
- [ ] Merge points are connected
|
||||
- [ ] All connections reference existing nodes
|
||||
- [ ] No orphaned nodes (except trigger)
|
||||
|
||||
## Error Handling
|
||||
|
||||
- [ ] Error handling strategy is defined
|
||||
- [ ] Critical nodes have retry logic if needed
|
||||
- [ ] continueOnFail is set appropriately
|
||||
- [ ] Error handling matches or improves on source
|
||||
|
||||
## Migration Notes
|
||||
|
||||
- [ ] Source platform is documented
|
||||
- [ ] Migration date is recorded
|
||||
- [ ] Credentials needed are listed
|
||||
- [ ] Platform-specific differences are noted
|
||||
- [ ] Testing considerations are documented
|
||||
|
||||
## Validation
|
||||
|
||||
- [ ] Workflow passes JSON validation
|
||||
- [ ] All required parameters are set
|
||||
- [ ] Workflow structure is logical
|
||||
- [ ] Migration matches source workflow functionality
|
||||
|
||||
## Credentials & Authentication
|
||||
|
||||
- [ ] All services requiring credentials are identified
|
||||
- [ ] Credential types are correct for n8n
|
||||
- [ ] OAuth requirements are noted
|
||||
- [ ] API key requirements are noted
|
||||
- [ ] Authentication differences from source are documented
|
||||
|
||||
## Testing Readiness
|
||||
|
||||
- [ ] Workflow can be imported into n8n
|
||||
- [ ] Test data requirements are clear
|
||||
- [ ] Expected outputs are defined
|
||||
- [ ] Comparison approach with source is defined
|
||||
- [ ] Initial monitoring plan is suggested
|
||||
|
||||
## Documentation
|
||||
|
||||
- [ ] User has import instructions
|
||||
- [ ] Credential setup guidance provided
|
||||
- [ ] Data mapping explanations provided
|
||||
- [ ] Testing approach explained
|
||||
- [ ] Platform differences highlighted
|
||||
- [ ] Post-migration checklist provided
|
||||
|
|
@ -0,0 +1,354 @@
|
|||
# Migrate Workflow to n8n - Workflow Instructions
|
||||
|
||||
```xml
|
||||
<critical>The workflow execution engine is governed by: {project-root}/{bmad_folder}/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
<critical>This workflow migrates automation workflows from other platforms to n8n.</critical>
|
||||
|
||||
<workflow>
|
||||
|
||||
<step n="0" goal="Contextual Analysis (Smart Elicitation)">
|
||||
<critical>Before asking any questions, analyze what the user has already told you</critical>
|
||||
|
||||
<action>Review the user's initial request and conversation history</action>
|
||||
<action>Extract any mentioned: source platform, workflow details, integrations, file paths</action>
|
||||
|
||||
<check if="ALL requirements are clear from context">
|
||||
<action>Summarize your understanding</action>
|
||||
<action>Skip directly to Step 2 (Research n8n Documentation for Migration)</action>
|
||||
</check>
|
||||
|
||||
<check if="SOME requirements are clear">
|
||||
<action>Note what you already know</action>
|
||||
<action>Only ask about missing information in Step 1</action>
|
||||
</check>
|
||||
|
||||
<check if="requirements are unclear or minimal">
|
||||
<action>Proceed with full elicitation in Step 1</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="1" goal="Gather Migration Requirements" elicit="true">
|
||||
<critical>Understand the BUSINESS PURPOSE of the workflow being migrated, not just technical details</critical>
|
||||
|
||||
<action>Ask Question 1: "What does this workflow do? What problem does it solve?"</action>
|
||||
<action>Encourage business context: "Describe the business process, not just the technical steps"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{business_purpose}}</action>
|
||||
|
||||
<action>Ask Question 2: "Which platform are you migrating from?"</action>
|
||||
<action>Present numbered options:
|
||||
1. Zapier - Migrate Zapier Zaps to n8n
|
||||
2. Make (Integromat) - Migrate Make scenarios to n8n
|
||||
3. HubSpot Workflows - Migrate HubSpot workflows to n8n
|
||||
4. Microsoft Power Automate - Migrate Power Automate flows to n8n
|
||||
5. IFTTT - Migrate IFTTT applets to n8n
|
||||
6. Other - Specify another automation platform
|
||||
</action>
|
||||
<action>WAIT for user selection (1-6)</action>
|
||||
<action>Store selection in {{source_platform}}</action>
|
||||
|
||||
<check if="selection is 6 (Other)">
|
||||
<action>Ask: "Please specify the platform you're migrating from"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{source_platform}}</action>
|
||||
</check>
|
||||
|
||||
<action>Ask Question 3: "Why are you migrating to n8n?"</action>
|
||||
<action>Examples: "Cost savings", "More flexibility", "Self-hosting", "Better integrations"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{migration_reason}}</action>
|
||||
|
||||
<action>Ask Question 4: "How will you provide the workflow details?"</action>
|
||||
<action>Present numbered options:
|
||||
1. Describe the process - Explain what happens step by step
|
||||
2. Provide export file - Upload/paste workflow export file
|
||||
3. Provide screenshots - Share workflow screenshots
|
||||
4. Combination - Multiple sources
|
||||
</action>
|
||||
<action>WAIT for user selection (1-4)</action>
|
||||
|
||||
<check if="selection is 1 or 4">
|
||||
<action>Ask: "Describe the workflow step by step:"</action>
|
||||
<action>- What triggers it?</action>
|
||||
<action>- What data does it process?</action>
|
||||
<action>- What actions does it take?</action>
|
||||
<action>- What's the final outcome?</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{workflow_description}}</action>
|
||||
</check>
|
||||
|
||||
<check if="selection is 2 or 4">
|
||||
<action>Ask: "Please provide the workflow export file path or paste the content"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{workflow_file}} or {{workflow_content}}</action>
|
||||
</check>
|
||||
|
||||
<check if="selection is 3 or 4">
|
||||
<action>Ask: "Please share the workflow screenshots and describe what each part does"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{workflow_screenshots}}</action>
|
||||
</check>
|
||||
|
||||
<action>Ask Question 5: "What services/integrations does this workflow connect to?"</action>
|
||||
<action>Ask: "List all services (e.g., Slack, Google Sheets, HubSpot, custom APIs, etc.)"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{integrations_used}}</action>
|
||||
|
||||
<action>Ask Question 6: "Are there any pain points or issues with the current workflow?"</action>
|
||||
<action>Examples: "Slow execution", "Unreliable", "Missing features", "Hard to maintain"</action>
|
||||
<action>Present numbered options:
|
||||
1. No - Works fine, just migrating platform
|
||||
2. Yes - Describe the issues
|
||||
</action>
|
||||
<action>WAIT for user selection (1-2)</action>
|
||||
<check if="selection is 2">
|
||||
<action>Ask: "What issues should we fix during migration?"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{issues_to_fix}}</action>
|
||||
</check>
|
||||
|
||||
<action>Ask Question 6: "What should the migrated workflow be named?"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{workflow_name}}</action>
|
||||
|
||||
<action>Ask Question 7: "Where should the n8n workflow file be saved?"</action>
|
||||
<action>Present numbered options:
|
||||
1. Default location - workflows/[workflow-name].json
|
||||
2. Custom path - Specify your own file path
|
||||
3. Project root - Save in main project directory
|
||||
</action>
|
||||
<action>WAIT for user selection (1-3)</action>
|
||||
<check if="selection is 2">
|
||||
<action>Ask for specific path</action>
|
||||
<action>WAIT for user input</action>
|
||||
</check>
|
||||
<action>Store final path in {{save_location}}</action>
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Research n8n Documentation for Migration">
|
||||
<critical>Search for n8n documentation relevant to migration requirements</critical>
|
||||
|
||||
<action>Inform user: "Researching n8n documentation for migration from {{source_platform}}..."</action>
|
||||
|
||||
<action>Perform web search for:</action>
|
||||
<action>1. n8n equivalents for {{source_platform}} features</action>
|
||||
<action>2. Integration nodes: {{integrations_used}}</action>
|
||||
<action>3. Migration best practices</action>
|
||||
<action>4. Platform-specific considerations</action>
|
||||
|
||||
<action>Search queries to use:</action>
|
||||
<action>- "n8n migrate from [source_platform]"</action>
|
||||
<action>- "n8n [integration] node documentation"</action>
|
||||
<action>- "n8n vs [source_platform] comparison"</action>
|
||||
<action>- "n8n workflow migration guide"</action>
|
||||
|
||||
<action>Focus on official n8n documentation at docs.n8n.io</action>
|
||||
<action>Store relevant migration patterns and node configurations</action>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Verify Migration Approach">
|
||||
<action>Summarize migration strategy based on documentation:</action>
|
||||
<action>- n8n equivalents for {{source_platform}} features</action>
|
||||
<action>- Required node types and configurations</action>
|
||||
<action>- Data transformation needs</action>
|
||||
<action>- Any migration challenges or limitations</action>
|
||||
|
||||
<action>Inform user: "Based on n8n documentation, I've identified the migration path from {{source_platform}}."</action>
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Load Platform Mappings">
|
||||
<action>Load {{platform_mappings}} file</action>
|
||||
<action>Extract mappings for {{source_platform}}</action>
|
||||
<action>Identify equivalent n8n nodes for source platform components</action>
|
||||
</step>
|
||||
|
||||
<step n="5" goal="Analyze Source Workflow">
|
||||
<check if="{{workflow_file}} or {{workflow_content}} provided">
|
||||
<action>Parse source workflow file/content</action>
|
||||
<action>Extract workflow structure</action>
|
||||
</check>
|
||||
|
||||
<action>Analyze workflow based on description and details:</action>
|
||||
<action>1. Identify trigger type and configuration</action>
|
||||
<action>2. List all actions/steps in order</action>
|
||||
<action>3. Identify conditional logic (if/then branches)</action>
|
||||
<action>4. Identify loops or iterations</action>
|
||||
<action>5. Identify data transformations</action>
|
||||
<action>6. Identify error handling</action>
|
||||
<action>7. Map integrations to n8n nodes</action>
|
||||
|
||||
<action>Present analysis to user:</action>
|
||||
<action>- Source trigger: [platform-specific trigger]</action>
|
||||
<action>- n8n trigger: [mapped n8n node]</action>
|
||||
<action>- Source actions: [list with platform names]</action>
|
||||
<action>- n8n actions: [list with n8n node types]</action>
|
||||
<action>- Logic: [conditional branches, loops]</action>
|
||||
<action>- Transformations: [data mapping needs]</action>
|
||||
</step>
|
||||
|
||||
<step n="6" goal="Plan n8n Workflow Structure">
|
||||
<action>Load {{helpers}} for node creation guidelines</action>
|
||||
<action>Load {{templates}} for reference</action>
|
||||
|
||||
<action>Design n8n workflow structure:</action>
|
||||
<action>1. Map source trigger to n8n trigger node</action>
|
||||
<action>2. Map each source action to n8n node(s)</action>
|
||||
<action>3. Convert conditional logic to IF/Switch nodes</action>
|
||||
<action>4. Convert loops to Split In Batches nodes</action>
|
||||
<action>5. Add Set/Code nodes for data transformations</action>
|
||||
<action>6. Plan node connections</action>
|
||||
<action>7. Add error handling where needed</action>
|
||||
|
||||
<action>Present migration plan to user:</action>
|
||||
<action>- n8n Trigger: [node type and configuration]</action>
|
||||
<action>- n8n Nodes: [list with descriptions]</action>
|
||||
<action>- Connections: [flow diagram]</action>
|
||||
<action>- Data Mappings: [field mappings]</action>
|
||||
<action>- Credentials Needed: [list of integrations requiring auth]</action>
|
||||
|
||||
<action>Ask: "Does this migration plan look correct?"</action>
|
||||
<action>Present numbered options:
|
||||
1. Yes - Proceed with migration
|
||||
2. No - Adjust the plan
|
||||
3. Add more details - Provide additional information
|
||||
</action>
|
||||
<action>WAIT for user selection (1-3)</action>
|
||||
|
||||
<check if="selection is 2 or 3">
|
||||
<action>Ask: "What changes or additions are needed?"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Adjust plan based on feedback</action>
|
||||
<action>Repeat this step</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="7" goal="Build n8n Workflow">
|
||||
<critical>Follow guidelines from {{helpers}} for proper node creation</critical>
|
||||
|
||||
<action>Initialize workflow structure:</action>
|
||||
<substep>
|
||||
{
|
||||
"name": "{{workflow_name}}",
|
||||
"nodes": [],
|
||||
"connections": {},
|
||||
"active": false,
|
||||
"settings": {
|
||||
"executionOrder": "v1"
|
||||
},
|
||||
"tags": [
|
||||
{
|
||||
"name": "migrated-from-{{source_platform}}"
|
||||
}
|
||||
]
|
||||
}
|
||||
</substep>
|
||||
|
||||
<action>Build nodes ONE at a time:</action>
|
||||
|
||||
<substep>For Each Mapped Node:
|
||||
1. Generate unique node ID
|
||||
2. Set node name (descriptive, unique)
|
||||
3. Set node type from platform mappings
|
||||
4. Set typeVersion (usually 1)
|
||||
5. Calculate position (220px spacing)
|
||||
6. Configure parameters based on source workflow
|
||||
7. Map data fields from source to n8n format
|
||||
8. Add credentials placeholder if needed
|
||||
9. Set error handling if required
|
||||
</substep>
|
||||
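<!-- Illustrative sketch of one mapped node built with the checklist above. The node name, ID, position, and parameter values are invented for this example, and the exact parameter layout of the Slack node varies by typeVersion, so confirm it against the node documentation.
{
  "id": "slack-notify-001",
  "name": "Send Slack Notification",
  "type": "n8n-nodes-base.slack",
  "typeVersion": 1,
  "position": [660, 300],
  "parameters": {
    "resource": "message",
    "operation": "post",
    "channel": "#alerts",
    "text": "={{ $json.message }}"
  },
  "credentials": {
    "slackApi": { "name": "PLACEHOLDER: configure after import" }
  }
}
-->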
|
||||
<substep>For Data Transformations:
|
||||
1. Identify field mappings needed
|
||||
2. Create Set nodes for simple mappings
|
||||
3. Create Code nodes for complex transformations
|
||||
4. Use n8n expressions: ={{ $json.fieldName }}
|
||||
5. Handle data type conversions
|
||||
6. Handle date/time format differences
|
||||
</substep>
|
||||
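<!-- Hypothetical data-transformation sketch: a Set node that renames source fields using n8n expression syntax. Field names are invented; the values.string layout shown is the typeVersion 1 shape of the Set node.
{
  "id": "map-contact-fields",
  "name": "Map Contact Fields",
  "type": "n8n-nodes-base.set",
  "typeVersion": 1,
  "position": [880, 300],
  "parameters": {
    "keepOnlySet": true,
    "values": {
      "string": [
        { "name": "email", "value": "={{ $json.properties.email }}" },
        { "name": "fullName", "value": "={{ $json.first_name }} {{ $json.last_name }}" }
      ]
    },
    "options": {}
  }
}
-->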
|
||||
<substep>For Conditional Logic:
|
||||
1. Create IF nodes for if/then branches
|
||||
2. Create Switch nodes for multiple conditions
|
||||
3. Map source conditions to n8n condition format
|
||||
4. Set up true/false branches (index 0/1)
|
||||
</substep>
|
||||
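<!-- Hypothetical conditional-logic sketch: an IF node standing in for a source-platform filter. The field and value are invented, and operation identifiers differ between node versions, so verify them in the IF node documentation.
{
  "id": "check-deal-stage",
  "name": "Deal Is Closed Won?",
  "type": "n8n-nodes-base.if",
  "typeVersion": 1,
  "position": [1100, 300],
  "parameters": {
    "conditions": {
      "string": [
        { "value1": "={{ $json.dealstage }}", "operation": "equal", "value2": "closedwon" }
      ]
    }
  }
}
Output index 0 carries items where the condition is true, index 1 the rest.
-->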
|
||||
<substep>For Connections:
|
||||
1. Connect trigger to first action
|
||||
2. Connect actions in sequence
|
||||
3. Connect conditional branches
|
||||
4. Connect merge points
|
||||
5. Validate all connections
|
||||
</substep>
|
||||
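<!-- Sketch of the resulting connections object for a linear flow with one branch (node names are the hypothetical ones used above). Each entry maps a source node to an array of target lists, one list per output index; for the IF node, index 0 is the true branch and index 1 the false branch.
{
  "Webhook Trigger": {
    "main": [[{ "node": "Deal Is Closed Won?", "type": "main", "index": 0 }]]
  },
  "Deal Is Closed Won?": {
    "main": [
      [{ "node": "Send Slack Notification", "type": "main", "index": 0 }],
      [{ "node": "Map Contact Fields", "type": "main", "index": 0 }]
    ]
  }
}
-->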
|
||||
<action>Add migration notes as workflow tags</action>
|
||||
<action>Validate all node IDs are unique</action>
|
||||
<action>Validate all connections reference existing nodes</action>
|
||||
</step>
|
||||
|
||||
<step n="8" goal="Add Migration Notes">
|
||||
<action>Add comment nodes or documentation:</action>
|
||||
<action>- Source platform: {{source_platform}}</action>
|
||||
<action>- Migration date: {timestamp}</action>
|
||||
<action>- Credentials to configure: [list]</action>
|
||||
<action>- Testing notes: [important considerations]</action>
|
||||
<action>- Platform-specific differences: [notes]</action>
|
||||
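<!-- One possible way to record these notes on the canvas itself (assumes the standard sticky-note node; the content text is illustrative):
{
  "id": "migration-notes",
  "name": "Migration Notes",
  "type": "n8n-nodes-base.stickyNote",
  "typeVersion": 1,
  "position": [200, 60],
  "parameters": {
    "content": "Migrated from {{source_platform}} on {timestamp}. Configure credentials for all listed services before activating. Review testing notes before first run."
  }
}
-->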
</step>
|
||||
|
||||
<step n="9" goal="Save Migrated Workflow">
|
||||
<action>Save workflow to {{save_location}}</action>
|
||||
</step>
|
||||
|
||||
<step n="10" goal="Validate JSON Syntax">
|
||||
<critical>NEVER delete the file if validation fails - always fix syntax errors</critical>
|
||||
|
||||
<action>Run: node -e "JSON.parse(require('fs').readFileSync('{{save_location}}', 'utf8')); console.log('✓ Valid JSON')"</action>
|
||||
|
||||
<check if="validation fails (exit code 1)">
|
||||
<action>Read the error message carefully - it shows the syntax error and position</action>
|
||||
<action>Open the file and navigate to the error location</action>
|
||||
<action>Fix the syntax error (add missing comma, bracket, or quote as indicated)</action>
|
||||
<action>Save the file</action>
|
||||
<action>Re-run validation with the same command</action>
|
||||
<action>Repeat until validation passes</action>
|
||||
</check>
|
||||
|
||||
<action>Once validation passes, confirm with user: "Workflow migrated successfully to {{save_location}}"</action>
|
||||
</step>
|
||||
|
||||
<step n="11" goal="Provide Migration Guidance">
|
||||
<action>Provide post-migration instructions:</action>
|
||||
<action>1. Import the JSON file into n8n</action>
|
||||
<action>2. Configure credentials for these services: [list]</action>
|
||||
<action>3. Review and update these data mappings: [list]</action>
|
||||
<action>4. Test with sample data before activating</action>
|
||||
<action>5. Compare outputs with original platform</action>
|
||||
<action>6. Monitor initial executions closely</action>
|
||||
|
||||
<action>Highlight platform-specific differences:</action>
|
||||
<action>- Authentication: [differences]</action>
|
||||
<action>- Data formats: [differences]</action>
|
||||
<action>- Scheduling: [differences]</action>
|
||||
<action>- Error handling: [differences]</action>
|
||||
|
||||
<action>Ask: "Would you like help with any specific part of the migration?"</action>
|
||||
<action>Present numbered options:
|
||||
1. No - I'm ready to test
|
||||
2. Yes - Explain credential setup
|
||||
3. Yes - Explain data mappings
|
||||
4. Yes - Explain testing approach
|
||||
</action>
|
||||
<action>WAIT for user selection (1-4)</action>
|
||||
|
||||
<check if="selection is 2, 3, or 4">
|
||||
<action>Provide requested explanation</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="12" goal="Validate Content">
|
||||
<invoke-task>Validate against checklist at {{validation}} using {{bmad_folder}}/core/tasks/validate-workflow.xml</invoke-task>
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
```
|
||||
|
|
@@ -0,0 +1,31 @@
|
|||
name: migrate-workflow
|
||||
description: "Migrate workflows from other platforms (Zapier, Make, HubSpot, etc.) to n8n"
|
||||
author: "Saif"
|
||||
|
||||
# Workflow components
|
||||
installed_path: "{project-root}/{bmad_folder}/autominator/workflows/migrate-workflow"
|
||||
shared_path: "{project-root}/{bmad_folder}/autominator/workflows/_shared"
|
||||
instructions: "{installed_path}/instructions.md"
|
||||
validation: "{installed_path}/checklist.md"
|
||||
|
||||
# Shared resources
|
||||
helpers: "{shared_path}/n8n-helpers.md"
|
||||
templates: "{shared_path}/n8n-templates.yaml"
|
||||
platform_mappings: "{shared_path}/platform-mappings.yaml"
|
||||
|
||||
# Variables
|
||||
variables:
|
||||
source_platform: "" # Will be elicited
|
||||
workflow_description: "" # Will be elicited
|
||||
workflow_file: "" # Will be elicited (optional)
|
||||
workflow_content: "" # Will be elicited (optional)
|
||||
integrations_used: [] # Will be elicited
|
||||
trigger_type: "" # Will be elicited
|
||||
complexity: "" # Will be elicited
|
||||
save_location: "" # Will be elicited
|
||||
workflow_name: "" # Will be elicited
|
||||
|
||||
default_output_file: "{project-root}/workflows/migrated-workflow-{timestamp}.json"
|
||||
|
||||
standalone: true
|
||||
web_bundle: false
|
||||
|
|
@@ -0,0 +1,90 @@
|
|||
# Modify n8n Workflow - Validation Checklist
|
||||
|
||||
## Pre-Modification
|
||||
|
||||
- [ ] Original workflow file was successfully loaded
|
||||
- [ ] Workflow JSON was valid before modifications
|
||||
- [ ] Backup was created before making changes
|
||||
- [ ] User requirements were clearly understood
|
||||
|
||||
## Workflow Structure
|
||||
|
||||
- [ ] Workflow maintains valid JSON structure
|
||||
- [ ] Workflow name is preserved (unless intentionally changed)
|
||||
- [ ] All nodes still have unique IDs
|
||||
- [ ] All nodes still have unique names
|
||||
- [ ] Workflow settings are preserved
|
||||
|
||||
## Node Modifications
|
||||
|
||||
- [ ] Added nodes have unique IDs
|
||||
- [ ] Added nodes have unique names
|
||||
- [ ] Added nodes have valid node types
|
||||
- [ ] Added nodes have required parameters set
|
||||
- [ ] Modified nodes preserve their IDs
|
||||
- [ ] Modified nodes have valid parameter values
|
||||
- [ ] Removed nodes are completely removed from nodes array
|
||||
|
||||
## Connections
|
||||
|
||||
- [ ] All connections reference existing nodes
|
||||
- [ ] Connections to/from added nodes are properly configured
|
||||
- [ ] Connections affected by removed nodes are updated
|
||||
- [ ] No orphaned connections remain
|
||||
- [ ] Connection indices are correct (0 for default, 0/1 for IF nodes)
|
||||
- [ ] No circular dependencies (unless intentional)
|
||||
|
||||
## Node Positioning
|
||||
|
||||
- [ ] New nodes have valid positions
|
||||
- [ ] New nodes don't overlap with existing nodes
|
||||
- [ ] Node positions follow spacing guidelines (220px horizontal)
|
||||
- [ ] Branch nodes have appropriate vertical spacing (±100px)
|
||||
|
||||
## Error Handling
|
||||
|
||||
- [ ] Error handling modifications are applied correctly
|
||||
- [ ] Retry logic is properly configured if added
|
||||
- [ ] continueOnFail settings are appropriate
|
||||
- [ ] maxTries and waitBetweenTries are set if retries enabled
|
||||
|
||||
## Data Flow
|
||||
|
||||
- [ ] Data flow is maintained after modifications
|
||||
- [ ] New transformations are properly configured
|
||||
- [ ] Expressions use correct n8n syntax (={{ }})
|
||||
- [ ] No data flow breaks introduced
|
||||
|
||||
## Integration Changes
|
||||
|
||||
- [ ] New integrations are properly configured
|
||||
- [ ] Credential requirements are identified
|
||||
- [ ] API configurations are correct
|
||||
- [ ] Existing integrations still work
|
||||
|
||||
## Validation
|
||||
|
||||
- [ ] Modified workflow passes JSON validation
|
||||
- [ ] All modifications match user requirements
|
||||
- [ ] No unintended changes were made
|
||||
- [ ] Workflow structure is still logical
|
||||
|
||||
## Backup & Recovery
|
||||
|
||||
- [ ] Backup file was created successfully
|
||||
- [ ] Backup location was communicated to user
|
||||
- [ ] Original workflow can be restored if needed
|
||||
|
||||
## Testing Readiness
|
||||
|
||||
- [ ] Modified workflow can be imported into n8n
|
||||
- [ ] Changes are testable
|
||||
- [ ] Expected behavior is clear
|
||||
- [ ] Any new credentials needed are identified
|
||||
|
||||
## Documentation
|
||||
|
||||
- [ ] Changes made are summarized for user
|
||||
- [ ] User understands what was modified
|
||||
- [ ] Testing recommendations provided if needed
|
||||
- [ ] Backup location shared with user
|
||||
|
|
@@ -0,0 +1,336 @@
|
|||
# Modify n8n Workflow - Workflow Instructions
|
||||
|
||||
```xml
|
||||
<critical>The workflow execution engine is governed by: {project_root}/{bmad_folder}/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
<critical>This workflow modifies an existing n8n workflow based on user requirements.</critical>
|
||||
|
||||
<workflow>
|
||||
|
||||
<step n="0" goal="Contextual Analysis (Smart Elicitation)">
|
||||
<critical>Before asking any questions, analyze what the user has already told you</critical>
|
||||
|
||||
<action>Review the user's initial request and conversation history</action>
|
||||
<action>Extract any mentioned: workflow file path, changes needed, specific nodes</action>
|
||||
|
||||
<check if="ALL requirements are clear from context">
|
||||
<action>Summarize your understanding</action>
|
||||
<action>Skip directly to Step 2 (Load Existing Workflow)</action>
|
||||
</check>
|
||||
|
||||
<check if="SOME requirements are clear">
|
||||
<action>Note what you already know</action>
|
||||
<action>Only ask about missing information in Step 1</action>
|
||||
</check>
|
||||
|
||||
<check if="requirements are unclear or minimal">
|
||||
<action>Proceed with full elicitation in Step 1</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="1" goal="Gather Requirements" elicit="true">
|
||||
<critical>Understand WHY the user wants to modify the workflow, not just WHAT to change</critical>
|
||||
|
||||
<action>Ask Question 1: "Which workflow file do you want to modify?"</action>
|
||||
<action>Present numbered options:
|
||||
1. Provide file path - Specify exact path to workflow JSON
|
||||
2. Search in workflows folder - List available workflows
|
||||
3. Paste workflow JSON - Provide workflow content directly
|
||||
</action>
|
||||
<action>WAIT for user selection (1-3)</action>
|
||||
|
||||
<check if="selection is 1">
|
||||
<action>Ask: "Please provide the file path to the workflow JSON"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{workflow_file}}</action>
|
||||
</check>
|
||||
|
||||
<check if="selection is 2">
|
||||
<action>List all .json files in workflows/ directory</action>
|
||||
<action>Present as numbered list</action>
|
||||
<action>WAIT for user selection</action>
|
||||
<action>Store selected file path in {{workflow_file}}</action>
|
||||
</check>
|
||||
|
||||
<check if="selection is 3">
|
||||
<action>Ask: "Please paste the workflow JSON content"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Create temporary file with content</action>
|
||||
<action>Store temp file path in {{workflow_file}}</action>
|
||||
</check>
|
||||
|
||||
<action>Ask Question 2: "What problem are you trying to solve by modifying this workflow?"</action>
|
||||
<action>Examples: "It's not handling errors properly", "Need to add Slack notifications", "Missing data validation"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{problem_to_solve}}</action>
|
||||
|
||||
<action>Ask Question 3: "What's currently not working or missing?"</action>
|
||||
<action>Encourage specific details about the issue or gap</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{current_issue}}</action>
|
||||
|
||||
<action>Ask Question 4: "What should the workflow do after these changes?"</action>
|
||||
<action>Focus on the desired behavior and outcome</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{desired_behavior}}</action>
|
||||
|
||||
<action>Ask Question 5: "Are there any specific nodes, integrations, or logic you want to change?"</action>
|
||||
<action>Examples: "Add a Slack node after approval", "Change the IF condition to check status", "Remove the delay node"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{specific_changes}}</action>
|
||||
|
||||
<action>Summarize understanding:</action>
|
||||
<action>- Problem: {{problem_to_solve}}</action>
|
||||
<action>- Current Issue: {{current_issue}}</action>
|
||||
<action>- Desired Behavior: {{desired_behavior}}</action>
|
||||
<action>- Specific Changes: {{specific_changes}}</action>
|
||||
|
||||
<action>Ask: "Does this capture what you need?"</action>
|
||||
<action>Present numbered options:
|
||||
1. Yes - Proceed with modifications
|
||||
2. No - Let me clarify
|
||||
</action>
|
||||
<action>WAIT for user selection (1-2)</action>
|
||||
<check if="selection is 2">
|
||||
<action>Ask: "What needs clarification?"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Update relevant variables</action>
|
||||
<action>Repeat summary and confirmation</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Load Existing Workflow">
|
||||
<critical>Only load the workflow file when user provides it - never preload</critical>
|
||||
|
||||
<action>Read workflow file from {{workflow_file}}</action>
|
||||
<action>Parse JSON content</action>
|
||||
<action>Validate JSON structure</action>
|
||||
|
||||
<check if="JSON is invalid">
|
||||
<action>Inform user: "The workflow file has invalid JSON syntax"</action>
|
||||
<action>Show error details</action>
|
||||
<action>Ask: "Would you like me to fix the JSON syntax first? (yes/no)"</action>
|
||||
<action>WAIT for user response</action>
|
||||
|
||||
<check if="user says yes">
|
||||
<action>Fix JSON syntax errors</action>
|
||||
<action>Save corrected file</action>
|
||||
<action>Proceed with loading</action>
|
||||
</check>
|
||||
|
||||
<check if="user says no">
|
||||
<action>Exit workflow with error</action>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<action>Extract workflow structure:</action>
|
||||
<action>- Workflow name</action>
|
||||
<action>- List of nodes (names, types, IDs)</action>
|
||||
<action>- Connections map</action>
|
||||
<action>- Current settings</action>
|
||||
|
||||
<action>Display workflow summary to user:</action>
|
||||
<action>- Name: [workflow name]</action>
|
||||
<action>- Nodes: [count] nodes</action>
|
||||
<action>- Node list: [node names and types]</action>
|
||||
<action>- Connections: [connection count]</action>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Create Backup">
|
||||
<action>Create backup of original workflow</action>
|
||||
<action>Save backup to: {{workflow_file}}.backup-{timestamp}</action>
|
||||
<action>Store true in {{backup_created}}</action>
|
||||
<action>Inform user: "Backup created at {{workflow_file}}.backup-{timestamp}"</action>
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Research n8n Documentation for Modifications">
|
||||
<critical>Search for n8n documentation relevant to the modifications needed</critical>
|
||||
|
||||
<action>Inform user: "Researching n8n documentation for your modifications..."</action>
|
||||
|
||||
<action>Perform web search based on modification needs:</action>
|
||||
<action>- Problem to solve: {{problem_to_solve}}</action>
|
||||
<action>- Specific changes: {{specific_changes}}</action>
|
||||
<action>- Desired behavior: {{desired_behavior}}</action>
|
||||
|
||||
<action>Search queries to use:</action>
|
||||
<action>- "n8n [specific feature] documentation"</action>
|
||||
<action>- "n8n [node type] configuration"</action>
|
||||
<action>- "n8n workflow modification best practices"</action>
|
||||
<action>- "n8n [integration] setup"</action>
|
||||
|
||||
<action>Focus on official n8n documentation at docs.n8n.io</action>
|
||||
<action>Store relevant node configurations and modification patterns</action>
|
||||
</step>
|
||||
|
||||
<step n="5" goal="Verify Modification Approach">
|
||||
<action>Summarize modification strategy based on documentation:</action>
|
||||
<action>- How to implement {{desired_behavior}}</action>
|
||||
<action>- Required node changes or additions</action>
|
||||
<action>- Configuration updates needed</action>
|
||||
<action>- Best practices for these modifications</action>
|
||||
|
||||
<action>Inform user: "Based on n8n documentation, I've identified how to implement your changes."</action>
|
||||
</step>
|
||||
|
||||
<step n="6" goal="Plan Modifications">
|
||||
<action>Load {{helpers}} for node creation guidelines</action>
|
||||
<action>Analyze current workflow structure</action>
|
||||
<action>Plan modifications based on requirements:</action>
|
||||
|
||||
<check if="adding nodes">
|
||||
<action>1. Identify where new nodes should be inserted</action>
|
||||
<action>2. Determine node types needed</action>
|
||||
<action>3. Plan connections to/from new nodes</action>
|
||||
<action>4. Calculate positions for new nodes</action>
|
||||
</check>
|
||||
|
||||
<check if="modifying nodes">
|
||||
<action>1. Identify nodes to modify by name or ID</action>
|
||||
<action>2. Determine what parameters to change</action>
|
||||
<action>3. Validate new parameter values</action>
|
||||
</check>
|
||||
|
||||
<check if="removing nodes">
|
||||
<action>1. Identify nodes to remove by name or ID</action>
|
||||
<action>2. Identify connections that will be affected</action>
|
||||
<action>3. Plan how to reconnect remaining nodes</action>
|
||||
</check>
|
||||
|
||||
<check if="changing connections">
|
||||
<action>1. Identify connections to modify</action>
|
||||
<action>2. Validate new connection targets exist</action>
|
||||
<action>3. Update connection indices if needed</action>
|
||||
</check>
|
||||
|
||||
<action>Present modification plan to user:</action>
|
||||
<action>- Changes to be made: [detailed list]</action>
|
||||
<action>- Nodes affected: [list]</action>
|
||||
<action>- New connections: [list]</action>
|
||||
<action>- Removed connections: [list]</action>
|
||||
|
||||
<action>Ask: "Does this modification plan look correct?"</action>
|
||||
<action>Present numbered options:
|
||||
1. Yes - Proceed with modifications
|
||||
2. No - Adjust the plan
|
||||
3. Add more changes - Include additional modifications
|
||||
</action>
|
||||
<action>WAIT for user selection (1-3)</action>
|
||||
|
||||
<check if="selection is 2 or 3">
|
||||
<action>Ask: "What changes or additions are needed?"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Adjust plan based on feedback</action>
|
||||
<action>Repeat this step</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="7" goal="Apply Modifications">
|
||||
<critical>Follow guidelines from {{helpers}} for proper node creation</critical>
|
||||
|
||||
<action>Load current workflow JSON into memory</action>
|
||||
|
||||
<substep>If Adding Nodes:
|
||||
1. Generate unique node IDs (check against existing IDs)
|
||||
2. Create node objects with proper structure
|
||||
3. Calculate positions (avoid overlaps with existing nodes)
|
||||
4. Add nodes to workflow.nodes array
|
||||
5. Create connections to/from new nodes
|
||||
6. Update connections object
|
||||
</substep>
|
||||
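<!-- Hypothetical connection splice when inserting a new node between two existing ones (node names invented). Before, "Create Ticket" fed "Update CRM" directly:
"Create Ticket": { "main": [[{ "node": "Update CRM", "type": "main", "index": 0 }]] }
After adding the node, the upstream entry points at the newcomer, which in turn points at the original target:
"Create Ticket": { "main": [[{ "node": "Send Slack Notification", "type": "main", "index": 0 }]] },
"Send Slack Notification": { "main": [[{ "node": "Update CRM", "type": "main", "index": 0 }]] }
-->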
|
||||
<substep>If Modifying Nodes:
|
||||
1. Find nodes by name or ID
|
||||
2. Update parameters as specified
|
||||
3. Preserve node ID and other unchanged properties
|
||||
4. Validate new parameter values
|
||||
5. Update node in workflow.nodes array
|
||||
</substep>
|
||||
|
||||
<substep>If Removing Nodes:
|
||||
1. Find nodes by name or ID
|
||||
2. Remove from workflow.nodes array
|
||||
3. Remove all connections to/from removed nodes
|
||||
4. Update connections object
|
||||
5. Reconnect remaining nodes if needed
|
||||
</substep>
|
||||
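<!-- Hypothetical reconnection when removing a middle node. With "Fetch Orders" feeding "Delay" feeding "Send Invoice", deleting "Delay" means removing its node object and its connections entry, then pointing the upstream node at the downstream one:
"Fetch Orders": { "main": [[{ "node": "Send Invoice", "type": "main", "index": 0 }]] }
-->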
|
||||
<substep>If Changing Connections:
|
||||
1. Update connections object
|
||||
2. Validate all referenced nodes exist
|
||||
3. Ensure connection indices are correct
|
||||
4. Remove orphaned connections
|
||||
</substep>
|
||||
|
||||
<substep>If Updating Error Handling:
|
||||
1. Find affected nodes
|
||||
2. Add or update error handling properties:
|
||||
- continueOnFail
|
||||
- retryOnFail
|
||||
- maxTries
|
||||
- waitBetweenTries
|
||||
3. Validate error handling configuration
|
||||
</substep>
|
||||
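<!-- Sketch of node-level error-handling properties; the flags sit on the node object next to "parameters" (id, typeVersion, position, and parameters omitted here; values are illustrative, with waitBetweenTries in milliseconds):
{
  "name": "Post to CRM API",
  "type": "n8n-nodes-base.httpRequest",
  "continueOnFail": false,
  "retryOnFail": true,
  "maxTries": 3,
  "waitBetweenTries": 2000
}
-->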
|
||||
<action>Validate modified workflow:</action>
|
||||
<action>- All node IDs are unique</action>
|
||||
<action>- All node names are unique</action>
|
||||
<action>- All connections reference existing nodes</action>
|
||||
<action>- No orphaned nodes (except trigger)</action>
|
||||
<action>- Node positions don't overlap</action>
|
||||
</step>
|
||||
|
||||
<step n="8" goal="Save Modified Workflow">
|
||||
<action>Save modified workflow to {{workflow_file}}</action>
|
||||
<action>Preserve original file location</action>
|
||||
</step>
|
||||
|
||||
<step n="9" goal="Validate JSON Syntax">
|
||||
<critical>NEVER delete the file if validation fails - always fix syntax errors</critical>
|
||||
|
||||
<action>Run: node -e "JSON.parse(require('fs').readFileSync('{{workflow_file}}', 'utf8')); console.log('✓ Valid JSON')"</action>
|
||||
|
||||
<check if="validation fails (exit code 1)">
|
||||
<action>Read the error message carefully - it shows the syntax error and position</action>
|
||||
<action>Open the file and navigate to the error location</action>
|
||||
<action>Fix the syntax error (add missing comma, bracket, or quote as indicated)</action>
|
||||
<action>Save the file</action>
|
||||
<action>Re-run validation with the same command</action>
|
||||
<action>Repeat until validation passes</action>
|
||||
</check>
|
||||
|
||||
<action>Once validation passes, confirm with user: "Workflow modified successfully at {{workflow_file}}"</action>
|
||||
</step>
|
||||
|
||||
<step n="10" goal="Summarize Changes">
|
||||
<action>Provide summary of changes made:</action>
|
||||
<action>- Nodes added: [count and names]</action>
|
||||
<action>- Nodes modified: [count and names]</action>
|
||||
<action>- Nodes removed: [count and names]</action>
|
||||
<action>- Connections changed: [count]</action>
|
||||
<action>- Backup location: {{workflow_file}}.backup-{timestamp}</action>
|
||||
|
||||
<action>Ask: "Would you like to make additional changes?"</action>
|
||||
<action>Present numbered options:
|
||||
1. No - I'm done
|
||||
2. Yes - Make more modifications
|
||||
3. Revert - Restore from backup
|
||||
</action>
|
||||
<action>WAIT for user selection (1-3)</action>
|
||||
|
||||
<check if="selection is 2">
|
||||
<action>Return to Step 1 with current workflow</action>
|
||||
</check>
|
||||
|
||||
<check if="selection is 3">
|
||||
<action>Restore workflow from backup</action>
|
||||
<action>Confirm restoration to user</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="11" goal="Validate Content">
|
||||
<invoke-task>Validate against checklist at {{validation}} using {{bmad_folder}}/core/tasks/validate-workflow.xml</invoke-task>
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
```
|
||||
|
|
@@ -0,0 +1,29 @@
|
|||
name: modify-workflow
|
||||
description: "Edit or update existing n8n workflow"
|
||||
author: "Saif"
|
||||
|
||||
# Workflow components
|
||||
installed_path: "{project-root}/{bmad_folder}/autominator/workflows/modify-workflow"
|
||||
shared_path: "{project-root}/{bmad_folder}/autominator/workflows/_shared"
|
||||
instructions: "{installed_path}/instructions.md"
|
||||
validation: "{installed_path}/checklist.md"
|
||||
|
||||
# Shared resources
|
||||
helpers: "{shared_path}/n8n-helpers.md"
|
||||
templates: "{shared_path}/n8n-templates.yaml"
|
||||
platform_mappings: "{shared_path}/platform-mappings.yaml"
|
||||
|
||||
# Variables
|
||||
variables:
|
||||
workflow_file: "" # Will be elicited
|
||||
modification_type: "" # Will be elicited
|
||||
changes_description: "" # Will be elicited
|
||||
nodes_to_add: [] # Will be elicited
|
||||
nodes_to_modify: [] # Will be elicited
|
||||
nodes_to_remove: [] # Will be elicited
|
||||
backup_created: false # Will be set
|
||||
|
||||
default_output_file: "" # Will use existing file location
|
||||
|
||||
standalone: true
|
||||
web_bundle: false
|
||||
|
|
@@ -0,0 +1,130 @@
|
|||
# Optimize n8n Workflow - Validation Checklist
|
||||
|
||||
## Pre-Optimization
|
||||
|
||||
- [ ] Original workflow was successfully loaded
|
||||
- [ ] Workflow JSON was valid before optimization
|
||||
- [ ] Optimization focus areas were identified
|
||||
- [ ] Backup was created before making changes
|
||||
- [ ] User requirements were clearly understood
|
||||
|
||||
## Analysis Completeness
|
||||
|
||||
- [ ] Performance analysis was conducted
|
||||
- [ ] Error handling was reviewed
|
||||
- [ ] Code quality was assessed
|
||||
- [ ] Structure was evaluated
|
||||
- [ ] Best practices were checked
|
||||
- [ ] Security was reviewed
|
||||
- [ ] All issues were documented
|
||||
|
||||
## Recommendations Quality
|
||||
|
||||
- [ ] Recommendations are specific and actionable
|
||||
- [ ] Recommendations are prioritized correctly
|
||||
- [ ] Impact of each recommendation is clear
|
||||
- [ ] Implementation steps are provided
|
||||
- [ ] Expected improvements are quantified
|
||||
- [ ] No breaking changes are recommended
|
||||
|
||||
## Performance Optimizations
|
||||
|
||||
- [ ] Unnecessary nodes were identified/removed
|
||||
- [ ] Data transformations were optimized
|
||||
- [ ] Batch processing opportunities were identified
|
||||
- [ ] Redundant API calls were consolidated
|
||||
- [ ] Parallel execution opportunities were identified
|
||||
- [ ] Node execution order was optimized
|
||||
|
||||
## Error Handling Improvements
|
||||
|
||||
- [ ] Critical nodes have retry logic
|
||||
- [ ] continueOnFail is set appropriately
|
||||
- [ ] Error workflows are configured where needed
|
||||
- [ ] Timeout configurations are appropriate
|
||||
- [ ] Error notifications are set up
|
||||
- [ ] Error handling doesn't mask real issues
|
||||
|
||||
## Code Quality Improvements
|
||||
|
||||
- [ ] Set nodes are properly configured
|
||||
- [ ] Code nodes are optimized
|
||||
- [ ] Expressions use correct syntax
|
||||
- [ ] Data types are handled correctly
|
||||
- [ ] Hardcoded values are replaced with variables
|
||||
- [ ] Node names are descriptive and consistent
|
||||
|
||||
## Structure Improvements
|
||||
|
||||
- [ ] Node positions are logical and organized
|
||||
- [ ] Complex branches are simplified where possible
|
||||
- [ ] Duplicate logic is eliminated
|
||||
- [ ] Merge points are optimized
|
||||
- [ ] Connection patterns are clean
|
||||
- [ ] Workflow flow is easy to follow
|
||||
|
||||
## Best Practices Applied
|
||||
|
||||
- [ ] Credentials are used correctly
|
||||
- [ ] Security issues are addressed
|
||||
- [ ] Node types are appropriate
|
||||
- [ ] Node versions are up to date
|
||||
- [ ] Data handling follows best practices
|
||||
- [ ] Workflow settings are optimal
|
||||
|
||||
## Security Improvements
|
||||
|
||||
- [ ] No credentials are exposed
|
||||
- [ ] Sensitive data is handled properly
|
||||
- [ ] No hardcoded secrets remain
|
||||
- [ ] Authentication is properly configured
|
||||
- [ ] Data is sanitized where needed
|
||||
- [ ] Security best practices are followed
|
||||
|
||||
## Workflow Integrity
|
||||
|
||||
- [ ] All node IDs remain unique
|
||||
- [ ] All node names remain unique
|
||||
- [ ] All connections are valid
|
||||
- [ ] No functionality is lost
|
||||
- [ ] Workflow still achieves original purpose
|
||||
- [ ] No breaking changes introduced
|
||||
|
||||
## Validation
|
||||
|
||||
- [ ] Optimized workflow passes JSON validation
|
||||
- [ ] All optimizations were applied correctly
|
||||
- [ ] No unintended changes were made
|
||||
- [ ] Workflow structure is still logical
|
||||
- [ ] All improvements are documented
|
||||
|
||||
## Backup & Recovery
|
||||
|
||||
- [ ] Backup file was created successfully
|
||||
- [ ] Backup location was communicated to user
|
||||
- [ ] Original workflow can be restored if needed
|
||||
|
||||
## Testing Readiness
|
||||
|
||||
- [ ] Optimized workflow can be imported into n8n
|
||||
- [ ] Test scenarios are identified
|
||||
- [ ] Expected improvements are measurable
|
||||
- [ ] Comparison approach is defined
|
||||
- [ ] Monitoring plan is suggested
|
||||
|
||||
## Documentation
|
||||
|
||||
- [ ] Analysis report is comprehensive
|
||||
- [ ] All findings are documented
|
||||
- [ ] Recommendations are clearly explained
|
||||
- [ ] Expected improvements are quantified
|
||||
- [ ] Testing recommendations are provided
|
||||
- [ ] User understands all changes made
|
||||
|
||||
## Expected Improvements
|
||||
|
||||
- [ ] Performance improvements are quantified
|
||||
- [ ] Reliability improvements are identified
|
||||
- [ ] Maintainability improvements are clear
|
||||
- [ ] Security improvements are documented
|
||||
- [ ] Cost savings are estimated (if applicable)
|
||||
|
|
@@ -0,0 +1,475 @@
|
|||
# Optimize n8n Workflow - Workflow Instructions
|
||||
|
||||
```xml
|
||||
<critical>The workflow execution engine is governed by: {project_root}/{bmad_folder}/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
<critical>This workflow analyzes and optimizes existing n8n workflows for performance and best practices.</critical>
|
||||
|
||||
<workflow>
|
||||
|
||||
<step n="0" goal="Contextual Analysis (Smart Elicitation)">
|
||||
<critical>Before asking any questions, analyze what the user has already told you</critical>
|
||||
|
||||
<action>Review the user's initial request and conversation history</action>
|
||||
<action>Extract any mentioned: workflow file path, performance issues, optimization goals</action>
|
||||
|
||||
<check if="ALL requirements are clear from context">
|
||||
<action>Summarize your understanding</action>
|
||||
<action>Skip directly to Step 2 (Load Workflow)</action>
|
||||
</check>
|
||||
|
||||
<check if="SOME requirements are clear">
|
||||
<action>Note what you already know</action>
|
||||
<action>Only ask about missing information in Step 1</action>
|
||||
</check>
|
||||
|
||||
<check if="requirements are unclear or minimal">
|
||||
<action>Proceed with full elicitation in Step 1</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="1" goal="Gather Optimization Requirements" elicit="true">
|
||||
<critical>Understand the REAL PROBLEMS the user is experiencing, not just generic optimization goals</critical>
|
||||
|
||||
<action>Ask Question 1: "Which workflow do you want to optimize?"</action>
|
||||
<action>Present numbered options:
|
||||
1. Provide file path - Specify exact path to workflow JSON
|
||||
2. Search in workflows folder - List available workflows
|
||||
3. Paste workflow JSON - Provide workflow content directly
|
||||
</action>
|
||||
<action>WAIT for user selection (1-3)</action>
|
||||
|
||||
<check if="selection is 1">
|
||||
<action>Ask: "Please provide the file path to the workflow JSON"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{workflow_file}}</action>
|
||||
</check>
|
||||
|
||||
<check if="selection is 2">
|
||||
<action>List all .json files in workflows/ directory</action>
|
||||
<action>Present as numbered list</action>
|
||||
<action>WAIT for user selection</action>
|
||||
<action>Store selected file path in {{workflow_file}}</action>
|
||||
</check>
|
||||
|
||||
<check if="selection is 3">
|
||||
<action>Ask: "Please paste the workflow JSON content"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Create temporary file with content</action>
|
||||
<action>Store temp file path in {{workflow_file}}</action>
|
||||
</check>
|
||||
|
||||
<action>Ask Question 2: "What problems are you experiencing with this workflow?"</action>
|
||||
<action>Examples: "Takes too long to run", "Fails frequently", "Hard to understand", "Doesn't handle errors well"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{problems_experienced}}</action>
|
||||
|
||||
<action>Ask Question 3: "What's the business impact of these problems?"</action>
|
||||
<action>Examples: "Delays customer responses", "Wastes team time", "Causes data issues", "Costs money"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{business_impact}}</action>
|
||||
|
||||
<action>Ask Question 4: "What would 'better' look like for this workflow?"</action>
|
||||
<action>Focus on desired outcomes: "Faster execution", "More reliable", "Easier to maintain", "Better error recovery"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store in {{desired_improvements}}</action>
|
||||
|
||||
<action>Ask Question 5: "Are there specific areas you want me to focus on?"</action>
|
||||
<action>Present numbered options (can select multiple):
|
||||
1. Performance - Speed and efficiency
|
||||
2. Reliability - Error handling and retries
|
||||
3. Maintainability - Code quality and structure
|
||||
4. Security - Credential and data handling
|
||||
5. All - Comprehensive review
|
||||
6. Let you decide - Analyze and recommend
|
||||
</action>
|
||||
<action>WAIT for user selection (1-6 or multiple)</action>
|
||||
<action>Store selections in {{optimization_focus}}</action>
|
||||
|
||||
<action>Summarize understanding:</action>
|
||||
<action>- Problems: {{problems_experienced}}</action>
|
||||
<action>- Business Impact: {{business_impact}}</action>
|
||||
<action>- Desired Improvements: {{desired_improvements}}</action>
|
||||
<action>- Focus Areas: {{optimization_focus}}</action>
|
||||
|
||||
<action>Ask: "Does this capture your optimization needs?"</action>
|
||||
<action>Present numbered options:
|
||||
1. Yes - Proceed with analysis
|
||||
2. No - Let me clarify
|
||||
</action>
|
||||
<action>WAIT for user selection (1-2)</action>
|
||||
<check if="selection is 2">
|
||||
<action>Ask: "What needs clarification?"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Update relevant variables</action>
|
||||
<action>Repeat summary and confirmation</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Load Workflow">
|
||||
<critical>Only load the workflow file when user provides it - never preload</critical>
|
||||
|
||||
<action>Read workflow file from {{workflow_file}}</action>
|
||||
<action>Parse JSON content</action>
|
||||
<action>Validate JSON structure</action>
|
||||
|
||||
<check if="JSON is invalid">
|
||||
<action>Inform user: "The workflow file has invalid JSON syntax"</action>
|
||||
<action>Show error details</action>
|
||||
<action>Ask: "Would you like me to fix the JSON syntax first? (yes/no)"</action>
|
||||
<action>WAIT for user response</action>
|
||||
|
||||
<check if="user says yes">
|
||||
<action>Fix JSON syntax errors</action>
|
||||
<action>Save corrected file</action>
|
||||
<action>Proceed with loading</action>
|
||||
</check>
|
||||
|
||||
<check if="user says no">
|
||||
<action>Exit workflow with error</action>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<action>Extract workflow structure:</action>
|
||||
<action>- Workflow name and settings</action>
|
||||
<action>- Node count and types</action>
|
||||
<action>- Connection patterns</action>
|
||||
<action>- Error handling configuration</action>
|
||||
<action>- Credential usage</action>
|
||||
|
||||
<action>Display workflow summary to user:</action>
|
||||
<action>- Name: [workflow name]</action>
|
||||
<action>- Nodes: [count] nodes</action>
|
||||
<action>- Complexity: [simple/medium/complex]</action>
|
||||
<action>- Integrations: [list of services]</action>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Research n8n Best Practices and Optimization">
|
||||
<critical>Search for n8n documentation on optimization and best practices</critical>
|
||||
|
||||
<action>Inform user: "Researching n8n best practices and optimization techniques..."</action>
|
||||
|
||||
<action>Perform web search for:</action>
|
||||
<action>1. n8n performance optimization</action>
|
||||
<action>2. n8n error handling best practices</action>
|
||||
<action>3. n8n workflow structure patterns</action>
|
||||
<action>4. n8n security best practices</action>
|
||||
<action>5. Solutions for: {{problems_experienced}}</action>
|
||||
|
||||
<action>Search queries to use:</action>
|
||||
<action>- "n8n workflow optimization best practices"</action>
|
||||
<action>- "n8n performance tuning"</action>
|
||||
<action>- "n8n error handling patterns"</action>
|
||||
<action>- "n8n workflow security"</action>
|
||||
<action>- "n8n [specific problem] solution"</action>
|
||||
|
||||
<action>Focus on official n8n documentation at docs.n8n.io</action>
|
||||
<action>Store relevant optimization techniques and best practices</action>
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Verify Optimization Strategy">
|
||||
<action>Summarize optimization approach based on documentation:</action>
|
||||
<action>- Solutions for {{problems_experienced}}</action>
|
||||
<action>- Best practices to apply</action>
|
||||
<action>- Performance improvements available</action>
|
||||
<action>- Expected impact on {{business_impact}}</action>
|
||||
|
||||
<action>Inform user: "Based on n8n best practices, I've identified optimization opportunities."</action>
|
||||
</step>
|
||||
|
||||
<step n="5" goal="Analyze Workflow">
|
||||
<action>Load {{helpers}} for best practices reference</action>
|
||||
|
||||
<action>Perform comprehensive analysis based on {{optimization_focus}}:</action>
|
||||
|
||||
<substep>Performance Analysis:
|
||||
- Check for unnecessary nodes
|
||||
- Identify inefficient data transformations
|
||||
- Look for missing batch processing opportunities
|
||||
- Check for redundant API calls
|
||||
- Analyze node execution order
|
||||
- Identify parallel execution opportunities
|
||||
</substep>
|
||||
|
||||
<substep>Error Handling Analysis:
|
||||
- Check if critical nodes have retry logic
|
||||
- Verify continueOnFail settings
|
||||
- Look for missing error workflows
|
||||
- Check timeout configurations
|
||||
- Verify error notification setup
|
||||
</substep>
|
||||
|
||||
<substep>Code Quality Analysis:
|
||||
- Review Set node configurations
|
||||
- Review Code node implementations
|
||||
- Check expression syntax and efficiency
|
||||
- Verify data type handling
|
||||
- Check for hardcoded values
|
||||
- Review node naming conventions
|
||||
</substep>
|
||||
|
||||
<substep>Structure Analysis:
|
||||
- Check node positioning and layout
|
||||
- Verify logical flow organization
|
||||
- Look for overly complex branches
|
||||
- Check for duplicate logic
|
||||
- Verify proper use of merge nodes
|
||||
- Check connection patterns
|
||||
</substep>
|
||||
|
||||
<substep>Best Practices Analysis:
|
||||
- Verify proper credential usage
|
||||
- Check for security issues
|
||||
- Verify proper use of node types
|
||||
- Check for deprecated node versions
|
||||
- Verify proper data handling
|
||||
- Check workflow settings
|
||||
</substep>
|
||||
|
||||
<substep>Security Analysis:
|
||||
- Check credential exposure
|
||||
- Verify sensitive data handling
|
||||
- Check for hardcoded secrets
|
||||
- Verify proper authentication
|
||||
- Check data sanitization
|
||||
</substep>
|
||||
|
||||
<action>Store all findings in {{issues_found}}</action>
|
||||
</step>
|
||||
|
||||
<step n="6" goal="Generate Recommendations">
|
||||
<action>For each issue found, generate specific recommendations:</action>
|
||||
|
||||
<action>Categorize recommendations by priority:</action>
|
||||
<action>- Critical: Security issues, major performance problems</action>
|
||||
<action>- High: Error handling gaps, significant inefficiencies</action>
|
||||
<action>- Medium: Code quality improvements, minor optimizations</action>
|
||||
<action>- Low: Cosmetic improvements, nice-to-haves</action>
|
||||
|
||||
<action>For each recommendation, provide:</action>
|
||||
<action>1. Issue description</action>
|
||||
<action>2. Impact explanation</action>
|
||||
<action>3. Specific solution</action>
|
||||
<action>4. Implementation steps</action>
|
||||
<action>5. Expected improvement</action>
|
||||
|
||||
<action>Store recommendations in {{recommendations}}</action>
|
||||
</step>
|
||||
|
||||
<step n="7" goal="Present Analysis Report" elicit="true">
|
||||
<action>Present comprehensive optimization report:</action>
|
||||
|
||||
<action>## Workflow Analysis Report</action>
|
||||
<action>Workflow: {{workflow_name}}</action>
|
||||
<action>Analysis Date: {timestamp}</action>
|
||||
<action>Optimization Focus: {{optimization_focus}}</action>
|
||||
|
||||
<action>### Summary</action>
|
||||
<action>- Total Issues Found: [count]</action>
|
||||
<action>- Critical: [count]</action>
|
||||
<action>- High Priority: [count]</action>
|
||||
<action>- Medium Priority: [count]</action>
|
||||
<action>- Low Priority: [count]</action>
|
||||
|
||||
<action>### Detailed Findings</action>
|
||||
<action>Present each issue with:</action>
|
||||
<action>- Priority level</action>
|
||||
<action>- Issue description</action>
|
||||
<action>- Current state</action>
|
||||
<action>- Recommended solution</action>
|
||||
<action>- Expected impact</action>
|
||||
|
||||
<action>### Performance Opportunities</action>
|
||||
<action>List specific performance improvements with estimated impact</action>
|
||||
|
||||
<action>### Best Practice Violations</action>
|
||||
<action>List n8n best practices not being followed</action>
|
||||
|
||||
<action>Ask: "Would you like me to apply these optimizations?"</action>
|
||||
<action>Present numbered options:
|
||||
1. Yes - Apply all recommendations
|
||||
2. Yes - Apply only critical and high priority
|
||||
3. Yes - Let me choose which to apply
|
||||
4. No - Just provide the report
|
||||
5. Explain more - I need more details first
|
||||
</action>
|
||||
<action>WAIT for user selection (1-5)</action>
|
||||
|
||||
<check if="selection is 5">
|
||||
<action>Ask: "Which recommendations would you like explained?"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Provide detailed explanation</action>
|
||||
<action>Repeat this step</action>
|
||||
</check>
|
||||
|
||||
<check if="selection is 3">
|
||||
<action>Present recommendations as numbered list</action>
|
||||
<action>Ask: "Select recommendations to apply (comma-separated numbers)"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Store selected recommendations</action>
|
||||
</check>
|
||||
|
||||
<check if="selection is 1, 2, or 3">
|
||||
<action>Store true in {{apply_changes}}</action>
|
||||
<action>Proceed to Step 8</action>
|
||||
</check>
|
||||
|
||||
<check if="selection is 4">
|
||||
<action>Store false in {{apply_changes}}</action>
|
||||
<action>Skip to Step 11 (provide report only)</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="8" goal="Create Backup">
|
||||
<check if="{{apply_changes}} is true">
|
||||
<action>Create backup of original workflow</action>
|
||||
<action>Save backup to: {{workflow_file}}.backup-{timestamp}</action>
|
||||
<action>Store true in {{backup_created}}</action>
|
||||
<action>Inform user: "Backup created at {{workflow_file}}.backup-{timestamp}"</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="9" goal="Apply Optimizations">
|
||||
<critical>Follow guidelines from {{helpers}} for proper node configuration</critical>
|
||||
|
||||
<action>Load current workflow JSON into memory</action>
|
||||
|
||||
<action>Apply each selected recommendation:</action>
|
||||
|
||||
<substep>Performance Optimizations:
|
||||
- Remove unnecessary nodes
|
||||
- Optimize data transformations
|
||||
- Add batch processing where applicable
|
||||
- Consolidate redundant API calls
|
||||
- Optimize node execution order
|
||||
- Add parallel execution where possible
|
||||
</substep>
|
||||
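<!-- One possible batch-processing addition (node name and batch size are invented; assumes the standard Split In Batches / Loop Over Items node). One of its outputs loops over each batch and the other fires once all batches are done; check the output order for the typeVersion you use.
{
  "id": "batch-contacts",
  "name": "Process Contacts in Batches",
  "type": "n8n-nodes-base.splitInBatches",
  "typeVersion": 1,
  "position": [660, 300],
  "parameters": { "batchSize": 50, "options": {} }
}
-->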
|
||||
<substep>Error Handling Improvements:
|
||||
- Add retry logic to critical nodes
|
||||
- Set appropriate continueOnFail values
|
||||
- Add error workflows if needed
|
||||
- Configure timeouts
|
||||
- Add error notifications
|
||||
</substep>
|
||||
|
||||
<substep>Code Quality Improvements:
|
||||
- Refactor Set node configurations
|
||||
- Optimize Code node implementations
|
||||
- Improve expression syntax
|
||||
- Fix data type handling
|
||||
- Replace hardcoded values with variables
|
||||
- Improve node naming
|
||||
</substep>
|
||||
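<!-- Illustrative before/after for replacing a hardcoded value (the channel name and variable are invented; $env assumes the value is exposed as an environment variable on the n8n instance):
Before: "channel": "#ops-alerts"
After:  "channel": "={{ $env.ALERT_CHANNEL }}"
Reading it from an upstream "Workflow Config" Set node also works: "={{ $('Workflow Config').item.json.alertChannel }}"
-->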
|
||||
<substep>Structure Improvements:
|
||||
- Reorganize node positions
|
||||
- Simplify complex branches
|
||||
- Remove duplicate logic
|
||||
- Optimize merge points
|
||||
- Improve connection patterns
|
||||
</substep>
|
||||
|
||||
<substep>Best Practice Applications:
|
||||
- Fix credential usage
|
||||
- Address security issues
|
||||
- Update deprecated nodes
|
||||
- Improve data handling
|
||||
- Update workflow settings
|
||||
</substep>
|
||||
|
||||
<action>Validate optimized workflow:</action>
|
||||
<action>- All node IDs remain unique</action>
|
||||
<action>- All connections are valid</action>
|
||||
<action>- No functionality is lost</action>
|
||||
<action>- All improvements are applied</action>
|
||||
</step>
|
||||
|
||||
<step n="10" goal="Save Optimized Workflow">
|
||||
<action>Save optimized workflow to {{workflow_file}}</action>
|
||||
</step>
|
||||
|
||||
<step n="11" goal="Validate JSON Syntax">
|
||||
<check if="{{apply_changes}} is true">
|
||||
<critical>NEVER delete the file if validation fails - always fix syntax errors</critical>
|
||||
|
||||
<action>Run: node -e "JSON.parse(require('fs').readFileSync('{{workflow_file}}', 'utf8')); console.log('✓ Valid JSON')"</action>
|
||||
|
||||
<check if="validation fails (exit code 1)">
|
||||
<action>Read the error message carefully - it shows the syntax error and position</action>
|
||||
<action>Open the file and navigate to the error location</action>
|
||||
<action>Fix the syntax error (add missing comma, bracket, or quote as indicated)</action>
|
||||
<action>Save the file</action>
|
||||
<action>Re-run validation with the same command</action>
|
||||
<action>Repeat until validation passes</action>
|
||||
</check>
|
||||
|
||||
<action>Once validation passes, confirm with user: "Workflow optimized successfully at {{workflow_file}}"</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="12" goal="Provide Optimization Summary">
|
||||
<action>Provide comprehensive summary:</action>
|
||||
|
||||
<check if="{{apply_changes}} is true">
|
||||
<action>### Optimizations Applied</action>
|
||||
<action>- Total changes: [count]</action>
|
||||
<action>- Performance improvements: [list]</action>
|
||||
<action>- Error handling added: [list]</action>
|
||||
<action>- Code quality fixes: [list]</action>
|
||||
<action>- Structure improvements: [list]</action>
|
||||
<action>- Best practices applied: [list]</action>
|
||||
<action>- Backup location: {{workflow_file}}.backup-{timestamp}</action>
|
||||
|
||||
<action>### Expected Improvements</action>
|
||||
<action>- Execution speed: [estimated improvement]</action>
|
||||
<action>- Reliability: [improvements]</action>
|
||||
<action>- Maintainability: [improvements]</action>
|
||||
<action>- Security: [improvements]</action>
|
||||
|
||||
<action>### Testing Recommendations</action>
|
||||
<action>1. Import optimized workflow into n8n</action>
|
||||
<action>2. Test with sample data</action>
|
||||
<action>3. Compare execution times with original</action>
|
||||
<action>4. Verify all functionality works correctly</action>
|
||||
<action>5. Monitor error rates</action>
|
||||
</check>
|
||||
|
||||
<check if="{{apply_changes}} is false">
|
||||
<action>### Optimization Report</action>
|
||||
<action>Report saved with all recommendations</action>
|
||||
<action>No changes applied to workflow</action>
|
||||
<action>Review recommendations and apply manually if desired</action>
|
||||
</check>
|
||||
|
||||
<action>Ask: "Would you like additional help?"</action>
|
||||
<action>Present numbered options:
|
||||
1. No - I'm done
|
||||
2. Yes - Explain specific optimizations
|
||||
3. Yes - Optimize another workflow
|
||||
4. Revert - Restore from backup
|
||||
</action>
|
||||
<action>WAIT for user selection (1-4)</action>
|
||||
|
||||
<check if="selection is 2">
|
||||
<action>Ask: "Which optimization would you like explained?"</action>
|
||||
<action>WAIT for user input</action>
|
||||
<action>Provide detailed explanation</action>
|
||||
</check>
|
||||
|
||||
<check if="selection is 3">
|
||||
<action>Return to Step 1 for new workflow</action>
|
||||
</check>
|
||||
|
||||
<check if="selection is 4">
|
||||
<action>Restore workflow from backup</action>
|
||||
<action>Confirm restoration to user</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="13" goal="Validate Content">
|
||||
<invoke-task>Validate against checklist at {{validation}} using {{bmad_folder}}/core/tasks/validate-workflow.xml</invoke-task>
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
```
|
||||
|
|
@@ -0,0 +1,28 @@
name: optimize-workflow
description: "Review and improve existing n8n workflows for performance and best practices"
author: "Saif"

# Workflow components
installed_path: "{project-root}/{bmad_folder}/autominator/workflows/optimize-workflow"
shared_path: "{project-root}/{bmad_folder}/autominator/workflows/_shared"
instructions: "{installed_path}/instructions.md"
validation: "{installed_path}/checklist.md"

# Shared resources
helpers: "{shared_path}/n8n-helpers.md"
templates: "{shared_path}/n8n-templates.yaml"
platform_mappings: "{shared_path}/platform-mappings.yaml"

# Variables
variables:
  workflow_file: "" # Will be elicited
  optimization_focus: [] # Will be elicited
  issues_found: [] # Will be identified
  recommendations: [] # Will be generated
  apply_changes: false # Will be elicited
  backup_created: false # Will be set

default_output_file: "" # Will use existing file location

standalone: true
web_bundle: false
|
||||
|
|
@ -8,7 +8,7 @@ const chalk = require('chalk');
|
|||
*
|
||||
* @param {Object} options - Installation options
|
||||
* @param {string} options.projectRoot - The root directory of the target project
|
||||
* @param {Object} options.config - Module configuration from install-config.yaml
|
||||
* @param {Object} options.config - Module configuration from module.yaml
|
||||
* @param {Object} options.coreConfig - Core configuration containing user_name
|
||||
* @param {Array<string>} options.installedIDEs - Array of IDE codes that were installed
|
||||
* @param {Object} options.logger - Logger instance for output
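To make the hook contract above concrete, here is a minimal sketch of what a module's optional `_module-installer/installer.js` could look like. The exported function name (`install`) and the example paths are assumptions for illustration; only the `options` fields and the boolean success return come from the JSDoc above.

```js
// Hypothetical installer hook sketch - the export name `install` is an assumption;
// the options fields mirror the documented hook parameters.
const path = require('node:path');

async function install({ projectRoot, config, coreConfig, installedIDEs, logger }) {
  logger.log(`Running custom install for ${coreConfig?.user_name || 'user'} in ${projectRoot}`);
  logger.log(`Module config keys: ${Object.keys(config || {}).join(', ') || 'none'}`);
  logger.log(`IDEs configured: ${(installedIDEs || []).join(', ') || 'none'}`);

  // Example custom step: compute where module assets would live (illustrative path only).
  const assetsDir = path.join(projectRoot, 'bmad', 'my-module', 'assets');
  logger.log(`Assets would be copied to ${assetsDir}`);

  // Return true on success; returning false aborts the installation.
  return true;
}

module.exports = { install };
```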
@ -37,7 +37,7 @@ Production-ready examples in `/src/modules/bmb/reference/agents/`:
|
|||
|
||||
For installing standalone simple and expert agents, see:
|
||||
|
||||
- [Custom Agent Installation](/docs/custom-agent-installation.md)
|
||||
- [Custom Content Installation](/docs/custom-content-installation.md)
|
||||
|
||||
## Key Concepts
|
||||
|
||||
|
|
|
|||
|
|
@ -113,8 +113,8 @@ For a [module type] module, we'll create this structure:"
|
|||
│ └── [template-files]
|
||||
├── data/ # Module data files
|
||||
│ └── [data-files]
|
||||
├── module.yaml # Required
|
||||
├── _module-installer/ # Installation configuration
|
||||
│ ├── install-config.yaml # Required
|
||||
│ ├── installer.js # Optional
|
||||
│ └── assets/ # Optional install assets
|
||||
└── README.md # Module documentation
|
||||
|
|
|
|||
|
|
@ -184,7 +184,7 @@ Update module-plan.md with configuration section:
|
|||
|
||||
### Result Configuration Structure
|
||||
|
||||
The install-config.yaml will generate:
|
||||
The module.yaml will generate:
|
||||
- Module configuration at: {bmad_folder}/{module_code}/config.yaml
|
||||
- User settings stored as: [describe structure]
|
||||
````
|
||||
|
|
|
|||
|
|
@ -37,7 +37,7 @@ partyModeWorkflow: '{project-root}/{bmad_folder}/core/workflows/party-mode/workf
|
|||
## EXECUTION PROTOCOLS:
|
||||
|
||||
- 🎯 Use configuration plan from step 5
|
||||
- 💾 Create install-config.yaml with all fields
|
||||
- 💾 Create module.yaml with all fields
|
||||
- 📖 Add "step-08-installer" to stepsCompleted array before loading next step
|
||||
- 🚫 FORBIDDEN to load next step until user selects 'C'
|
||||
|
||||
|
|
@ -50,7 +50,7 @@ partyModeWorkflow: '{project-root}/{bmad_folder}/core/workflows/party-mode/workf
|
|||
|
||||
## STEP GOAL:
|
||||
|
||||
To create the module installer configuration (install-config.yaml) that defines how users will install and configure the module.
|
||||
To create the module installer configuration (module.yaml) that defines how users will install and configure the module.
|
||||
|
||||
## INSTALLER SETUP PROCESS:
|
||||
|
||||
|
|
@ -74,11 +74,11 @@ From step 5, we planned these configuration fields:
|
|||
Ensure \_module-installer directory exists
|
||||
Directory: {custom_module_location}/{module_name}/\_module-installer/
|
||||
|
||||
### 3. Create install-config.yaml
|
||||
### 3. Create module.yaml
|
||||
|
||||
"I'll create the install-config.yaml file based on your configuration plan. This is the core installer configuration file."
|
||||
"I'll create the module.yaml file based on your configuration plan. This is the core installer configuration file."
|
||||
|
||||
Create file: {custom_module_location}/{module_name}/\_module-installer/install-config.yaml from template {installConfigTemplate}
|
||||
Create file: {custom_module_location}/{module_name}/module.yaml from template {installConfigTemplate}
|
||||
|
||||
### 4. Handle Custom Installation Logic
|
||||
|
||||
|
|
@ -117,7 +117,7 @@ Update module-plan.md with installer section:
|
|||
|
||||
### Install Configuration
|
||||
|
||||
- File: \_module-installer/install-config.yaml
|
||||
- File: module.yaml
|
||||
- Module code: {module_name}
|
||||
- Default selected: false
|
||||
- Configuration fields: [count]
|
||||
|
|
@ -166,7 +166,7 @@ Display: **Select an Option:** [A] Advanced Elicitation [P] Party Mode [C] Conti
|
|||
|
||||
### ✅ SUCCESS:
|
||||
|
||||
- install-config.yaml created with all planned fields
|
||||
- module.yaml created with all planned fields
|
||||
- YAML syntax valid
|
||||
- Custom installation logic prepared (if needed)
|
||||
- Installer follows BMAD standards
|
||||
|
|
@ -174,7 +174,7 @@ Display: **Select an Option:** [A] Advanced Elicitation [P] Party Mode [C] Conti
|
|||
|
||||
### ❌ SYSTEM FAILURE:
|
||||
|
||||
- Not creating install-config.yaml
|
||||
- Not creating module.yaml
|
||||
- Invalid YAML syntax
|
||||
- Missing required fields
|
||||
- Not using proper path templates
|
||||
|
|
|
|||
|
|
@ -133,7 +133,8 @@ bmad install {module_name}
|
|||
├── tasks/ # Task files
|
||||
├── templates/ # Shared templates
|
||||
├── data/ # Module data
|
||||
├── _module-installer/ # Installation config
|
||||
├── _module-installer/ # Optional installer.js with a custom install routine
|
||||
├── module.yaml # YAML config and install questions
|
||||
└── README.md # This file
|
||||
```
|
||||
|
||||
|
|
|
|||
|
|
@ -210,6 +210,7 @@ workflow {workflow_name}
|
|||
├── data/ # ✅ Created
|
||||
├── _module-installer/ # ✅ Configured
|
||||
└── README.md # ✅ Complete
|
||||
└── module.yaml # ✅ Complete
|
||||
```
|
||||
|
||||
## Completion Criteria
|
||||
|
|
|
|||
|
|
@ -73,8 +73,8 @@ Expected Structure:
|
|||
├── templates/ [✅/❌]
|
||||
├── data/ [✅/❌]
|
||||
├── _module-installer/ [✅/❌]
|
||||
│ ├── install-config.yaml [✅/❌]
|
||||
│ └── installer.js [✅/N/A]
|
||||
├── module.yaml [✅/❌]
|
||||
└── README.md [✅/❌]
|
||||
```
|
||||
|
||||
|
|
@ -87,7 +87,7 @@ Expected Structure:
|
|||
"**2. Configuration Files Check**"
|
||||
|
||||
**Install Configuration:**
|
||||
Validate install-config.yaml
|
||||
Validate module.yaml
|
||||
|
||||
- [ ] YAML syntax valid
|
||||
- [ ] Module code matches folder name
|
||||
|
|
|
|||
|
|
@ -6,7 +6,7 @@
|
|||
/**
|
||||
* @param {Object} options - Installation options
|
||||
* @param {string} options.projectRoot - Project root directory
|
||||
* @param {Object} options.config - Module configuration from install-config.yaml
|
||||
* @param {Object} options.config - Module configuration from module.yaml
|
||||
* @param {Array} options.installedIDEs - List of IDE codes being configured
|
||||
* @param {Object} options.logger - Logger instance (log, warn, error methods)
|
||||
* @returns {boolean} - true if successful, false to abort installation
|
||||
|
|
|
|||
|
|
@ -13,15 +13,15 @@ This document provides the validation criteria used in step-11-validate.md to en
|
|||
- [ ] data/ - Module data
|
||||
- [ ] \_module-installer/ - Installation config
|
||||
- [ ] README.md - Module documentation
|
||||
- [ ] module.yaml - module config file
|
||||
|
||||
### Required Files in \_module-installer/
|
||||
### Optional File in \_module-installer/
|
||||
|
||||
- [ ] install-config.yaml - Installation configuration
|
||||
- [ ] installer.js - Custom logic (if needed)
|
||||
|
||||
## Configuration Validation
|
||||
|
||||
### install-config.yaml
|
||||
### module.yaml
|
||||
|
||||
- [ ] Valid YAML syntax
|
||||
- [ ] Module code matches folder name
|
||||
|
|
|
|||
|
|
@ -98,7 +98,7 @@ After getting the workflow name:
|
|||
Based on the module selection, confirm the target location:
|
||||
|
||||
- For bmb module: `{custom_workflow_location}` (defaults to `{bmad_folder}/custom/src/workflows`)
|
||||
- For other modules: Check their install-config.yaml for custom workflow locations
|
||||
- For other modules: Check their module.yaml for custom workflow locations
|
||||
- Confirm the exact folder path where the workflow will be created
|
||||
- Store the confirmed path as `{targetWorkflowPath}`
|
||||
|
||||
|
|
|
|||
|
|
@ -109,7 +109,7 @@ Create the workflow folder structure in the target location:
|
|||
```
|
||||
|
||||
For bmb module, this will be: `{bmad_folder}/custom/src/workflows/{workflow_name}/`
|
||||
For other modules, check their install-config.yaml for custom_workflow_location
|
||||
For other modules, check their module.yaml for custom_workflow_location
|
||||
|
||||
### 3. Generate workflow.md
|
||||
|
||||
|
|
|
|||
|
|
@ -129,8 +129,9 @@ bmgd/
|
|||
│ (Uses BMM workflows via cross-module references)
|
||||
├── templates/
|
||||
├── data/
|
||||
├── module.yaml
|
||||
└── _module-installer/
|
||||
└── install-config.yaml
|
||||
└── installer.js (optional)
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
|
|
|||
|
|
@ -9,7 +9,7 @@ const platformCodes = require(path.join(__dirname, '../../../../tools/cli/lib/pl
|
|||
*
|
||||
* @param {Object} options - Installation options
|
||||
* @param {string} options.projectRoot - The root directory of the target project
|
||||
* @param {Object} options.config - Module configuration from install-config.yaml
|
||||
* @param {Object} options.config - Module configuration from module.yaml
|
||||
* @param {Array<string>} options.installedIDEs - Array of IDE codes that were installed
|
||||
* @param {Object} options.logger - Logger instance for output
|
||||
* @returns {Promise<boolean>} - Success status
|
||||
|
|
|
|||
|
|
@ -5,7 +5,7 @@ const chalk = require('chalk');
|
|||
*
|
||||
* @param {Object} options - Installation options
|
||||
* @param {string} options.projectRoot - The root directory of the target project
|
||||
* @param {Object} options.config - Module configuration from install-config.yaml
|
||||
* @param {Object} options.config - Module configuration from module.yaml
|
||||
* @param {Object} options.logger - Logger instance for output
|
||||
* @param {Object} options.platformInfo - Platform metadata from global config
|
||||
* @returns {Promise<boolean>} - Success status
|
||||
|
|
|
|||
|
|
@ -5,7 +5,7 @@ const chalk = require('chalk');
|
|||
*
|
||||
* @param {Object} options - Installation options
|
||||
* @param {string} options.projectRoot - The root directory of the target project
|
||||
* @param {Object} options.config - Module configuration from install-config.yaml
|
||||
* @param {Object} options.config - Module configuration from module.yaml
|
||||
* @param {Object} options.logger - Logger instance for output
|
||||
* @returns {Promise<boolean>} - Success status
|
||||
*/
|
||||
|
|
|
|||
|
|
@ -8,7 +8,7 @@ const chalk = require('chalk');
|
|||
*
|
||||
* @param {Object} options - Installation options
|
||||
* @param {string} options.projectRoot - The root directory of the target project
|
||||
* @param {Object} options.config - Module configuration from install-config.yaml
|
||||
* @param {Object} options.config - Module configuration from module.yaml
|
||||
* @param {Array<string>} options.installedIDEs - Array of IDE codes that were installed
|
||||
* @param {Object} options.logger - Logger instance for output
|
||||
* @returns {Promise<boolean>} - Success status
|
||||
|
|
|
|||
|
|
@ -98,7 +98,7 @@ The installer is a multi-stage system that handles agent compilation, IDE integr
|
|||
```
|
||||
1. Collect User Input
|
||||
- Target directory, modules, IDEs
|
||||
- Custom module configuration (via install-config.yaml)
|
||||
- Custom module configuration (via module.yaml)
|
||||
|
||||
2. Pre-Installation
|
||||
- Validate target, check conflicts, backup existing installations
|
||||
|
|
@ -183,12 +183,12 @@ The installer supports **15 IDE environments** through a base-derived architectu
|
|||
|
||||
### Custom Module Configuration
|
||||
|
||||
Modules define interactive configuration menus via `install-config.yaml` files in their `_module-installer/` directories.
|
||||
Modules define interactive configuration menus via `module.yaml` files at the root of each module.
|
||||
|
||||
**Config File Location**:
|
||||
|
||||
- Core: `src/core/_module-installer/install-config.yaml`
|
||||
- Modules: `src/modules/{module}/_module-installer/install-config.yaml`
|
||||
- Core: `src/core/module.yaml`
|
||||
- Modules: `src/modules/{module}/module.yaml`
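For orientation, a minimal `module.yaml` might look like the sketch below. The field names are illustrative assumptions, not the installer's exact schema; they only reflect what this section describes (a module code matching the folder name, a default-selected flag, and interactive configuration questions).

```yaml
# Hypothetical module.yaml sketch - field names are illustrative, not the exact schema
code: my-module # module code; should match the module folder name
name: "My Custom Module"
default_selected: false # not pre-selected in the installer menu

# Interactive configuration questions asked during installation
config_fields:
  - name: output_folder
    prompt: "Where should generated documents be written?"
    default: "docs"
```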
**Configuration Types**:
|
||||
|
||||
|
|
|
|||
|
|
@ -183,24 +183,28 @@ class ConfigCollector {
|
|||
|
||||
// Load module's install config schema
|
||||
// First, try the standard src/modules location
|
||||
let installerConfigPath = path.join(getModulePath(moduleName), '_module-installer', 'install-config.yaml');
|
||||
let installerConfigPath = path.join(getModulePath(moduleName), '_module-installer', 'module.yaml');
|
||||
let moduleConfigPath = path.join(getModulePath(moduleName), 'module.yaml');
|
||||
|
||||
// If not found in src/modules, we need to find it by searching the project
|
||||
if (!(await fs.pathExists(installerConfigPath))) {
|
||||
if (!(await fs.pathExists(installerConfigPath)) && !(await fs.pathExists(moduleConfigPath))) {
|
||||
// Use the module manager to find the module source
|
||||
const { ModuleManager } = require('../modules/manager');
|
||||
const moduleManager = new ModuleManager();
|
||||
const moduleSourcePath = await moduleManager.findModuleSource(moduleName);
|
||||
|
||||
if (moduleSourcePath) {
|
||||
installerConfigPath = path.join(moduleSourcePath, '_module-installer', 'install-config.yaml');
|
||||
installerConfigPath = path.join(moduleSourcePath, '_module-installer', 'module.yaml');
|
||||
moduleConfigPath = path.join(moduleSourcePath, 'module.yaml');
|
||||
}
|
||||
}
|
||||
|
||||
let configPath = null;
|
||||
let isCustomModule = false;
|
||||
|
||||
if (await fs.pathExists(installerConfigPath)) {
|
||||
if (await fs.pathExists(moduleConfigPath)) {
|
||||
configPath = moduleConfigPath;
|
||||
} else if (await fs.pathExists(installerConfigPath)) {
|
||||
configPath = installerConfigPath;
|
||||
} else {
|
||||
// Check if this is a custom module with custom.yaml
|
||||
|
|
@ -448,22 +452,26 @@ class ConfigCollector {
|
|||
}
|
||||
// Load module's config
|
||||
// First, try the standard src/modules location
|
||||
let installerConfigPath = path.join(getModulePath(moduleName), '_module-installer', 'install-config.yaml');
|
||||
let installerConfigPath = path.join(getModulePath(moduleName), '_module-installer', 'module.yaml');
|
||||
let moduleConfigPath = path.join(getModulePath(moduleName), 'module.yaml');
|
||||
|
||||
// If not found in src/modules, we need to find it by searching the project
|
||||
if (!(await fs.pathExists(installerConfigPath))) {
|
||||
if (!(await fs.pathExists(installerConfigPath)) && !(await fs.pathExists(moduleConfigPath))) {
|
||||
// Use the module manager to find the module source
|
||||
const { ModuleManager } = require('../modules/manager');
|
||||
const moduleManager = new ModuleManager();
|
||||
const moduleSourcePath = await moduleManager.findModuleSource(moduleName);
|
||||
|
||||
if (moduleSourcePath) {
|
||||
installerConfigPath = path.join(moduleSourcePath, '_module-installer', 'install-config.yaml');
|
||||
installerConfigPath = path.join(moduleSourcePath, '_module-installer', 'module.yaml');
|
||||
moduleConfigPath = path.join(moduleSourcePath, 'module.yaml');
|
||||
}
|
||||
}
|
||||
|
||||
let configPath = null;
|
||||
if (await fs.pathExists(installerConfigPath)) {
|
||||
if (await fs.pathExists(moduleConfigPath)) {
|
||||
configPath = moduleConfigPath;
|
||||
} else if (await fs.pathExists(installerConfigPath)) {
|
||||
configPath = installerConfigPath;
|
||||
} else {
|
||||
// No config for this module
|
||||
|
|
|
|||
|
|
@ -0,0 +1,239 @@
|
|||
/**
|
||||
* Custom Module Source Cache
|
||||
* Caches custom module sources under _cfg/custom/ to ensure they're never lost
|
||||
* and can be checked into source control
|
||||
*/
|
||||
|
||||
const fs = require('fs-extra');
|
||||
const path = require('node:path');
|
||||
const crypto = require('node:crypto');
|
||||
|
||||
class CustomModuleCache {
|
||||
constructor(bmadDir) {
|
||||
this.bmadDir = bmadDir;
|
||||
this.customCacheDir = path.join(bmadDir, '_cfg', 'custom');
|
||||
this.manifestPath = path.join(this.customCacheDir, 'cache-manifest.yaml');
|
||||
}
|
||||
|
||||
/**
|
||||
* Ensure the custom cache directory exists
|
||||
*/
|
||||
async ensureCacheDir() {
|
||||
await fs.ensureDir(this.customCacheDir);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get cache manifest
|
||||
*/
|
||||
async getCacheManifest() {
|
||||
if (!(await fs.pathExists(this.manifestPath))) {
|
||||
return {};
|
||||
}
|
||||
|
||||
const content = await fs.readFile(this.manifestPath, 'utf8');
|
||||
const yaml = require('js-yaml');
|
||||
return yaml.load(content) || {};
|
||||
}
|
||||
|
||||
/**
|
||||
* Update cache manifest
|
||||
*/
|
||||
async updateCacheManifest(manifest) {
|
||||
const yaml = require('js-yaml');
|
||||
const content = yaml.dump(manifest, {
|
||||
indent: 2,
|
||||
lineWidth: -1,
|
||||
noRefs: true,
|
||||
sortKeys: false,
|
||||
});
|
||||
|
||||
await fs.writeFile(this.manifestPath, content);
|
||||
}
|
||||
|
||||
/**
|
||||
* Calculate hash of a file or directory
|
||||
*/
|
||||
async calculateHash(sourcePath) {
|
||||
const hash = crypto.createHash('sha256');
|
||||
|
||||
const isDir = (await fs.stat(sourcePath)).isDirectory();
|
||||
|
||||
if (isDir) {
|
||||
// For directories, hash all files
|
||||
const files = [];
|
||||
async function collectFiles(dir) {
|
||||
const entries = await fs.readdir(dir, { withFileTypes: true });
|
||||
for (const entry of entries) {
|
||||
if (entry.isFile()) {
|
||||
files.push(path.join(dir, entry.name));
|
||||
} else if (entry.isDirectory() && !entry.name.startsWith('.')) {
|
||||
await collectFiles(path.join(dir, entry.name));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
await collectFiles(sourcePath);
|
||||
files.sort(); // Ensure consistent order
|
||||
|
||||
for (const file of files) {
|
||||
const content = await fs.readFile(file);
|
||||
const relativePath = path.relative(sourcePath, file);
|
||||
hash.update(relativePath + '|' + content.toString('base64'));
|
||||
}
|
||||
} else {
|
||||
// For single files
|
||||
const content = await fs.readFile(sourcePath);
|
||||
hash.update(content);
|
||||
}
|
||||
|
||||
return hash.digest('hex');
|
||||
}
|
||||
|
||||
/**
|
||||
* Cache a custom module source
|
||||
* @param {string} moduleId - Module ID
|
||||
* @param {string} sourcePath - Original source path
|
||||
* @param {Object} metadata - Additional metadata to store
|
||||
* @returns {Object} Cached module info
|
||||
*/
|
||||
async cacheModule(moduleId, sourcePath, metadata = {}) {
|
||||
await this.ensureCacheDir();
|
||||
|
||||
const cacheDir = path.join(this.customCacheDir, moduleId);
|
||||
const cacheManifest = await this.getCacheManifest();
|
||||
|
||||
// Check if already cached and unchanged
|
||||
if (cacheManifest[moduleId]) {
|
||||
const cached = cacheManifest[moduleId];
|
||||
if (cached.originalHash && cached.originalHash === (await this.calculateHash(sourcePath))) {
|
||||
// Source unchanged, return existing cache info
|
||||
return {
|
||||
moduleId,
|
||||
cachePath: cacheDir,
|
||||
...cached,
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
// Remove existing cache if it exists
|
||||
if (await fs.pathExists(cacheDir)) {
|
||||
await fs.remove(cacheDir);
|
||||
}
|
||||
|
||||
// Copy module to cache
|
||||
await fs.copy(sourcePath, cacheDir, {
|
||||
filter: (src) => {
|
||||
const relative = path.relative(sourcePath, src);
|
||||
// Skip node_modules, .git, and other common ignore patterns
|
||||
return !relative.includes('node_modules') && !relative.startsWith('.git') && !relative.startsWith('.DS_Store');
|
||||
},
|
||||
});
|
||||
|
||||
// Calculate hash of the source
|
||||
const sourceHash = await this.calculateHash(sourcePath);
|
||||
const cacheHash = await this.calculateHash(cacheDir);
|
||||
|
||||
// Update manifest - don't store originalPath for source control friendliness
|
||||
cacheManifest[moduleId] = {
|
||||
originalHash: sourceHash,
|
||||
cacheHash: cacheHash,
|
||||
cachedAt: new Date().toISOString(),
|
||||
...metadata,
|
||||
};
|
||||
|
||||
await this.updateCacheManifest(cacheManifest);
|
||||
|
||||
return {
|
||||
moduleId,
|
||||
cachePath: cacheDir,
|
||||
...cacheManifest[moduleId],
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Get cached module info
|
||||
* @param {string} moduleId - Module ID
|
||||
* @returns {Object|null} Cached module info or null
|
||||
*/
|
||||
async getCachedModule(moduleId) {
|
||||
const cacheManifest = await this.getCacheManifest();
|
||||
const cached = cacheManifest[moduleId];
|
||||
|
||||
if (!cached) {
|
||||
return null;
|
||||
}
|
||||
|
||||
const cacheDir = path.join(this.customCacheDir, moduleId);
|
||||
|
||||
if (!(await fs.pathExists(cacheDir))) {
|
||||
// Cache dir missing, remove from manifest
|
||||
delete cacheManifest[moduleId];
|
||||
await this.updateCacheManifest(cacheManifest);
|
||||
return null;
|
||||
}
|
||||
|
||||
// Verify cache integrity
|
||||
const currentCacheHash = await this.calculateHash(cacheDir);
|
||||
if (currentCacheHash !== cached.cacheHash) {
|
||||
console.warn(`Warning: Cache integrity check failed for ${moduleId}`);
|
||||
}
|
||||
|
||||
return {
|
||||
moduleId,
|
||||
cachePath: cacheDir,
|
||||
...cached,
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Get all cached modules
|
||||
* @returns {Array} Array of cached module info
|
||||
*/
|
||||
async getAllCachedModules() {
|
||||
const cacheManifest = await this.getCacheManifest();
|
||||
const cached = [];
|
||||
|
||||
for (const [moduleId, info] of Object.entries(cacheManifest)) {
|
||||
const cachedModule = await this.getCachedModule(moduleId);
|
||||
if (cachedModule) {
|
||||
cached.push(cachedModule);
|
||||
}
|
||||
}
|
||||
|
||||
return cached;
|
||||
}
|
||||
|
||||
/**
|
||||
* Remove a cached module
|
||||
* @param {string} moduleId - Module ID to remove
|
||||
*/
|
||||
async removeCachedModule(moduleId) {
|
||||
const cacheManifest = await this.getCacheManifest();
|
||||
const cacheDir = path.join(this.customCacheDir, moduleId);
|
||||
|
||||
// Remove cache directory
|
||||
if (await fs.pathExists(cacheDir)) {
|
||||
await fs.remove(cacheDir);
|
||||
}
|
||||
|
||||
// Remove from manifest
|
||||
delete cacheManifest[moduleId];
|
||||
await this.updateCacheManifest(cacheManifest);
|
||||
}
|
||||
|
||||
/**
|
||||
* Sync cached modules with a list of module IDs
|
||||
* @param {Array<string>} moduleIds - Module IDs to keep
|
||||
*/
|
||||
async syncCache(moduleIds) {
|
||||
const cached = await this.getAllCachedModules();
|
||||
|
||||
for (const cachedModule of cached) {
|
||||
if (!moduleIds.includes(cachedModule.moduleId)) {
|
||||
await this.removeCachedModule(cachedModule.moduleId);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = { CustomModuleCache };
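For reference, a usage sketch of this class follows. The require path and the module/source locations are assumptions for illustration; the method calls (`cacheModule`, `getCachedModule`) and the returned fields match the implementation above.

```js
// Hypothetical usage of CustomModuleCache; paths and the require filename are assumptions.
const path = require('node:path');
const { CustomModuleCache } = require('./custom-module-cache');

async function demo() {
  const cache = new CustomModuleCache(path.resolve('bmad'));

  // Copy a custom module source into bmad/_cfg/custom/<id> and record its hashes.
  const cached = await cache.cacheModule('my-module', path.resolve('../my-module-src'), {
    name: 'My Custom Module',
  });
  console.log(`Cached at ${cached.cachePath}`);

  // Later runs can resolve the module from the cache even if the original source moved.
  const info = await cache.getCachedModule('my-module');
  if (info) {
    console.log(`Cache entry from ${info.cachedAt}, hash ${info.cacheHash.slice(0, 8)}...`);
  }
}

demo().catch(console.error);
```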
@ -17,6 +17,7 @@ class Detector {
|
|||
hasCore: false,
|
||||
modules: [],
|
||||
ides: [],
|
||||
customModules: [],
|
||||
manifest: null,
|
||||
};
|
||||
|
||||
|
|
@ -32,6 +33,10 @@ class Detector {
|
|||
result.manifest = manifestData;
|
||||
result.version = manifestData.version;
|
||||
result.installed = true;
|
||||
// Copy custom modules if they exist
|
||||
if (manifestData.customModules) {
|
||||
result.customModules = manifestData.customModules;
|
||||
}
|
||||
}
|
||||
|
||||
// Check for core
|
||||
|
|
@ -275,10 +280,9 @@ class Detector {
|
|||
hasV6Installation = true;
|
||||
// Don't break - continue scanning to be thorough
|
||||
} else {
|
||||
// Not V6+, check if folder name contains "bmad" (case insensitive)
|
||||
const nameLower = name.toLowerCase();
|
||||
if (nameLower.includes('bmad')) {
|
||||
// Potential V4 legacy folder
|
||||
// Not V6+, check if this is the exact V4 folder name "bmad-method"
|
||||
if (name === 'bmad-method') {
|
||||
// This is the V4 default folder - flag it as legacy
|
||||
potentialV4Folders.push(fullPath);
|
||||
}
|
||||
}
|
||||
|
|
|
|||
|
|
@ -22,6 +22,7 @@ const path = require('node:path');
|
|||
const fs = require('fs-extra');
|
||||
const chalk = require('chalk');
|
||||
const ora = require('ora');
|
||||
const inquirer = require('inquirer');
|
||||
const { Detector } = require('./detector');
|
||||
const { Manifest } = require('./manifest');
|
||||
const { ModuleManager } = require('../modules/manager');
|
||||
|
|
@ -129,7 +130,7 @@ class Installer {
|
|||
*/
|
||||
async copyFileWithPlaceholderReplacement(sourcePath, targetPath, bmadFolderName) {
|
||||
// List of text file extensions that should have placeholder replacement
|
||||
const textExtensions = ['.md', '.yaml', '.yml', '.txt', '.json', '.js', '.ts', '.html', '.css', '.sh', '.bat', '.csv'];
|
||||
const textExtensions = ['.md', '.yaml', '.yml', '.txt', '.json', '.js', '.ts', '.html', '.css', '.sh', '.bat', '.csv', '.xml'];
|
||||
const ext = path.extname(sourcePath).toLowerCase();
|
||||
|
||||
// Check if this is a text file that might contain placeholders
|
||||
|
|
@ -750,13 +751,81 @@ If AgentVibes party mode is enabled, immediately trigger TTS with agent's voice:
|
|||
spinner.text = 'Creating directory structure...';
|
||||
await this.createDirectoryStructure(bmadDir);
|
||||
|
||||
// Resolve dependencies for selected modules
|
||||
spinner.text = 'Resolving dependencies...';
|
||||
// Get project root
|
||||
const projectRoot = getProjectRoot();
|
||||
const modulesToInstall = config.installCore ? ['core', ...config.modules] : config.modules;
|
||||
|
||||
// Step 1: Install core module first (if requested)
|
||||
if (config.installCore) {
|
||||
spinner.start('Installing BMAD core...');
|
||||
await this.installCoreWithDependencies(bmadDir, { core: {} });
|
||||
spinner.succeed('Core installed');
|
||||
|
||||
// Generate core config file
|
||||
await this.generateModuleConfigs(bmadDir, { core: config.coreConfig || {} });
|
||||
}
|
||||
|
||||
// Custom content is already handled in UI before module selection
|
||||
let finalCustomContent = config.customContent;
|
||||
|
||||
// Step 3: Prepare modules list including cached custom modules
|
||||
let allModules = [...(config.modules || [])];
|
||||
|
||||
// During quick update, we might have custom module sources from the manifest
|
||||
if (config._customModuleSources) {
|
||||
// Add custom modules from stored sources
|
||||
for (const [moduleId, customInfo] of config._customModuleSources) {
|
||||
if (!allModules.includes(moduleId) && (await fs.pathExists(customInfo.sourcePath))) {
|
||||
allModules.push(moduleId);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Add cached custom modules
|
||||
if (finalCustomContent && finalCustomContent.cachedModules) {
|
||||
for (const cachedModule of finalCustomContent.cachedModules) {
|
||||
if (!allModules.includes(cachedModule.id)) {
|
||||
allModules.push(cachedModule.id);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Regular custom content from user input (non-cached)
|
||||
if (finalCustomContent && finalCustomContent.selected && finalCustomContent.selectedFiles) {
|
||||
// Add custom modules to the installation list
|
||||
for (const customFile of finalCustomContent.selectedFiles) {
|
||||
const { CustomHandler } = require('../custom/handler');
|
||||
const customHandler = new CustomHandler();
|
||||
const customInfo = await customHandler.getCustomInfo(customFile, projectDir);
|
||||
if (customInfo && customInfo.id) {
|
||||
allModules.push(customInfo.id);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Don't include core again if already installed
|
||||
if (config.installCore) {
|
||||
allModules = allModules.filter((m) => m !== 'core');
|
||||
}
|
||||
|
||||
const modulesToInstall = allModules;
|
||||
|
||||
// For dependency resolution, we need to pass the project root
|
||||
const resolution = await this.dependencyResolver.resolve(projectRoot, config.modules || [], { verbose: config.verbose });
|
||||
// Create a temporary module manager that knows about custom content locations
|
||||
const tempModuleManager = new ModuleManager({
|
||||
scanProjectForModules: true,
|
||||
bmadDir: bmadDir, // Pass bmadDir so we can check cache
|
||||
});
|
||||
|
||||
// Make sure custom modules are discoverable
|
||||
if (config.customContent && config.customContent.selected && config.customContent.selectedFiles) {
|
||||
// The dependency resolver needs to know about these modules
|
||||
// We'll handle custom modules separately in the installation loop
|
||||
}
|
||||
|
||||
const resolution = await this.dependencyResolver.resolve(projectRoot, allModules, {
|
||||
verbose: config.verbose,
|
||||
moduleManager: tempModuleManager,
|
||||
});
|
||||
|
||||
if (config.verbose) {
|
||||
spinner.succeed('Dependencies resolved');
|
||||
|
|
@ -764,24 +833,159 @@ If AgentVibes party mode is enabled, immediately trigger TTS with agent's voice:
|
|||
spinner.succeed('Dependencies resolved');
|
||||
}
|
||||
|
||||
// Install core if requested or if dependencies require it
|
||||
if (config.installCore || resolution.byModule.core) {
|
||||
spinner.start('Installing BMAD core...');
|
||||
await this.installCoreWithDependencies(bmadDir, resolution.byModule.core);
|
||||
spinner.succeed('Core installed');
|
||||
}
|
||||
// Core is already installed above, skip if included in resolution
|
||||
|
||||
// Install modules with their dependencies
|
||||
if (config.modules && config.modules.length > 0) {
|
||||
for (const moduleName of config.modules) {
|
||||
if (allModules && allModules.length > 0) {
|
||||
const installedModuleNames = new Set();
|
||||
|
||||
for (const moduleName of allModules) {
|
||||
// Skip if already installed
|
||||
if (installedModuleNames.has(moduleName)) {
|
||||
continue;
|
||||
}
|
||||
installedModuleNames.add(moduleName);
|
||||
|
||||
spinner.start(`Installing module: ${moduleName}...`);
|
||||
|
||||
// Check if this is a custom module
|
||||
let isCustomModule = false;
|
||||
let customInfo = null;
|
||||
let useCache = false;
|
||||
|
||||
// First check if we have a cached version
|
||||
if (finalCustomContent && finalCustomContent.cachedModules) {
|
||||
const cachedModule = finalCustomContent.cachedModules.find((m) => m.id === moduleName);
|
||||
if (cachedModule) {
|
||||
isCustomModule = true;
|
||||
customInfo = {
|
||||
id: moduleName,
|
||||
path: cachedModule.cachePath,
|
||||
config: {},
|
||||
};
|
||||
useCache = true;
|
||||
}
|
||||
}
|
||||
|
||||
// Then check if we have custom module sources from the manifest (for quick update)
|
||||
if (!isCustomModule && config._customModuleSources && config._customModuleSources.has(moduleName)) {
|
||||
customInfo = config._customModuleSources.get(moduleName);
|
||||
isCustomModule = true;
|
||||
|
||||
// Check if this is a cached module (source path starts with _cfg)
|
||||
if (customInfo.sourcePath && (customInfo.sourcePath.startsWith('_cfg') || customInfo.sourcePath.includes('_cfg/custom'))) {
|
||||
useCache = true;
|
||||
// Make sure we have the right path structure
|
||||
if (!customInfo.path) {
|
||||
customInfo.path = customInfo.sourcePath;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Finally check regular custom content
|
||||
if (!isCustomModule && finalCustomContent && finalCustomContent.selected && finalCustomContent.selectedFiles) {
|
||||
const { CustomHandler } = require('../custom/handler');
|
||||
const customHandler = new CustomHandler();
|
||||
for (const customFile of finalCustomContent.selectedFiles) {
|
||||
const info = await customHandler.getCustomInfo(customFile, projectDir);
|
||||
if (info && info.id === moduleName) {
|
||||
isCustomModule = true;
|
||||
customInfo = info;
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (isCustomModule && customInfo) {
|
||||
// Install custom module using CustomHandler but as a proper module
|
||||
const { CustomHandler } = require('../custom/handler');
|
||||
const customHandler = new CustomHandler();
|
||||
|
||||
// Install to module directory instead of custom directory
|
||||
const moduleTargetPath = path.join(bmadDir, moduleName);
|
||||
await fs.ensureDir(moduleTargetPath);
|
||||
|
||||
const result = await customHandler.install(
|
||||
customInfo.path,
|
||||
path.join(bmadDir, 'temp-custom'),
|
||||
{ ...config.coreConfig, ...customInfo.config, _bmadDir: bmadDir },
|
||||
(filePath) => {
|
||||
// Track installed files with correct path
|
||||
const relativePath = path.relative(path.join(bmadDir, 'temp-custom'), filePath);
|
||||
const finalPath = path.join(moduleTargetPath, relativePath);
|
||||
this.installedFiles.push(finalPath);
|
||||
},
|
||||
);
|
||||
|
||||
// Move from temp-custom to actual module directory
|
||||
const tempCustomPath = path.join(bmadDir, 'temp-custom');
|
||||
if (await fs.pathExists(tempCustomPath)) {
|
||||
const customDir = path.join(tempCustomPath, 'custom');
|
||||
if (await fs.pathExists(customDir)) {
|
||||
// Move contents to module directory
|
||||
const items = await fs.readdir(customDir);
|
||||
for (const item of items) {
|
||||
const srcPath = path.join(customDir, item);
|
||||
const destPath = path.join(moduleTargetPath, item);
|
||||
|
||||
// If destination exists, remove it first (or we could merge)
|
||||
if (await fs.pathExists(destPath)) {
|
||||
await fs.remove(destPath);
|
||||
}
|
||||
|
||||
await fs.move(srcPath, destPath);
|
||||
}
|
||||
}
|
||||
await fs.remove(tempCustomPath);
|
||||
}
|
||||
|
||||
// Create module config
|
||||
await this.generateModuleConfigs(bmadDir, { [moduleName]: { ...config.coreConfig, ...customInfo.config } });
|
||||
|
||||
// Store custom module info for later manifest update
|
||||
if (!config._customModulesToTrack) {
|
||||
config._customModulesToTrack = [];
|
||||
}
|
||||
|
||||
// For cached modules, use appropriate path handling
|
||||
let sourcePath;
|
||||
if (useCache) {
|
||||
// Check if we have cached modules info (from initial install)
|
||||
if (finalCustomContent && finalCustomContent.cachedModules) {
|
||||
sourcePath = finalCustomContent.cachedModules.find((m) => m.id === moduleName)?.relativePath;
|
||||
} else {
|
||||
// During update, the sourcePath is already cache-relative if it starts with _cfg
|
||||
sourcePath =
|
||||
customInfo.sourcePath && customInfo.sourcePath.startsWith('_cfg')
|
||||
? customInfo.sourcePath
|
||||
: path.relative(bmadDir, customInfo.path || customInfo.sourcePath);
|
||||
}
|
||||
} else {
|
||||
sourcePath = path.resolve(customInfo.path || customInfo.sourcePath);
|
||||
}
|
||||
|
||||
config._customModulesToTrack.push({
|
||||
id: customInfo.id,
|
||||
name: customInfo.name,
|
||||
sourcePath: sourcePath,
|
||||
installDate: new Date().toISOString(),
|
||||
});
|
||||
} else {
|
||||
// Regular module installation
|
||||
// Special case for core module
|
||||
if (moduleName === 'core') {
|
||||
await this.installCoreWithDependencies(bmadDir, resolution.byModule[moduleName]);
|
||||
} else {
|
||||
await this.installModuleWithDependencies(moduleName, bmadDir, resolution.byModule[moduleName]);
|
||||
}
|
||||
}
|
||||
|
||||
spinner.succeed(`Module installed: ${moduleName}`);
|
||||
}
|
||||
|
||||
// Install partial modules (only dependencies)
|
||||
for (const [module, files] of Object.entries(resolution.byModule)) {
|
||||
if (!config.modules.includes(module) && module !== 'core') {
|
||||
if (!allModules.includes(module) && module !== 'core') {
|
||||
const totalFiles =
|
||||
files.agents.length +
|
||||
files.tasks.length +
|
||||
|
|
@ -798,6 +1002,72 @@ If AgentVibes party mode is enabled, immediately trigger TTS with agent's voice:
|
|||
}
|
||||
}
|
||||
|
||||
// Install custom content if provided AND selected
|
||||
// Process custom content that wasn't installed as modules
|
||||
// This is now handled in the module installation loop above
|
||||
// This section is kept for backward compatibility with any custom content
|
||||
// that doesn't have a module structure
|
||||
const remainingCustomContent = [];
|
||||
if (
|
||||
config.customContent &&
|
||||
config.customContent.hasCustomContent &&
|
||||
config.customContent.customPath &&
|
||||
config.customContent.selected &&
|
||||
config.customContent.selectedFiles
|
||||
) {
|
||||
// Filter out custom modules that were already installed
|
||||
for (const customFile of config.customContent.selectedFiles) {
|
||||
const { CustomHandler } = require('../custom/handler');
|
||||
const customHandler = new CustomHandler();
|
||||
const customInfo = await customHandler.getCustomInfo(customFile, projectDir);
|
||||
|
||||
// Skip if this was installed as a module
|
||||
if (!customInfo || !customInfo.id || !allModules.includes(customInfo.id)) {
|
||||
remainingCustomContent.push(customFile);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (remainingCustomContent.length > 0) {
|
||||
spinner.start('Installing remaining custom content...');
|
||||
const { CustomHandler } = require('../custom/handler');
|
||||
const customHandler = new CustomHandler();
|
||||
|
||||
// Use the remaining files
|
||||
const customFiles = remainingCustomContent;
|
||||
|
||||
if (customFiles.length > 0) {
|
||||
console.log(chalk.cyan(`\n Found ${customFiles.length} custom content file(s):`));
|
||||
for (const customFile of customFiles) {
|
||||
const customInfo = await customHandler.getCustomInfo(customFile, projectDir);
|
||||
if (customInfo) {
|
||||
console.log(chalk.dim(` • ${customInfo.name} (${customInfo.relativePath})`));
|
||||
|
||||
// Install the custom content
|
||||
const result = await customHandler.install(
|
||||
customInfo.path,
|
||||
bmadDir,
|
||||
{ ...config.coreConfig, ...customInfo.config },
|
||||
(filePath) => {
|
||||
// Track installed files
|
||||
this.installedFiles.push(filePath);
|
||||
},
|
||||
);
|
||||
|
||||
if (result.errors.length > 0) {
|
||||
console.log(chalk.yellow(` ⚠️ ${result.errors.length} error(s) occurred`));
|
||||
for (const error of result.errors) {
|
||||
console.log(chalk.dim(` - ${error}`));
|
||||
}
|
||||
} else {
|
||||
console.log(chalk.green(` ✓ Installed ${result.agentsInstalled} agents, ${result.workflowsInstalled} workflows`));
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
spinner.succeed('Custom content installed');
|
||||
}
|
||||
|
||||
// Generate clean config.yaml files for each installed module
|
||||
spinner.start('Generating module configurations...');
|
||||
await this.generateModuleConfigs(bmadDir, moduleConfigs);
|
||||
|
|
@ -820,14 +1090,37 @@ If AgentVibes party mode is enabled, immediately trigger TTS with agent's voice:
|
|||
spinner.start('Generating workflow and agent manifests...');
|
||||
const manifestGen = new ManifestGenerator();
|
||||
|
||||
// Include preserved modules (from quick update) in the manifest
|
||||
const allModulesToList = config._preserveModules ? [...(config.modules || []), ...config._preserveModules] : config.modules || [];
|
||||
// For quick update, we need ALL installed modules in the manifest
|
||||
// Not just the ones being updated
|
||||
const allModulesForManifest = config._quickUpdate
|
||||
? config._existingModules || allModules || []
|
||||
: config._preserveModules
|
||||
? [...allModules, ...config._preserveModules]
|
||||
: allModules || [];
|
||||
|
||||
const manifestStats = await manifestGen.generateManifests(bmadDir, config.modules || [], this.installedFiles, {
|
||||
// For regular installs (including when called from quick update), use what we have
|
||||
let modulesForCsvPreserve;
|
||||
if (config._quickUpdate) {
|
||||
// Quick update - use existing modules or fall back to modules being updated
|
||||
modulesForCsvPreserve = config._existingModules || allModules || [];
|
||||
} else {
|
||||
// Regular install - use the modules we're installing plus any preserved ones
|
||||
modulesForCsvPreserve = config._preserveModules ? [...allModules, ...config._preserveModules] : allModules;
|
||||
}
|
||||
|
||||
const manifestStats = await manifestGen.generateManifests(bmadDir, allModulesForManifest, this.installedFiles, {
|
||||
ides: config.ides || [],
|
||||
preservedModules: config._preserveModules || [], // Scan these from installed bmad/ dir
|
||||
preservedModules: modulesForCsvPreserve, // Scan these from installed bmad/ dir
|
||||
});
|
||||
|
||||
// Add custom modules to manifest (now that it exists)
|
||||
if (config._customModulesToTrack && config._customModulesToTrack.length > 0) {
|
||||
spinner.text = 'Storing custom module sources...';
|
||||
for (const customModule of config._customModulesToTrack) {
|
||||
await this.manifest.addCustomModule(bmadDir, customModule);
|
||||
}
|
||||
}
|
||||
|
||||
spinner.succeed(
|
||||
`Manifests generated: ${manifestStats.workflows} workflows, ${manifestStats.agents} agents, ${manifestStats.tasks} tasks, ${manifestStats.tools} tools, ${manifestStats.files} files`,
|
||||
);
|
||||
|
|
@ -1090,6 +1383,30 @@ If AgentVibes party mode is enabled, immediately trigger TTS with agent's voice:
|
|||
const currentVersion = existingInstall.version;
|
||||
const newVersion = require(path.join(getProjectRoot(), 'package.json')).version;
|
||||
|
||||
// Check for custom modules with missing sources before update
|
||||
const customModuleSources = new Map();
|
||||
if (existingInstall.customModules) {
|
||||
for (const customModule of existingInstall.customModules) {
|
||||
customModuleSources.set(customModule.id, customModule);
|
||||
}
|
||||
}
|
||||
|
||||
if (customModuleSources.size > 0) {
|
||||
spinner.stop();
|
||||
console.log(chalk.yellow('\nChecking custom module sources before update...'));
|
||||
|
||||
const projectRoot = getProjectRoot();
|
||||
await this.handleMissingCustomSources(
|
||||
customModuleSources,
|
||||
bmadDir,
|
||||
projectRoot,
|
||||
'update',
|
||||
existingInstall.modules.map((m) => m.id),
|
||||
);
|
||||
|
||||
spinner.start('Preparing update...');
|
||||
}
|
||||
|
||||
if (config.dryRun) {
|
||||
spinner.stop();
|
||||
console.log(chalk.cyan('\n🔍 Update Preview (Dry Run)\n'));
|
||||
|
|
@ -1547,6 +1864,9 @@ If AgentVibes party mode is enabled, immediately trigger TTS with agent's voice:
|
|||
// DO NOT replace {project-root} - LLMs understand this placeholder at runtime
|
||||
// const processedContent = xmlContent.replaceAll('{project-root}', projectDir);
|
||||
|
||||
// Replace {bmad_folder} with actual folder name
|
||||
xmlContent = xmlContent.replaceAll('{bmad_folder}', this.bmadFolderName || 'bmad');
|
||||
|
||||
// Replace {agent_sidecar_folder} if configured
|
||||
const coreConfig = this.configCollector.collectedConfig.core || {};
|
||||
if (coreConfig.agent_sidecar_folder && xmlContent.includes('{agent_sidecar_folder}')) {
|
||||
|
|
@ -1858,6 +2178,24 @@ If AgentVibes party mode is enabled, immediately trigger TTS with agent's voice:
|
|||
throw new Error(`BMAD not installed at ${bmadDir}`);
|
||||
}
|
||||
|
||||
// Check for custom modules with missing sources
|
||||
const manifest = await this.manifest.read(bmadDir);
|
||||
if (manifest && manifest.customModules && manifest.customModules.length > 0) {
|
||||
spinner.stop();
|
||||
console.log(chalk.yellow('\nChecking custom module sources before compilation...'));
|
||||
|
||||
const customModuleSources = new Map();
|
||||
for (const customModule of manifest.customModules) {
|
||||
customModuleSources.set(customModule.id, customModule);
|
||||
}
|
||||
|
||||
const projectRoot = getProjectRoot();
|
||||
const installedModules = manifest.modules || [];
|
||||
await this.handleMissingCustomSources(customModuleSources, bmadDir, projectRoot, 'compile-agents', installedModules);
|
||||
|
||||
spinner.start('Rebuilding agent files...');
|
||||
}
|
||||
|
||||
let agentCount = 0;
|
||||
let taskCount = 0;
|
||||
|
||||
|
|
@ -2002,17 +2340,245 @@ If AgentVibes party mode is enabled, immediately trigger TTS with agent's voice:
|
|||
const existingInstall = await this.detector.detect(bmadDir);
|
||||
const installedModules = existingInstall.modules.map((m) => m.id);
|
||||
const configuredIdes = existingInstall.ides || [];
|
||||
const projectRoot = path.dirname(bmadDir);
|
||||
|
||||
// Get custom module sources from manifest
|
||||
const customModuleSources = new Map();
|
||||
if (existingInstall.customModules) {
|
||||
for (const customModule of existingInstall.customModules) {
|
||||
// Ensure we have an absolute sourcePath
|
||||
let absoluteSourcePath = customModule.sourcePath;
|
||||
|
||||
// Check if sourcePath is a cache-relative path (starts with _cfg/)
|
||||
if (absoluteSourcePath && absoluteSourcePath.startsWith('_cfg')) {
|
||||
// Convert cache-relative path to absolute path
|
||||
absoluteSourcePath = path.join(bmadDir, absoluteSourcePath);
|
||||
}
|
||||
// If no sourcePath but we have relativePath, convert it
|
||||
else if (!absoluteSourcePath && customModule.relativePath) {
|
||||
// relativePath is relative to the project root (parent of bmad dir)
|
||||
absoluteSourcePath = path.resolve(projectRoot, customModule.relativePath);
|
||||
}
|
||||
// Ensure sourcePath is absolute for anything else
|
||||
else if (absoluteSourcePath && !path.isAbsolute(absoluteSourcePath)) {
|
||||
absoluteSourcePath = path.resolve(absoluteSourcePath);
|
||||
}
|
||||
|
||||
// Update the custom module object with the absolute path
|
||||
const updatedModule = {
|
||||
...customModule,
|
||||
sourcePath: absoluteSourcePath,
|
||||
};
|
||||
|
||||
customModuleSources.set(customModule.id, updatedModule);
|
||||
}
|
||||
}
|
||||
|
||||
// Load saved IDE configurations
|
||||
const savedIdeConfigs = await this.ideConfigManager.loadAllIdeConfigs(bmadDir);
|
||||
|
||||
// Get available modules (what we have source for)
|
||||
const availableModules = await this.moduleManager.listAvailable();
|
||||
const availableModuleIds = new Set(availableModules.map((m) => m.id));
|
||||
const availableModulesData = await this.moduleManager.listAvailable();
|
||||
const availableModules = [...availableModulesData.modules, ...availableModulesData.customModules];
|
||||
|
||||
// Add custom modules from manifest if their sources exist
|
||||
for (const [moduleId, customModule] of customModuleSources) {
|
||||
// Use the absolute sourcePath
|
||||
const sourcePath = customModule.sourcePath;
|
||||
|
||||
// Check if source exists at the recorded path
|
||||
if (
|
||||
sourcePath &&
|
||||
(await fs.pathExists(sourcePath)) && // Add to available modules if not already there
|
||||
!availableModules.some((m) => m.id === moduleId)
|
||||
) {
|
||||
availableModules.push({
|
||||
id: moduleId,
|
||||
name: customModule.name || moduleId,
|
||||
path: sourcePath,
|
||||
isCustom: true,
|
||||
fromManifest: true,
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
// Check for untracked custom modules (installed but not in manifest)
|
||||
const untrackedCustomModules = [];
|
||||
for (const installedModule of installedModules) {
|
||||
// Skip standard modules and core
|
||||
const standardModuleIds = ['bmb', 'bmgd', 'bmm', 'cis', 'core'];
|
||||
if (standardModuleIds.includes(installedModule)) {
|
||||
continue;
|
||||
}
|
||||
|
||||
// Check if this installed module is not tracked in customModules
|
||||
if (!customModuleSources.has(installedModule)) {
|
||||
const modulePath = path.join(bmadDir, installedModule);
|
||||
if (await fs.pathExists(modulePath)) {
|
||||
untrackedCustomModules.push({
|
||||
id: installedModule,
|
||||
name: installedModule, // We don't have the original name
|
||||
path: modulePath,
|
||||
untracked: true,
|
||||
});
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// If we found untracked custom modules, offer to track them
|
||||
if (untrackedCustomModules.length > 0) {
|
||||
spinner.stop();
|
||||
console.log(chalk.yellow(`\n⚠️ Found ${untrackedCustomModules.length} custom module(s) not tracked in manifest:`));
|
||||
|
||||
for (const untracked of untrackedCustomModules) {
|
||||
console.log(chalk.dim(` • ${untracked.id} (installed at ${path.relative(projectRoot, untracked.path)})`));
|
||||
}
|
||||
|
||||
const { trackModules } = await inquirer.prompt([
|
||||
{
|
||||
type: 'confirm',
|
||||
name: 'trackModules',
|
||||
message: chalk.cyan('Would you like to scan for their source locations?'),
|
||||
default: true,
|
||||
},
|
||||
]);
|
||||
|
||||
if (trackModules) {
|
||||
const { scanDirectory } = await inquirer.prompt([
|
||||
{
|
||||
type: 'input',
|
||||
name: 'scanDirectory',
|
||||
message: 'Enter directory to scan for custom module sources (or leave blank to skip):',
|
||||
default: projectRoot,
|
||||
validate: async (input) => {
|
||||
if (input && input.trim() !== '') {
|
||||
const expandedPath = path.resolve(input.trim());
|
||||
if (!(await fs.pathExists(expandedPath))) {
|
||||
return 'Directory does not exist';
|
||||
}
|
||||
const stats = await fs.stat(expandedPath);
|
||||
if (!stats.isDirectory()) {
|
||||
return 'Path must be a directory';
|
||||
}
|
||||
}
|
||||
return true;
|
||||
},
|
||||
},
|
||||
]);
|
||||
|
||||
if (scanDirectory && scanDirectory.trim() !== '') {
|
||||
console.log(chalk.dim('\nScanning for custom module sources...'));
|
||||
|
||||
// Scan for all module.yaml files
|
||||
const allModulePaths = await this.moduleManager.findModulesInProject(scanDirectory);
|
||||
const { ModuleManager } = require('../modules/manager');
|
||||
const mm = new ModuleManager({ scanProjectForModules: true });
|
||||
|
||||
for (const untracked of untrackedCustomModules) {
|
||||
let foundSource = null;
|
||||
|
||||
// Try to find by module ID
|
||||
for (const modulePath of allModulePaths) {
|
||||
try {
|
||||
const moduleInfo = await mm.getModuleInfo(modulePath);
|
||||
if (moduleInfo && moduleInfo.id === untracked.id) {
|
||||
foundSource = {
|
||||
path: modulePath,
|
||||
info: moduleInfo,
|
||||
};
|
||||
break;
|
||||
}
|
||||
} catch {
|
||||
// Continue searching
|
||||
}
|
||||
}
|
||||
|
||||
if (foundSource) {
|
||||
console.log(chalk.green(` ✓ Found source for ${untracked.id}: ${path.relative(projectRoot, foundSource.path)}`));
|
||||
|
||||
// Add to manifest
|
||||
await this.manifest.addCustomModule(bmadDir, {
|
||||
id: untracked.id,
|
||||
name: foundSource.info.name || untracked.name,
|
||||
sourcePath: path.resolve(foundSource.path),
|
||||
installDate: new Date().toISOString(),
|
||||
tracked: true,
|
||||
});
|
||||
|
||||
// Add to customModuleSources for processing
|
||||
customModuleSources.set(untracked.id, {
|
||||
id: untracked.id,
|
||||
name: foundSource.info.name || untracked.name,
|
||||
sourcePath: path.resolve(foundSource.path),
|
||||
});
|
||||
} else {
|
||||
console.log(chalk.yellow(` ⚠ Could not find source for ${untracked.id}`));
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
console.log(chalk.dim('\nUntracked custom modules will remain installed but cannot be updated without their source.'));
|
||||
spinner.start('Preparing update...');
|
||||
}
|
||||
|
||||
// Handle missing custom module sources using shared method
|
||||
const customModuleResult = await this.handleMissingCustomSources(
|
||||
customModuleSources,
|
||||
bmadDir,
|
||||
projectRoot,
|
||||
'update',
|
||||
installedModules,
|
||||
);
|
||||
|
||||
// Handle both old return format (array) and new format (object)
|
||||
let validCustomModules = [];
|
||||
let keptModulesWithoutSources = [];
|
||||
|
||||
if (Array.isArray(customModuleResult)) {
|
||||
// Old format - just an array
|
||||
validCustomModules = customModuleResult;
|
||||
} else if (customModuleResult && typeof customModuleResult === 'object') {
|
||||
// New format - object with two arrays
|
||||
validCustomModules = customModuleResult.validCustomModules || [];
|
||||
keptModulesWithoutSources = customModuleResult.keptModulesWithoutSources || [];
|
||||
}
|
||||
|
||||
const customModulesFromManifest = validCustomModules.map((m) => ({
|
||||
...m,
|
||||
isCustom: true,
|
||||
hasUpdate: true,
|
||||
}));
|
||||
|
||||
// Add untracked modules to the update list but mark them as untrackable
|
||||
for (const untracked of untrackedCustomModules) {
|
||||
if (!customModuleSources.has(untracked.id)) {
|
||||
customModulesFromManifest.push({
|
||||
...untracked,
|
||||
isCustom: true,
|
||||
hasUpdate: false, // Can't update without source
|
||||
untracked: true,
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
const allAvailableModules = [...availableModules, ...customModulesFromManifest];
|
||||
const availableModuleIds = new Set(allAvailableModules.map((m) => m.id));
|
||||
|
||||
// Core module is special - never include it in update flow
|
||||
const nonCoreInstalledModules = installedModules.filter((id) => id !== 'core');
|
||||
|
||||
// Only update modules that are BOTH installed AND available (we have source for)
|
||||
const modulesToUpdate = installedModules.filter((id) => availableModuleIds.has(id));
|
||||
const skippedModules = installedModules.filter((id) => !availableModuleIds.has(id));
|
||||
const modulesToUpdate = nonCoreInstalledModules.filter((id) => availableModuleIds.has(id));
|
||||
const skippedModules = nonCoreInstalledModules.filter((id) => !availableModuleIds.has(id));
|
||||
|
||||
// Add custom modules that were kept without sources to the skipped modules
|
||||
// This ensures their agents are preserved in the manifest
|
||||
for (const keptModule of keptModulesWithoutSources) {
|
||||
if (!skippedModules.includes(keptModule)) {
|
||||
skippedModules.push(keptModule);
|
||||
}
|
||||
}
|
||||
|
||||
spinner.succeed(`Found ${modulesToUpdate.length} module(s) to update and ${configuredIdes.length} configured tool(s)`);
|
||||
|
||||
|
|
@ -2077,6 +2643,8 @@ If AgentVibes party mode is enabled, immediately trigger TTS with agent's voice:
|
|||
_quickUpdate: true, // Flag to skip certain prompts
|
||||
_preserveModules: skippedModules, // Preserve these in manifest even though we didn't update them
|
||||
_savedIdeConfigs: savedIdeConfigs, // Pass saved IDE configs to installer
|
||||
_customModuleSources: customModuleSources, // Pass custom module sources for updates
|
||||
_existingModules: installedModules, // Pass all installed modules for manifest generation
|
||||
};
|
||||
|
||||
// Call the standard install method
|
||||
|
|
@ -2716,6 +3284,230 @@ If AgentVibes party mode is enabled, immediately trigger TTS with agent's voice:
|
|||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle missing custom module sources interactively
|
||||
* @param {Map} customModuleSources - Map of custom module ID to info
|
||||
* @param {string} bmadDir - BMAD directory
|
||||
* @param {string} projectRoot - Project root directory
|
||||
* @param {string} operation - Current operation ('update', 'compile', etc.)
|
||||
* @param {Array} installedModules - Array of installed module IDs (will be modified)
|
||||
* @returns {Object} Object with validCustomModules array and keptModulesWithoutSources array
|
||||
*/
|
||||
async handleMissingCustomSources(customModuleSources, bmadDir, projectRoot, operation, installedModules) {
|
||||
const validCustomModules = [];
|
||||
const keptModulesWithoutSources = []; // Track modules kept without sources
|
||||
const customModulesWithMissingSources = [];
|
||||
|
||||
// Check which sources exist
|
||||
for (const [moduleId, customInfo] of customModuleSources) {
|
||||
if (await fs.pathExists(customInfo.sourcePath)) {
|
||||
validCustomModules.push({
|
||||
id: moduleId,
|
||||
name: customInfo.name,
|
||||
path: customInfo.sourcePath,
|
||||
info: customInfo,
|
||||
});
|
||||
} else {
|
||||
customModulesWithMissingSources.push({
|
||||
id: moduleId,
|
||||
name: customInfo.name,
|
||||
sourcePath: customInfo.sourcePath,
|
||||
relativePath: customInfo.relativePath,
|
||||
info: customInfo,
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
// If no missing sources, return immediately
|
||||
if (customModulesWithMissingSources.length === 0) {
|
||||
return validCustomModules;
|
||||
}
|
||||
|
||||
// Stop any spinner for interactive prompts
|
||||
const currentSpinner = ora();
|
||||
if (currentSpinner.isSpinning) {
|
||||
currentSpinner.stop();
|
||||
}
|
||||
|
||||
console.log(chalk.yellow(`\n⚠️ Found ${customModulesWithMissingSources.length} custom module(s) with missing sources:`));
|
||||
|
||||
const inquirer = require('inquirer');
|
||||
let keptCount = 0;
|
||||
let updatedCount = 0;
|
||||
let removedCount = 0;
|
||||
|
||||
for (const missing of customModulesWithMissingSources) {
|
||||
console.log(chalk.dim(` • ${missing.name} (${missing.id})`));
|
||||
console.log(chalk.dim(` Original source: ${missing.relativePath}`));
|
||||
console.log(chalk.dim(` Full path: ${missing.sourcePath}`));
|
||||
|
||||
const choices = [
|
||||
{
|
||||
name: 'Keep installed (will not be processed)',
|
||||
value: 'keep',
|
||||
short: 'Keep',
|
||||
},
|
||||
{
|
||||
name: 'Specify new source location',
|
||||
value: 'update',
|
||||
short: 'Update',
|
||||
},
|
||||
];
|
||||
|
||||
// Only add remove option if not just compiling agents
|
||||
if (operation !== 'compile-agents') {
|
||||
choices.push({
|
||||
name: '⚠️ REMOVE module completely (destructive!)',
|
||||
value: 'remove',
|
||||
short: 'Remove',
|
||||
});
|
||||
}
|
||||
|
||||
const { action } = await inquirer.prompt([
|
||||
{
|
||||
type: 'list',
|
||||
name: 'action',
|
||||
message: `How would you like to handle "${missing.name}"?`,
|
||||
choices,
|
||||
},
|
||||
]);
|
||||
|
||||
switch (action) {
|
||||
case 'update': {
|
||||
const { newSourcePath } = await inquirer.prompt([
|
||||
{
|
||||
type: 'input',
|
||||
name: 'newSourcePath',
|
||||
message: 'Enter the new path to the custom module:',
|
||||
default: missing.sourcePath,
|
||||
validate: async (input) => {
|
||||
if (!input || input.trim() === '') {
|
||||
return 'Please enter a path';
|
||||
}
|
||||
const expandedPath = path.resolve(input.trim());
|
||||
if (!(await fs.pathExists(expandedPath))) {
|
||||
return 'Path does not exist';
|
||||
}
|
||||
// Check if it looks like a valid module
|
||||
const moduleYamlPath = path.join(expandedPath, 'module.yaml');
|
||||
const agentsPath = path.join(expandedPath, 'agents');
|
||||
const workflowsPath = path.join(expandedPath, 'workflows');
|
||||
|
||||
if (!(await fs.pathExists(moduleYamlPath)) && !(await fs.pathExists(agentsPath)) && !(await fs.pathExists(workflowsPath))) {
|
||||
return 'Path does not appear to contain a valid custom module';
|
||||
}
|
||||
return true;
|
||||
},
|
||||
},
|
||||
]);
|
||||
|
||||
// Update the source in manifest
|
||||
const resolvedPath = path.resolve(newSourcePath.trim());
|
||||
missing.info.sourcePath = resolvedPath;
|
||||
// Remove relativePath - we only store absolute sourcePath now
|
||||
delete missing.info.relativePath;
|
||||
await this.manifest.addCustomModule(bmadDir, missing.info);
|
||||
|
||||
validCustomModules.push({
|
||||
id: missing.id,
|
||||
name: missing.name,
|
||||
path: resolvedPath,
|
||||
info: missing.info,
|
||||
});
|
||||
|
||||
updatedCount++;
|
||||
console.log(chalk.green(`✓ Updated source location`));
|
||||
|
||||
break;
|
||||
}
|
||||
case 'remove': {
|
||||
// Extra confirmation for destructive remove
|
||||
console.log(chalk.red.bold(`\n⚠️ WARNING: This will PERMANENTLY DELETE "${missing.name}" and all its files!`));
|
||||
console.log(chalk.red(` Module location: ${path.join(bmadDir, missing.id)}`));
|
||||
|
||||
const { confirm } = await inquirer.prompt([
|
||||
{
|
||||
type: 'confirm',
|
||||
name: 'confirm',
|
||||
message: chalk.red.bold('Are you absolutely sure you want to delete this module?'),
|
||||
default: false,
|
||||
},
|
||||
]);
|
||||
|
||||
if (confirm) {
|
||||
const { typedConfirm } = await inquirer.prompt([
|
||||
{
|
||||
type: 'input',
|
||||
name: 'typedConfirm',
|
||||
message: chalk.red.bold('Type "DELETE" to confirm permanent deletion:'),
|
||||
validate: (input) => {
|
||||
if (input !== 'DELETE') {
|
||||
return chalk.red('You must type "DELETE" exactly to proceed');
|
||||
}
|
||||
return true;
|
||||
},
|
||||
},
|
||||
]);
|
||||
|
||||
if (typedConfirm === 'DELETE') {
|
||||
// Remove the module from filesystem and manifest
|
||||
const modulePath = path.join(bmadDir, missing.id);
|
||||
if (await fs.pathExists(modulePath)) {
|
||||
const fsExtra = require('fs-extra');
|
||||
await fsExtra.remove(modulePath);
|
||||
console.log(chalk.yellow(` ✓ Deleted module directory: ${path.relative(projectRoot, modulePath)}`));
|
||||
}
|
||||
|
||||
await this.manifest.removeModule(bmadDir, missing.id);
await this.manifest.removeCustomModule(bmadDir, missing.id);
|
||||
console.log(chalk.yellow(` ✓ Removed from manifest`));
|
||||
|
||||
// Also remove from installedModules list
|
||||
if (installedModules && installedModules.includes(missing.id)) {
  const index = installedModules.indexOf(missing.id);
|
||||
if (index !== -1) {
|
||||
installedModules.splice(index, 1);
|
||||
}
|
||||
}
|
||||
|
||||
removedCount++;
|
||||
console.log(chalk.red.bold(`✓ "${missing.name}" has been permanently removed`));
|
||||
} else {
|
||||
console.log(chalk.dim(' Removal cancelled - module will be kept'));
|
||||
keptCount++;
|
||||
}
|
||||
} else {
|
||||
console.log(chalk.dim(' Removal cancelled - module will be kept'));
|
||||
keptCount++;
|
||||
}
|
||||
|
||||
break;
|
||||
}
|
||||
case 'keep': {
|
||||
keptCount++;
|
||||
keptModulesWithoutSources.push(missing.id);
|
||||
console.log(chalk.dim(` Module will be kept as-is`));
|
||||
|
||||
break;
|
||||
}
|
||||
// No default
|
||||
}
|
||||
}
|
||||
|
||||
// Show summary
|
||||
if (keptCount > 0 || updatedCount > 0 || removedCount > 0) {
|
||||
console.log(chalk.dim(`\nSummary for custom modules with missing sources:`));
|
||||
if (keptCount > 0) console.log(chalk.dim(` • ${keptCount} module(s) kept as-is`));
|
||||
if (updatedCount > 0) console.log(chalk.dim(` • ${updatedCount} module(s) updated with new sources`));
|
||||
if (removedCount > 0) console.log(chalk.red(` • ${removedCount} module(s) permanently deleted`));
|
||||
}
|
||||
|
||||
return {
|
||||
validCustomModules,
|
||||
keptModulesWithoutSources,
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = { Installer };
|
||||
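The 'remove' branch above pairs a yes/no confirm with a typed keyword before anything is deleted. A minimal standalone sketch of that double-confirmation pattern, using the same `inquirer` and `chalk` APIs the installer already depends on; the function name and label are invented:

```js
// Standalone sketch of the confirm-then-type-DELETE pattern used for destructive removals above.
const inquirer = require('inquirer');
const chalk = require('chalk');

async function confirmPermanentDelete(label) {
  const { confirm } = await inquirer.prompt([
    {
      type: 'confirm',
      name: 'confirm',
      message: chalk.red.bold(`Are you absolutely sure you want to delete ${label}?`),
      default: false,
    },
  ]);
  if (!confirm) return false;

  const { typedConfirm } = await inquirer.prompt([
    {
      type: 'input',
      name: 'typedConfirm',
      message: chalk.red.bold('Type "DELETE" to confirm permanent deletion:'),
      validate: (input) => (input === 'DELETE' ? true : 'You must type "DELETE" exactly to proceed'),
    },
  ]);
  return typedConfirm === 'DELETE';
}

// Usage: confirmPermanentDelete('"my-custom-kit"').then((ok) => console.log(ok ? 'deleting...' : 'kept'));
```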
@ -41,7 +41,11 @@ class ManifestGenerator {
|
|||
// Deduplicate modules list to prevent duplicates
|
||||
this.modules = [...new Set(['core', ...selectedModules, ...preservedModules, ...installedModules])];
|
||||
this.updatedModules = [...new Set(['core', ...selectedModules, ...installedModules])]; // All installed modules get rescanned
|
||||
this.preservedModules = preservedModules; // These stay as-is in CSVs
|
||||
|
||||
// For CSV manifests, we need to include ALL modules that are installed
|
||||
// preservedModules controls which modules stay as-is in the CSV (don't get rescanned)
|
||||
// But all modules should be included in the final manifest
|
||||
this.preservedModules = [...new Set([...preservedModules, ...selectedModules, ...installedModules])]; // Include all installed modules
|
||||
this.bmadDir = bmadDir;
|
||||
this.bmadFolderName = path.basename(bmadDir); // Get the actual folder name (e.g., '.bmad' or 'bmad')
|
||||
this.allInstalledFiles = installedFiles;
|
||||
|
|
@ -61,14 +65,14 @@ class ManifestGenerator {
|
|||
// Collect workflow data
|
||||
await this.collectWorkflows(selectedModules);
|
||||
|
||||
// Collect agent data
|
||||
await this.collectAgents(selectedModules);
|
||||
// Collect agent data - use updatedModules which includes all installed modules
|
||||
await this.collectAgents(this.updatedModules);
|
||||
|
||||
// Collect task data
|
||||
await this.collectTasks(selectedModules);
|
||||
await this.collectTasks(this.updatedModules);
|
||||
|
||||
// Collect tool data
|
||||
await this.collectTools(selectedModules);
|
||||
await this.collectTools(this.updatedModules);
|
||||
|
||||
// Write manifest files and collect their paths
|
||||
const manifestFiles = [
|
||||
|
|
@ -450,6 +454,21 @@ class ManifestGenerator {
|
|||
async writeMainManifest(cfgDir) {
|
||||
const manifestPath = path.join(cfgDir, 'manifest.yaml');
|
||||
|
||||
// Read existing manifest to preserve custom modules
|
||||
let existingCustomModules = [];
|
||||
if (await fs.pathExists(manifestPath)) {
|
||||
try {
|
||||
const existingContent = await fs.readFile(manifestPath, 'utf8');
|
||||
const existingManifest = yaml.load(existingContent);
|
||||
if (existingManifest && existingManifest.customModules) {
|
||||
existingCustomModules = existingManifest.customModules;
|
||||
}
|
||||
} catch {
|
||||
// If we can't read the existing manifest, continue without preserving custom modules
|
||||
console.warn('Warning: Could not read existing manifest to preserve custom modules');
|
||||
}
|
||||
}
|
||||
|
||||
const manifest = {
|
||||
installation: {
|
||||
version: packageJson.version,
|
||||
|
|
@ -457,6 +476,7 @@ class ManifestGenerator {
|
|||
lastUpdated: new Date().toISOString(),
|
||||
},
|
||||
modules: this.modules,
|
||||
customModules: existingCustomModules, // Preserve custom modules
|
||||
ides: this.selectedIdes,
|
||||
};
|
||||
|
||||
|
|
@ -562,12 +582,47 @@ class ManifestGenerator {
|
|||
async writeWorkflowManifest(cfgDir) {
|
||||
const csvPath = path.join(cfgDir, 'workflow-manifest.csv');
|
||||
|
||||
// Read existing manifest to preserve entries
|
||||
const existingEntries = new Map();
|
||||
if (await fs.pathExists(csvPath)) {
|
||||
const content = await fs.readFile(csvPath, 'utf8');
|
||||
const lines = content.split('\n').filter((line) => line.trim());
|
||||
|
||||
// Skip header
|
||||
for (let i = 1; i < lines.length; i++) {
|
||||
const line = lines[i];
|
||||
if (line) {
|
||||
// Parse CSV (simple parsing assuming no commas in quoted fields)
|
||||
const parts = line.split('","');
|
||||
if (parts.length >= 4) {
|
||||
const name = parts[0].replace(/^"/, '');
|
||||
const module = parts[2];
|
||||
existingEntries.set(`${module}:${name}`, line);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Create CSV header - removed standalone column as ALL workflows now generate commands
|
||||
let csv = 'name,description,module,path\n';
|
||||
|
||||
// Add all workflows - no standalone property needed anymore
|
||||
// Combine existing and new workflows
|
||||
const allWorkflows = new Map();
|
||||
|
||||
// Add existing entries
|
||||
for (const [key, value] of existingEntries) {
|
||||
allWorkflows.set(key, value);
|
||||
}
|
||||
|
||||
// Add/update new workflows
|
||||
for (const workflow of this.workflows) {
|
||||
csv += `"${workflow.name}","${workflow.description}","${workflow.module}","${workflow.path}"\n`;
|
||||
const key = `${workflow.module}:${workflow.name}`;
|
||||
allWorkflows.set(key, `"${workflow.name}","${workflow.description}","${workflow.module}","${workflow.path}"`);
|
||||
}
|
||||
|
||||
// Write all workflows
|
||||
for (const [, value] of allWorkflows) {
|
||||
csv += value + '\n';
|
||||
}
|
||||
|
||||
await fs.writeFile(csvPath, csv);
|
||||
|
|
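All four write*Manifest methods in this file now follow the same preserve-and-merge pattern: parse the existing CSV into a Map keyed by `module:name`, overlay the freshly scanned rows, then rewrite the file. A standalone sketch of that pattern follows; like the code above, the naive `'","'` split assumes quoted fields contain no commas, and the header here matches the workflow manifest only:

```js
// Standalone sketch of the CSV merge-by-key pattern used by the manifest writers above.
const HEADER = 'name,description,module,path';

function mergeManifestRows(existingCsv, freshRows) {
  const merged = new Map();

  // Keep rows from the previous manifest (skip the header).
  for (const line of existingCsv.split('\n').slice(1)) {
    if (!line.trim()) continue;
    const parts = line.split('","'); // naive parse: assumes no commas inside quoted fields
    if (parts.length >= 4) {
      const name = parts[0].replace(/^"/, '');
      const module = parts[2];
      merged.set(`${module}:${name}`, line);
    }
  }

  // Freshly scanned rows win over stale ones with the same key.
  for (const row of freshRows) {
    merged.set(`${row.module}:${row.name}`, `"${row.name}","${row.description}","${row.module}","${row.path}"`);
  }

  return [HEADER, ...merged.values()].join('\n') + '\n';
}

// Example:
const existing = `${HEADER}\n"plan","Old description","bmm","bmm/workflows/plan"\n`;
const fresh = [{ name: 'plan', description: 'New description', module: 'bmm', path: 'bmm/workflows/plan' }];
console.log(mergeManifestRows(existing, fresh));
```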
@ -581,12 +636,50 @@ class ManifestGenerator {
|
|||
async writeAgentManifest(cfgDir) {
|
||||
const csvPath = path.join(cfgDir, 'agent-manifest.csv');
|
||||
|
||||
// Read existing manifest to preserve entries
|
||||
const existingEntries = new Map();
|
||||
if (await fs.pathExists(csvPath)) {
|
||||
const content = await fs.readFile(csvPath, 'utf8');
|
||||
const lines = content.split('\n').filter((line) => line.trim());
|
||||
|
||||
// Skip header
|
||||
for (let i = 1; i < lines.length; i++) {
|
||||
const line = lines[i];
|
||||
if (line) {
|
||||
// Parse CSV (simple parsing assuming no commas in quoted fields)
|
||||
const parts = line.split('","');
|
||||
if (parts.length >= 10) { // 10 columns in the current agent format; older rows may carry extra trailing columns
|
||||
const name = parts[0].replace(/^"/, '');
|
||||
const module = parts[8];
|
||||
existingEntries.set(`${module}:${name}`, line);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Create CSV header with persona fields
|
||||
let csv = 'name,displayName,title,icon,role,identity,communicationStyle,principles,module,path\n';
|
||||
|
||||
// Add all agents
|
||||
// Combine existing and new agents, preferring new data for duplicates
|
||||
const allAgents = new Map();
|
||||
|
||||
// Add existing entries
|
||||
for (const [key, value] of existingEntries) {
|
||||
allAgents.set(key, value);
|
||||
}
|
||||
|
||||
// Add/update new agents
|
||||
for (const agent of this.agents) {
|
||||
csv += `"${agent.name}","${agent.displayName}","${agent.title}","${agent.icon}","${agent.role}","${agent.identity}","${agent.communicationStyle}","${agent.principles}","${agent.module}","${agent.path}"\n`;
|
||||
const key = `${agent.module}:${agent.name}`;
|
||||
allAgents.set(
|
||||
key,
|
||||
`"${agent.name}","${agent.displayName}","${agent.title}","${agent.icon}","${agent.role}","${agent.identity}","${agent.communicationStyle}","${agent.principles}","${agent.module}","${agent.path}"`,
|
||||
);
|
||||
}
|
||||
|
||||
// Write all agents
|
||||
for (const [, value] of allAgents) {
|
||||
csv += value + '\n';
|
||||
}
|
||||
|
||||
await fs.writeFile(csvPath, csv);
|
||||
|
|
@ -600,12 +693,47 @@ class ManifestGenerator {
|
|||
async writeTaskManifest(cfgDir) {
|
||||
const csvPath = path.join(cfgDir, 'task-manifest.csv');
|
||||
|
||||
// Read existing manifest to preserve entries
|
||||
const existingEntries = new Map();
|
||||
if (await fs.pathExists(csvPath)) {
|
||||
const content = await fs.readFile(csvPath, 'utf8');
|
||||
const lines = content.split('\n').filter((line) => line.trim());
|
||||
|
||||
// Skip header
|
||||
for (let i = 1; i < lines.length; i++) {
|
||||
const line = lines[i];
|
||||
if (line) {
|
||||
// Parse CSV (simple parsing assuming no commas in quoted fields)
|
||||
const parts = line.split('","');
|
||||
if (parts.length >= 6) {
|
||||
const name = parts[0].replace(/^"/, '');
|
||||
const module = parts[3];
|
||||
existingEntries.set(`${module}:${name}`, line);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Create CSV header with standalone column
|
||||
let csv = 'name,displayName,description,module,path,standalone\n';
|
||||
|
||||
// Add all tasks
|
||||
// Combine existing and new tasks
|
||||
const allTasks = new Map();
|
||||
|
||||
// Add existing entries
|
||||
for (const [key, value] of existingEntries) {
|
||||
allTasks.set(key, value);
|
||||
}
|
||||
|
||||
// Add/update new tasks
|
||||
for (const task of this.tasks) {
|
||||
csv += `"${task.name}","${task.displayName}","${task.description}","${task.module}","${task.path}","${task.standalone}"\n`;
|
||||
const key = `${task.module}:${task.name}`;
|
||||
allTasks.set(key, `"${task.name}","${task.displayName}","${task.description}","${task.module}","${task.path}","${task.standalone}"`);
|
||||
}
|
||||
|
||||
// Write all tasks
|
||||
for (const [, value] of allTasks) {
|
||||
csv += value + '\n';
|
||||
}
|
||||
|
||||
await fs.writeFile(csvPath, csv);
|
||||
|
|
@ -619,12 +747,47 @@ class ManifestGenerator {
|
|||
async writeToolManifest(cfgDir) {
|
||||
const csvPath = path.join(cfgDir, 'tool-manifest.csv');
|
||||
|
||||
// Read existing manifest to preserve entries
|
||||
const existingEntries = new Map();
|
||||
if (await fs.pathExists(csvPath)) {
|
||||
const content = await fs.readFile(csvPath, 'utf8');
|
||||
const lines = content.split('\n').filter((line) => line.trim());
|
||||
|
||||
// Skip header
|
||||
for (let i = 1; i < lines.length; i++) {
|
||||
const line = lines[i];
|
||||
if (line) {
|
||||
// Parse CSV (simple parsing assuming no commas in quoted fields)
|
||||
const parts = line.split('","');
|
||||
if (parts.length >= 6) {
|
||||
const name = parts[0].replace(/^"/, '');
|
||||
const module = parts[3];
|
||||
existingEntries.set(`${module}:${name}`, line);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Create CSV header with standalone column
|
||||
let csv = 'name,displayName,description,module,path,standalone\n';
|
||||
|
||||
// Add all tools
|
||||
// Combine existing and new tools
|
||||
const allTools = new Map();
|
||||
|
||||
// Add existing entries
|
||||
for (const [key, value] of existingEntries) {
|
||||
allTools.set(key, value);
|
||||
}
|
||||
|
||||
// Add/update new tools
|
||||
for (const tool of this.tools) {
|
||||
csv += `"${tool.name}","${tool.displayName}","${tool.description}","${tool.module}","${tool.path}","${tool.standalone}"\n`;
|
||||
const key = `${tool.module}:${tool.name}`;
|
||||
allTools.set(key, `"${tool.name}","${tool.displayName}","${tool.description}","${tool.module}","${tool.path}","${tool.standalone}"`);
|
||||
}
|
||||
|
||||
// Write all tools
|
||||
for (const [, value] of allTools) {
|
||||
csv += value + '\n';
|
||||
}
|
||||
|
||||
await fs.writeFile(csvPath, csv);
|
||||
@ -61,6 +61,7 @@ class Manifest {
|
|||
installDate: manifestData.installation?.installDate,
|
||||
lastUpdated: manifestData.installation?.lastUpdated,
|
||||
modules: manifestData.modules || [],
|
||||
customModules: manifestData.customModules || [],
|
||||
ides: manifestData.ides || [],
|
||||
};
|
||||
} catch (error) {
|
||||
|
|
@ -93,6 +94,7 @@ class Manifest {
|
|||
lastUpdated: manifest.lastUpdated,
|
||||
},
|
||||
modules: manifest.modules || [],
|
||||
customModules: manifest.customModules || [],
|
||||
ides: manifest.ides || [],
|
||||
};
|
||||
|
||||
|
|
@ -535,6 +537,51 @@ class Manifest {
|
|||
|
||||
return configs;
|
||||
}
|
||||
/**
|
||||
* Add a custom module to the manifest with its source path
|
||||
* @param {string} bmadDir - Path to bmad directory
|
||||
* @param {Object} customModule - Custom module info
|
||||
*/
|
||||
async addCustomModule(bmadDir, customModule) {
|
||||
const manifest = await this.read(bmadDir);
|
||||
if (!manifest) {
|
||||
throw new Error('No manifest found');
|
||||
}
|
||||
|
||||
if (!manifest.customModules) {
|
||||
manifest.customModules = [];
|
||||
}
|
||||
|
||||
// Check if custom module already exists
|
||||
const existingIndex = manifest.customModules.findIndex((m) => m.id === customModule.id);
|
||||
if (existingIndex === -1) {
|
||||
// Add new entry
|
||||
manifest.customModules.push(customModule);
|
||||
} else {
|
||||
// Update existing entry
|
||||
manifest.customModules[existingIndex] = customModule;
|
||||
}
|
||||
|
||||
await this.update(bmadDir, { customModules: manifest.customModules });
|
||||
}
|
||||
|
||||
/**
|
||||
* Remove a custom module from the manifest
|
||||
* @param {string} bmadDir - Path to bmad directory
|
||||
* @param {string} moduleId - Module ID to remove
|
||||
*/
|
||||
async removeCustomModule(bmadDir, moduleId) {
|
||||
const manifest = await this.read(bmadDir);
|
||||
if (!manifest || !manifest.customModules) {
|
||||
return;
|
||||
}
|
||||
|
||||
const index = manifest.customModules.findIndex((m) => m.id === moduleId);
|
||||
if (index !== -1) {
|
||||
manifest.customModules.splice(index, 1);
|
||||
await this.update(bmadDir, { customModules: manifest.customModules });
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = { Manifest };
|
||||
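addCustomModule above is effectively an upsert keyed on `id`, and removeCustomModule is its inverse. A standalone sketch of that pattern on a plain array; the entry fields mirror the ones the installer stores (`id`, `name`, `sourcePath`), but the values are invented:

```js
// Standalone sketch of the upsert/remove pattern used for manifest.customModules above.
function upsertCustomModule(customModules, entry) {
  const i = customModules.findIndex((m) => m.id === entry.id);
  if (i === -1) customModules.push(entry);
  else customModules[i] = entry;
  return customModules;
}

function removeCustomModule(customModules, moduleId) {
  const i = customModules.findIndex((m) => m.id === moduleId);
  if (i !== -1) customModules.splice(i, 1);
  return customModules;
}

// Example entry shape (field names as used in this diff; values invented):
const customModules = [];
upsertCustomModule(customModules, {
  id: 'my-custom-kit',
  name: 'My Custom Kit',
  sourcePath: '/home/user/projects/my-custom-kit',
});
removeCustomModule(customModules, 'my-custom-kit');
console.log(customModules); // []
```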
@ -3,6 +3,7 @@ const fs = require('fs-extra');
|
|||
const chalk = require('chalk');
|
||||
const yaml = require('js-yaml');
|
||||
const { FileOps } = require('../../../lib/file-ops');
|
||||
const { XmlHandler } = require('../../../lib/xml-handler');
|
||||
|
||||
/**
|
||||
* Handler for custom content (custom.yaml)
|
||||
|
|
@ -11,6 +12,7 @@ const { FileOps } = require('../../../lib/file-ops');
|
|||
class CustomHandler {
|
||||
constructor() {
|
||||
this.fileOps = new FileOps();
|
||||
this.xmlHandler = new XmlHandler();
|
||||
}
|
||||
|
||||
/**
|
||||
|
|
@ -52,6 +54,12 @@ class CustomHandler {
|
|||
} else if (entry.name === 'custom.yaml') {
|
||||
// Found a custom.yaml file
|
||||
customPaths.push(fullPath);
|
||||
} else if (
|
||||
entry.name === 'module.yaml' && // Check if this is a custom module (either in _module-installer or in root directory)
|
||||
// Skip if it's in src/modules (those are standard modules)
|
||||
!fullPath.includes(path.join('src', 'modules'))
|
||||
) {
|
||||
customPaths.push(fullPath);
|
||||
}
|
||||
}
|
||||
} catch {
|
||||
|
|
@ -66,37 +74,44 @@ class CustomHandler {
|
|||
}
|
||||
|
||||
/**
|
||||
* Get custom content info from a custom.yaml file
|
||||
* @param {string} customYamlPath - Path to custom.yaml file
|
||||
* Get custom content info from a custom.yaml or module.yaml file
|
||||
* @param {string} configPath - Path to config file
|
||||
* @param {string} projectRoot - Project root directory for calculating relative paths
|
||||
* @returns {Object|null} Custom content info
|
||||
*/
|
||||
async getCustomInfo(customYamlPath) {
|
||||
async getCustomInfo(configPath, projectRoot = null) {
|
||||
try {
|
||||
const configContent = await fs.readFile(customYamlPath, 'utf8');
|
||||
const configContent = await fs.readFile(configPath, 'utf8');
|
||||
|
||||
// Try to parse YAML with error handling
|
||||
let config;
|
||||
try {
|
||||
config = yaml.load(configContent);
|
||||
} catch (parseError) {
|
||||
console.warn(chalk.yellow(`Warning: YAML parse error in ${customYamlPath}:`, parseError.message));
|
||||
console.warn(chalk.yellow(`Warning: YAML parse error in ${configPath}:`, parseError.message));
|
||||
return null;
|
||||
}
|
||||
|
||||
const customDir = path.dirname(customYamlPath);
|
||||
const relativePath = path.relative(process.cwd(), customDir);
|
||||
// Check if this is a module.yaml (module) or custom.yaml (custom content)
|
||||
const isInstallConfig = configPath.endsWith('module.yaml');
|
||||
const configDir = path.dirname(configPath);
|
||||
|
||||
// Use provided projectRoot or fall back to process.cwd()
|
||||
const basePath = projectRoot || process.cwd();
|
||||
const relativePath = path.relative(basePath, configDir);
|
||||
|
||||
return {
|
||||
id: config.code || path.basename(customDir),
|
||||
name: config.name || `Custom: ${path.basename(customDir)}`,
|
||||
description: config.description || 'Custom agents and workflows',
|
||||
path: customDir,
|
||||
id: config.code || 'unknown-code',
|
||||
name: config.name,
|
||||
description: config.description || '',
|
||||
path: configDir,
|
||||
relativePath: relativePath,
|
||||
defaultSelected: config.default_selected === true,
|
||||
config: config,
|
||||
isInstallConfig: isInstallConfig, // Track which type this is
|
||||
};
|
||||
} catch (error) {
|
||||
console.warn(chalk.yellow(`Warning: Failed to read ${customYamlPath}:`, error.message));
|
||||
console.warn(chalk.yellow(`Warning: Failed to read ${configPath}:`, error.message));
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
|
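For reference, the keys getCustomInfo reads above (`code`, `name`, `description`, `default_selected`) are all it needs from a module.yaml or custom.yaml. A minimal sketch of the derivation, with invented values, using the same `js-yaml` dependency:

```js
// Illustrative sketch: what getCustomInfo derives from a minimal module.yaml.
// The YAML keys come from the code above; the concrete values are invented.
const yaml = require('js-yaml');

const moduleYaml = `
code: my-custom-kit
name: My Custom Kit
description: House-style agents and workflows
default_selected: true
`;

const config = yaml.load(moduleYaml);
const info = {
  id: config.code || 'unknown-code',
  name: config.name,
  description: config.description || '',
  defaultSelected: config.default_selected === true,
  isInstallConfig: true, // it came from a module.yaml, not a custom.yaml
};
console.log(info);
```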
@ -128,10 +143,10 @@ class CustomHandler {
|
|||
await fs.ensureDir(bmadAgentsDir);
|
||||
await fs.ensureDir(bmadWorkflowsDir);
|
||||
|
||||
// Process agents - copy entire agents directory structure
|
||||
// Process agents - compile and copy agents
|
||||
const agentsDir = path.join(customPath, 'agents');
|
||||
if (await fs.pathExists(agentsDir)) {
|
||||
await this.copyDirectory(agentsDir, bmadAgentsDir, results, fileTrackingCallback, config);
|
||||
await this.compileAndCopyAgents(agentsDir, bmadAgentsDir, bmadDir, config, fileTrackingCallback, results);
|
||||
|
||||
// Count agent files
|
||||
const agentFiles = await this.findFilesRecursively(agentsDir, ['.agent.yaml', '.md']);
|
||||
|
|
@ -236,13 +251,20 @@ class CustomHandler {
|
|||
// Copy with placeholder replacement for text files
|
||||
const textExtensions = ['.md', '.yaml', '.yml', '.txt', '.json'];
|
||||
if (textExtensions.some((ext) => entry.name.endsWith(ext))) {
|
||||
await this.fileOps.copyFile(sourcePath, targetPath, {
|
||||
bmadFolder: config.bmad_folder || 'bmad',
|
||||
userName: config.user_name || 'User',
|
||||
communicationLanguage: config.communication_language || 'English',
|
||||
outputFolder: config.output_folder || 'docs',
|
||||
});
|
||||
// Read source content
|
||||
let content = await fs.readFile(sourcePath, 'utf8');
|
||||
|
||||
// Replace placeholders
|
||||
content = content.replaceAll('{bmad_folder}', config.bmad_folder || 'bmad');
|
||||
content = content.replaceAll('{user_name}', config.user_name || 'User');
|
||||
content = content.replaceAll('{communication_language}', config.communication_language || 'English');
|
||||
content = content.replaceAll('{output_folder}', config.output_folder || 'docs');
|
||||
|
||||
// Write to target
|
||||
await fs.ensureDir(path.dirname(targetPath));
|
||||
await fs.writeFile(targetPath, content, 'utf8');
|
||||
} else {
|
||||
// Copy binary files as-is
|
||||
await fs.copy(sourcePath, targetPath);
|
||||
}
|
||||
|
||||
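The inline `replaceAll` chain above (and the similar one in compileAndCopyAgents below) substitutes the same four installer placeholders. A standalone sketch that factors the chain into a helper; the defaults are the ones used above, and the helper name is invented:

```js
// Standalone sketch of the placeholder substitution applied to text files above.
function replacePlaceholders(content, config = {}) {
  return content
    .replaceAll('{bmad_folder}', config.bmad_folder || 'bmad')
    .replaceAll('{user_name}', config.user_name || 'User')
    .replaceAll('{communication_language}', config.communication_language || 'English')
    .replaceAll('{output_folder}', config.output_folder || 'docs');
}

console.log(replacePlaceholders('Write results to {project-root}/{bmad_folder}/{output_folder}', { output_folder: 'artifacts' }));
// Write results to {project-root}/bmad/artifacts
```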
|
|
@ -261,6 +283,114 @@ class CustomHandler {
|
|||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Compile .agent.yaml files to .md format and handle sidecars
|
||||
* @param {string} sourceAgentsPath - Source agents directory
|
||||
* @param {string} targetAgentsPath - Target agents directory
|
||||
* @param {string} bmadDir - BMAD installation directory
|
||||
* @param {Object} config - Configuration for placeholder replacement
|
||||
* @param {Function} fileTrackingCallback - Optional callback to track installed files
|
||||
* @param {Object} results - Results object to update
|
||||
*/
|
||||
async compileAndCopyAgents(sourceAgentsPath, targetAgentsPath, bmadDir, config, fileTrackingCallback, results) {
|
||||
// Get all .agent.yaml files recursively
|
||||
const agentFiles = await this.findFilesRecursively(sourceAgentsPath, ['.agent.yaml']);
|
||||
|
||||
for (const agentFile of agentFiles) {
|
||||
const relativePath = path.relative(sourceAgentsPath, agentFile);
|
||||
const targetDir = path.join(targetAgentsPath, path.dirname(relativePath));
|
||||
|
||||
await fs.ensureDir(targetDir);
|
||||
|
||||
const agentName = path.basename(agentFile, '.agent.yaml');
|
||||
const targetMdPath = path.join(targetDir, `${agentName}.md`);
|
||||
// Use the actual bmadDir if available (for when installing to temp dir)
|
||||
const actualBmadDir = config._bmadDir || bmadDir;
|
||||
const customizePath = path.join(actualBmadDir, '_cfg', 'agents', `custom-${agentName}.customize.yaml`);
|
||||
|
||||
// Read and compile the YAML
|
||||
try {
|
||||
const yamlContent = await fs.readFile(agentFile, 'utf8');
|
||||
const { compileAgent } = require('../../../lib/agent/compiler');
|
||||
|
||||
// Create customize template if it doesn't exist
|
||||
if (!(await fs.pathExists(customizePath))) {
|
||||
const { getSourcePath } = require('../../../lib/project-root');
|
||||
const genericTemplatePath = getSourcePath('utility', 'templates', 'agent.customize.template.yaml');
|
||||
if (await fs.pathExists(genericTemplatePath)) {
|
||||
// Copy with placeholder replacement
|
||||
let templateContent = await fs.readFile(genericTemplatePath, 'utf8');
|
||||
templateContent = templateContent.replaceAll('{bmad_folder}', config.bmad_folder || 'bmad');
|
||||
await fs.writeFile(customizePath, templateContent, 'utf8');
|
||||
console.log(chalk.dim(` Created customize: custom-${agentName}.customize.yaml`));
|
||||
}
|
||||
}
|
||||
|
||||
// Compile the agent
|
||||
const { xml } = compileAgent(yamlContent, {}, agentName, relativePath, { config });
|
||||
|
||||
// Replace placeholders in the compiled content
|
||||
let processedXml = xml;
|
||||
processedXml = processedXml.replaceAll('{bmad_folder}', config.bmad_folder || 'bmad');
|
||||
processedXml = processedXml.replaceAll('{user_name}', config.user_name || 'User');
|
||||
processedXml = processedXml.replaceAll('{communication_language}', config.communication_language || 'English');
|
||||
processedXml = processedXml.replaceAll('{output_folder}', config.output_folder || 'docs');
|
||||
|
||||
// Write the compiled MD file
|
||||
await fs.writeFile(targetMdPath, processedXml, 'utf8');
|
||||
|
||||
// Check if agent has sidecar
|
||||
let hasSidecar = false;
|
||||
try {
|
||||
const yamlLib = require('yaml');
|
||||
const agentYaml = yamlLib.parse(yamlContent);
|
||||
hasSidecar = agentYaml?.agent?.metadata?.hasSidecar === true;
|
||||
} catch {
|
||||
// Continue without sidecar processing
|
||||
}
|
||||
|
||||
// Copy sidecar files if agent has hasSidecar flag
|
||||
if (hasSidecar && config.agent_sidecar_folder) {
|
||||
const { copyAgentSidecarFiles } = require('../../../lib/agent/installer');
|
||||
|
||||
// Resolve agent sidecar folder path
|
||||
const projectDir = path.dirname(bmadDir);
|
||||
const resolvedSidecarFolder = config.agent_sidecar_folder
|
||||
.replaceAll('{project-root}', projectDir)
|
||||
.replaceAll('{bmad_folder}', path.basename(bmadDir));
|
||||
|
||||
// Create sidecar directory for this agent
|
||||
const agentSidecarDir = path.join(resolvedSidecarFolder, agentName);
|
||||
await fs.ensureDir(agentSidecarDir);
|
||||
|
||||
// Copy sidecar files
|
||||
const sidecarResult = copyAgentSidecarFiles(path.dirname(agentFile), agentSidecarDir, agentFile);
|
||||
|
||||
if (sidecarResult.copied.length > 0) {
|
||||
console.log(chalk.dim(` Copied ${sidecarResult.copied.length} sidecar file(s) to: ${agentSidecarDir}`));
|
||||
}
|
||||
if (sidecarResult.preserved.length > 0) {
|
||||
console.log(chalk.dim(` Preserved ${sidecarResult.preserved.length} existing sidecar file(s)`));
|
||||
}
|
||||
}
|
||||
|
||||
// Track the file
|
||||
if (fileTrackingCallback) {
|
||||
fileTrackingCallback(targetMdPath);
|
||||
}
|
||||
|
||||
console.log(
|
||||
chalk.dim(
|
||||
` Compiled agent: ${agentName} -> ${path.relative(targetAgentsPath, targetMdPath)}${hasSidecar ? ' (with sidecar)' : ''}`,
|
||||
),
|
||||
);
|
||||
} catch (error) {
|
||||
console.warn(chalk.yellow(` Failed to compile agent ${agentName}:`, error.message));
|
||||
results.errors.push(`Failed to compile agent ${agentName}: ${error.message}`);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = { CustomHandler };
|
||||
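compileAndCopyAgents above turns each `<name>.agent.yaml` into a compiled `<name>.md` at the matching spot in the target tree, plus a `custom-<name>.customize.yaml` under `_cfg/agents`. A small sketch of just that naming convention; the paths are invented and the helper is not part of the installer:

```js
// Illustrative-only sketch of the naming convention used by compileAndCopyAgents above.
const path = require('node:path');

function plannedOutputs(agentFile, targetAgentsDir, bmadCfgAgentsDir) {
  const agentName = path.basename(agentFile, '.agent.yaml');
  return {
    compiledMd: path.join(targetAgentsDir, `${agentName}.md`),
    customizeFile: path.join(bmadCfgAgentsDir, `custom-${agentName}.customize.yaml`),
  };
}

console.log(plannedOutputs('/tmp/my-kit/agents/fred.agent.yaml', 'bmad/my-kit/agents', 'bmad/_cfg/agents'));
// { compiledMd: 'bmad/my-kit/agents/fred.md',
//   customizeFile: 'bmad/_cfg/agents/custom-fred.customize.yaml' }
```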
@ -22,11 +22,12 @@ const { getProjectRoot, getSourcePath, getModulePath } = require('../../../lib/p
|
|||
* await manager.install('core-module', '/path/to/bmad');
|
||||
*/
|
||||
class ModuleManager {
|
||||
constructor() {
|
||||
constructor(options = {}) {
|
||||
// Path to source modules directory
|
||||
this.modulesSourcePath = getSourcePath('modules');
|
||||
this.xmlHandler = new XmlHandler();
|
||||
this.bmadFolderName = 'bmad'; // Default, can be overridden
|
||||
this.scanProjectForModules = options.scanProjectForModules !== false; // Default to true for backward compatibility
|
||||
}
|
||||
|
||||
/**
|
||||
|
|
@ -106,7 +107,7 @@ class ModuleManager {
|
|||
}
|
||||
|
||||
/**
|
||||
* Find all modules in the project by searching for install-config.yaml files
|
||||
* Find all modules in the project by searching for module.yaml files
|
||||
* @returns {Array} List of module paths
|
||||
*/
|
||||
async findModulesInProject() {
|
||||
|
|
@ -143,12 +144,14 @@ class ModuleManager {
|
|||
continue;
|
||||
}
|
||||
|
||||
// Check if this directory contains a module (install-config.yaml OR custom.yaml)
|
||||
const installerConfigPath = path.join(fullPath, '_module-installer', 'install-config.yaml');
|
||||
// Check if this directory contains a module (module.yaml OR custom.yaml)
|
||||
const moduleConfigPath = path.join(fullPath, 'module.yaml');
|
||||
const installerConfigPath = path.join(fullPath, '_module-installer', 'module.yaml');
|
||||
const customConfigPath = path.join(fullPath, '_module-installer', 'custom.yaml');
|
||||
const rootCustomConfigPath = path.join(fullPath, 'custom.yaml');
|
||||
|
||||
if (
|
||||
(await fs.pathExists(moduleConfigPath)) ||
|
||||
(await fs.pathExists(installerConfigPath)) ||
|
||||
(await fs.pathExists(customConfigPath)) ||
|
||||
(await fs.pathExists(rootCustomConfigPath))
|
||||
|
|
@ -175,10 +178,11 @@ class ModuleManager {
|
|||
|
||||
/**
|
||||
* List all available modules (excluding core which is always installed)
|
||||
* @returns {Array} List of available modules with metadata
|
||||
* @returns {Object} Object with modules array and customModules array
|
||||
*/
|
||||
async listAvailable() {
|
||||
const modules = [];
|
||||
const customModules = [];
|
||||
|
||||
// First, scan src/modules (the standard location)
|
||||
if (await fs.pathExists(this.modulesSourcePath)) {
|
||||
|
|
@ -187,12 +191,17 @@ class ModuleManager {
|
|||
for (const entry of entries) {
|
||||
if (entry.isDirectory()) {
|
||||
const modulePath = path.join(this.modulesSourcePath, entry.name);
|
||||
// Check for module structure (install-config.yaml OR custom.yaml)
|
||||
const installerConfigPath = path.join(modulePath, '_module-installer', 'install-config.yaml');
|
||||
// Check for module structure (module.yaml OR custom.yaml)
|
||||
const moduleConfigPath = path.join(modulePath, 'module.yaml');
|
||||
const installerConfigPath = path.join(modulePath, '_module-installer', 'module.yaml');
|
||||
const customConfigPath = path.join(modulePath, '_module-installer', 'custom.yaml');
|
||||
|
||||
// Skip if this doesn't look like a module
|
||||
if (!(await fs.pathExists(installerConfigPath)) && !(await fs.pathExists(customConfigPath))) {
|
||||
if (
|
||||
!(await fs.pathExists(moduleConfigPath)) &&
|
||||
!(await fs.pathExists(installerConfigPath)) &&
|
||||
!(await fs.pathExists(customConfigPath))
|
||||
) {
|
||||
continue;
|
||||
}
|
||||
|
||||
|
|
@ -209,7 +218,8 @@ class ModuleManager {
|
|||
}
|
||||
}
|
||||
|
||||
// Then, find all other modules in the project
|
||||
// Then, find all other modules in the project (only if scanning is enabled)
|
||||
if (this.scanProjectForModules) {
|
||||
const otherModulePaths = await this.findModulesInProject();
|
||||
for (const modulePath of otherModulePaths) {
|
||||
const moduleName = path.basename(modulePath);
|
||||
|
|
@ -221,13 +231,37 @@ class ModuleManager {
|
|||
}
|
||||
|
||||
const moduleInfo = await this.getModuleInfo(modulePath, moduleName, relativePath);
|
||||
if (moduleInfo && !modules.some((m) => m.id === moduleInfo.id)) {
|
||||
if (moduleInfo && !modules.some((m) => m.id === moduleInfo.id) && !customModules.some((m) => m.id === moduleInfo.id)) {
|
||||
// Avoid duplicates - skip if we already have this module ID
|
||||
if (moduleInfo.isCustom) {
|
||||
customModules.push(moduleInfo);
|
||||
} else {
|
||||
modules.push(moduleInfo);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return modules;
|
||||
// Also check for cached custom modules in _cfg/custom/
|
||||
if (this.bmadDir) {
|
||||
const customCacheDir = path.join(this.bmadDir, '_cfg', 'custom');
|
||||
if (await fs.pathExists(customCacheDir)) {
|
||||
const cacheEntries = await fs.readdir(customCacheDir, { withFileTypes: true });
|
||||
for (const entry of cacheEntries) {
|
||||
if (entry.isDirectory()) {
|
||||
const cachePath = path.join(customCacheDir, entry.name);
|
||||
const moduleInfo = await this.getModuleInfo(cachePath, entry.name, '_cfg/custom');
|
||||
if (moduleInfo && !modules.some((m) => m.id === moduleInfo.id) && !customModules.some((m) => m.id === moduleInfo.id)) {
|
||||
moduleInfo.isCustom = true;
|
||||
moduleInfo.fromCache = true;
|
||||
customModules.push(moduleInfo);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return { modules, customModules };
|
||||
}
|
||||
|
||||
/**
|
||||
|
|
@ -238,13 +272,16 @@ class ModuleManager {
|
|||
* @returns {Object|null} Module info or null if not a valid module
|
||||
*/
|
||||
async getModuleInfo(modulePath, defaultName, sourceDescription) {
|
||||
// Check for module structure (install-config.yaml OR custom.yaml)
|
||||
const installerConfigPath = path.join(modulePath, '_module-installer', 'install-config.yaml');
|
||||
// Check for module structure (module.yaml OR custom.yaml)
|
||||
const moduleConfigPath = path.join(modulePath, 'module.yaml');
|
||||
const installerConfigPath = path.join(modulePath, '_module-installer', 'module.yaml');
|
||||
const customConfigPath = path.join(modulePath, '_module-installer', 'custom.yaml');
|
||||
const rootCustomConfigPath = path.join(modulePath, 'custom.yaml');
|
||||
let configPath = null;
|
||||
|
||||
if (await fs.pathExists(installerConfigPath)) {
|
||||
if (await fs.pathExists(moduleConfigPath)) {
|
||||
configPath = moduleConfigPath;
|
||||
} else if (await fs.pathExists(installerConfigPath)) {
|
||||
configPath = installerConfigPath;
|
||||
} else if (await fs.pathExists(customConfigPath)) {
|
||||
configPath = customConfigPath;
|
||||
|
|
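The module.yaml-first lookup above is repeated in several places in this file (findModulesInProject, listAvailable, getModuleInfo, and the lookup by ID below). A standalone sketch of the shared fallback order; the helper name is invented, and fs-extra's `pathExists` is the same call used above:

```js
// Standalone sketch of the config-resolution order repeated above
// (module.yaml first, then the legacy _module-installer locations, then custom.yaml).
const path = require('node:path');
const fs = require('fs-extra');

async function resolveModuleConfigPath(modulePath) {
  const candidates = [
    path.join(modulePath, 'module.yaml'),
    path.join(modulePath, '_module-installer', 'module.yaml'),
    path.join(modulePath, '_module-installer', 'custom.yaml'),
    path.join(modulePath, 'custom.yaml'),
  ];
  for (const candidate of candidates) {
    if (await fs.pathExists(candidate)) return candidate;
  }
  return null;
}

// resolveModuleConfigPath('/path/to/my-custom-kit').then(console.log);
```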
@ -305,10 +342,11 @@ class ModuleManager {
|
|||
// First, check src/modules
|
||||
const srcModulePath = path.join(this.modulesSourcePath, moduleName);
|
||||
if (await fs.pathExists(srcModulePath)) {
|
||||
// Check if this looks like a module (has install-config.yaml)
|
||||
const installerConfigPath = path.join(srcModulePath, '_module-installer', 'install-config.yaml');
|
||||
// Check if this looks like a module (has module.yaml)
|
||||
const moduleConfigPath = path.join(srcModulePath, 'module.yaml');
|
||||
const installerConfigPath = path.join(srcModulePath, '_module-installer', 'module.yaml');
|
||||
|
||||
if (await fs.pathExists(installerConfigPath)) {
|
||||
if ((await fs.pathExists(moduleConfigPath)) || (await fs.pathExists(installerConfigPath))) {
|
||||
return srcModulePath;
|
||||
}
|
||||
|
||||
|
|
@ -330,12 +368,15 @@ class ModuleManager {
|
|||
// Also check by module ID (not just folder name)
|
||||
// Need to read configs to match by ID
|
||||
for (const modulePath of allModulePaths) {
|
||||
const installerConfigPath = path.join(modulePath, '_module-installer', 'install-config.yaml');
|
||||
const moduleConfigPath = path.join(modulePath, 'module.yaml');
|
||||
const installerConfigPath = path.join(modulePath, '_module-installer', 'module.yaml');
|
||||
const customConfigPath = path.join(modulePath, '_module-installer', 'custom.yaml');
|
||||
const rootCustomConfigPath = path.join(modulePath, 'custom.yaml');
|
||||
|
||||
let configPath = null;
|
||||
if (await fs.pathExists(installerConfigPath)) {
|
||||
if (await fs.pathExists(moduleConfigPath)) {
|
||||
configPath = moduleConfigPath;
|
||||
} else if (await fs.pathExists(installerConfigPath)) {
|
||||
configPath = installerConfigPath;
|
||||
} else if (await fs.pathExists(customConfigPath)) {
|
||||
configPath = customConfigPath;
|
||||
|
|
@ -576,7 +617,7 @@ class ModuleManager {
|
|||
}
|
||||
|
||||
// Skip _module-installer directory - it's only needed at install time
|
||||
if (file.startsWith('_module-installer/')) {
|
||||
if (file.startsWith('_module-installer/') || file === 'module.yaml') {
|
||||
continue;
|
||||
}
|
||||
|
||||
|
|
@ -812,8 +853,13 @@ class ModuleManager {
|
|||
// Compile with customizations if any
|
||||
const { xml } = compileAgent(yamlContent, {}, agentName, relativePath, { config: this.coreConfig });
|
||||
|
||||
// Write the compiled MD file
|
||||
// Replace {bmad_folder} placeholder if needed
|
||||
if (xml.includes('{bmad_folder}') && this.bmadFolderName) {
|
||||
const processedXml = xml.replaceAll('{bmad_folder}', this.bmadFolderName);
|
||||
await fs.writeFile(targetMdPath, processedXml, 'utf8');
|
||||
} else {
|
||||
await fs.writeFile(targetMdPath, xml, 'utf8');
|
||||
}
|
||||
|
||||
// Copy sidecar files if agent has hasSidecar flag
|
||||
if (hasSidecar) {
|
||||
@ -445,17 +445,9 @@ function compileAgent(yamlContent, answers = {}, agentName = '', targetPath = ''
|
|||
// Parse YAML
|
||||
const agentYaml = yaml.parse(yamlContent);
|
||||
|
||||
// Inject custom agent name into metadata.name if provided
|
||||
// This is the user's chosen persona name (e.g., "Fred" instead of "Inkwell Von Comitizen")
|
||||
if (agentName && agentYaml.agent && agentYaml.agent.metadata) {
|
||||
// Convert kebab-case to title case for the name field
|
||||
// e.g., "fred-commit-poet" → "Fred Commit Poet"
|
||||
const titleCaseName = agentName
|
||||
.split('-')
|
||||
.map((word) => word.charAt(0).toUpperCase() + word.slice(1))
|
||||
.join(' ');
|
||||
agentYaml.agent.metadata.name = titleCaseName;
|
||||
}
|
||||
// Note: agentName parameter is for UI display only, not for modifying the YAML
|
||||
// The persona name (metadata.name) should always come from the YAML file
|
||||
// We should NEVER modify metadata.name as it's part of the agent's identity
|
||||
|
||||
// Extract install_config
|
||||
const installConfig = extractInstallConfig(agentYaml);
|
||||
@ -242,7 +242,8 @@ function installAgent(agentInfo, answers, targetPath, options = {}) {
|
|||
const { xml, metadata, processedYaml } = compileAgent(fs.readFileSync(agentInfo.yamlFile, 'utf8'), answers);
|
||||
|
||||
// Determine target agent folder name
|
||||
const agentFolderName = metadata.name ? metadata.name.toLowerCase().replaceAll(/\s+/g, '-') : agentInfo.name;
|
||||
// Use the folder name from agentInfo, NOT the persona name from metadata
|
||||
const agentFolderName = agentInfo.name;
|
||||
|
||||
const agentTargetDir = path.join(targetPath, agentFolderName);
|
||||
|
||||
@ -3,6 +3,7 @@ const boxen = require('boxen');
|
|||
const wrapAnsi = require('wrap-ansi');
|
||||
const figlet = require('figlet');
|
||||
const path = require('node:path');
|
||||
const os = require('node:os');
|
||||
|
||||
const CLIUtils = {
|
||||
/**
|
||||
|
|
@ -84,8 +85,8 @@ const CLIUtils = {
|
|||
/**
|
||||
* Display module configuration header
|
||||
* @param {string} moduleName - Module name (fallback if no custom header)
|
||||
* @param {string} header - Custom header from install-config.yaml
|
||||
* @param {string} subheader - Custom subheader from install-config.yaml
|
||||
* @param {string} header - Custom header from module.yaml
|
||||
* @param {string} subheader - Custom subheader from module.yaml
|
||||
*/
|
||||
displayModuleConfigHeader(moduleName, header = null, subheader = null) {
|
||||
// Simple blue banner with custom header/subheader if provided
|
||||
|
|
@ -100,8 +101,8 @@ const CLIUtils = {
|
|||
/**
|
||||
* Display module with no custom configuration
|
||||
* @param {string} moduleName - Module name (fallback if no custom header)
|
||||
* @param {string} header - Custom header from install-config.yaml
|
||||
* @param {string} subheader - Custom subheader from install-config.yaml
|
||||
* @param {string} header - Custom header from module.yaml
|
||||
* @param {string} subheader - Custom subheader from module.yaml
|
||||
*/
|
||||
displayModuleNoConfig(moduleName, header = null, subheader = null) {
|
||||
// Show full banner with header/subheader, just like modules with config
|
||||
|
|
@ -205,6 +206,22 @@ const CLIUtils = {
|
|||
// No longer clear screen or show boxes - just a simple completion message
|
||||
// This is deprecated but kept for backwards compatibility
|
||||
},
|
||||
|
||||
/**
|
||||
* Expand path with ~ expansion
|
||||
* @param {string} inputPath - Path to expand
|
||||
* @returns {string} Expanded path
|
||||
*/
|
||||
expandPath(inputPath) {
|
||||
if (!inputPath) return inputPath;
|
||||
|
||||
// Expand ~ to home directory
|
||||
if (inputPath.startsWith('~')) {
|
||||
return path.join(os.homedir(), inputPath.slice(1));
|
||||
}
|
||||
|
||||
return inputPath;
|
||||
},
|
||||
};
|
||||
|
||||
module.exports = { CLIUtils };
|
||||
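The expandPath helper added to CLIUtils above only expands a leading `~`; everything else passes through unchanged. A short usage sketch, restated standalone here with example paths:

```js
// Standalone restatement of the tilde expansion added above, for illustration.
const path = require('node:path');
const os = require('node:os');

function expandPath(inputPath) {
  if (!inputPath) return inputPath;
  if (inputPath.startsWith('~')) {
    return path.join(os.homedir(), inputPath.slice(1));
  }
  return inputPath;
}

console.log(expandPath('~/projects/my-custom-kit')); // e.g. /home/user/projects/my-custom-kit
console.log(expandPath('./relative/path')); // returned unchanged
```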
@ -59,6 +59,17 @@ class UI {
|
|||
const bmadDir = await installer.findBmadDir(confirmedDirectory);
|
||||
const hasExistingInstall = await fs.pathExists(bmadDir);
|
||||
|
||||
// Always ask for custom content, but we'll handle it differently for new installs
|
||||
let customContentConfig = { hasCustomContent: false };
|
||||
if (hasExistingInstall) {
|
||||
// Existing installation - prompt to add/update custom content
|
||||
customContentConfig = await this.promptCustomContentForExisting();
|
||||
} else {
|
||||
// New installation - we'll prompt after creating the directory structure
|
||||
// For now, set a flag to indicate we should ask later
|
||||
customContentConfig._shouldAsk = true;
|
||||
}
|
||||
|
||||
// Track action type (only set if there's an existing installation)
|
||||
let actionType;
|
||||
|
||||
|
|
@ -85,9 +96,11 @@ class UI {
|
|||
|
||||
// Handle quick update separately
|
||||
if (actionType === 'quick-update') {
|
||||
// Quick update doesn't install custom content - just updates existing modules
|
||||
return {
|
||||
actionType: 'quick-update',
|
||||
directory: confirmedDirectory,
|
||||
customContent: { hasCustomContent: false },
|
||||
};
|
||||
}
|
||||
|
||||
|
|
@ -117,6 +130,64 @@ class UI {
|
|||
const { installedModuleIds } = await this.getExistingInstallation(confirmedDirectory);
|
||||
const coreConfig = await this.collectCoreConfig(confirmedDirectory);
|
||||
|
||||
// For new installations, create the directory structure first so we can cache custom content
|
||||
if (!hasExistingInstall && customContentConfig._shouldAsk) {
|
||||
// Create the bmad directory based on core config
|
||||
const path = require('node:path');
|
||||
const fs = require('fs-extra');
|
||||
const bmadFolderName = coreConfig.bmad_folder || 'bmad';
|
||||
const bmadDir = path.join(confirmedDirectory, bmadFolderName);
|
||||
|
||||
await fs.ensureDir(bmadDir);
|
||||
await fs.ensureDir(path.join(bmadDir, '_cfg'));
|
||||
await fs.ensureDir(path.join(bmadDir, '_cfg', 'custom'));
|
||||
|
||||
// Now prompt for custom content
|
||||
customContentConfig = await this.promptCustomContentLocation();
|
||||
|
||||
// If custom content found, cache it
|
||||
if (customContentConfig.hasCustomContent) {
|
||||
const { CustomModuleCache } = require('../installers/lib/core/custom-module-cache');
|
||||
const cache = new CustomModuleCache(bmadDir);
|
||||
|
||||
const { CustomHandler } = require('../installers/lib/custom/handler');
|
||||
const customHandler = new CustomHandler();
|
||||
const customFiles = await customHandler.findCustomContent(customContentConfig.customPath);
|
||||
|
||||
for (const customFile of customFiles) {
|
||||
const customInfo = await customHandler.getCustomInfo(customFile);
|
||||
if (customInfo && customInfo.id) {
|
||||
// Cache the module source
|
||||
await cache.cacheModule(customInfo.id, customInfo.path, {
|
||||
name: customInfo.name,
|
||||
type: 'custom',
|
||||
});
|
||||
|
||||
console.log(chalk.dim(` Cached ${customInfo.name} to _cfg/custom/${customInfo.id}`));
|
||||
}
|
||||
}
|
||||
|
||||
// Update config to use cached modules
|
||||
customContentConfig.cachedModules = [];
|
||||
for (const customFile of customFiles) {
|
||||
const customInfo = await customHandler.getCustomInfo(customFile);
|
||||
if (customInfo && customInfo.id) {
|
||||
customContentConfig.cachedModules.push({
|
||||
id: customInfo.id,
|
||||
cachePath: path.join(bmadDir, '_cfg', 'custom', customInfo.id),
|
||||
// Store relative path from cache for the manifest
|
||||
relativePath: path.join('_cfg', 'custom', customInfo.id),
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
console.log(chalk.green(`✓ Cached ${customFiles.length} custom module(s)`));
|
||||
}
|
||||
|
||||
// Clear the flag
|
||||
delete customContentConfig._shouldAsk;
|
||||
}
|
||||
|
||||
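The new-install branch above copies each discovered custom module into `<bmad>/_cfg/custom/<id>` and records a matching entry in `customContentConfig.cachedModules`. A small sketch of that bookkeeping; the directory and id are invented, while the field names are the ones used above:

```js
// Illustrative sketch of the cache bookkeeping above: each discovered custom module
// is copied under <bmad>/_cfg/custom/<id> and recorded with its cache-relative path.
const path = require('node:path');

function toCachedModuleEntry(bmadDir, customInfo) {
  return {
    id: customInfo.id,
    cachePath: path.join(bmadDir, '_cfg', 'custom', customInfo.id),
    relativePath: path.join('_cfg', 'custom', customInfo.id),
  };
}

console.log(toCachedModuleEntry('/repo/bmad', { id: 'my-custom-kit' }));
// { id: 'my-custom-kit',
//   cachePath: '/repo/bmad/_cfg/custom/my-custom-kit',
//   relativePath: '_cfg/custom/my-custom-kit' }
```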
// Skip module selection during update/reinstall - keep existing modules
|
||||
let selectedModules;
|
||||
if (actionType === 'update' || actionType === 'reinstall') {
|
||||
|
|
@ -125,8 +196,52 @@ class UI {
|
|||
console.log(chalk.cyan('\n📦 Keeping existing modules: ') + selectedModules.join(', '));
|
||||
} else {
|
||||
// Only show module selection for new installs
|
||||
const moduleChoices = await this.getModuleChoices(installedModuleIds);
|
||||
const moduleChoices = await this.getModuleChoices(installedModuleIds, customContentConfig);
|
||||
selectedModules = await this.selectModules(moduleChoices);
|
||||
|
||||
// Check which custom content items were selected
|
||||
const selectedCustomContent = selectedModules.filter((mod) => mod.startsWith('__CUSTOM_CONTENT__'));
|
||||
|
||||
// For cached modules (new installs), check if any cached modules were selected
|
||||
let selectedCachedModules = [];
|
||||
if (customContentConfig.cachedModules) {
|
||||
selectedCachedModules = selectedModules.filter(
|
||||
(mod) => !mod.startsWith('__CUSTOM_CONTENT__') && customContentConfig.cachedModules.some((cm) => cm.id === mod),
|
||||
);
|
||||
}
|
||||
|
||||
if (selectedCustomContent.length > 0 || selectedCachedModules.length > 0) {
|
||||
customContentConfig.selected = true;
|
||||
|
||||
// Handle directory-based custom content (existing installs)
|
||||
if (selectedCustomContent.length > 0) {
|
||||
customContentConfig.selectedFiles = selectedCustomContent.map((mod) => mod.replace('__CUSTOM_CONTENT__', ''));
|
||||
// Convert custom content to module IDs for installation
|
||||
const customContentModuleIds = [];
|
||||
const { CustomHandler } = require('../installers/lib/custom/handler');
|
||||
const customHandler = new CustomHandler();
|
||||
for (const customFile of customContentConfig.selectedFiles) {
|
||||
// Get the module info to extract the ID
|
||||
const customInfo = await customHandler.getCustomInfo(customFile);
|
||||
if (customInfo) {
|
||||
customContentModuleIds.push(customInfo.id);
|
||||
}
|
||||
}
|
||||
// Filter out custom content markers and add module IDs
|
||||
selectedModules = [...selectedModules.filter((mod) => !mod.startsWith('__CUSTOM_CONTENT__')), ...customContentModuleIds];
|
||||
}
|
||||
|
||||
// For cached modules, they're already module IDs, just mark as selected
|
||||
if (selectedCachedModules.length > 0) {
|
||||
customContentConfig.selectedCachedModules = selectedCachedModules;
|
||||
// No need to filter since they're already proper module IDs
|
||||
}
|
||||
} else if (customContentConfig.hasCustomContent) {
|
||||
// User provided custom content but didn't select any
|
||||
customContentConfig.selected = false;
|
||||
customContentConfig.selectedFiles = [];
|
||||
customContentConfig.selectedCachedModules = [];
|
||||
}
|
||||
}
|
||||
|
||||
// Prompt for AgentVibes TTS integration
|
||||
|
|
@ -147,7 +262,9 @@ class UI {
|
|||
ides: toolSelection.ides,
|
||||
skipIde: toolSelection.skipIde,
|
||||
coreConfig: coreConfig, // Pass collected core config to installer
|
||||
enableAgentVibes: agentVibesConfig.enabled, // AgentVibes TTS integration
|
||||
// Custom content configuration
|
||||
customContent: customContentConfig,
|
||||
enableAgentVibes: agentVibesConfig.enabled,
|
||||
agentVibesInstalled: agentVibesConfig.alreadyInstalled,
|
||||
};
|
||||
}
|
||||
|
|
@ -483,19 +600,142 @@ class UI {
|
|||
/**
|
||||
* Get module choices for selection
|
||||
* @param {Set} installedModuleIds - Currently installed module IDs
|
||||
* @param {Object} customContentConfig - Custom content configuration
|
||||
* @returns {Array} Module choices for inquirer
|
||||
*/
|
||||
async getModuleChoices(installedModuleIds) {
|
||||
const { ModuleManager } = require('../installers/lib/modules/manager');
|
||||
const moduleManager = new ModuleManager();
|
||||
const availableModules = await moduleManager.listAvailable();
|
||||
|
||||
async getModuleChoices(installedModuleIds, customContentConfig = null) {
|
||||
const moduleChoices = [];
|
||||
const isNewInstallation = installedModuleIds.size === 0;
|
||||
const moduleChoices = availableModules.map((mod) => ({
|
||||
name: mod.isCustom ? `${mod.name} ${chalk.red('(Custom)')}` : mod.name,
|
||||
|
||||
const customContentItems = [];
|
||||
const hasCustomContentItems = false;
|
||||
|
||||
// Add custom content items
|
||||
if (customContentConfig && customContentConfig.hasCustomContent) {
|
||||
if (customContentConfig.cachedModules) {
|
||||
// New installation - show cached modules
|
||||
for (const cachedModule of customContentConfig.cachedModules) {
|
||||
// Get the module info from cache
|
||||
const yaml = require('js-yaml');
|
||||
const fs = require('fs-extra');
|
||||
|
||||
// Try multiple possible config file locations
|
||||
const possibleConfigPaths = [
|
||||
path.join(cachedModule.cachePath, 'module.yaml'),
|
||||
path.join(cachedModule.cachePath, 'custom.yaml'),
|
||||
path.join(cachedModule.cachePath, '_module-installer', 'module.yaml'),
|
||||
path.join(cachedModule.cachePath, '_module-installer', 'custom.yaml'),
|
||||
];
|
||||
|
||||
let moduleData = null;
|
||||
let foundPath = null;
|
||||
|
||||
for (const configPath of possibleConfigPaths) {
|
||||
if (await fs.pathExists(configPath)) {
|
||||
try {
|
||||
const yamlContent = await fs.readFile(configPath, 'utf8');
|
||||
moduleData = yaml.load(yamlContent);
|
||||
foundPath = configPath;
|
||||
break;
|
||||
} catch {
|
||||
// Continue to next path
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (moduleData) {
|
||||
// Use the name from the custom info if we have it
|
||||
const moduleName = cachedModule.name || moduleData.name || cachedModule.id;
|
||||
|
||||
customContentItems.push({
|
||||
name: `${chalk.cyan('✓')} ${moduleName} ${chalk.gray('(cached)')}`,
|
||||
value: cachedModule.id, // Use module ID directly
|
||||
checked: true, // Default to selected
|
||||
cached: true,
|
||||
});
|
||||
} else {
|
||||
// Debug: show what paths we tried to check
|
||||
console.log(chalk.dim(`DEBUG: No module config found for ${cachedModule.id}`));
|
||||
console.log(
|
||||
chalk.dim(
|
||||
`DEBUG: Tried paths:`,
|
||||
possibleConfigPaths.map((p) => p.replace(cachedModule.cachePath, '.')),
|
||||
),
|
||||
);
|
||||
console.log(chalk.dim(`DEBUG: cachedModule:`, JSON.stringify(cachedModule, null, 2)));
|
||||
}
|
||||
}
|
||||
} else if (customContentConfig.customPath) {
|
||||
// Existing installation - show from directory
|
||||
const { CustomHandler } = require('../installers/lib/custom/handler');
|
||||
const customHandler = new CustomHandler();
|
||||
const customFiles = await customHandler.findCustomContent(customContentConfig.customPath);
|
||||
|
||||
for (const customFile of customFiles) {
|
||||
const customInfo = await customHandler.getCustomInfo(customFile);
|
||||
if (customInfo) {
|
||||
customContentItems.push({
|
||||
name: `${chalk.cyan('✓')} ${customInfo.name} ${chalk.gray(`(${customInfo.relativePath})`)}`,
|
||||
value: `__CUSTOM_CONTENT__${customFile}`, // Unique value for each custom content
|
||||
checked: true, // Default to selected since user chose to provide custom content
|
||||
path: customInfo.path, // Track path to avoid duplicates
|
||||
});
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Add official modules
|
||||
const { ModuleManager } = require('../installers/lib/modules/manager');
|
||||
// For new installations, don't scan project yet (will do after custom content is discovered)
|
||||
// For existing installations, scan if user selected custom content
|
||||
const shouldScanProject =
|
||||
!isNewInstallation && customContentConfig && customContentConfig.hasCustomContent && customContentConfig.selected;
|
||||
const moduleManager = new ModuleManager({
|
||||
scanProjectForModules: shouldScanProject,
|
||||
});
|
||||
const { modules: availableModules, customModules: customModulesFromProject } = await moduleManager.listAvailable();
|
||||
|
||||
// First, add all items to appropriate sections
|
||||
const allCustomModules = [];
|
||||
|
||||
// Add custom content items from directory
|
||||
allCustomModules.push(...customContentItems);
|
||||
|
||||
// Add custom modules from project scan (if scanning is enabled)
|
||||
for (const mod of customModulesFromProject) {
|
||||
// Skip if this module is already in customContentItems (by path)
|
||||
const isDuplicate = allCustomModules.some((item) => item.path && mod.path && path.resolve(item.path) === path.resolve(mod.path));
|
||||
|
||||
if (!isDuplicate) {
|
||||
allCustomModules.push({
|
||||
name: `${chalk.cyan('✓')} ${mod.name} ${chalk.gray(`(${mod.source})`)}`,
|
||||
value: mod.id,
|
||||
checked: isNewInstallation ? mod.defaultSelected || false : installedModuleIds.has(mod.id),
|
||||
}));
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
// Add separators and modules in correct order
|
||||
if (allCustomModules.length > 0) {
|
||||
// Add separator for custom content, all custom modules, and official content separator
|
||||
moduleChoices.push(
|
||||
new inquirer.Separator('── Custom Content ──'),
|
||||
...allCustomModules,
|
||||
new inquirer.Separator('── Official Content ──'),
|
||||
);
|
||||
}
|
||||
|
||||
// Add official modules (only non-custom ones)
|
||||
for (const mod of availableModules) {
|
||||
if (!mod.isCustom) {
|
||||
moduleChoices.push({
|
||||
name: mod.name,
|
||||
value: mod.id,
|
||||
checked: isNewInstallation ? mod.defaultSelected || false : installedModuleIds.has(mod.id),
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
return moduleChoices;
|
||||
}
|
||||
|
|
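getModuleChoices above ultimately hands inquirer a flat list that mixes `Separator` headings with checkbox entries. A minimal sketch of the shape it builds, with invented labels and ids, and of how a checkbox prompt would consume it:

```js
// Illustrative sketch of the choices list assembled above (labels and ids are invented).
const inquirer = require('inquirer');

const moduleChoices = [
  new inquirer.Separator('── Custom Content ──'),
  { name: '✓ My Custom Kit (cached)', value: 'my-custom-kit', checked: true },
  new inquirer.Separator('── Official Content ──'),
  { name: 'BMad Method', value: 'bmm', checked: false },
];

// inquirer.prompt([{ type: 'checkbox', name: 'modules', message: 'Select modules', choices: moduleChoices }])
//   .then(({ modules }) => console.log(modules));
```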
@ -574,6 +814,116 @@ class UI {
|
|||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Prompt for custom content location
|
||||
* @returns {Object} Custom content configuration
|
||||
*/
|
||||
async promptCustomContentLocation() {
|
||||
try {
|
||||
CLIUtils.displaySection('Custom Content', 'Optional: Add custom agents, workflows, and modules');
|
||||
|
||||
const { hasCustomContent } = await inquirer.prompt([
|
||||
{
|
||||
type: 'list',
|
||||
name: 'hasCustomContent',
|
||||
message: 'Do you have custom content to install?',
|
||||
choices: [
|
||||
{ name: 'No (skip custom content)', value: 'none' },
|
||||
{ name: 'Enter a directory path', value: 'directory' },
|
||||
{ name: 'Enter a URL', value: 'url' },
|
||||
],
|
||||
default: 'none',
|
||||
},
|
||||
]);
|
||||
|
||||
if (hasCustomContent === 'none') {
|
||||
return { hasCustomContent: false };
|
||||
}
|
||||
|
||||
if (hasCustomContent === 'url') {
|
||||
console.log(chalk.yellow('\nURL-based custom content installation is coming soon!'));
|
||||
console.log(chalk.cyan('For now, please download your custom content and choose "Enter a directory path".\n'));
|
||||
return { hasCustomContent: false };
|
||||
}
|
||||
|
||||
if (hasCustomContent === 'directory') {
|
||||
let customPath;
|
||||
while (!customPath) {
|
||||
let expandedPath;
|
||||
const { directory } = await inquirer.prompt([
|
||||
{
|
||||
type: 'input',
|
||||
name: 'directory',
|
||||
message: 'Enter directory to search for custom content (will scan subfolders):',
|
||||
default: process.cwd(), // Use actual current working directory
|
||||
validate: async (input) => {
|
||||
if (!input || input.trim() === '') {
|
||||
return 'Please enter a directory path';
|
||||
}
|
||||
|
||||
try {
|
||||
expandedPath = this.expandUserPath(input.trim());
|
||||
} catch (error) {
|
||||
return error.message;
|
||||
}
|
||||
|
||||
// Check if the path exists
|
||||
const pathExists = await fs.pathExists(expandedPath);
|
||||
if (!pathExists) {
|
||||
return 'Directory does not exist';
|
||||
}
|
||||
|
||||
return true;
|
||||
},
|
||||
},
|
||||
]);
|
||||
|
||||
// Now expand the path for use after the prompt
|
||||
expandedPath = this.expandUserPath(directory.trim());
|
||||
|
||||
// Check if directory has custom content
|
||||
const { CustomHandler } = require('../installers/lib/custom/handler');
|
||||
const customHandler = new CustomHandler();
|
||||
const customFiles = await customHandler.findCustomContent(expandedPath);
|
||||
|
||||
if (customFiles.length === 0) {
|
||||
console.log(chalk.yellow(`\nNo custom content found in ${expandedPath}`));
|
||||
|
||||
const { tryAgain } = await inquirer.prompt([
|
||||
{
|
||||
type: 'confirm',
|
||||
name: 'tryAgain',
|
||||
message: 'Try a different directory?',
|
||||
default: true,
|
||||
},
|
||||
]);
|
||||
|
||||
if (tryAgain) {
|
||||
continue;
|
||||
} else {
|
||||
return { hasCustomContent: false };
|
||||
}
|
||||
}
|
||||
|
||||
customPath = expandedPath;
|
||||
console.log(chalk.green(`\n✓ Found ${customFiles.length} custom content item(s):`));
|
||||
for (const file of customFiles) {
|
||||
const relativePath = path.relative(expandedPath, path.dirname(file));
|
||||
const folderName = path.dirname(file).split(path.sep).pop();
|
||||
console.log(chalk.dim(` • ${folderName} ${chalk.gray(`(${relativePath})`)}`));
|
||||
}
|
||||
}
|
||||
|
||||
return { hasCustomContent: true, customPath };
|
||||
}
|
||||
|
||||
return { hasCustomContent: false };
|
||||
} catch (error) {
|
||||
console.error(chalk.red('Error in custom content prompt:'), error);
|
||||
return { hasCustomContent: false };
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Confirm directory selection
|
||||
* @param {string} directory - The directory path
|
||||
|
|
@ -859,6 +1209,144 @@ class UI {
|
|||
|
||||
return (await fs.pathExists(hookPath)) && (await fs.pathExists(playTtsPath));
|
||||
}
|
||||
|
||||
/**
|
||||
* Prompt for custom content for existing installations
|
||||
* @returns {Object} Custom content configuration
|
||||
*/
|
||||
async promptCustomContentForExisting() {
|
||||
try {
|
||||
CLIUtils.displaySection('Custom Content', 'Add new custom agents, workflows, or modules to your installation');
|
||||
|
||||
const { hasCustomContent } = await inquirer.prompt([
|
||||
{
|
||||
type: 'list',
|
||||
name: 'hasCustomContent',
|
||||
message: 'Do you want to add or update custom content?',
|
||||
choices: [
|
||||
{
|
||||
name: 'No, continue with current installation only',
|
||||
value: false,
|
||||
},
|
||||
{
|
||||
name: 'Yes, I have custom content to add or update',
|
||||
value: true,
|
||||
},
|
||||
],
|
||||
default: false,
|
||||
},
|
||||
]);
|
||||
|
||||
if (!hasCustomContent) {
|
||||
return { hasCustomContent: false };
|
||||
}
|
||||
|
||||
// Get directory path
|
||||
const { customPath } = await inquirer.prompt([
|
||||
{
|
||||
type: 'input',
|
||||
name: 'customPath',
|
||||
message: 'Enter directory to search for custom content (will scan subfolders):',
|
||||
default: process.cwd(),
|
||||
validate: async (input) => {
|
||||
if (!input || input.trim() === '') {
|
||||
return 'Please enter a directory path';
|
||||
}
|
||||
|
||||
// Normalize and check if path exists
|
||||
const expandedPath = CLIUtils.expandPath(input.trim());
|
||||
const pathExists = await fs.pathExists(expandedPath);
|
||||
if (!pathExists) {
|
||||
return 'Directory does not exist';
|
||||
}
|
||||
|
||||
// Check if it's actually a directory
|
||||
const stats = await fs.stat(expandedPath);
|
||||
if (!stats.isDirectory()) {
|
||||
return 'Path must be a directory';
|
||||
}
|
||||
|
||||
return true;
|
||||
},
|
||||
transformer: (input) => {
|
||||
return CLIUtils.expandPath(input);
|
||||
},
|
||||
},
|
||||
]);
|
||||
|
||||
const resolvedPath = CLIUtils.expandPath(customPath);
|
||||
|
||||
// Find custom content
|
||||
const { CustomHandler } = require('../installers/lib/custom/handler');
|
||||
const customHandler = new CustomHandler();
|
||||
const customFiles = await customHandler.findCustomContent(resolvedPath);
|
||||
|
||||
if (customFiles.length === 0) {
|
||||
console.log(chalk.yellow(`\nNo custom content found in ${resolvedPath}`));
|
||||
|
||||
const { tryDifferent } = await inquirer.prompt([
|
||||
{
|
||||
type: 'confirm',
|
||||
name: 'tryDifferent',
|
||||
message: 'Try a different directory?',
|
||||
default: true,
|
||||
},
|
||||
]);
|
||||
|
||||
if (tryDifferent) {
|
||||
return await this.promptCustomContentForExisting();
|
||||
}
|
||||
|
||||
return { hasCustomContent: false };
|
||||
}
|
||||
|
||||
// Display found items
|
||||
console.log(chalk.cyan(`\nFound ${customFiles.length} custom content file(s):`));
|
||||
// Reuse the CustomHandler instance created above rather than re-requiring the module
const customContentItems = [];

for (const customFile of customFiles) {
const customInfo = await customHandler.getCustomInfo(customFile);
|
||||
if (customInfo) {
|
||||
customContentItems.push({
|
||||
name: `${chalk.cyan('✓')} ${customInfo.name} ${chalk.gray(`(${customInfo.relativePath})`)}`,
|
||||
value: `__CUSTOM_CONTENT__${customFile}`,
|
||||
checked: true,
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
// Add option to keep existing custom content
|
||||
console.log(chalk.yellow('\nExisting custom modules will be preserved unless you remove them'));
|
||||
|
||||
const { selectedFiles } = await inquirer.prompt([
|
||||
{
|
||||
type: 'checkbox',
|
||||
name: 'selectedFiles',
|
||||
message: 'Select custom content to add:',
|
||||
choices: customContentItems,
|
||||
pageSize: 15,
|
||||
validate: (answer) => {
|
||||
if (answer.length === 0) {
|
||||
return 'You must select at least one item';
|
||||
}
|
||||
return true;
|
||||
},
|
||||
},
|
||||
]);
|
||||
|
||||
return {
|
||||
hasCustomContent: true,
|
||||
customPath: resolvedPath,
|
||||
selected: true,
|
||||
selectedFiles: selectedFiles,
|
||||
};
|
||||
} catch (error) {
|
||||
console.error(chalk.red('Error configuring custom content:'), error);
|
||||
return { hasCustomContent: false };
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = { UI };
|
||||
|
|
|
|||
|
|
@ -0,0 +1,55 @@
|
|||
# Raven's Verdict - Deep PR Review Tool
|
||||
|
||||
Adversarial code review for GitHub PRs. Works with any LLM agent.
|
||||
|
||||
> **Status: Experimental.** We're still figuring out how to use this effectively. Expect the workflow to evolve.
|
||||
|
||||
## How It Works
|
||||
|
||||
Point your agent at `review-pr.md` and ask it to review a specific PR:
|
||||
|
||||
> "Read tools/maintainer/review-pr.md and apply it to PR #123"
|
||||
|
||||
The tool will:
|
||||
|
||||
1. Check out the PR branch locally
|
||||
2. Run an adversarial review (find at least 5 issues)
|
||||
3. Transform findings into professional tone
|
||||
4. Preview the review and ask before posting
|
||||
|
||||
See `review-pr.md` for full prompt structure, severity ratings, and sandboxing rules.
|
||||
|
||||
## When to Use
|
||||
|
||||
**Good candidates:**
|
||||
|
||||
- PRs with meaningful logic changes
|
||||
- Refactors touching multiple files
|
||||
- New features or architectural changes
|
||||
|
||||
**Skip it for:**
|
||||
|
||||
- Trivial PRs (typo fixes, version bumps, single-line changes)
|
||||
- PRs you've already reviewed manually
|
||||
- PRs where you haven't agreed on the approach yet — fix the direction before the implementation
|
||||
|
||||
## Workflow Tips
|
||||
|
||||
**Always review before posting.** The preview step exists for a reason:
|
||||
|
||||
- **[y] Yes** — Post as-is (only if you're confident)
|
||||
- **[e] Edit** — Modify findings before posting
|
||||
- **[s] Save only** — Write to file, don't post
|
||||
|
||||
The save option is useful when you want to:
|
||||
|
||||
- Hand-edit the review before posting
|
||||
- Use the findings as input for a second opinion ("Hey Claude, here's what Raven found — what do you think?")
|
||||
- Cherry-pick specific findings
|
||||
|
||||
**Trust but verify.** LLM reviews can miss context or flag non-issues. Skim the findings before they hit the PR.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- `gh` CLI installed and authenticated (`gh auth status`)
|
||||
- Any LLM agent capable of running bash commands
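
If you want to sanity-check the prerequisites before handing the prompt to an agent, a minimal preflight might look like this (a sketch, not part of the tool itself):

```bash
# Minimal preflight sketch: confirm gh is installed and authenticated,
# and that you are inside a git checkout the agent can work in.
command -v gh >/dev/null || { echo "gh CLI not found"; exit 1; }
gh auth status || { echo "Run 'gh auth login' first"; exit 1; }
git rev-parse --is-inside-work-tree >/dev/null || { echo "Not inside a git repository"; exit 1; }
```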
|
||||
|
|
@ -0,0 +1,242 @@
|
|||
# Raven's Verdict - Deep PR Review Tool
|
||||
|
||||
A cynical adversarial review, transformed into cold engineering professionalism.
|
||||
|
||||
<orientation>
|
||||
CRITICAL: Sandboxed Execution Rules
|
||||
|
||||
Before proceeding, you MUST verify:
|
||||
|
||||
- [ ] PR number or URL was EXPLICITLY provided in the user's message
|
||||
- [ ] You are NOT inferring the PR from conversation history
|
||||
- [ ] You are NOT looking at git branches, recent commits, or local state
|
||||
- [ ] You are NOT guessing or assuming any PR numbers
|
||||
|
||||
**If no explicit PR number/URL was provided, STOP immediately and ask:**
|
||||
"What PR number or URL should I review?"
|
||||
</orientation>
|
||||
|
||||
<preflight-checks>
|
||||
|
||||
## Preflight Checks
|
||||
|
||||
### 0.1 Parse PR Input
|
||||
|
||||
Extract PR number from user input. Examples of valid formats:
|
||||
|
||||
- `123` (just the number)
|
||||
- `#123` (with hash)
|
||||
- `https://github.com/owner/repo/pull/123` (full URL)
|
||||
|
||||
If a URL specifies a different repository than the current one:
|
||||
|
||||
```bash
|
||||
# Check current repo
|
||||
gh repo view --json nameWithOwner -q '.nameWithOwner'
|
||||
```
|
||||
|
||||
If mismatch detected, ask user:
|
||||
|
||||
> "This PR is from `{detected_repo}` but we're in `{current_repo}`. Proceed with reviewing `{detected_repo}#123`? (y/n)"
|
||||
|
||||
If user confirms, store `{REPO}` for use in all subsequent `gh` commands.
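
For concreteness, one way the number and repository could be pulled out of the input before any `gh` commands run (a sketch only; the URL value is illustrative):

```bash
# Sketch: extract owner/repo and PR number from a full PR URL, or fall back
# to the current repo when only a bare number / "#123" was given.
PR_INPUT="https://github.com/owner/repo/pull/123"   # illustrative input
if [[ "$PR_INPUT" =~ github\.com/([^/]+/[^/]+)/pull/([0-9]+) ]]; then
  REPO="${BASH_REMATCH[1]}"
  PR_NUMBER="${BASH_REMATCH[2]}"
else
  PR_NUMBER="${PR_INPUT#\#}"                         # handles "123" and "#123"
  REPO="$(gh repo view --json nameWithOwner -q '.nameWithOwner')"
fi
echo "Reviewing ${REPO}#${PR_NUMBER}"
```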
|
||||
|
||||
### 0.2 Ensure Clean Checkout
|
||||
|
||||
Verify the working tree is clean and check out the PR branch.
|
||||
|
||||
```bash
|
||||
# Check for uncommitted changes
|
||||
git status --porcelain
|
||||
```
|
||||
|
||||
If output is non-empty, STOP and tell user:
|
||||
|
||||
> "You have uncommitted changes. Please commit or stash them before running a PR review."
|
||||
|
||||
If clean, fetch and checkout the PR branch:
|
||||
|
||||
```bash
|
||||
# Fetch and checkout PR branch
|
||||
# For cross-repo PRs, include --repo {REPO}
|
||||
gh pr checkout {PR_NUMBER} [--repo {REPO}]
|
||||
```
|
||||
|
||||
If checkout fails, STOP and report the error.
|
||||
|
||||
Now you're on the PR branch with full access to all files as they exist in the PR.
|
||||
|
||||
### 0.3 Check PR Size
|
||||
|
||||
```bash
|
||||
# For cross-repo PRs, include --repo {REPO}
|
||||
gh pr view {PR_NUMBER} [--repo {REPO}] --json additions,deletions,changedFiles -q '{"additions": .additions, "deletions": .deletions, "files": .changedFiles}'
|
||||
```
|
||||
|
||||
**Size thresholds:**
|
||||
|
||||
| Metric | Warning Threshold |
|
||||
| ------------- | ----------------- |
|
||||
| Files changed | > 50 |
|
||||
| Lines changed | > 5000 |
|
||||
|
||||
If thresholds exceeded, ask user:
|
||||
|
||||
> "This PR has {X} files and {Y} line changes. That's large.
|
||||
>
|
||||
> **[f] Focus** - Pick specific files or directories to review
|
||||
> **[p] Proceed** - Review everything (may be slow/expensive)
|
||||
> **[a] Abort** - Stop here"
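
A rough sketch of how those thresholds could be checked before prompting (numbers mirror the table above; adjust as needed):

```bash
# Sketch: compare the PR's size against the thresholds above (values illustrative)
FILES=$(gh pr view "$PR_NUMBER" --json changedFiles -q '.changedFiles')
LINES=$(gh pr view "$PR_NUMBER" --json additions,deletions -q '.additions + .deletions')
if [ "$FILES" -gt 50 ] || [ "$LINES" -gt 5000 ]; then
  echo "Large PR: ${FILES} files, ${LINES} lines changed - ask the user how to proceed"
fi
```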
|
||||
|
||||
### 0.4 Note Binary Files
|
||||
|
||||
```bash
|
||||
# For cross-repo PRs, include --repo {REPO}
|
||||
gh pr diff {PR_NUMBER} [--repo {REPO}] --name-only | grep -E '\.(png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot|pdf|zip|tar|gz|bin|exe|dll|so|dylib)$' || echo "No binary files detected"
|
||||
```
|
||||
|
||||
Store list of binary files to skip. Note them in final output.
|
||||
|
||||
</preflight-checks>
|
||||
|
||||
<adversarial-review>
|
||||
|
||||
### 1.1 Run Cynical Review
|
||||
|
||||
**INTERNAL PERSONA - Never post this directly:**
|
||||
|
||||
Task: You are a cynical, jaded code reviewer with zero patience for sloppy work. This PR was submitted by a clueless weasel and you expect to find problems. Find at least five issues to fix or improve in it. Number them. Be skeptical of everything. Ultrathink.
|
||||
|
||||
Output format:
|
||||
|
||||
```markdown
|
||||
### [NUMBER]. [FINDING TITLE] [likely]
|
||||
|
||||
**Severity:** [EMOJI] [LEVEL]
|
||||
|
||||
[DESCRIPTION - be specific, include file:line references]
|
||||
```
|
||||
|
||||
Severity scale:
|
||||
|
||||
| Level | Emoji | Meaning |
|
||||
| -------- | ----- | ------------------------------------------------------- |
|
||||
| Critical | 🔴 | Security issue, data loss risk, or broken functionality |
|
||||
| Moderate | 🟡 | Bug, performance issue, or significant code smell |
|
||||
| Minor | 🟢 | Style, naming, minor improvement opportunity |
|
||||
|
||||
Likely tag:
|
||||
|
||||
- Add `[likely]` to findings you have high confidence in, e.g. those backed by direct evidence in the diff
|
||||
- Sort findings by severity (Critical → Moderate → Minor), not by confidence
|
||||
|
||||
</adversarial-review>
|
||||
|
||||
<tone-transformation>
|
||||
|
||||
**Transform the cynical output into cold engineering professionalism.**
|
||||
|
||||
**Transformation rules:**
|
||||
|
||||
1. Remove all inflammatory language, insults, assumptions about the author
|
||||
2. Keep all technical substance, file references, severity ratings and likely tag
|
||||
3. Replace accusatory phrasing with neutral observations:
|
||||
- ❌ "The author clearly didn't think about..."
|
||||
- ✅ "This implementation may not account for..."
|
||||
4. Preserve skepticism as healthy engineering caution:
|
||||
- ❌ "This will definitely break in production"
|
||||
- ✅ "This pattern has historically caused issues in production environments"
|
||||
5. Add a suggested fix to each finding where one is clear
6. Keep suggestions actionable and specific
|
||||
|
||||
Output format after transformation:
|
||||
|
||||
```markdown
|
||||
## PR Review: #{PR_NUMBER}
|
||||
|
||||
**Title:** {PR_TITLE}
|
||||
**Author:** @{AUTHOR}
|
||||
**Branch:** {HEAD} → {BASE}
|
||||
|
||||
---
|
||||
|
||||
### Findings
|
||||
|
||||
[TRANSFORMED FINDINGS HERE]
|
||||
|
||||
---
|
||||
|
||||
### Summary
|
||||
|
||||
**Critical:** {COUNT} | **Moderate:** {COUNT} | **Minor:** {COUNT}
|
||||
|
||||
[BINARY_FILES_NOTE if any]
|
||||
|
||||
---
|
||||
|
||||
_Review generated by Raven's Verdict. LLM-produced analysis - findings may be incorrect or lack context. Verify before acting._
|
||||
```
|
||||
|
||||
</tone-transformation>
|
||||
|
||||
<post-review>
|
||||
### 3.1 Preview
|
||||
|
||||
Display the complete transformed review to the user.
|
||||
|
||||
```
|
||||
══════════════════════════════════════════════════════
|
||||
PREVIEW - This will be posted to PR #{PR_NUMBER}
|
||||
══════════════════════════════════════════════════════
|
||||
|
||||
[FULL REVIEW CONTENT]
|
||||
|
||||
══════════════════════════════════════════════════════
|
||||
```
|
||||
|
||||
### 3.2 Confirm
|
||||
|
||||
Ask user for explicit confirmation:
|
||||
|
||||
> **Ready to post this review to PR #{PR_NUMBER}?**
|
||||
>
|
||||
> **[y] Yes** - Post as comment
|
||||
> **[n] No** - Abort, do not post
|
||||
> **[e] Edit** - Let me modify before posting
|
||||
> **[s] Save only** - Save locally, don't post
|
||||
|
||||
### 3.3 Post or Save
|
||||
|
||||
**Write review to a temp file, then post:**
|
||||
|
||||
1. Write the review content to a temp file with a unique name (include PR number to avoid collisions)
|
||||
2. Post using `gh pr comment {PR_NUMBER} [--repo {REPO}] --body-file {path}`
|
||||
3. Delete the temp file after successful post
|
||||
|
||||
Do NOT use heredocs or `echo` - Markdown code blocks will break shell parsing. Use your file writing tool instead.
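
As a sketch, the post-and-cleanup step could look like this once the review has been written to a temp file (the path shown is illustrative; add `--repo {REPO}` for cross-repo PRs):

```bash
# Sketch: post the previously written review file and remove it only on success
REVIEW_FILE="/tmp/raven-review-${PR_NUMBER}.md"   # illustrative path
gh pr comment "$PR_NUMBER" --body-file "$REVIEW_FILE" && rm -f "$REVIEW_FILE"
```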
|
||||
|
||||
**If auth fails or post fails:**
|
||||
|
||||
1. Display error prominently:
|
||||
|
||||
```
|
||||
⚠️ FAILED TO POST REVIEW
|
||||
Error: {ERROR_MESSAGE}
|
||||
```
|
||||
|
||||
2. Keep the temp file and tell the user where it is, so they can post manually with:
|
||||
`gh pr comment {PR_NUMBER} [--repo {REPO}] --body-file {path}`
|
||||
|
||||
**If save only (s):**
|
||||
|
||||
Keep the temp file and inform user of location.
|
||||
|
||||
</post-review>
|
||||
|
||||
<notes>
|
||||
- The "cynical asshole" phase is internal only - never posted
|
||||
- Tone transform MUST happen before any external output
|
||||
- When in doubt, ask the user - never assume
|
||||
- If you're unsure about severity, err toward higher severity
|
||||
- If you're unsure about confidence, be honest and leave the `[likely]` tag off
|
||||
</notes>
|
||||
|
|
@ -0,0 +1,124 @@
|
|||
/**
|
||||
* Migration script to convert relative paths to absolute paths in custom module manifests
|
||||
* This should be run once to update existing installations
|
||||
*/
|
||||
|
||||
const fs = require('fs-extra');
|
||||
const path = require('node:path');
|
||||
const yaml = require('yaml');
|
||||
const chalk = require('chalk');
|
||||
|
||||
/**
|
||||
* Find BMAD directory in project
|
||||
*/
|
||||
function findBmadDir(projectDir = process.cwd()) {
|
||||
const possibleNames = ['bmad', '.bmad'];
|
||||
|
||||
for (const name of possibleNames) {
|
||||
const bmadDir = path.join(projectDir, name);
|
||||
if (fs.existsSync(bmadDir)) {
|
||||
return bmadDir;
|
||||
}
|
||||
}
|
||||
|
||||
return null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Update manifest to use absolute paths
|
||||
*/
|
||||
async function updateManifest(manifestPath, projectRoot) {
|
||||
console.log(chalk.cyan(`\nUpdating manifest: ${manifestPath}`));
|
||||
|
||||
const content = await fs.readFile(manifestPath, 'utf8');
|
||||
const manifest = yaml.parse(content);
|
||||
|
||||
if (!manifest.customModules || manifest.customModules.length === 0) {
|
||||
console.log(chalk.dim(' No custom modules found'));
|
||||
return false;
|
||||
}
|
||||
|
||||
let updated = false;
|
||||
|
||||
for (const customModule of manifest.customModules) {
  if (customModule.relativePath && !customModule.sourcePath) {
    // Convert relative path to absolute, keeping the old value for logging
    const oldPath = customModule.relativePath;
    const absolutePath = path.resolve(projectRoot, oldPath);
    customModule.sourcePath = absolutePath;

    // Remove the old relativePath
    delete customModule.relativePath;

    console.log(chalk.green(`  ✓ Updated ${customModule.id}: ${oldPath} → ${absolutePath}`));
    updated = true;
  } else if (customModule.sourcePath && !path.isAbsolute(customModule.sourcePath)) {
    // Source path exists but is not absolute; resolve it against the project root
    // for consistency with the branch above, keeping the old value for logging
    const oldPath = customModule.sourcePath;
    const absolutePath = path.resolve(projectRoot, oldPath);
    customModule.sourcePath = absolutePath;

    console.log(chalk.green(`  ✓ Updated ${customModule.id}: ${oldPath} → ${absolutePath}`));
    updated = true;
  }
}
|
||||
|
||||
if (updated) {
|
||||
// Write back the updated manifest
|
||||
// The 'yaml' package required above exposes stringify(), not js-yaml's dump();
// lineWidth: 0 disables line folding for long values
const yamlStr = yaml.stringify(manifest, {
  indent: 2,
  lineWidth: 0,
});
|
||||
|
||||
await fs.writeFile(manifestPath, yamlStr);
|
||||
console.log(chalk.green(' Manifest updated successfully'));
|
||||
} else {
|
||||
console.log(chalk.dim(' All paths already absolute'));
|
||||
}
|
||||
|
||||
return updated;
|
||||
}
|
||||
|
||||
/**
|
||||
* Main migration function
|
||||
*/
|
||||
async function migrate(directory) {
|
||||
const projectRoot = path.resolve(directory || process.cwd());
|
||||
const bmadDir = findBmadDir(projectRoot);
|
||||
|
||||
if (!bmadDir) {
|
||||
console.error(chalk.red('✗ No BMAD installation found in directory'));
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
console.log(chalk.blue.bold('🔄 BMAD Custom Module Path Migration'));
|
||||
console.log(chalk.dim(`Project: ${projectRoot}`));
|
||||
console.log(chalk.dim(`BMAD Directory: ${bmadDir}`));
|
||||
|
||||
const manifestPath = path.join(bmadDir, '_cfg', 'manifest.yaml');
|
||||
|
||||
if (!fs.existsSync(manifestPath)) {
|
||||
console.error(chalk.red('✗ No manifest.yaml found'));
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
const updated = await updateManifest(manifestPath, projectRoot);
|
||||
|
||||
if (updated) {
|
||||
console.log(chalk.green.bold('\n✨ Migration completed successfully!'));
|
||||
console.log(chalk.dim('Custom modules now use absolute source paths.'));
|
||||
} else {
|
||||
console.log(chalk.yellow('\n⚠ No migration needed - paths already absolute'));
|
||||
}
|
||||
}
|
||||
|
||||
// Run migration if called directly
|
||||
if (require.main === module) {
|
||||
const directory = process.argv[2];
|
||||
migrate(directory).catch((error) => {
|
||||
console.error(chalk.red('\n✗ Migration failed:'), error.message);
|
||||
process.exit(1);
|
||||
});
|
||||
}
|
||||
|
||||
module.exports = { migrate };
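
Assuming the script is saved somewhere like `tools/cli/lib/migrate-custom-paths.js` (the path is a guess, not taken from this diff), it could be run once against a project like this:

```bash
# Hypothetical invocation - adjust the script path to wherever this file actually lives
node tools/cli/lib/migrate-custom-paths.js /path/to/your/project
```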
|
||||