Compare commits

...

6 Commits

Author SHA1 Message Date
Brian af5e1c929d
Merge branch 'main' into feature/more-cynical-review 2025-12-18 16:19:25 +08:00
sjennings e39aa33eea
fix(bmgd): add workflow status update to game-architecture completion (#1161)
* fix(bmgd): add workflow status update to game-architecture completion

The game-architecture workflow was not updating the bmgd-workflow-status.yaml
file on completion, unlike other BMGD workflows (narrative, brainstorm-game).

Changes:
- Add step 4 "Update Workflow Status" to update create-architecture status
- Renumber subsequent steps (5-8 → 6-9)
- Add success metric for workflow status update
- Add failure condition for missing status update

* feat(bmgd): add generate-project-context workflow for game development

Adds a new workflow to create optimized project-context.md files for AI agent
consistency in game development projects.

New workflow files:
- workflow.md: Main workflow entry point
- project-context-template.md: Template for context file
- steps/step-01-discover.md: Context discovery & initialization
- steps/step-02-generate.md: Rules generation with A/P/C menus
- steps/step-03-complete.md: Finalization & optimization

Integration:
- Added generate-project-context trigger to game-architect agent menu
- Added project context creation option to game-architecture completion step
- Renumbered steps 6-9 → 7-10 to accommodate new step 6

Adapted from BMM generate-project-context with game-specific:
- Engine patterns (Unity, Unreal, Godot)
- Performance and frame budget rules
- Platform-specific requirements
- Game testing patterns

---------

Co-authored-by: Scott Jennings <scott.jennings+CIGINT@cloudimperiumgames.com>
Co-authored-by: Brian <bmadcode@gmail.com>
2025-12-18 16:14:18 +08:00
Alex Verkhovsky 6c56a28e7c refactor(bmm): convert quick-dev workflow to sharded format
Convert Quick Dev from monolithic (workflow.yaml + instructions.md) to
sharded architecture (workflow.md + steps/) to combat the "lost in the middle"
problem during long implementation sessions.

Changes:
- Add 6 step files for focused execution (mode-detection, context-gathering,
  execute, self-check, adversarial-review, resolve-findings)
- Add checkpoint handlers for [a] Advanced Elicitation and [p] Party Mode
- Create review-adversarial-general.xml as reusable core task
- Remove CLI fallback (claude --print) for platform-agnostic design

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-18 01:07:28 -07:00
Alex Verkhovsky 2da9aebaa8
docs: add DigitalOcean sponsor attribution (#1162) 2025-12-18 15:58:54 +08:00
Brian Madison 5c756b6404 chore: bump version to 6.0.0-alpha.19
Bug fix:
- Fixed _bmad folder stutter with agent custom files
- Removed unnecessary backup file causing installer bloat
- Improved path handling for agent customizations
2025-12-18 12:52:10 +08:00
Brian Madison 23f650ff4d fixed _bmad folder stutter with agent custom files 2025-12-18 03:22:46 +08:00
24 changed files with 1911 additions and 4037 deletions

View File

@@ -1,5 +1,25 @@
# Changelog
## [6.0.0-alpha.19]
**Release: December 18, 2025**
### 🐛 Bug Fixes
**Installer Stability:**
- **Fixed \_bmad Folder Stutter**: Resolved issue with duplicate \_bmad folder creation when applying agent custom files
- **Cleaner Installation**: Removed unnecessary backup file that was causing bloat in the installer
- **Streamlined Agent Customization**: Fixed path handling for agent custom files to prevent folder duplication
### 📊 Statistics
- **3 files changed** with critical fix
- **3,688 lines removed** by eliminating backup files
- **Improved installer performance** and stability
---
## [6.0.0-alpha.18]
**Release: December 18, 2025**

View File

@@ -231,6 +231,8 @@ MIT License - See [LICENSE](LICENSE) for details.
**Trademarks:** BMad™ and BMAD-METHOD™ are trademarks of BMad Code, LLC.
Supported by:&nbsp;&nbsp;<a href="https://m.do.co/c/00f11bd932bb"><img src="https://opensource.nyc3.cdn.digitaloceanspaces.com/attribution/assets/SVG/DO_Logo_horizontal_blue.svg" height="24" alt="DigitalOcean" style="vertical-align: middle;"></a>
---
<p align="center">

View File

@@ -1,7 +1,7 @@
{
"$schema": "https://json.schemastore.org/package.json",
"name": "bmad-method",
"version": "6.0.0-alpha.18",
"version": "6.0.0-alpha.19",
"description": "Breakthrough Method of Agile AI-driven Development",
"keywords": [
"agile",

View File

@@ -0,0 +1,82 @@
<task id="_bmad/core/tasks/review-adversarial-general.xml" name="Adversarial Review (General)">
<objective>Cynically review content and produce numbered findings with severity and classification</objective>
<inputs>
<input name="content" desc="Content to review - diff, spec, story, doc, or any artifact" />
</inputs>
<llm critical="true">
<i>You are a cynical, jaded reviewer with zero patience for sloppy work</i>
<i>The content was submitted by a clueless weasel and you expect to find problems</i>
<i>Find at least five issues to fix or improve - be skeptical of everything</i>
<i>Zero findings is suspicious - if you find nothing, halt and question your analysis</i>
</llm>
<flow>
<step n="1" title="Receive Content">
<action>Load the content to review from provided input or context</action>
<action>Identify content type (diff, spec, story, doc, etc.) to calibrate review approach</action>
</step>
<step n="2" title="Adversarial Analysis" critical="true">
<mandate>Review with extreme skepticism - assume problems exist</mandate>
<analysis-areas>
<area>Correctness - Is it actually right? Look for logic errors, bugs, gaps</area>
<area>Completeness - What's missing? Edge cases, error handling, validation</area>
<area>Consistency - Does it match patterns, conventions, existing code?</area>
<area>Clarity - Is it understandable? Naming, structure, documentation</area>
<area>Quality - Is it good enough? Performance, security, maintainability</area>
</analysis-areas>
<action>Find at least 5 issues - dig deep, don't accept surface-level "looks good"</action>
</step>
<step n="3" title="Classify Findings">
<action>For each finding, assign:</action>
<finding-id>F1, F2, F3... (sequential)</finding-id>
<severity>
<level name="critical">Must fix - blocks ship, causes failures</level>
<level name="high">Should fix - significant issue, notable risk</level>
<level name="medium">Consider fixing - minor issue, small improvement</level>
<level name="low">Nitpick - optional, stylistic, nice-to-have</level>
</severity>
<classification>
<type name="real">Confirmed issue - should address</type>
<type name="noise">False positive - no action needed</type>
<type name="uncertain">Needs discussion - could go either way</type>
</classification>
</step>
<step n="4" title="Present Findings">
<action>Output findings in structured format</action>
</step>
</flow>
<findings-format>
**Adversarial Review Findings**
| ID | Severity | Classification | Finding |
|----|----------|----------------|---------|
| F1 | {severity} | {classification} | {description} |
| F2 | {severity} | {classification} | {description} |
| ... | | | |
**Summary:** {count} findings - {critical_count} critical, {high_count} high, {medium_count} medium, {low_count} low
</findings-format>
<halt-conditions>
<condition>HALT if zero findings - this is suspicious, re-analyze or ask for guidance</condition>
<condition>HALT if content is empty or unreadable</condition>
</halt-conditions>
<critical-rules>
<rule>NEVER accept "looks good" without deep analysis</rule>
<rule>ALWAYS find at least 5 issues - if you can't, you're not looking hard enough</rule>
<rule>ALWAYS assign ID, severity, and classification to each finding</rule>
<rule>Be cynical but fair - classify noise as noise, real as real</rule>
</critical-rules>
</task>

View File

@@ -33,6 +33,10 @@ agent:
exec: "{project-root}/_bmad/bmgd/workflows/3-technical/game-architecture/workflow.md"
description: Produce a Scale Adaptive Game Architecture
- trigger: generate-project-context
exec: "{project-root}/_bmad/bmgd/workflows/3-technical/generate-project-context/workflow.md"
description: Create optimized project-context.md for AI agent consistency
- trigger: correct-course
workflow: "{project-root}/_bmad/bmgd/workflows/4-production/correct-course/workflow.yaml"
description: Course Correction Analysis (when implementation is off-track)

View File

@@ -12,6 +12,7 @@ outputFile: '{output_folder}/game-architecture.md'
# Handoff References
epicWorkflow: '{project-root}/_bmad/bmgd/workflows/4-production/epic-workflow/workflow.yaml'
projectContextWorkflow: '{project-root}/_bmad/bmgd/workflows/3-technical/generate-project-context/workflow.md'
---
# Step 9: Completion
@@ -131,7 +132,17 @@ platform: '{{platform}}'
---
````
### 4. Present Completion Summary
### 4. Update Workflow Status
**If not in standalone mode:**
Load `{output_folder}/bmgd-workflow-status.yaml` and:
- Update `create-architecture` status to the output file path
- Preserve all comments and structure
- Determine next workflow in sequence
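A minimal sketch of the resulting update, assuming a flat key-per-workflow layout (the actual `bmgd-workflow-status.yaml` schema may differ):
```yaml
# bmgd-workflow-status.yaml - hypothetical layout, for illustration only
# Existing comments and key order must be preserved when rewriting this file.
create-gdd: '{output_folder}/gdd.md' # completed earlier
create-architecture: '{output_folder}/game-architecture.md' # was: pending
create-epics: pending # next workflow in sequence
```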
### 5. Present Completion Summary
"**Architecture Complete!**
@@ -158,9 +169,50 @@ platform: '{{platform}}'
**Document saved to:** `{outputFile}`
Do you want to review or adjust anything before we finalize?"
Do you want to review or adjust anything before we finalize?
### 5. Handle Review Requests
**Optional Enhancement: Project Context File**
Would you like to create a `project-context.md` file? This is a concise, optimized guide for AI agents that captures:
- Critical engine-specific rules they might miss
- Specific patterns and conventions for your game project
- Performance and optimization requirements
- Anti-patterns and edge cases to avoid
{if_existing_project_context}
I noticed you already have a project context file. Would you like to update it with your new architectural decisions?
{else}
This file helps ensure AI agents implement game code consistently with your project's unique requirements and patterns.
{/if_existing_project_context}
**Create/Update project context?** [Y/N]"
### 6. Handle Project Context Creation Choice
If user responds 'Y' or 'yes' to creating/updating project context:
"Excellent choice! Let me launch the Generate Project Context workflow to create a comprehensive guide for AI agents.
This will help ensure consistent implementation by capturing:
- Engine-specific patterns and rules
- Performance and optimization conventions from your architecture
- Testing and quality standards
- Anti-patterns to avoid
The workflow will collaborate with you to create an optimized `project-context.md` file that AI agents will read before implementing any game code."
**Execute the Generate Project Context workflow:**
- Load and execute: `{projectContextWorkflow}`
- The workflow will handle discovery, generation, and completion of the project context file
- After completion, return here for final handoff
If user responds 'N' or 'no':
"Understood! Your architecture is complete and ready for implementation. You can always create a project context file later using the Generate Project Context workflow if needed."
### 7. Handle Review Requests
**If user wants to review:**
@@ -179,7 +231,7 @@ Or type 'all' to see the complete document."
**Show requested section and allow edits.**
### 6. Present Next Steps Menu
### 8. Present Next Steps Menu
**After user confirms completion:**
@@ -204,7 +256,7 @@ Or type 'all' to see the complete document."
2. Proceed to Epic creation workflow
3. Exit workflow"
### 7. Handle User Selection
### 9. Handle User Selection
Based on user choice:
@@ -224,7 +276,7 @@ Based on user choice:
- Confirm document is saved and complete
- Exit workflow gracefully
### 8. Provide Handoff Guidance
### 10. Provide Handoff Guidance
**For Epic Creation handoff:**
@@ -270,6 +322,7 @@ This is the final step. Ensure:
- Development setup is complete
- Document status updated to 'complete'
- Frontmatter shows all steps completed
- Workflow status updated (if tracking)
- User has clear next steps
- Document saved and ready for AI agent consumption
@@ -278,6 +331,7 @@ This is the final step. Ensure:
- Missing executive summary
- Incomplete development setup
- Frontmatter not updated
- Status not updated when tracking
- No clear next steps provided
- User left without actionable guidance

View File

@@ -0,0 +1,20 @@
---
project_name: '{{project_name}}'
user_name: '{{user_name}}'
date: '{{date}}'
sections_completed: []
---
# Project Context for AI Agents
_This file contains critical rules and patterns that AI agents must follow when implementing game code in this project. Focus on unobvious details that agents might otherwise miss._
---
## Technology Stack & Versions
_Documented after discovery phase_
## Critical Implementation Rules
_Documented after discovery phase_

View File

@@ -0,0 +1,201 @@
# Step 1: Context Discovery & Initialization
## MANDATORY EXECUTION RULES (READ FIRST):
- NEVER generate content without user input
- ALWAYS treat this as collaborative discovery between technical peers
- YOU ARE A FACILITATOR, not a content generator
- FOCUS on discovering existing project context and technology stack
- IDENTIFY critical implementation rules that AI agents need
- ABSOLUTELY NO TIME ESTIMATES
## EXECUTION PROTOCOLS:
- Show your analysis before taking any action
- Read existing project files to understand current context
- Initialize document and update frontmatter
- FORBIDDEN to load next step until discovery is complete
## CONTEXT BOUNDARIES:
- Variables from workflow.md are available in memory
- Focus on existing project files and architecture decisions
- Look for patterns, conventions, and unique requirements
- Prioritize rules that prevent implementation mistakes
## YOUR TASK:
Discover the project's game engine, technology stack, existing patterns, and critical implementation rules that AI agents must follow when writing game code.
## DISCOVERY SEQUENCE:
### 1. Check for Existing Project Context
First, check if project context already exists:
- Look for file at `{output_folder}/project-context.md`
- If exists: Read complete file to understand existing rules
- Present to user: "Found existing project context with {number_of_sections} sections. Would you like to update this or create a new one?"
### 2. Discover Game Engine & Technology Stack
Load and analyze project files to identify technologies:
**Architecture Document:**
- Look for `{output_folder}/game-architecture.md` or `{output_folder}/architecture.md`
- Extract engine choice with specific version (Unity, Unreal, Godot, custom)
- Note architectural decisions that affect implementation
**Engine-Specific Files:**
- Unity: Check for `ProjectSettings/ProjectVersion.txt`, `Packages/manifest.json`
- Unreal: Check for `.uproject` files, `Config/DefaultEngine.ini`
- Godot: Check for `project.godot`, `export_presets.cfg`
- Custom: Check for engine config files, build scripts
**Package/Dependency Files:**
- Unity: `Packages/manifest.json`, NuGet packages
- Unreal: `.Build.cs` files, plugin configs
- Godot: `addons/` directory, GDExtension configs
- Web-based: `package.json`, `requirements.txt`
**Configuration Files:**
- Build tool configs
- Linting and formatting configs
- Testing configurations
- CI/CD pipeline configs
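As a concrete sketch of the engine detection above (the marker paths are the ones listed; the check order and version parsing are assumptions):
```bash
#!/usr/bin/env bash
# Probe for well-known engine marker files from the project root.
if [ -f "ProjectSettings/ProjectVersion.txt" ]; then
  # Unity records the editor version as "m_EditorVersion: <version>".
  echo "Unity $(grep '^m_EditorVersion:' ProjectSettings/ProjectVersion.txt | awk '{print $2}')"
elif ls ./*.uproject >/dev/null 2>&1; then
  echo "Unreal (engine settings in Config/DefaultEngine.ini)"
elif [ -f "project.godot" ]; then
  echo "Godot (export settings in export_presets.cfg)"
else
  echo "Custom or unknown engine - inspect build scripts and configs manually"
fi
```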
### 3. Identify Existing Code Patterns
Search through existing codebase for patterns:
**Naming Conventions:**
- Script/class naming patterns
- Asset naming conventions
- Scene/level naming patterns
- Test file naming patterns
**Code Organization:**
- How components/scripts are structured
- Where utilities and helpers are placed
- How systems are organized
- Folder hierarchy patterns
**Engine-Specific Patterns:**
- Unity: MonoBehaviour patterns, ScriptableObject usage, serialization rules
- Unreal: Actor/Component patterns, Blueprint integration, UE macros
- Godot: Node patterns, signal usage, autoload patterns
### 4. Extract Critical Implementation Rules
Look for rules that AI agents might miss:
**Engine-Specific Rules:**
- Unity: Assembly definitions, Unity lifecycle methods, coroutine patterns
- Unreal: UPROPERTY/UFUNCTION usage, garbage collection rules, tick patterns
- Godot: `_ready` vs `_enter_tree`, node ownership, scene instancing
**Performance Rules:**
- Frame budget constraints
- Memory allocation patterns
- Hot path optimization requirements
- Object pooling patterns
**Platform-Specific Rules:**
- Target platform constraints
- Input handling conventions
- Platform-specific code patterns
- Build configuration rules
**Testing Rules:**
- Test structure requirements
- Mock usage conventions
- Integration vs unit test boundaries
- Play mode vs edit mode testing
### 5. Initialize Project Context Document
Based on discovery, create or update the context document:
#### A. Fresh Document Setup (if no existing context)
Copy template from `{installed_path}/project-context-template.md` to `{output_folder}/project-context.md`
Initialize frontmatter with:
```yaml
---
project_name: '{{project_name}}'
user_name: '{{user_name}}'
date: '{{date}}'
sections_completed: ['technology_stack']
existing_patterns_found: '{{number_of_patterns_discovered}}'
---
```
#### B. Existing Document Update
Load existing context and prepare for updates
Set frontmatter `sections_completed` to track what will be updated
### 6. Present Discovery Summary
Report findings to user:
"Welcome {{user_name}}! I've analyzed your game project for {{project_name}} to discover the context that AI agents need.
**Game Engine & Stack Discovered:**
{{engine_and_version}}
{{list_of_technologies_with_versions}}
**Existing Patterns Found:**
- {{number_of_patterns}} implementation patterns
- {{number_of_conventions}} coding conventions
- {{number_of_rules}} critical rules
**Key Areas for Context Rules:**
- {{area_1}} (e.g., Engine lifecycle and patterns)
- {{area_2}} (e.g., Performance and optimization)
- {{area_3}} (e.g., Platform-specific requirements)
{if_existing_context}
**Existing Context:** Found {{sections}} sections already defined. We can update or add to these.
{/if_existing_context}
Ready to create/update your project context. This will help AI agents implement game code consistently with your project's standards.
[C] Continue to context generation"
## SUCCESS METRICS:
- Existing project context properly detected and handled
- Game engine and technology stack accurately identified with versions
- Critical implementation patterns discovered
- Project context document properly initialized
- Discovery findings clearly presented to user
- User ready to proceed with context generation
## FAILURE MODES:
- Not checking for existing project context before creating new one
- Missing critical engine versions or configurations
- Overlooking important coding patterns or conventions
- Not initializing frontmatter properly
- Not presenting clear discovery summary to user
## NEXT STEP:
After user selects [C] to continue, load `./step-02-generate.md` to collaboratively generate the specific project context rules.
Remember: Do NOT proceed to step-02 until user explicitly selects [C] from the menu and discovery is confirmed!

View File

@@ -0,0 +1,373 @@
# Step 2: Context Rules Generation
## MANDATORY EXECUTION RULES (READ FIRST):
- NEVER generate content without user input
- ALWAYS treat this as collaborative discovery between technical peers
- YOU ARE A FACILITATOR, not a content generator
- FOCUS on unobvious rules that AI agents need to be reminded of
- KEEP CONTENT LEAN - optimize for LLM context efficiency
- ABSOLUTELY NO TIME ESTIMATES
## EXECUTION PROTOCOLS:
- Show your analysis before taking any action
- Focus on specific, actionable rules rather than general advice
- Present A/P/C menu after each major rule category
- ONLY save when user chooses C (Continue)
- Update frontmatter with completed sections
- FORBIDDEN to load next step until all sections are complete
## COLLABORATION MENUS (A/P/C):
This step will generate content and present choices for each rule category:
- **A (Advanced Elicitation)**: Use discovery protocols to explore nuanced implementation rules
- **P (Party Mode)**: Bring multiple perspectives to identify critical edge cases
- **C (Continue)**: Save the current rules and proceed to next category
## PROTOCOL INTEGRATION:
- When 'A' selected: Execute {project-root}/\_bmad/core/tasks/advanced-elicitation.xml
- When 'P' selected: Execute {project-root}/\_bmad/core/workflows/party-mode
- PROTOCOLS always return to this step's A/P/C menu after A or P completes
- User accepts/rejects protocol changes before proceeding
## CONTEXT BOUNDARIES:
- Discovery results from step-1 are available
- Game engine and existing patterns are identified
- Focus on rules that prevent implementation mistakes
- Prioritize unobvious details that AI agents might miss
## YOUR TASK:
Collaboratively generate specific, critical rules that AI agents must follow when implementing game code in this project.
## CONTEXT GENERATION SEQUENCE:
### 1. Technology Stack & Versions
Document the exact technology stack from discovery:
**Core Technologies:**
Based on user skill level, present findings:
**Expert Mode:**
"Technology stack from your architecture and project files:
{{exact_technologies_with_versions}}
Any critical version constraints I should document for agents?"
**Intermediate Mode:**
"I found your technology stack:
**Game Engine:**
{{engine_with_version}}
**Key Dependencies:**
{{important_dependencies_with_versions}}
Are there any version constraints or compatibility notes agents should know about?"
**Beginner Mode:**
"Here are the technologies you're using:
**Game Engine:**
{{friendly_description_of_engine}}
**Important Notes:**
{{key_things_agents_need_to_know_about_versions}}
Should I document any special version rules or compatibility requirements?"
### 2. Engine-Specific Rules
Focus on unobvious engine patterns agents might miss:
**Unity Rules (if applicable):**
"Based on your Unity project, I notice some specific patterns:
**Lifecycle Rules:**
{{unity_lifecycle_patterns}}
**Serialization Rules:**
{{serialization_requirements}}
**Assembly Definitions:**
{{assembly_definition_rules}}
**Coroutine/Async Patterns:**
{{async_patterns}}
Are these patterns correct? Any other Unity-specific rules agents should follow?"
**Unreal Rules (if applicable):**
"Based on your Unreal project, I notice some specific patterns:
**UPROPERTY/UFUNCTION Rules:**
{{macro_usage_patterns}}
**Blueprint Integration:**
{{blueprint_rules}}
**Garbage Collection:**
{{gc_patterns}}
**Tick Patterns:**
{{tick_optimization_rules}}
Are these patterns correct? Any other Unreal-specific rules agents should follow?"
**Godot Rules (if applicable):**
"Based on your Godot project, I notice some specific patterns:
**Node Lifecycle:**
{{node_lifecycle_patterns}}
**Signal Usage:**
{{signal_conventions}}
**Scene Instancing:**
{{scene_patterns}}
**Autoload Patterns:**
{{autoload_rules}}
Are these patterns correct? Any other Godot-specific rules agents should follow?"
### 3. Performance Rules
Document performance-critical patterns:
**Frame Budget Rules:**
"Your game has these performance requirements:
**Target Frame Rate:**
{{target_fps}}
**Frame Budget:**
{{milliseconds_per_frame}}
**Critical Systems:**
{{systems_that_must_meet_budget}}
**Hot Path Rules:**
{{hot_path_patterns}}
Any other performance rules agents must follow?"
**Memory Management:**
"Memory patterns for your project:
**Allocation Rules:**
{{allocation_patterns}}
**Pooling Requirements:**
{{object_pooling_rules}}
**Asset Loading:**
{{asset_loading_patterns}}
Are there memory constraints agents should know about?"
### 4. Code Organization Rules
Document project structure and organization:
**Folder Structure:**
"Your project organization:
**Script Organization:**
{{script_folder_structure}}
**Asset Organization:**
{{asset_folder_patterns}}
**Scene/Level Organization:**
{{scene_organization}}
Any organization rules agents must follow?"
**Naming Conventions:**
"Your naming patterns:
**Script/Class Names:**
{{class_naming_patterns}}
**Asset Names:**
{{asset_naming_patterns}}
**Variable/Method Names:**
{{variable_naming_patterns}}
Any other naming rules?"
### 5. Testing Rules
Focus on testing patterns that ensure consistency:
**Test Structure Rules:**
"Your testing setup shows these patterns:
**Test Organization:**
{{test_file_organization}}
**Test Categories:**
{{unit_vs_integration_boundaries}}
**Mocking Patterns:**
{{mock_usage_conventions}}
**Play Mode Testing:**
{{play_mode_test_patterns}}
Are there testing rules agents should always follow?"
### 6. Platform & Build Rules
Document platform-specific requirements:
**Target Platforms:**
"Your platform configuration:
**Primary Platform:**
{{primary_platform}}
**Platform-Specific Code:**
{{platform_conditional_patterns}}
**Build Configurations:**
{{build_config_rules}}
**Input Handling:**
{{input_abstraction_patterns}}
Any platform rules agents must know?"
### 7. Critical Don't-Miss Rules
Identify rules that prevent common mistakes:
**Anti-Patterns to Avoid:**
"Based on your codebase, here are critical things agents must NOT do:
{{critical_anti_patterns_with_examples}}
**Edge Cases:**
{{specific_edge_cases_agents_should_handle}}
**Common Gotchas:**
{{engine_specific_gotchas}}
**Performance Traps:**
{{performance_patterns_to_avoid}}
Are there other 'gotchas' agents should know about?"
### 8. Generate Context Content
For each category, prepare lean content for the project context file:
#### Content Structure:
```markdown
## Technology Stack & Versions
{{concise_technology_list_with_exact_versions}}
## Critical Implementation Rules
### Engine-Specific Rules
{{bullet_points_of_engine_rules}}
### Performance Rules
{{bullet_points_of_performance_requirements}}
### Code Organization Rules
{{bullet_points_of_organization_patterns}}
### Testing Rules
{{bullet_points_of_testing_requirements}}
### Platform & Build Rules
{{bullet_points_of_platform_requirements}}
### Critical Don't-Miss Rules
{{bullet_points_of_anti_patterns_and_gotchas}}
```
### 9. Present Content and Menu
After each category, show the generated rules and present choices:
"I've drafted the {{category_name}} rules for your project context.
**Here's what I'll add:**
[Show the complete markdown content for this category]
**What would you like to do?**
[A] Advanced Elicitation - Explore nuanced rules for this category
[P] Party Mode - Review from different implementation perspectives
[C] Continue - Save these rules and move to next category"
### 10. Handle Menu Selection
#### If 'A' (Advanced Elicitation):
- Execute advanced-elicitation.xml with current category rules
- Process enhanced rules that come back
- Ask user: "Accept these enhanced rules for {{category}}? (y/n)"
- If yes: Update content, then return to A/P/C menu
- If no: Keep original content, then return to A/P/C menu
#### If 'P' (Party Mode):
- Execute party-mode workflow with category rules context
- Process collaborative insights on implementation patterns
- Ask user: "Accept these changes to {{category}} rules? (y/n)"
- If yes: Update content, then return to A/P/C menu
- If no: Keep original content, then return to A/P/C menu
#### If 'C' (Continue):
- Save the current category content to project context file
- Update frontmatter: `sections_completed: [...]`
- Proceed to next category or step-03 if complete
## APPEND TO PROJECT CONTEXT:
When user selects 'C' for a category, append the content directly to `{output_folder}/project-context.md` using the structure from step 8.
## SUCCESS METRICS:
- All critical technology versions accurately documented
- Engine-specific rules cover unobvious patterns
- Performance rules capture project-specific requirements
- Code organization rules maintain project standards
- Testing rules ensure consistent test quality
- Platform rules prevent cross-platform issues
- Content is lean and optimized for LLM context
- A/P/C menu presented and handled correctly for each category
## FAILURE MODES:
- Including obvious rules that agents already know
- Making content too verbose for LLM context efficiency
- Missing critical anti-patterns or edge cases
- Not getting user validation for each rule category
- Not documenting exact versions and configurations
- Not presenting A/P/C menu after content generation
## NEXT STEP:
After completing all rule categories and user selects 'C' for the final category, load `./step-03-complete.md` to finalize the project context file.
Remember: Do NOT proceed to step-03 until all categories are complete and user explicitly selects 'C' for each!

View File

@@ -0,0 +1,279 @@
# Step 3: Context Completion & Finalization
## MANDATORY EXECUTION RULES (READ FIRST):
- NEVER generate content without user input
- ALWAYS treat this as collaborative completion between technical peers
- YOU ARE A FACILITATOR, not a content generator
- FOCUS on finalizing a lean, LLM-optimized project context
- ENSURE all critical rules are captured and actionable
- ABSOLUTELY NO TIME ESTIMATES
## EXECUTION PROTOCOLS:
- Show your analysis before taking any action
- Review and optimize content for LLM context efficiency
- Update frontmatter with completion status
- NO MORE STEPS - this is the final step
## CONTEXT BOUNDARIES:
- All rule categories from step-2 are complete
- Technology stack and versions are documented
- Focus on final review, optimization, and completion
- Ensure the context file is ready for AI agent consumption
## YOUR TASK:
Complete the project context file, optimize it for LLM efficiency, and provide guidance for usage and maintenance.
## COMPLETION SEQUENCE:
### 1. Review Complete Context File
Read the entire project context file and analyze:
**Content Analysis:**
- Total length and readability for LLMs
- Clarity and specificity of rules
- Coverage of all critical areas
- Actionability of each rule
**Structure Analysis:**
- Logical organization of sections
- Consistency of formatting
- Absence of redundant or obvious information
- Optimization for quick scanning
### 2. Optimize for LLM Context
Ensure the file is lean and efficient:
**Content Optimization:**
- Remove any redundant rules or obvious information
- Combine related rules into concise bullet points
- Use specific, actionable language
- Ensure each rule provides unique value
**Formatting Optimization:**
- Use consistent markdown formatting
- Implement clear section hierarchy
- Ensure scannability with strategic use of bolding
- Maintain readability while maximizing information density
### 3. Final Content Structure
Ensure the final structure follows this optimized format:
```markdown
# Project Context for AI Agents
_This file contains critical rules and patterns that AI agents must follow when implementing game code in this project. Focus on unobvious details that agents might otherwise miss._
---
## Technology Stack & Versions
{{concise_technology_list}}
## Critical Implementation Rules
### Engine-Specific Rules
{{engine_rules}}
### Performance Rules
{{performance_requirements}}
### Code Organization Rules
{{organization_patterns}}
### Testing Rules
{{testing_requirements}}
### Platform & Build Rules
{{platform_requirements}}
### Critical Don't-Miss Rules
{{anti_patterns_and_gotchas}}
---
## Usage Guidelines
**For AI Agents:**
- Read this file before implementing any game code
- Follow ALL rules exactly as documented
- When in doubt, prefer the more restrictive option
- Update this file if new patterns emerge
**For Humans:**
- Keep this file lean and focused on agent needs
- Update when technology stack changes
- Review quarterly for outdated rules
- Remove rules that become obvious over time
Last Updated: {{date}}
```
### 4. Present Completion Summary
Based on user skill level, present the completion:
**Expert Mode:**
"Project context complete. Optimized for LLM consumption with {{rule_count}} critical rules across {{section_count}} sections.
File saved to: `{output_folder}/project-context.md`
Ready for AI agent integration."
**Intermediate Mode:**
"Your project context is complete and optimized for AI agents!
**What we created:**
- {{rule_count}} critical implementation rules
- Technology stack with exact versions
- Engine-specific patterns and conventions
- Performance and optimization guidelines
- Testing and platform requirements
**Key benefits:**
- AI agents will implement consistently with your standards
- Reduced context switching and implementation errors
- Clear guidance for unobvious project requirements
**Next steps:**
- AI agents should read this file before implementing
- Update as your project evolves
- Review periodically for optimization"
**Beginner Mode:**
"Excellent! Your project context guide is ready!
**What this does:**
Think of this as a 'rules of the road' guide for AI agents working on your game. It ensures they all follow the same patterns and avoid common mistakes.
**What's included:**
- Exact engine and technology versions to use
- Critical coding rules they might miss
- Performance and optimization standards
- Testing and platform requirements
**How AI agents use it:**
They read this file before writing any code, ensuring everything they create follows your project's standards perfectly.
Your project context is saved and ready to help agents implement consistently!"
### 5. Final File Updates
Update the project context file with completion information:
**Frontmatter Update:**
```yaml
---
project_name: '{{project_name}}'
user_name: '{{user_name}}'
date: '{{date}}'
sections_completed:
['technology_stack', 'engine_rules', 'performance_rules', 'organization_rules', 'testing_rules', 'platform_rules', 'anti_patterns']
status: 'complete'
rule_count: '{{total_rules}}'
optimized_for_llm: true
---
```
**Add Usage Section:**
Append the usage guidelines from step 3 to complete the document.
### 6. Completion Validation
Final checks before completion:
**Content Validation:**
- All critical technology versions documented
- Engine-specific rules are specific and actionable
- Performance rules capture project requirements
- Code organization rules maintain standards
- Testing rules ensure consistency
- Platform rules prevent cross-platform issues
- Anti-pattern rules prevent common mistakes
**Format Validation:**
- Content is lean and optimized for LLMs
- Structure is logical and scannable
- No redundant or obvious information
- Consistent formatting throughout
### 7. Completion Message
Present final completion to user:
"**Project Context Generation Complete!**
Your optimized project context file is ready at:
`{output_folder}/project-context.md`
**Context Summary:**
- {{rule_count}} critical rules for AI agents
- {{section_count}} comprehensive sections
- Optimized for LLM context efficiency
- Ready for immediate agent integration
**Key Benefits:**
- Consistent implementation across all AI agents
- Reduced common mistakes and edge cases
- Clear guidance for project-specific patterns
- Minimal LLM context usage
**Next Steps:**
1. AI agents will automatically read this file when implementing
2. Update this file when your technology stack or patterns evolve
3. Review quarterly to optimize and remove outdated rules
Your project context will help ensure high-quality, consistent game implementation across all development work. Great work capturing your project's critical implementation requirements!"
## SUCCESS METRICS:
- Complete project context file with all critical rules
- Content optimized for LLM context efficiency
- All technology versions and patterns documented
- File structure is logical and scannable
- Usage guidelines included for agents and humans
- Frontmatter properly updated with completion status
- User provided with clear next steps and benefits
## FAILURE MODES:
- Final content is too verbose for LLM consumption
- Missing critical implementation rules or patterns
- Not optimizing content for agent readability
- Not providing clear usage guidelines
- Frontmatter not properly updated
- Not validating file completion before ending
## WORKFLOW COMPLETE:
This is the final step of the Generate Project Context workflow. The user now has a comprehensive, optimized project context file that will ensure consistent, high-quality game implementation across all AI agents working on the project.
The project context file serves as the critical "rules of the road" that agents need to implement game code consistently with the project's standards and patterns.

View File

@@ -0,0 +1,48 @@
---
name: generate-project-context
description: Creates a concise project-context.md file with critical rules and patterns that AI agents must follow when implementing game code. Optimized for LLM context efficiency.
---
# Generate Project Context Workflow
**Goal:** Create a concise, optimized `project-context.md` file containing critical rules, patterns, and guidelines that AI agents must follow when implementing game code. This file focuses on unobvious details that LLMs need to be reminded of.
**Your Role:** You are a technical facilitator working with a peer to capture the essential implementation rules that will ensure consistent, high-quality game code generation across all AI agents working on the project.
---
## WORKFLOW ARCHITECTURE
This uses **micro-file architecture** for disciplined execution:
- Each step is a self-contained file with embedded rules
- Sequential progression with user control at each step
- Document state tracked in frontmatter
- Focus on lean, LLM-optimized content generation
- You NEVER proceed to the next step file while the current step file indicates that the user must approve continuation first.
---
## INITIALIZATION
### Configuration Loading
Load config from `{project-root}/_bmad/bmgd/config.yaml` and resolve:
- `project_name`, `output_folder`, `user_name`
- `communication_language`, `document_output_language`, `game_dev_experience`
- `date` as system-generated current datetime
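For orientation, a sketch of the config keys named above (values are illustrative, not shipped defaults):
```yaml
# _bmad/bmgd/config.yaml - illustrative values only
project_name: my-game
output_folder: docs
user_name: Sam
communication_language: English
document_output_language: English
game_dev_experience: intermediate # beginner | intermediate | expert (assumed range)
```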
### Paths
- `installed_path` = `{project-root}/_bmad/bmgd/workflows/3-technical/generate-project-context`
- `template_path` = `{installed_path}/project-context-template.md`
- `output_file` = `{output_folder}/project-context.md`
---
## EXECUTION
Load and execute `steps/step-01-discover.md` to begin the workflow.
**Note:** Input document discovery and initialization protocols are handled in step-01-discover.md.

View File

@@ -1,33 +0,0 @@
# Quick-Dev Checklist
## Before Implementation
- [ ] Context loaded (tech-spec or user guidance)
- [ ] Files to modify identified
- [ ] Patterns understood
## Implementation
- [ ] All tasks completed
- [ ] Code follows existing patterns
- [ ] Error handling appropriate
## Testing
- [ ] Tests written (where appropriate)
- [ ] All tests passing
- [ ] No regressions
## Completion
- [ ] Acceptance criteria satisfied
- [ ] Tech-spec updated (if applicable)
- [ ] Summary provided to user
## Adversarial Review
- [ ] Diff constructed (tracked changes from {baseline_commit} + new untracked files)
- [ ] Adversarial review executed (subagent preferred)
- [ ] Findings presented with severity and classification
- [ ] User chose handling approach (walk through / auto-fix / skip)
- [ ] Findings resolved or acknowledged

View File

@@ -1,276 +0,0 @@
# Quick-Dev - Flexible Development Workflow
<workflow>
<critical>Communicate in {communication_language}, tailored to {user_skill_level}</critical>
<critical>Execute continuously until COMPLETE - do not stop for milestones</critical>
<critical>Flexible - handles tech-specs OR direct instructions</critical>
<critical>ALWAYS respect {project_context} if it exists - it defines project standards</critical>
<checkpoint-handlers>
<on-select key="a">Load and execute {advanced_elicitation}, then return</on-select>
<on-select key="p">Load and execute {party_mode_workflow}, then return</on-select>
<on-select key="t">Load and execute {create_tech_spec_workflow}</on-select>
</checkpoint-handlers>
<step n="1" goal="Load project context and determine execution mode">
<action>Record current HEAD as baseline for later review. Run `git rev-parse HEAD` and store the result as {baseline_commit}.</action>
<action>Check if {project_context} exists. If yes, load it - this is your foundational reference for ALL implementation decisions (patterns, conventions, architecture).</action>
<action>Parse user input:
**Mode A: Tech-Spec** - e.g., `quick-dev tech-spec-auth.md`
→ Load spec, extract tasks/context/AC, goto step 3
**Mode B: Direct Instructions** - e.g., `refactor src/foo.ts...`
→ Offer planning choice
</action>
<check if="Mode A">
<action>Load tech-spec, extract tasks/context/AC</action>
<goto>step_3</goto>
</check>
<check if="Mode B">
<!-- Escalation Threshold: Lightweight check - should we invoke scale-adaptive? -->
<action>Evaluate escalation threshold against user input (minimal tokens, no file loading):
**Triggers escalation** (if 2+ signals present):
- Multiple components mentioned (e.g., dashboard + api + database)
- System-level language (e.g., platform, integration, architecture)
- Uncertainty about approach (e.g., "how should I", "best way to")
- Multi-layer scope (e.g., UI + backend + data together)
- Extended timeframe (e.g., "this week", "over the next few days")
**Reduces signal:**
- Simplicity markers (e.g., "just", "quickly", "fix", "bug", "typo", "simple", "basic", "minor")
- Single file/component focus
- Confident, specific request
Use holistic judgment, not mechanical keyword matching.</action>
<!-- No Escalation: Simple request, offer existing choice -->
<check if="escalation threshold NOT triggered">
<ask>**[t] Plan first** - Create tech-spec then implement
**[e] Execute directly** - Start now</ask>
<check if="t">
<action>Load and execute {create_tech_spec_workflow}</action>
<action>Continue to implementation after spec complete</action>
</check>
<check if="e">
<ask>Any additional guidance before I begin? (patterns, files, constraints) Or "go" to start.</ask>
<goto>step_2</goto>
</check>
</check>
<!-- Escalation Triggered: Load scale-adaptive and evaluate level -->
<check if="escalation threshold triggered">
<action>Load {project_levels} and evaluate user input against detection_hints.keywords</action>
<action>Determine level (0-4) using scale-adaptive definitions</action>
<!-- Level 0: Scale-adaptive confirms simple, fall back to standard choice -->
<check if="level 0">
<ask>**[t] Plan first** - Create tech-spec then implement
**[e] Execute directly** - Start now</ask>
<check if="t">
<action>Load and execute {create_tech_spec_workflow}</action>
<action>Continue to implementation after spec complete</action>
</check>
<check if="e">
<ask>Any additional guidance before I begin? (patterns, files, constraints) Or "go" to start.</ask>
<goto>step_2</goto>
</check>
</check>
<check if="level 1 or 2 or couldn't determine level">
<ask>This looks like a focused feature with multiple components.
**[t] Create tech-spec first** (recommended)
**[w] Seems bigger than quick-dev** — see what BMad Method recommends (workflow-init)
**[e] Execute directly**</ask>
<check if="t">
<action>Load and execute {create_tech_spec_workflow}</action>
<action>Continue to implementation after spec complete</action>
</check>
<check if="w">
<action>Load and execute {workflow_init}</action>
<action>EXIT quick-dev - user has been routed to BMad Method</action>
</check>
<check if="e">
<ask>Any additional guidance before I begin? (patterns, files, constraints) Or "go" to start.</ask>
<goto>step_2</goto>
</check>
</check>
<!-- Level 3+: BMad Method territory, recommend workflow-init -->
<check if="level 3 or higher">
<ask>This sounds like platform/system work.
**[w] Start BMad Method** (recommended) (workflow-init)
**[t] Create tech-spec** (lighter planning)
**[e] Execute directly** - feeling lucky</ask>
<check if="w">
<action>Load and execute {workflow_init}</action>
<action>EXIT quick-dev - user has been routed to BMad Method</action>
</check>
<check if="t">
<action>Load and execute {create_tech_spec_workflow}</action>
<action>Continue to implementation after spec complete</action>
</check>
<check if="e">
<ask>Any additional guidance before I begin? (patterns, files, constraints) Or "go" to start.</ask>
<goto>step_2</goto>
</check>
</check>
</check>
</check>
</step>
<step n="2" goal="Quick context gathering (direct mode)">
<action>Identify files to modify, find relevant patterns, note dependencies</action>
<action>Create mental plan: tasks, acceptance criteria, files to touch</action>
</step>
<step n="3" goal="Execute implementation" id="step_3">
<action>For each task:
1. **Load Context** - read files from spec or relevant to change
2. **Implement** - follow patterns, handle errors, follow conventions
3. **Test** - write tests, run existing tests, verify AC
4. **Mark Complete** - check off task [x], continue
</action>
<action if="3 failures">HALT and request guidance</action>
<action if="tests fail">Fix before continuing</action>
<critical>Continue through ALL tasks without stopping</critical>
</step>
<step n="4" goal="Verify and transition to review">
<action>Verify: all tasks [x], tests passing, AC satisfied, patterns followed</action>
<check if="using tech-spec">
<action>Update tech-spec status to "Completed", mark all tasks [x]</action>
</check>
<output>**Implementation Complete!**
**Summary:** {{implementation_summary}}
**Files Modified:** {{files_list}}
**Tests:** {{test_summary}}
**AC Status:** {{ac_status}}
Running adversarial code review...
</output>
<action>Proceed immediately to step 5</action>
</step>
<step n="5" goal="Adversarial code review (automatic)">
<action>Construct diff of all changes since workflow started and capture as {diff_output}:
**Tracked file changes:**
```bash
git diff {baseline_commit}
```
**New files created by this workflow:**
Only include untracked files that YOU actually created during steps 2-4. Do not include pre-existing untracked files. For each new file you created, include its full content as a "new file" addition.
Combine both into {diff_output} for review. Do NOT `git add` anything - this is read-only inspection.</action>
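A hedged sketch of that construction (the `created_files` list is illustrative - it stands for the files this workflow created; `git diff --no-index` exits non-zero when files differ, hence `|| true`):
```bash
# Tracked changes since the workflow baseline (read-only; nothing is staged).
diff_output=$(git diff "$baseline_commit")
# Render each workflow-created file as a "new file" diff and append it.
for f in "${created_files[@]}"; do
  diff_output+=$'\n'"$(git diff --no-index -- /dev/null "$f" || true)"
done
```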
<action>Execute adversarial review using this hierarchy (try in order until one succeeds):
1. **Spawn subagent** (preferred) - pass the diff output along with this prompt:
```
You are a cynical, jaded code reviewer with zero patience for sloppy work. This diff was submitted by a clueless weasel and you expect to find problems. Find at least five issues to fix or improve. Number them. Be skeptical of everything.
<diff>
{diff_output}
</diff>
```
2. **CLI fallback** - pipe diff to `claude --print` with same prompt
3. **Inline self-review** - Review the diff output yourself using the cynical reviewer persona above
</action>
<check if="zero findings returned">
<action>HALT - Zero findings is suspicious. Adversarial review should always find something. Request user guidance.</action>
</check>
<action>Process findings:
- Assign IDs: F1, F2, F3...
- Assign severity: 🔴 Critical | 🟠 High | 🟡 Medium | 🟢 Low
- Classify each: **real** (confirmed issue) | **noise** (false positive) | **uncertain** (needs discussion)
</action>
<output>**Adversarial Review Findings**
| ID | Severity | Classification | Finding |
| --- | -------- | -------------- | ------- |
| F1 | 🟠 | real | ... |
| F2 | 🟡 | noise | ... |
| ... |
</output>
<ask>How would you like to handle these findings?
**[1] Walk through** - Discuss each finding individually
**[2] Auto-fix** - Automatically fix issues classified as "real"
**[3] Skip** - Acknowledge and proceed to commit</ask>
<check if="1">
<action>Present each finding one by one. For each, ask: fix now / skip / discuss</action>
<action>Apply fixes as approved</action>
</check>
<check if="2">
<action>Automatically fix all findings classified as "real"</action>
<action>Report what was fixed</action>
</check>
<check if="3">
<action>Acknowledge findings were reviewed and user chose to skip</action>
</check>
<output>**Review complete. Ready to commit.**</output>
<action>Explain what was implemented based on {user_skill_level}</action>
</step>
</workflow>

View File

@@ -0,0 +1,148 @@
---
name: 'step-01-mode-detection'
description: 'Determine execution mode (tech-spec vs direct), handle escalation, set state variables'
workflow_path: '{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-dev'
thisStepFile: '{workflow_path}/steps/step-01-mode-detection.md'
nextStepFile_modeA: '{workflow_path}/steps/step-03-execute.md'
nextStepFile_modeB: '{workflow_path}/steps/step-02-context-gathering.md'
---
# Step 1: Mode Detection
**Goal:** Determine execution mode, capture baseline, handle escalation if needed.
---
## STATE VARIABLES (capture now, persist throughout)
These variables MUST be set in this step and available to all subsequent steps:
- `{baseline_commit}` - Git HEAD at workflow start
- `{execution_mode}` - "tech-spec" or "direct"
- `{tech_spec_path}` - Path to tech-spec file (if Mode A)
---
## EXECUTION SEQUENCE
### 1. Capture Baseline
Run `git rev-parse HEAD` and store result as `{baseline_commit}`.
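A one-line sketch, assuming the workflow runs from the root of a git repository:
```bash
# Capture the current HEAD so step 5 can diff against it later.
baseline_commit=$(git rev-parse HEAD)
```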
### 2. Load Project Context
Check if `{project_context}` exists (`**/project-context.md`). If found, load it - this is foundational reference for ALL implementation decisions.
### 3. Parse User Input
Analyze the user's input to determine mode:
**Mode A: Tech-Spec**
- User provided a path to a tech-spec file (e.g., `quick-dev tech-spec-auth.md`)
- Load the spec, extract tasks/context/AC
- Set `{execution_mode}` = "tech-spec"
- Set `{tech_spec_path}` = provided path
- **NEXT:** Load `step-03-execute.md`
**Mode B: Direct Instructions**
- User provided task description directly (e.g., `refactor src/foo.ts...`)
- Set `{execution_mode}` = "direct"
- **NEXT:** Evaluate escalation threshold, then proceed
---
## ESCALATION THRESHOLD (Mode B only)
Evaluate user input with minimal token usage (no file loading):
**Triggers escalation (if 2+ signals present):**
- Multiple components mentioned (dashboard + api + database)
- System-level language (platform, integration, architecture)
- Uncertainty about approach ("how should I", "best way to")
- Multi-layer scope (UI + backend + data together)
- Extended timeframe ("this week", "over the next few days")
**Reduces signal:**
- Simplicity markers ("just", "quickly", "fix", "bug", "typo", "simple")
- Single file/component focus
- Confident, specific request
Use holistic judgment, not mechanical keyword matching.
---
## ESCALATION HANDLING
### No Escalation (simple request)
Present choice:
```
**[t] Plan first** - Create tech-spec then implement
**[e] Execute directly** - Start now
```
- **[t]:** Direct user to `{create_tech_spec_workflow}`. **EXIT Quick Dev.**
- **[e]:** Ask for any additional guidance, then **NEXT:** Load `step-02-context-gathering.md`
### Escalation Triggered - Level 0-2
```
This looks like a focused feature with multiple components.
**[t] Create tech-spec first** (recommended)
**[w] Seems bigger than quick-dev** - see what BMad Method recommends
**[e] Execute directly**
```
- **[t]:** Direct to `{create_tech_spec_workflow}`. **EXIT Quick Dev.**
- **[w]:** Direct to `{workflow_init}`. **EXIT Quick Dev.**
- **[e]:** Ask for guidance, then **NEXT:** Load `step-02-context-gathering.md`
### Escalation Triggered - Level 3+
```
This sounds like platform/system work.
**[w] Start BMad Method** (recommended)
**[t] Create tech-spec** (lighter planning)
**[e] Execute directly** - feeling lucky
```
- **[w]:** Direct to `{workflow_init}`. **EXIT Quick Dev.**
- **[t]:** Direct to `{create_tech_spec_workflow}`. **EXIT Quick Dev.**
- **[e]:** Ask for guidance, then **NEXT:** Load `step-02-context-gathering.md`
---
## NEXT STEP DIRECTIVE
**CRITICAL:** When this step completes, explicitly state which step to load:
- Mode A (tech-spec): "**NEXT:** Loading `step-03-execute.md`"
- Mode B (direct, [e] selected): "**NEXT:** Loading `step-02-context-gathering.md`"
- Escalation ([t] or [w]): "**EXITING Quick Dev.** Follow the directed workflow."
---
## SUCCESS METRICS
- `{baseline_commit}` captured and stored
- `{execution_mode}` determined ("tech-spec" or "direct")
- `{tech_spec_path}` set if Mode A
- Project context loaded if exists
- Escalation evaluated appropriately (Mode B)
- Explicit NEXT directive provided
## FAILURE MODES
- Proceeding without capturing baseline commit
- Not setting execution_mode variable
- Loading step-02 when Mode A (tech-spec provided)
- Attempting to "return" after escalation instead of EXIT
- No explicit NEXT directive at step completion

View File

@@ -0,0 +1,117 @@
---
name: 'step-02-context-gathering'
description: 'Quick context gathering for direct mode - identify files, patterns, dependencies'
workflow_path: '{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-dev'
thisStepFile: '{workflow_path}/steps/step-02-context-gathering.md'
nextStepFile: '{workflow_path}/steps/step-03-execute.md'
---
# Step 2: Context Gathering (Direct Mode)
**Goal:** Quickly gather context for direct instructions - files, patterns, dependencies.
**Note:** This step only runs for Mode B (direct instructions). If `{execution_mode}` is "tech-spec", this step was skipped.
---
## AVAILABLE STATE
From step-01:
- `{baseline_commit}` - Git HEAD at workflow start
- `{execution_mode}` - Should be "direct"
- `{project_context}` - Loaded if exists
---
## EXECUTION SEQUENCE
### 1. Identify Files to Modify
Based on user's direct instructions:
- Search for relevant files using glob/grep
- Identify the specific files that need changes
- Note file locations and purposes
### 2. Find Relevant Patterns
Examine the identified files and their surroundings:
- Code style and conventions used
- Existing patterns for similar functionality
- Import/export patterns
- Error handling approaches
- Test patterns (if tests exist nearby)
### 3. Note Dependencies
Identify:
- External libraries used
- Internal module dependencies
- Configuration files that may need updates
- Related files that might be affected
### 4. Create Mental Plan
Synthesize gathered context into:
- List of tasks to complete
- Acceptance criteria (inferred from user request)
- Order of operations
- Files to touch
---
## PRESENT PLAN
Display to user:
```
**Context Gathered:**
**Files to modify:**
- {list files}
**Patterns identified:**
- {key patterns}
**Plan:**
1. {task 1}
2. {task 2}
...
**Inferred AC:**
- {acceptance criteria}
Ready to execute? (y/n/adjust)
```
- **y:** Proceed to execution
- **n:** Gather more context or clarify
- **adjust:** Modify the plan based on feedback
---
## NEXT STEP
When user confirms ready, load `step-03-execute.md`.
---
## SUCCESS METRICS
- Files to modify identified
- Relevant patterns documented
- Dependencies noted
- Mental plan created with tasks and AC
- User confirmed readiness to proceed
## FAILURE MODES
- Executing this step when Mode A (tech-spec)
- Proceeding without identifying files to modify
- Not presenting plan for user confirmation
- Missing obvious patterns in existing code

View File

@@ -0,0 +1,113 @@
---
name: 'step-03-execute'
description: 'Execute implementation - iterate through tasks, write code, run tests'
workflow_path: '{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-dev'
thisStepFile: '{workflow_path}/steps/step-03-execute.md'
nextStepFile: '{workflow_path}/steps/step-04-self-check.md'
---
# Step 3: Execute Implementation
**Goal:** Implement all tasks, write tests, follow patterns, handle errors.
**Critical:** Continue through ALL tasks without stopping for milestones.
---
## AVAILABLE STATE
From previous steps:
- `{baseline_commit}` - Git HEAD at workflow start
- `{execution_mode}` - "tech-spec" or "direct"
- `{tech_spec_path}` - Tech-spec file (if Mode A)
- `{project_context}` - Project patterns (if exists)
From context:
- Mode A: Tasks and AC extracted from tech-spec
- Mode B: Tasks and AC from step-02 mental plan
---
## EXECUTION LOOP
For each task:
### 1. Load Context
- Read files relevant to this task
- Review patterns from project-context or observed code
- Understand dependencies
### 2. Implement
- Write code following existing patterns
- Handle errors appropriately
- Follow conventions observed in codebase
- Add appropriate comments where non-obvious
### 3. Test
- Write tests if appropriate for the change
- Run existing tests to catch regressions
- Verify the specific AC for this task
### 4. Mark Complete
- Check off task: `- [x] Task N`
- Continue to next task immediately
---
## HALT CONDITIONS
**HALT and request guidance if:**
- 3 consecutive failures on same task
- Tests fail and fix is not obvious
- Blocking dependency discovered
- Ambiguity that requires user decision
**Do NOT halt for:**
- Minor issues that can be noted and continued
- Warnings that don't block functionality
- Style preferences (follow existing patterns)
---
## CONTINUOUS EXECUTION
**Critical:** Do not stop between tasks for approval.
- Execute all tasks in sequence
- Only halt for blocking issues
- Tests failing = fix before continuing
- Track all completed work for self-check
---
## NEXT STEP
When ALL tasks are complete (or halted on blocker), load `step-04-self-check.md`.
---
## SUCCESS METRICS
- All tasks attempted
- Code follows existing patterns
- Error handling appropriate
- Tests written where appropriate
- Tests passing
- No unnecessary halts
## FAILURE MODES
- Stopping for approval between tasks
- Ignoring existing patterns
- Not running tests after changes
- Giving up after first failure
- Not following project-context rules (if exists)

View File

@@ -0,0 +1,113 @@
---
name: 'step-04-self-check'
description: 'Self-audit implementation against tasks, tests, AC, and patterns'
workflow_path: '{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-dev'
thisStepFile: '{workflow_path}/steps/step-04-self-check.md'
nextStepFile: '{workflow_path}/steps/step-05-adversarial-review.md'
---
# Step 4: Self-Check
**Goal:** Audit completed work against tasks, tests, AC, and patterns before external review.
---
## AVAILABLE STATE
From previous steps:
- `{baseline_commit}` - Git HEAD at workflow start
- `{execution_mode}` - "tech-spec" or "direct"
- `{tech_spec_path}` - Tech-spec file (if Mode A)
- `{project_context}` - Project patterns (if exists)
---
## SELF-CHECK AUDIT
### 1. Tasks Complete
Verify all tasks are marked complete:
- [ ] All tasks from tech-spec or mental plan marked `[x]`
- [ ] No tasks skipped without documented reason
- [ ] Any blocked tasks have clear explanation
### 2. Tests Passing
Verify test status:
- [ ] All existing tests still pass
- [ ] New tests written for new functionality
- [ ] No test warnings or skipped tests without reason
### 3. Acceptance Criteria Satisfied
For each AC:
- [ ] AC is demonstrably met
- [ ] Can explain how implementation satisfies AC
- [ ] Edge cases considered
### 4. Patterns Followed
Verify code quality:
- [ ] Follows existing code patterns in codebase
- [ ] Follows project-context rules (if exists)
- [ ] Error handling consistent with codebase
- [ ] No obvious code smells introduced
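The mechanical parts of this audit can be scripted. A minimal sketch, assuming `test` and `lint` scripts exist in package.json:
```bash
# Fail fast if either the test suite or the linter objects
npm test && npm run lint && echo "mechanical self-check passed"
```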
---
## UPDATE TECH-SPEC (Mode A only)
If `{execution_mode}` is "tech-spec":
1. Load `{tech_spec_path}`
2. Mark all tasks as `[x]` complete
3. Update status to "Implementation Complete"
4. Save changes
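If the tech-spec uses standard Markdown checkboxes and a `Status:` line, the update can be done mechanically. A sketch assuming GNU sed (on macOS, use `sed -i ''`):
```bash
# Mark every open task complete, then flip the status line
sed -i 's/^- \[ \]/- [x]/' "{tech_spec_path}"
sed -i 's/^Status:.*/Status: Implementation Complete/' "{tech_spec_path}"
```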
---
## IMPLEMENTATION SUMMARY
Present summary to transition to review:
```
**Implementation Complete!**
**Summary:** {what was implemented}
**Files Modified:** {list of files}
**Tests:** {test summary - passed/added/etc}
**AC Status:** {all satisfied / issues noted}
Proceeding to adversarial code review...
```
---
## NEXT STEP
Proceed immediately to `step-05-adversarial-review.md`.
---
## SUCCESS METRICS
- All tasks verified complete
- All tests passing
- All AC satisfied
- Patterns followed
- Tech-spec updated (if Mode A)
- Summary presented
## FAILURE MODES
- Claiming tasks complete when they're not
- Not running tests before proceeding
- Missing AC verification
- Ignoring pattern violations
- Not updating tech-spec status (Mode A)

View File

@@ -0,0 +1,96 @@
---
name: 'step-05-adversarial-review'
description: 'Construct diff and invoke adversarial review task'
workflow_path: '{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-dev'
thisStepFile: '{workflow_path}/steps/step-05-adversarial-review.md'
nextStepFile: '{workflow_path}/steps/step-06-resolve-findings.md'
---
# Step 5: Adversarial Code Review
**Goal:** Construct diff of all changes, invoke adversarial review task, present findings.
---
## AVAILABLE STATE
From previous steps:
- `{baseline_commit}` - Git HEAD at workflow start (CRITICAL for diff)
- `{execution_mode}` - "tech-spec" or "direct"
- `{tech_spec_path}` - Tech-spec file (if Mode A)
---
## STEP 1: CONSTRUCT DIFF
Build complete diff of all changes since workflow started.
### Tracked File Changes
```bash
git diff {baseline_commit}
```
### New Untracked Files
Only include untracked files that YOU created during this workflow (steps 2-4).
Do not include pre-existing untracked files.
For each new file created, include its full content as a "new file" addition.
### Capture as {diff_output}
Merge tracked changes and new files into `{diff_output}`.
**Note:** Do NOT `git add` anything - this is read-only inspection.
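Concretely, the merge can be done with two read-only git commands. A sketch, where `path/to/new-file.js` stands in for each file created during this workflow (the `--no-index` trick emits a new-file hunk without staging anything):
```bash
# Tracked changes since the workflow baseline
git diff {baseline_commit} > /tmp/diff_output.patch

# Append each file created during this workflow as a new-file hunk
# (git diff --no-index exits 1 when the files differ, hence the trailing || true)
git diff --no-index /dev/null path/to/new-file.js >> /tmp/diff_output.patch || true
```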
---
## STEP 2: INVOKE ADVERSARIAL REVIEW
With `{diff_output}` constructed, invoke the review task:
```xml
<invoke-task input="{diff_output}">{project-root}/_bmad/core/tasks/review-adversarial-general.xml</invoke-task>
```
**Platform fallback:** If task invocation is not available, load the task file and execute its instructions inline, passing `{diff_output}` as the content input.
The task will:
- Review with cynical skepticism
- Find at least 5 issues
- Assign IDs (F1, F2...), severity (critical/high/medium/low), classification (real/noise/uncertain)
- Return structured findings table
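For orientation, the returned table might take a shape like this (illustrative entries, not real output):
```
| ID | Severity | Classification | Finding |
|----|----------|----------------|---------|
| F1 | high     | real           | {e.g., user input reaches a query unescaped} |
| F2 | low      | noise          | {e.g., naming nit in a test helper} |
```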
---
## STEP 3: RECEIVE FINDINGS
Capture the findings from the task output.
**If zero findings:** HALT - this is suspicious. Re-analyze or request user guidance.
---
## NEXT STEP
With findings in hand, load `step-06-resolve-findings.md` for user to choose resolution approach.
---
## SUCCESS METRICS
- Diff constructed from baseline_commit
- New files included in diff
- Task invoked with diff as input
- Findings received with IDs, severity, classification
- Zero-findings case handled appropriately
## FAILURE MODES
- Missing baseline_commit (can't construct accurate diff)
- Not including new untracked files in diff
- Invoking task without providing diff input
- Accepting zero findings without questioning

View File

@@ -0,0 +1,140 @@
---
name: 'step-06-resolve-findings'
description: 'Handle review findings interactively, apply fixes, update tech-spec with final status'
workflow_path: '{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-dev'
thisStepFile: '{workflow_path}/steps/step-06-resolve-findings.md'
---
# Step 6: Resolve Findings
**Goal:** Handle adversarial review findings interactively, apply fixes, finalize tech-spec.
---
## AVAILABLE STATE
From previous steps:
- `{baseline_commit}` - Git HEAD at workflow start
- `{execution_mode}` - "tech-spec" or "direct"
- `{tech_spec_path}` - Tech-spec file (if Mode A)
- Findings table from step-05
---
## RESOLUTION OPTIONS
Present choice to user:
```
How would you like to handle these findings?
**[1] Walk through** - Discuss each finding individually
**[2] Auto-fix** - Automatically fix issues classified as "real"
**[3] Skip** - Acknowledge and proceed to commit
```
---
## OPTION 1: WALK THROUGH
For each finding in order:
1. Present the finding with context
2. Ask: **fix now / skip / discuss**
3. If fix: Apply the fix immediately
4. If skip: Note as acknowledged, continue
5. If discuss: Provide more context, re-ask
6. Move to next finding
After all findings are processed, summarize what was fixed and what was skipped.
---
## OPTION 2: AUTO-FIX
1. Filter findings to only those classified as "real"
2. Apply fixes for each real finding
3. Report what was fixed:
```
**Auto-fix Applied:**
- F1: {description of fix}
- F3: {description of fix}
...
Skipped (noise/uncertain): F2, F4
```
---
## OPTION 3: SKIP
1. Acknowledge all findings were reviewed
2. Note that user chose to proceed without fixes
3. Continue to completion
---
## UPDATE TECH-SPEC (Mode A only)
If `{execution_mode}` is "tech-spec":
1. Load `{tech_spec_path}`
2. Update status to "Completed"
3. Add review notes:
```
## Review Notes
- Adversarial review completed
- Findings: {count} total, {fixed} fixed, {skipped} skipped
- Resolution approach: {walk-through/auto-fix/skip}
```
4. Save changes
---
## COMPLETION OUTPUT
```
**Review complete. Ready to commit.**
**Implementation Summary:**
- {what was implemented}
- Files modified: {count}
- Tests: {status}
- Review findings: {X} addressed, {Y} skipped
{Explain what was implemented based on user_skill_level}
```
---
## WORKFLOW COMPLETE
This is the final step. The Quick Dev workflow is now complete.
User can:
- Commit changes
- Run additional tests
- Start new Quick Dev session
---
## SUCCESS METRICS
- User presented with resolution options
- Chosen approach executed correctly
- Fixes applied cleanly (if applicable)
- Tech-spec updated with final status (Mode A)
- Completion summary provided
- User understands what was implemented
## FAILURE MODES
- Not presenting resolution options
- Auto-fixing "noise" or "uncertain" findings
- Not updating tech-spec after resolution (Mode A)
- No completion summary
- Leaving user unclear on next steps

View File

@@ -0,0 +1,62 @@
---
name: quick-dev
description: 'Flexible development - execute tech-specs OR direct instructions with optional planning.'
---
# Quick Dev Workflow
**Goal:** Execute implementation tasks efficiently, either from a tech-spec or direct user instructions.
**Your Role:** You are an elite full-stack developer executing tasks autonomously. Follow patterns, ship code, run tests. Every response moves the project forward.
---
## WORKFLOW ARCHITECTURE
This uses **step-file architecture** for focused execution:
- Each step loads fresh to combat "lost in the middle"
- State persists via variables: `{baseline_commit}`, `{execution_mode}`, `{tech_spec_path}`
- Sequential progression through implementation phases
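As a rough sketch, the state captured in step 1 and threaded through the later steps amounts to:
```bash
# Captured once at workflow start (step-01), then read-only for steps 2-6
baseline_commit=$(git rev-parse HEAD)
execution_mode="direct"    # or "tech-spec" when a spec file is supplied
tech_spec_path=""          # set only in tech-spec mode
```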
---
## INITIALIZATION
### Configuration Loading
Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
- `user_name`, `communication_language`, `user_skill_level`
- `output_folder`, `sprint_artifacts`
- `date` as system-generated current datetime
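A minimal resolution sketch, assuming the mikefarah `yq` CLI is available (any YAML reader works):
```bash
CONFIG="{project-root}/_bmad/bmm/config.yaml"
user_name=$(yq '.user_name' "$CONFIG")
communication_language=$(yq '.communication_language' "$CONFIG")
date=$(date +"%Y-%m-%d %H:%M")   # system-generated current datetime
```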
### Paths
- `installed_path` = `{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-dev`
- `project_context` = `**/project-context.md` (load if exists)
- `project_levels` = `{project-root}/_bmad/bmm/workflows/workflow-status/project-levels.yaml`
### Related Workflows
- `create_tech_spec_workflow` = `{project-root}/_bmad/bmm/workflows/bmad-quick-flow/create-tech-spec/workflow.yaml`
- `workflow_init` = `{project-root}/_bmad/bmm/workflows/workflow-status/init/workflow.yaml`
- `party_mode_exec` = `{project-root}/_bmad/core/workflows/party-mode/workflow.md`
- `advanced_elicitation` = `{project-root}/_bmad/core/tasks/advanced-elicitation.xml`
---
## CHECKPOINT HANDLERS
At any checkpoint throughout this workflow, the following options are available:
- **[a] Advanced Elicitation** - Invoke `{advanced_elicitation}` for deeper analysis using First Principles, Pre-mortem, or other techniques
- **[p] Party Mode** - Invoke `{party_mode_exec}` to bring in multiple agent perspectives for complex decisions
These are optional power tools - use when stuck, facing ambiguity, or wanting diverse input.
---
## EXECUTION
Load and execute `steps/step-01-mode-detection.md` to begin the workflow.

View File

@@ -1,33 +0,0 @@
# Quick-Flow: Quick-Dev
name: quick-dev
description: "Flexible development - execute tech-specs OR direct instructions with optional planning."
author: "BMad"
# Config
config_source: "{project-root}/_bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
sprint_artifacts: "{config_source}:sprint_artifacts"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
user_skill_level: "{config_source}:user_skill_level"
date: system-generated
# Project context
project_context: "**/project-context.md"
# Workflow components
installed_path: "{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-dev"
instructions: "{installed_path}/instructions.md"
checklist: "{installed_path}/checklist.md"
# Related workflows
create_tech_spec_workflow: "{project-root}/_bmad/bmm/workflows/bmad-quick-flow/create-tech-spec/workflow.yaml"
party_mode_exec: "{project-root}/_bmad/core/workflows/party-mode/workflow.md"
advanced_elicitation: "{project-root}/_bmad/core/tasks/advanced-elicitation.xml"
# Routing resources (lazy-loaded)
project_levels: "{project-root}/_bmad/bmm/workflows/workflow-status/project-levels.yaml"
workflow_init: "{project-root}/_bmad/bmm/workflows/workflow-status/init/workflow.yaml"
standalone: true
web_bundle: false

View File

@@ -13,12 +13,13 @@ const { XmlHandler } = require('../../../lib/xml-handler');
const { DependencyResolver } = require('./dependency-resolver');
const { ConfigCollector } = require('./config-collector');
const { getProjectRoot, getSourcePath, getModulePath } = require('../../../lib/project-root');
const { AgentPartyGenerator } = require('../../../lib/agent-party-generator');
const { CLIUtils } = require('../../../lib/cli-utils');
const { ManifestGenerator } = require('./manifest-generator');
const { IdeConfigManager } = require('./ide-config-manager');
const { CustomHandler } = require('../custom/handler');
const { filterCustomizationData } = require('../../../lib/agent/compiler');
// BMAD installation folder name - this is constant and should never change
const BMAD_FOLDER_NAME = '_bmad';
class Installer {
constructor() {
@@ -34,58 +35,35 @@ class Installer {
this.ideConfigManager = new IdeConfigManager();
this.installedFiles = new Set(); // Track all installed files
this.ttsInjectedFiles = []; // Track files with TTS injection applied
this.bmadFolderName = BMAD_FOLDER_NAME;
}
/**
* Find the bmad installation directory in a project
* V6+ installations can use ANY folder name but ALWAYS have _config/manifest.yaml
* Always uses the standard _bmad folder name
* Also checks for legacy _cfg folder for migration
* @param {string} projectDir - Project directory
* @returns {Promise<Object>} { bmadDir: string, hasLegacyCfg: boolean }
*/
async findBmadDir(projectDir) {
const bmadDir = path.join(projectDir, BMAD_FOLDER_NAME);
// Check if project directory exists
if (!(await fs.pathExists(projectDir))) {
// Project doesn't exist yet, return default
return { bmadDir: path.join(projectDir, '_bmad'), hasLegacyCfg: false };
return { bmadDir, hasLegacyCfg: false };
}
let bmadDir = null;
// Check for legacy _cfg folder if bmad directory exists
let hasLegacyCfg = false;
try {
const entries = await fs.readdir(projectDir, { withFileTypes: true });
for (const entry of entries) {
if (entry.isDirectory()) {
const bmadPath = path.join(projectDir, entry.name);
// Check for current _config folder
const manifestPath = path.join(bmadPath, '_config', 'manifest.yaml');
if (await fs.pathExists(manifestPath)) {
// Found a V6+ installation with current _config folder
return { bmadDir: bmadPath, hasLegacyCfg: false };
}
// Check for legacy _cfg folder
const legacyManifestPath = path.join(bmadPath, '_cfg', 'manifest.yaml');
if (await fs.pathExists(legacyManifestPath)) {
bmadDir = bmadPath;
hasLegacyCfg = true;
}
}
if (await fs.pathExists(bmadDir)) {
const legacyCfgPath = path.join(bmadDir, '_cfg');
if (await fs.pathExists(legacyCfgPath)) {
hasLegacyCfg = true;
}
} catch {
console.log(chalk.red('Error reading project directory for BMAD installation detection'));
}
// If we found a bmad directory (with or without legacy _cfg)
if (bmadDir) {
return { bmadDir, hasLegacyCfg };
}
// No V6+ installation found, return default
// This will be used for new installations
return { bmadDir: path.join(projectDir, '_bmad'), hasLegacyCfg: false };
return { bmadDir, hasLegacyCfg };
}
/**
@@ -120,7 +98,7 @@ class Installer {
*
* 3. Document marker in instructions.md (if applicable)
*/
async copyFileWithPlaceholderReplacement(sourcePath, targetPath, bmadFolderName) {
async copyFileWithPlaceholderReplacement(sourcePath, targetPath) {
// List of text file extensions that should have placeholder replacement
const textExtensions = ['.md', '.yaml', '.yml', '.txt', '.json', '.js', '.ts', '.html', '.css', '.sh', '.bat', '.csv', '.xml'];
const ext = path.extname(sourcePath).toLowerCase();
@@ -285,7 +263,7 @@ class Installer {
// Check for already configured IDEs
const { Detector } = require('./detector');
const detector = new Detector();
const bmadDir = path.join(projectDir, this.bmadFolderName || 'bmad');
const bmadDir = path.join(projectDir, BMAD_FOLDER_NAME);
// During full reinstall, use the saved previous IDEs since bmad dir was deleted
// Otherwise detect from existing installation
@@ -532,18 +510,14 @@ class Installer {
}
}
// Always use _bmad as the folder name
const bmadFolderName = '_bmad';
this.bmadFolderName = bmadFolderName; // Store for use in other methods
// Store AgentVibes configuration for injection point processing
this.enableAgentVibes = config.enableAgentVibes || false;
// Set bmad folder name on module manager and IDE manager for placeholder replacement
this.moduleManager.setBmadFolderName(bmadFolderName);
this.moduleManager.setBmadFolderName(BMAD_FOLDER_NAME);
this.moduleManager.setCoreConfig(moduleConfigs.core || {});
this.moduleManager.setCustomModulePaths(customModulePaths);
this.ideManager.setBmadFolderName(bmadFolderName);
this.ideManager.setBmadFolderName(BMAD_FOLDER_NAME);
// Tool selection will be collected after we determine if it's a reinstall/update/new install
@@ -553,14 +527,8 @@ class Installer {
// Resolve target directory (path.resolve handles platform differences)
const projectDir = path.resolve(config.directory);
let existingBmadDir = null;
let existingBmadFolderName = null;
if (await fs.pathExists(projectDir)) {
const result = await this.findBmadDir(projectDir);
existingBmadDir = result.bmadDir;
existingBmadFolderName = path.basename(existingBmadDir);
}
// Always use the standard _bmad folder name
const bmadDir = path.join(projectDir, BMAD_FOLDER_NAME);
// Create a project directory if it doesn't exist (user already confirmed)
if (!(await fs.pathExists(projectDir))) {
@@ -582,8 +550,6 @@ class Installer {
}
}
const bmadDir = path.join(projectDir, bmadFolderName);
// Check existing installation
spinner.text = 'Checking for existing installation...';
const existingInstall = await this.detector.detect(bmadDir);
@@ -1606,7 +1572,7 @@ class Installer {
const targetPath = path.join(agentsDir, fileName);
if (await fs.pathExists(sourcePath)) {
await this.copyFileWithPlaceholderReplacement(sourcePath, targetPath, this.bmadFolderName || 'bmad');
await this.copyFileWithPlaceholderReplacement(sourcePath, targetPath);
this.installedFiles.add(targetPath);
}
}
@@ -1622,7 +1588,7 @@ class Installer {
const targetPath = path.join(tasksDir, fileName);
if (await fs.pathExists(sourcePath)) {
await this.copyFileWithPlaceholderReplacement(sourcePath, targetPath, this.bmadFolderName || 'bmad');
await this.copyFileWithPlaceholderReplacement(sourcePath, targetPath);
this.installedFiles.add(targetPath);
}
}
@@ -1638,7 +1604,7 @@ class Installer {
const targetPath = path.join(toolsDir, fileName);
if (await fs.pathExists(sourcePath)) {
await this.copyFileWithPlaceholderReplacement(sourcePath, targetPath, this.bmadFolderName || 'bmad');
await this.copyFileWithPlaceholderReplacement(sourcePath, targetPath);
this.installedFiles.add(targetPath);
}
}
@@ -1654,7 +1620,7 @@ class Installer {
const targetPath = path.join(templatesDir, fileName);
if (await fs.pathExists(sourcePath)) {
await this.copyFileWithPlaceholderReplacement(sourcePath, targetPath, this.bmadFolderName || 'bmad');
await this.copyFileWithPlaceholderReplacement(sourcePath, targetPath);
this.installedFiles.add(targetPath);
}
}
@@ -1669,7 +1635,7 @@ class Installer {
await fs.ensureDir(path.dirname(targetPath));
if (await fs.pathExists(dataPath)) {
await this.copyFileWithPlaceholderReplacement(dataPath, targetPath, this.bmadFolderName || 'bmad');
await this.copyFileWithPlaceholderReplacement(dataPath, targetPath);
this.installedFiles.add(targetPath);
}
}
@@ -1759,14 +1725,9 @@ class Installer {
}
}
// Check if this is a workflow.yaml file
if (file.endsWith('workflow.yaml')) {
await fs.ensureDir(path.dirname(targetFile));
await this.copyWorkflowYamlStripped(sourceFile, targetFile);
} else {
// Copy the file with placeholder replacement
await this.copyFileWithPlaceholderReplacement(sourceFile, targetFile, this.bmadFolderName || 'bmad');
}
// Copy the file with placeholder replacement
await fs.ensureDir(path.dirname(targetFile));
await this.copyFileWithPlaceholderReplacement(sourceFile, targetFile);
// Track the installed file
this.installedFiles.add(targetFile);
@@ -1844,7 +1805,7 @@ class Installer {
if (!(await fs.pathExists(customizePath))) {
const genericTemplatePath = getSourcePath('utility', 'agent-components', 'agent.customize.template.yaml');
if (await fs.pathExists(genericTemplatePath)) {
await this.copyFileWithPlaceholderReplacement(genericTemplatePath, customizePath, this.bmadFolderName || 'bmad');
await this.copyFileWithPlaceholderReplacement(genericTemplatePath, customizePath);
if (process.env.BMAD_VERBOSE_INSTALL === 'true') {
console.log(chalk.dim(` Created customize: ${moduleName}-${agentName}.customize.yaml`));
}
@@ -1853,235 +1814,6 @@ class Installer {
}
}
/**
* Build standalone agents in bmad/agents/ directory
* @param {string} bmadDir - Path to bmad directory
* @param {string} projectDir - Path to project directory
*/
async buildStandaloneAgents(bmadDir, projectDir) {
const standaloneAgentsPath = path.join(bmadDir, 'agents');
const cfgAgentsDir = path.join(bmadDir, '_config', 'agents');
// Check if standalone agents directory exists
if (!(await fs.pathExists(standaloneAgentsPath))) {
return;
}
// Get all subdirectories in agents/
const agentDirs = await fs.readdir(standaloneAgentsPath, { withFileTypes: true });
for (const agentDir of agentDirs) {
if (!agentDir.isDirectory()) continue;
const agentDirPath = path.join(standaloneAgentsPath, agentDir.name);
// Find any .agent.yaml file in the directory
const files = await fs.readdir(agentDirPath);
const yamlFile = files.find((f) => f.endsWith('.agent.yaml'));
if (!yamlFile) continue;
const agentName = path.basename(yamlFile, '.agent.yaml');
const sourceYamlPath = path.join(agentDirPath, yamlFile);
const targetMdPath = path.join(agentDirPath, `${agentName}.md`);
const customizePath = path.join(cfgAgentsDir, `${agentName}.customize.yaml`);
// Check for customizations
const customizeExists = await fs.pathExists(customizePath);
let customizedFields = [];
if (customizeExists) {
const customizeContent = await fs.readFile(customizePath, 'utf8');
const yaml = require('yaml');
const customizeYaml = yaml.parse(customizeContent);
// Detect what fields are customized (similar to rebuildAgentFiles)
if (customizeYaml) {
if (customizeYaml.persona) {
for (const [key, value] of Object.entries(customizeYaml.persona)) {
if (value !== '' && value !== null && !(Array.isArray(value) && value.length === 0)) {
customizedFields.push(`persona.${key}`);
}
}
}
if (customizeYaml.agent?.metadata) {
for (const [key, value] of Object.entries(customizeYaml.agent.metadata)) {
if (value !== '' && value !== null) {
customizedFields.push(`metadata.${key}`);
}
}
}
if (customizeYaml.critical_actions && customizeYaml.critical_actions.length > 0) {
customizedFields.push('critical_actions');
}
if (customizeYaml.menu && customizeYaml.menu.length > 0) {
customizedFields.push('menu');
}
}
}
// Build YAML to XML .md
let xmlContent = await this.xmlHandler.buildFromYaml(sourceYamlPath, customizeExists ? customizePath : null, {
includeMetadata: true,
});
// DO NOT replace {project-root} - LLMs understand this placeholder at runtime
// const processedContent = xmlContent.replaceAll('{project-root}', projectDir);
// Process TTS injection points (pass targetPath for tracking)
xmlContent = this.processTTSInjectionPoints(xmlContent, targetMdPath);
// Write the built .md file with POSIX-compliant final newline
const content = xmlContent.endsWith('\n') ? xmlContent : xmlContent + '\n';
await fs.writeFile(targetMdPath, content, 'utf8');
// Display result
if (customizedFields.length > 0) {
console.log(chalk.dim(` Built standalone agent: ${agentName}.md `) + chalk.yellow(`(customized: ${customizedFields.join(', ')})`));
} else {
console.log(chalk.dim(` Built standalone agent: ${agentName}.md`));
}
}
}
/**
* Rebuild agent files from installer source (for compile command)
* @param {string} modulePath - Path to module in bmad/ installation
* @param {string} moduleName - Module name
*/
async rebuildAgentFiles(modulePath, moduleName) {
// Get source agents directory from installer
const sourceAgentsPath =
moduleName === 'core' ? path.join(getModulePath('core'), 'agents') : path.join(getSourcePath(`modules/${moduleName}`), 'agents');
if (!(await fs.pathExists(sourceAgentsPath))) {
return; // No source agents to rebuild
}
// Determine project directory (parent of bmad/ directory)
const bmadDir = path.dirname(modulePath);
const projectDir = path.dirname(bmadDir);
const cfgAgentsDir = path.join(bmadDir, '_config', 'agents');
const targetAgentsPath = path.join(modulePath, 'agents');
// Ensure target directory exists
await fs.ensureDir(targetAgentsPath);
// Get all YAML agent files from source
const sourceFiles = await fs.readdir(sourceAgentsPath);
for (const file of sourceFiles) {
if (file.endsWith('.agent.yaml')) {
const agentName = file.replace('.agent.yaml', '');
const sourceYamlPath = path.join(sourceAgentsPath, file);
const targetMdPath = path.join(targetAgentsPath, `${agentName}.md`);
const customizePath = path.join(cfgAgentsDir, `${moduleName}-${agentName}.customize.yaml`);
// Check for customizations
const customizeExists = await fs.pathExists(customizePath);
let customizedFields = [];
if (customizeExists) {
const customizeContent = await fs.readFile(customizePath, 'utf8');
const yaml = require('yaml');
const customizeYaml = yaml.parse(customizeContent);
// Detect what fields are customized
if (customizeYaml) {
if (customizeYaml.persona) {
for (const [key, value] of Object.entries(customizeYaml.persona)) {
if (value !== '' && value !== null && !(Array.isArray(value) && value.length === 0)) {
customizedFields.push(`persona.${key}`);
}
}
}
if (customizeYaml.agent?.metadata) {
for (const [key, value] of Object.entries(customizeYaml.agent.metadata)) {
if (value !== '' && value !== null) {
customizedFields.push(`metadata.${key}`);
}
}
}
if (customizeYaml.critical_actions && customizeYaml.critical_actions.length > 0) {
customizedFields.push('critical_actions');
}
if (customizeYaml.memories && customizeYaml.memories.length > 0) {
customizedFields.push('memories');
}
if (customizeYaml.menu && customizeYaml.menu.length > 0) {
customizedFields.push('menu');
}
if (customizeYaml.prompts && customizeYaml.prompts.length > 0) {
customizedFields.push('prompts');
}
}
}
// Read the YAML content
const yamlContent = await fs.readFile(sourceYamlPath, 'utf8');
// Read customize content if exists
let customizeData = {};
if (customizeExists) {
const customizeContent = await fs.readFile(customizePath, 'utf8');
const yaml = require('yaml');
customizeData = yaml.parse(customizeContent);
}
// Build agent answers from customize data (filter empty values)
const answers = {};
if (customizeData.persona) {
Object.assign(answers, filterCustomizationData(customizeData.persona));
}
if (customizeData.agent?.metadata) {
const filteredMetadata = filterCustomizationData(customizeData.agent.metadata);
if (Object.keys(filteredMetadata).length > 0) {
Object.assign(answers, { metadata: filteredMetadata });
}
}
if (customizeData.critical_actions && customizeData.critical_actions.length > 0) {
answers.critical_actions = customizeData.critical_actions;
}
if (customizeData.memories && customizeData.memories.length > 0) {
answers.memories = customizeData.memories;
}
const coreConfigPath = path.join(bmadDir, 'core', 'config.yaml');
let coreConfig = {};
if (await fs.pathExists(coreConfigPath)) {
const yaml = require('yaml');
const coreConfigContent = await fs.readFile(coreConfigPath, 'utf8');
coreConfig = yaml.parse(coreConfigContent);
}
// Compile using the same compiler as initial installation
const { compileAgent } = require('../../../lib/agent/compiler');
const result = await compileAgent(yamlContent, answers, agentName, path.relative(bmadDir, targetMdPath), {
config: coreConfig,
});
// Check if compilation succeeded
if (!result || !result.xml) {
throw new Error(`Failed to compile agent ${agentName}: No XML returned from compiler`);
}
// Replace _bmad with actual folder name if needed
const finalXml = result.xml.replaceAll('_bmad', path.basename(bmadDir));
// Write the rebuilt .md file with POSIX-compliant final newline
const content = finalXml.endsWith('\n') ? finalXml : finalXml + '\n';
await fs.writeFile(targetMdPath, content, 'utf8');
// Display result with customizations if any
if (customizedFields.length > 0) {
console.log(chalk.dim(` Rebuilt agent: ${agentName}.md `) + chalk.yellow(`(customized: ${customizedFields.join(', ')})`));
} else {
console.log(chalk.dim(` Rebuilt agent: ${agentName}.md`));
}
}
}
}
/**
* Private: Update core
*/
@@ -2677,190 +2409,6 @@ class Installer {
return { customFiles, modifiedFiles };
}
/**
* Private: Create agent configuration files
* @param {string} bmadDir - BMAD installation directory
* @param {Object} userInfo - User information including name and language
*/
async createAgentConfigs(bmadDir, userInfo = null) {
const agentConfigDir = path.join(bmadDir, '_config', 'agents');
await fs.ensureDir(agentConfigDir);
// Get all agents from all modules
const agents = [];
const agentDetails = []; // For manifest generation
// Check modules for agents (including core)
const entries = await fs.readdir(bmadDir, { withFileTypes: true });
for (const entry of entries) {
if (entry.isDirectory() && entry.name !== '_config') {
const moduleAgentsPath = path.join(bmadDir, entry.name, 'agents');
if (await fs.pathExists(moduleAgentsPath)) {
const agentFiles = await fs.readdir(moduleAgentsPath);
for (const agentFile of agentFiles) {
if (agentFile.endsWith('.md')) {
const agentPath = path.join(moduleAgentsPath, agentFile);
const agentContent = await fs.readFile(agentPath, 'utf8');
// Skip agents with localskip="true"
const hasLocalSkip = agentContent.match(/<agent[^>]*\slocalskip="true"[^>]*>/);
if (hasLocalSkip) {
continue; // Skip this agent - it should not have been installed
}
const agentName = path.basename(agentFile, '.md');
// Extract any nodes with agentConfig="true"
const agentConfigNodes = this.extractAgentConfigNodes(agentContent);
agents.push({
name: agentName,
module: entry.name,
agentConfigNodes: agentConfigNodes,
});
// Use shared AgentPartyGenerator to extract details
let details = AgentPartyGenerator.extractAgentDetails(agentContent, entry.name, agentName);
// Apply config overrides if they exist
if (details) {
const configPath = path.join(agentConfigDir, `${entry.name}-${agentName}.md`);
if (await fs.pathExists(configPath)) {
const configContent = await fs.readFile(configPath, 'utf8');
details = AgentPartyGenerator.applyConfigOverrides(details, configContent);
}
agentDetails.push(details);
}
}
}
}
}
}
// Create config file for each agent
let createdCount = 0;
let skippedCount = 0;
// Load agent config template
const templatePath = getSourcePath('utility', 'models', 'agent-config-template.md');
const templateContent = await fs.readFile(templatePath, 'utf8');
for (const agent of agents) {
const configPath = path.join(agentConfigDir, `${agent.module}-${agent.name}.md`);
// Skip if config file already exists (preserve custom configurations)
if (await fs.pathExists(configPath)) {
skippedCount++;
continue;
}
// Build config content header
let configContent = `# Agent Config: ${agent.name}\n\n`;
// Process template and add agent-specific config nodes
let processedTemplate = templateContent;
// Replace {core:user_name} placeholder with actual user name if available
if (userInfo && userInfo.userName) {
processedTemplate = processedTemplate.replaceAll('{core:user_name}', userInfo.userName);
}
// Replace {core:communication_language} placeholder with actual language if available
if (userInfo && userInfo.responseLanguage) {
processedTemplate = processedTemplate.replaceAll('{core:communication_language}', userInfo.responseLanguage);
}
// If this agent has agentConfig nodes, add them after the existing comment
if (agent.agentConfigNodes && agent.agentConfigNodes.length > 0) {
// Find the agent-specific configuration nodes comment
const commentPattern = /(\s*<!-- Agent-specific configuration nodes -->)/;
const commentMatch = processedTemplate.match(commentPattern);
if (commentMatch) {
// Add nodes right after the comment
let agentSpecificNodes = '';
for (const node of agent.agentConfigNodes) {
agentSpecificNodes += `\n ${node}`;
}
processedTemplate = processedTemplate.replace(commentPattern, `$1${agentSpecificNodes}`);
}
}
configContent += processedTemplate;
// Ensure POSIX-compliant final newline
if (!configContent.endsWith('\n')) {
configContent += '\n';
}
await fs.writeFile(configPath, configContent, 'utf8');
this.installedFiles.add(configPath); // Track agent config files
createdCount++;
}
// Generate agent manifest with overrides applied
await this.generateAgentManifest(bmadDir, agentDetails);
return { total: agents.length, created: createdCount, skipped: skippedCount };
}
/**
* Generate agent manifest XML file
* @param {string} bmadDir - BMAD installation directory
* @param {Array} agentDetails - Array of agent details
*/
async generateAgentManifest(bmadDir, agentDetails) {
const manifestPath = path.join(bmadDir, '_config', 'agent-manifest.csv');
await AgentPartyGenerator.writeAgentParty(manifestPath, agentDetails, { forWeb: false });
}
/**
* Extract nodes with agentConfig="true" from agent content
* @param {string} content - Agent file content
* @returns {Array} Array of XML nodes that should be added to agent config
*/
extractAgentConfigNodes(content) {
const nodes = [];
try {
// Find all XML nodes with agentConfig="true"
// Match self-closing tags and tags with content
const selfClosingPattern = /<([a-zA-Z][a-zA-Z0-9_-]*)\s+[^>]*agentConfig="true"[^>]*\/>/g;
const withContentPattern = /<([a-zA-Z][a-zA-Z0-9_-]*)\s+[^>]*agentConfig="true"[^>]*>([\s\S]*?)<\/\1>/g;
// Extract self-closing tags
let match;
while ((match = selfClosingPattern.exec(content)) !== null) {
// Extract just the tag without children (structure only)
const tagMatch = match[0].match(/<([a-zA-Z][a-zA-Z0-9_-]*)([^>]*)\/>/);
if (tagMatch) {
const tagName = tagMatch[1];
const attributes = tagMatch[2].replace(/\s*agentConfig="true"/, ''); // Remove agentConfig attribute
nodes.push(`<${tagName}${attributes}></${tagName}>`);
}
}
// Extract tags with content
while ((match = withContentPattern.exec(content)) !== null) {
const fullMatch = match[0];
const tagName = match[1];
// Extract opening tag with attributes (removing agentConfig="true")
const openingTagMatch = fullMatch.match(new RegExp(`<${tagName}([^>]*)>`));
if (openingTagMatch) {
const attributes = openingTagMatch[1].replace(/\s*agentConfig="true"/, '');
// Add empty node structure (no children)
nodes.push(`<${tagName}${attributes}></${tagName}>`);
}
}
} catch (error) {
console.error('Error extracting agentConfig nodes:', error);
}
return nodes;
}
/**
* Handle missing custom module sources interactively
* @param {Map} customModuleSources - Map of custom module ID to info
@@ -2999,7 +2547,7 @@ class Installer {
await this.manifest.addCustomModule(bmadDir, missing.info);
validCustomModules.push({
id: moduleId,
id: missing.id,
name: missing.name,
path: resolvedPath,
info: missing.info,
@@ -3013,7 +2561,7 @@ class Installer {
case 'remove': {
// Extra confirmation for destructive remove
console.log(chalk.red.bold(`\n⚠️ WARNING: This will PERMANENTLY DELETE "${missing.name}" and all its files!`));
console.log(chalk.red(` Module location: ${path.join(bmadDir, moduleId)}`));
console.log(chalk.red(` Module location: ${path.join(bmadDir, missing.id)}`));
const { confirm } = await inquirer.prompt([
{

File diff suppressed because it is too large

View File

@@ -731,7 +731,7 @@ class ModuleManager {
async compileModuleAgents(sourcePath, targetPath, moduleName, bmadDir, installer = null) {
const sourceAgentsPath = path.join(sourcePath, 'agents');
const targetAgentsPath = path.join(targetPath, 'agents');
const cfgAgentsDir = path.join(bmadDir, '_bmad', '_config', 'agents');
const cfgAgentsDir = path.join(bmadDir, '_config', 'agents');
// Check if agents directory exists in source
if (!(await fs.pathExists(sourceAgentsPath))) {