fix: resolve workflow guardrails and installer regressions

Dicky Moore 2026-02-08 00:37:42 +00:00
parent 2a9b447e04
commit 3feb0d378f
18 changed files with 495 additions and 237 deletions


@ -69,6 +69,15 @@ Products, platforms, complex features — structured planning then build:
Every step tells you what's next. Optional phases (brainstorming, research, UX design) are available when you need them — ask `/bmad-help` anytime. For a detailed walkthrough, see the [Getting Started Tutorial](http://docs.bmad-method.org/tutorials/getting-started/).
### Workflow Path Resolution
BMad workflow/task files often reference placeholders such as `{project-root}` and installed paths like `{project-root}/_bmad/...`.
- In installed projects, `{project-root}` resolves to the consuming repository root where BMAD is installed.
- `_bmad/...` paths point to the installed BMAD runtime content in that repository.
- In this source repository, equivalent source files typically live under `src/...`.
- When a workflow must run directly from source (without install), use explicit `src/...` paths in that workflow step.
## Modules
BMad Method can be extended with official modules for specialized domains. Modules are available during installation and can be added to your project at any time. After the V6 beta period, these will also be available as Plugins and Granular Skills.


@ -6,7 +6,7 @@ description: 'Document Discovery & Confirmation - Handle fresh context validatio
nextStepFile: './step-v-02-format-detection.md'
advancedElicitationTask: '{project-root}/_bmad/core/workflows/advanced-elicitation/workflow.md'
partyModeWorkflow: '{project-root}/_bmad/core/workflows/party-mode/workflow.md'
prdPurpose: '{project-root}/_bmad/bmm/workflows/2-plan-workflows/create-prd/data/prd-purpose.md'
prdPurpose: '../data/prd-purpose.md'
---
# Step 1: Document Discovery & Confirmation


@ -30,7 +30,7 @@ This step will generate content and present choices:
## PROTOCOL INTEGRATION:
- When 'A' selected: Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.md
- When 'A' selected: Read fully and follow: src/core/workflows/advanced-elicitation/workflow.md
- When 'P' selected: Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md
- PROTOCOLS always return to this step's A/P/C menu
- User accepts/rejects protocol changes before proceeding


@ -2,16 +2,29 @@
name: 'step-03-execute-review'
description: 'Execute full adversarial review and record actionable findings'
nextStepFile: './step-04-present-and-resolve.md'
reviewFindingsFile: '{story_dir}/review-findings.json'
---
<step n="3" goal="Execute adversarial review">
<critical>VALIDATE EVERY CLAIM - Check git reality vs story claims</critical>
<critical>Every issue MUST be captured using the structured findings contract below</critical>
<action>Initialize findings artifacts:
- Set {{review_findings}} = [] (in-memory array)
- Set {{review_findings_file}} = {reviewFindingsFile}
- Each finding record MUST contain:
id, severity, type, summary, detail, file_line, proof, suggested_fix, reviewer, timestamp
- `file_line` format MUST be `path/to/file:line`
- `reviewer` value MUST be `senior-dev-review`
- `timestamp` MUST use system ISO datetime
</action>
<!-- Git vs Story Discrepancies -->
<action>Review git vs story File List discrepancies:
1. **Files changed but not in story File List** → MEDIUM finding (incomplete documentation)
2. **Story lists files but no git changes** → HIGH finding (false claims)
3. **Uncommitted changes not documented** → MEDIUM finding (transparency issue)
For every discrepancy, append a finding object to {{review_findings}}.
</action>
<!-- Use combined file list: story File List + git discovered files -->
@ -21,8 +34,29 @@ nextStepFile: './step-04-present-and-resolve.md'
<action>For EACH Acceptance Criterion:
1. Read the AC requirement
2. Search implementation files for evidence
3. Determine: IMPLEMENTED, PARTIAL, or MISSING
4. If MISSING/PARTIAL → HIGH SEVERITY finding
3. Determine: IMPLEMENTED, PARTIAL, or MISSING using this algorithm:
- IMPLEMENTED:
- Direct code evidence exists for ALL AC clauses, and
- At least one corroborating test OR deterministic runtime verification exists, and
- Any docs/comments are supported by code/test evidence.
- PARTIAL:
- Some AC clauses have direct implementation evidence but one or more clauses are missing OR only indirectly covered, or
- Evidence is helper/utility code not clearly wired to the story path, or
- Evidence is docs/comments only without strong corroboration.
- MISSING:
- No credible code/test/docs evidence addresses the AC clauses.
4. Evidence-strength rules:
- Code + tests = strong evidence
- Code only = medium evidence
- Docs/comments/README only = weak evidence (cannot justify IMPLEMENTED alone)
5. Indirect evidence rules:
- Generic helpers/utilities count as PARTIAL unless explicitly wired by call sites OR integration tests.
6. Severity mapping for AC gaps:
- MISSING critical-path AC → HIGH
- MISSING non-critical AC → MEDIUM
- PARTIAL critical-path AC → HIGH
- PARTIAL non-critical AC → MEDIUM
7. If AC is PARTIAL or MISSING, append a finding object to {{review_findings}}.
</action>
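The severity mapping above can be sketched as a small helper (the function name is illustrative, not part of the workflow runtime):

```javascript
// Map an AC verdict plus criticality to a finding severity,
// mirroring the severity-mapping rules in this step.
function severityForAcGap(verdict, isCriticalPath) {
  if (verdict === 'IMPLEMENTED') return null; // no finding needed
  if (verdict === 'MISSING' || verdict === 'PARTIAL') {
    return isCriticalPath ? 'HIGH' : 'MEDIUM';
  }
  throw new Error(`Unknown verdict: ${verdict}`);
}
```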
<!-- Task Completion Audit -->
@ -31,6 +65,7 @@ nextStepFile: './step-04-present-and-resolve.md'
2. Search files for evidence it was actually done
3. **CRITICAL**: If marked [x] but NOT DONE → CRITICAL finding
4. Record specific proof (file:line)
5. Append finding object to {{review_findings}} when mismatch is found
</action>
<!-- Code Quality Deep Dive -->
@ -40,6 +75,7 @@ nextStepFile: './step-04-present-and-resolve.md'
3. **Error Handling**: Missing try/catch, poor error messages
4. **Code Quality**: Complex functions, magic numbers, poor naming
5. **Test Quality**: Are tests real assertions or placeholders?
6. For each issue, append finding object to {{review_findings}}
</action>
<check if="total_issues_found lt 3">
@ -54,6 +90,27 @@ nextStepFile: './step-04-present-and-resolve.md'
</action>
<action>Find at least 3 more specific, actionable issues</action>
</check>
<action>Persist findings contract for downstream step:
- Save {{review_findings}} as JSON array to {{review_findings_file}}
- Ensure JSON is valid and each finding includes all required fields
- Set {{findings_contract}} = "JSON array at {{review_findings_file}}"
</action>
<action>Example finding record (must match real records):
{
"id": "AC-003-MISSING-001",
"severity": "HIGH",
"type": "acceptance-criteria",
"summary": "AC-3 missing null-check in API handler",
"detail": "Endpoint accepts null payload despite AC requiring rejection with 400.",
"file_line": "src/api/handler.ts:87",
"proof": "No guard before dereference; test suite lacks AC-3 rejection test.",
"suggested_fix": "Add null guard + 400 response and add regression test in test/api/handler.test.ts.",
"reviewer": "senior-dev-review",
"timestamp": "2026-02-08T00:00:00.000Z"
}
</action>
</step>
## Next


@ -2,9 +2,15 @@
name: 'step-04-present-and-resolve'
description: 'Present findings and either apply fixes or create follow-up action items'
nextStepFile: './step-05-update-status.md'
reviewFindingsFile: '{story_dir}/review-findings.json'
---
<step n="4" goal="Present findings and fix them">
<action>Load structured findings from {reviewFindingsFile}</action>
<action>Validate findings schema for each entry:
id, severity, type, summary, detail, file_line, proof, suggested_fix, reviewer, timestamp
</action>
<action>If the findings file is missing or malformed: HALT with an explicit error and return to step 3 to regenerate the findings</action>
<action>Categorize findings: HIGH (must fix), MEDIUM (should fix), LOW (nice to fix)</action>
<action>Set {{fixed_count}} = 0</action>
<action>Set {{action_count}} = 0</action>


@ -17,7 +17,7 @@ web_bundle: false
- `project_knowledge`
- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
- `date` (system-generated)
- `installed_path` = `{project-root}/_bmad/bmm/workflows/4-implementation/correct-course`
- `installed_path` = `src/bmm/workflows/4-implementation/correct-course`
- `default_output_file` = `{planning_artifacts}/sprint-change-proposal-{date}.md`
<workflow>
@ -28,6 +28,6 @@ web_bundle: false
</step>
<step n="2" goal="Validate proposal quality">
<invoke-task>Validate against checklist at {installed_path}/checklist.md using _bmad/core/tasks/validate-workflow.md</invoke-task>
<invoke-task>Validate against checklist at {installed_path}/checklist.md using src/core/tasks/validate-workflow.md</invoke-task>
</step>
</workflow>


@ -310,7 +310,7 @@
</step>
<step n="6" goal="Update sprint status and finalize">
<invoke-task>Validate against checklist at {installed_path}/checklist.md using _bmad/core/tasks/validate-workflow.md</invoke-task>
<invoke-task>Validate against checklist at {installed_path}/checklist.md using src/core/tasks/validate-workflow.md</invoke-task>
<action>Save story document unconditionally</action>
<!-- Update sprint status -->


@ -8,8 +8,25 @@ nextStepFile: './step-03-detect-review-continuation.md'
<critical>Load all available context to inform implementation</critical>
<action>Load {project_context} for coding standards and project-wide patterns (if exists)</action>
<action>Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Dev Agent Record, File List, Change Log, Status</action>
<action>Load comprehensive context from story file's Dev Notes section</action>
<action>Validate story source before parsing:
- Verify story file exists and is readable
- If missing/unreadable: emit explicit error and HALT
</action>
<action>Parse and validate required sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Dev Agent Record, File List, Status
- If section missing, empty, or malformed: emit explicit error with section name and HALT
- Dev Notes is CRITICAL and MUST be present with non-empty actionable content
</action>
<action>Parse and validate optional section: Change Log
- If missing/empty: create warning and continue with safe default ("No prior change log entries")
</action>
<action>Validate structure before extraction:
- Story: identifiable title + narrative structure
- Acceptance Criteria: parseable list/numbered clauses
- Tasks/Subtasks: checkbox task format with stable task boundaries
- Dev Agent Record/File List/Status: parseable heading + body content
- If malformed structure prevents reliable parsing: emit explicit error and HALT
</action>
<action>Load comprehensive context from story file's Dev Notes section ONLY after validation passes</action>
<action>Extract developer guidance from Dev Notes: architecture requirements, previous learnings, technical specifications</action>
<action>Use enhanced story context to inform implementation decisions and approaches</action>
<output>✅ **Context Loaded**


@ -7,6 +7,13 @@ nextStepFile: './step-09-mark-review-ready.md'
<step n="8" goal="Validate and mark task complete ONLY when fully done">
<critical>NEVER mark a task complete unless ALL conditions are met - NO LYING OR CHEATING</critical>
<action>Initialize review-tracking variables before checks:
- If {{resolved_review_items}} is undefined: set {{resolved_review_items}} = []
- If {{unresolved_review_items}} is undefined: set {{unresolved_review_items}} = []
- Set {{review_continuation}} by checking current task title/original task list for prefix "[AI-Review]"
- Set {{date}} from system-generated timestamp in project date format
</action>
<!-- VALIDATION GATES -->
<action>Verify ALL tests for this task/subtask ACTUALLY EXIST and PASS 100%</action>
<action>Confirm implementation matches EXACTLY what the task/subtask specifies - no extra features</action>
@ -16,14 +23,26 @@ nextStepFile: './step-09-mark-review-ready.md'
<!-- REVIEW FOLLOW-UP HANDLING -->
<check if="task is review follow-up (has [AI-Review] prefix)">
<action>Extract review item details (severity, description, related AC/file)</action>
<action>Add to resolution tracking list: {{resolved_review_items}}</action>
<action>Add current review task to resolution tracking list: append structured entry to {{resolved_review_items}}</action>
<!-- Mark task in Review Follow-ups section -->
<action>Mark task checkbox [x] in "Tasks/Subtasks → Review Follow-ups (AI)" section</action>
<!-- CRITICAL: Also mark corresponding action item in review section -->
<action>Find matching action item in "Senior Developer Review (AI) → Action Items" section by matching description</action>
<action>Mark that action item checkbox [x] as resolved</action>
<action>Find matching action item in "Senior Developer Review (AI) → Action Items" using fuzzy matching:
1. Normalize strings (lowercase, trim, remove "[AI-Review]" prefix/punctuation)
2. Try exact and substring matches first
3. If none, compute token-overlap/Jaccard score per candidate
4. Select highest-scoring candidate when score >= 0.60
5. If tie at best score, prefer the candidate with more shared tokens; log ambiguity
</action>
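The token-overlap stage of the matching algorithm above can be sketched as follows (exact and substring matching happen first per the steps; function names are illustrative):

```javascript
// Normalize per the rules above: lowercase, strip the [AI-Review] prefix
// and punctuation, then tokenize into a set.
function normalize(text) {
  return text.toLowerCase().replace('[ai-review]', '').replace(/[^\w\s]/g, ' ').trim();
}

function tokens(text) {
  return new Set(normalize(text).split(/\s+/).filter(Boolean));
}

// Jaccard (token-overlap) scoring with the 0.60 acceptance threshold.
function bestMatch(taskTitle, candidates, threshold = 0.6) {
  const a = tokens(taskTitle);
  let best = null;
  for (const candidate of candidates) {
    const b = tokens(candidate);
    const shared = [...a].filter((t) => b.has(t)).length;
    const union = new Set([...a, ...b]).size;
    const score = union === 0 ? 0 : shared / union;
    if (!best || score > best.score) best = { candidate, score };
  }
  return best && best.score >= threshold ? best.candidate : null;
}
```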
<check if="matching action item found">
<action>Mark that action item checkbox [x] as resolved</action>
</check>
<check if="no candidate meets threshold">
<action>Log warning and append task to {{unresolved_review_items}}</action>
<action>Add resolution note in Dev Agent Record that no corresponding action item was found</action>
</check>
<action>Add to Dev Agent Record → Completion Notes: "✅ Resolved review finding [{{severity}}]: {{description}}"</action>
</check>
@ -41,7 +60,7 @@ nextStepFile: './step-09-mark-review-ready.md'
</check>
<check if="review_continuation == true and {{resolved_review_items}} is not empty">
<action>Count total resolved review items in this session</action>
<action>Set {{resolved_count}} = length({{resolved_review_items}})</action>
<action>Add Change Log entry: "Addressed code review findings - {{resolved_count}} items resolved (Date: {{date}})"</action>
</check>


@ -10,6 +10,10 @@ nextStepFile: './step-10-closeout.md'
<action>Confirm File List includes every changed file</action>
<action>Execute enhanced definition-of-done validation</action>
<action>Update the story Status to: "review"</action>
<action>Initialize sprint tracking state:
- If {sprint_status} exists and is readable, load file and set {{current_sprint_status}} from tracking mode/content
- If file does not exist, unreadable, or indicates no sprint tracking, set {{current_sprint_status}} = "no-sprint-tracking"
</action>
<!-- Enhanced Definition of Done Validation -->
<action>Validate definition-of-done checklist with essential requirements:


@ -1,7 +1,8 @@
---
name: dev-story
description: "Execute a story by implementing tasks/subtasks, writing tests, validating, and updating the story file per acceptance criteria"
main_config: '{project-root}/_bmad/bmm/config.yaml'
projectRoot: '{project-root}'
main_config: '{projectRoot}/_bmad/bmm/config.yaml'
web_bundle: false
---
@ -16,7 +17,11 @@ Implement a ready story end-to-end with strict validation gates, accurate progre
- Do not pre-load future step files.
## Initialization
- Load config from `{project-root}/_bmad/bmm/config.yaml`.
- Resolve `projectRoot`:
- Prefer `{project-root}` when provided by runtime.
- If unavailable, resolve repo root from current working directory (locate-repo-root helper / process cwd) and set `projectRoot`.
- Validate that `{projectRoot}/_bmad/bmm/config.yaml` exists and is readable before continuing.
- Load config from `{projectRoot}/_bmad/bmm/config.yaml`.
- Resolve variables:
- `user_name`
- `communication_language`
@ -28,7 +33,7 @@ Implement a ready story end-to-end with strict validation gates, accurate progre
- `story_file` (if provided)
- `project_context` = `**/project-context.md`
- `date` (system-generated)
- `installed_path` = `{project-root}/_bmad/bmm/workflows/4-implementation/dev-story`
- `installed_path` = `{projectRoot}/_bmad/bmm/workflows/4-implementation/dev-story`
## Critical Rules
- Communicate in `{communication_language}` and tailor explanations to `{user_skill_level}`.
@ -42,5 +47,20 @@ Implement a ready story end-to-end with strict validation gates, accurate progre
- Execute steps in order and do not skip validation gates.
- Continue until the story is complete unless a defined HALT condition triggers.
## HALT Definition
- HALT triggers:
- Required inputs/files are missing or unreadable.
- Validation gates fail and cannot be remediated in current step.
- Test/regression failures persist after fix attempts.
- Story state becomes inconsistent (e.g., malformed task structure preventing safe updates).
- HALT behavior:
- Stop executing further steps immediately.
- Persist current story-file edits and workflow state safely.
- Emit explicit user-facing error message describing trigger and remediation needed.
- Do not apply partial completion marks after HALT.
- Resume semantics:
- Manual resume only after user confirms the blocking issue is resolved.
- Resume from the last incomplete step checkpoint, re-running validations before progressing.
## Execution
Read fully and follow: `steps/step-01-find-story.md`.


@ -8,8 +8,14 @@ standalone: true
## Initialization
- Load config from `{project-root}/_bmad/core/config.yaml`.
- Resolve variables (if available):
- `communication_language`, `user_name`, `document_output_language`
- Validate config load before continuing:
- Verify file exists and is readable.
- Parse YAML and fail fast with explicit error if parsing fails.
- Require `user_name`; if missing, abort initialization with descriptive error.
- Apply explicit defaults when optional keys are absent:
- `communication_language = "en"`
- `document_output_language = "en"`
- Log resolved values and config source path.
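The fail-fast and defaulting rules above can be sketched against an already-parsed config object (YAML parsing itself is handled by the repo's existing `yaml` dependency; the function name is illustrative):

```javascript
// Apply the validation rules above: require user_name, fail fast on a bad
// parse, and fill explicit defaults for optional language keys.
function applyConfigDefaults(parsed, configPath) {
  if (!parsed || typeof parsed !== 'object') throw new Error(`Config parse failed: ${configPath}`);
  if (!parsed.user_name) throw new Error(`Config missing required key user_name: ${configPath}`);
  return {
    user_name: parsed.user_name,
    communication_language: parsed.communication_language ?? 'en',
    document_output_language: parsed.document_output_language ?? 'en',
  };
}
```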
## Purpose
Execute a validation checklist against a target file and report findings clearly and consistently.
@ -20,8 +26,16 @@ Execute a validation checklist against a target file and report findings clearly
- If not provided, ask the user for the checklist path.
2. **Load target file**
- Infer the target file from the checklist context or workflow inputs.
- If unclear, ask the user for the exact file path to validate.
- Infer candidate target path in this order:
- Explicit keys in workflow/checklist inputs: `file`, `path`, `target`, `filePath`
- Path-like tokens in checklist items
- First matching path from glob patterns supplied by checklist/input
- Normalize all candidate paths relative to repo root and resolve `.`/`..`.
- Validate candidate existence and expected file type (`.yaml`, `.yml`, `.json`, or checklist-defined extension).
- If multiple valid candidates remain, prefer explicit key fields over inferred tokens.
- If no valid candidate is found, prompt user with schema example:
- `Please provide the exact file path (relative to repo root), e.g. ./workflows/ci.yml`
- Validate user-supplied path before proceeding.
3. **Run the checklist**
- Read the checklist fully.
@ -33,8 +47,14 @@ Execute a validation checklist against a target file and report findings clearly
- Provide actionable fixes for each issue.
5. **Edits (if applicable)**
- If the checklist instructs updates or auto-fixes, ask for confirmation before editing.
- Only apply changes after user approval.
- If checklist requires edits/auto-fixes, follow safe-edit protocol:
- Ask for confirmation before editing.
- Create backup snapshot of target file before changes.
- Generate reversible diff preview and show it to user.
- Apply edits only after user approval.
- Run syntax/validation checks against edited file.
- If validation fails or user cancels, rollback from backup and report rollback status.
- Record backup/diff locations in task output.
6. **Finalize**
- Confirm completion and provide the final validation summary.


@ -1,7 +1,15 @@
<handler type="validate-workflow">
When command has: validate-workflow="path/to/workflow.md"
1. You MUST LOAD the file at: {project-root}/_bmad/core/tasks/validate-workflow.md
2. READ its entire contents and EXECUTE all instructions in that file
3. Pass the workflow, and also check the workflow validation property to find and load the validation schema to pass as the checklist
4. The workflow should try to identify the file to validate based on checklist context or else you will ask the user to specify
1. Resolve loader paths safely:
- Primary: {project-root}/_bmad/core/tasks/validate-workflow.md
- Fallback: {project-root}/src/core/tasks/validate-workflow.md
2. Verify primary path exists and is readable before loading
3. Wrap read/parse in try/catch and log path + underlying error on failure
4. If primary fails, attempt fallback and log warning that fallback mode is active
5. If fallback also fails:
- Log clear error with both attempted paths and caught errors
- Fail fast with deterministic exception (do not continue with partial state)
6. READ entire resolved task file and EXECUTE all instructions
7. Pass the workflow and inspect workflow validation property to find/load checklist schema
8. If target file cannot be inferred from checklist context, prompt user for exact path
</handler>


@ -12,10 +12,15 @@
*/
const path = require('node:path');
const os = require('node:os');
const fs = require('fs-extra');
const yaml = require('yaml');
const { YamlXmlBuilder } = require('../tools/cli/lib/yaml-xml-builder');
const { ManifestGenerator } = require('../tools/cli/installers/lib/core/manifest-generator');
const { WorkflowCommandGenerator } = require('../tools/cli/installers/lib/ide/shared/workflow-command-generator');
const { TaskToolCommandGenerator } = require('../tools/cli/installers/lib/ide/shared/task-tool-command-generator');
const { IdeManager } = require('../tools/cli/installers/lib/ide/manager');
const { ModuleManager } = require('../tools/cli/installers/lib/modules/manager');
const { BMAD_FOLDER_NAME } = require('../tools/cli/installers/lib/ide/shared/path-utils');
// ANSI colors
@ -416,6 +421,143 @@ async function runTests() {
console.log('');
// ============================================================
// Test 11: Gemini Template Extension Regression Guard
// ============================================================
console.log(`${colors.yellow}Test Suite 11: Gemini Template Extension Guard${colors.reset}\n`);
try {
const tmpRoot = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-gemini-install-'));
const projectDir = path.join(tmpRoot, 'project');
const bmadDir = path.join(tmpRoot, BMAD_FOLDER_NAME);
await fs.ensureDir(projectDir);
await fs.copy(path.join(projectRoot, 'src', 'core'), path.join(bmadDir, 'core'));
await fs.copy(path.join(projectRoot, 'src', 'bmm'), path.join(bmadDir, 'bmm'));
const manifestGenerator = new ManifestGenerator();
await manifestGenerator.generateManifests(bmadDir, ['bmm'], [], { ides: ['gemini'] });
const ideManager = new IdeManager();
await ideManager.ensureInitialized();
await ideManager.setup('gemini', projectDir, bmadDir, { selectedModules: ['bmm'] });
const commandsDir = path.join(projectDir, '.gemini', 'commands');
const generated = await fs.readdir(commandsDir);
assert(
generated.some((file) => file.endsWith('.toml')),
'Gemini installer emits template-native TOML command files',
generated.join(', '),
);
assert(!generated.some((file) => file.endsWith('.md')), 'Gemini installer does not emit markdown command files', generated.join(', '));
await fs.remove(tmpRoot);
} catch (error) {
assert(false, 'Gemini template extension guard runs', error.message);
}
console.log('');
// ============================================================
// Test 12: Manifest Stale Entry Cleanup Guard
// ============================================================
console.log(`${colors.yellow}Test Suite 12: Manifest Stale Entry Cleanup Guard${colors.reset}\n`);
try {
const tmpRoot = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-manifest-clean-'));
const bmadDir = path.join(tmpRoot, BMAD_FOLDER_NAME);
await fs.copy(path.join(projectRoot, 'src', 'core'), path.join(bmadDir, 'core'));
await fs.copy(path.join(projectRoot, 'src', 'bmm'), path.join(bmadDir, 'bmm'));
const cfgDir = path.join(bmadDir, '_config');
await fs.ensureDir(cfgDir);
const staleManifestPath = path.join(cfgDir, 'workflow-manifest.csv');
await fs.writeFile(
staleManifestPath,
'name,description,module,path\n"old","old workflow","core","_bmad/core/workflows/old/workflow.md"\n',
);
const manifestGenerator = new ManifestGenerator();
await manifestGenerator.generateManifests(bmadDir, ['bmm'], [], { ides: ['claude-code'] });
const regenerated = await fs.readFile(staleManifestPath, 'utf8');
assert(
!regenerated.includes('"old","old workflow","core","_bmad/core/workflows/old/workflow.md"'),
'Workflow manifest regeneration removes stale/deleted rows',
);
await fs.remove(tmpRoot);
} catch (error) {
assert(false, 'Manifest stale entry cleanup guard runs', error.message);
}
console.log('');
// ============================================================
// Test 13: Internal Task Command Exposure Guard
// ============================================================
console.log(`${colors.yellow}Test Suite 13: Internal Task Exposure Guard${colors.reset}\n`);
try {
const tmpRoot = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-task-filter-'));
const projectDir = path.join(tmpRoot, 'project');
const bmadDir = path.join(tmpRoot, BMAD_FOLDER_NAME);
const commandsDir = path.join(tmpRoot, 'commands');
await fs.ensureDir(projectDir);
await fs.copy(path.join(projectRoot, 'src', 'core'), path.join(bmadDir, 'core'));
await fs.copy(path.join(projectRoot, 'src', 'bmm'), path.join(bmadDir, 'bmm'));
const manifestGenerator = new ManifestGenerator();
await manifestGenerator.generateManifests(bmadDir, ['bmm'], [], { ides: ['claude-code'] });
const taskToolGenerator = new TaskToolCommandGenerator();
await taskToolGenerator.generateDashTaskToolCommands(projectDir, bmadDir, commandsDir);
const generated = await fs.readdir(commandsDir);
assert(
!generated.some((file) => /^bmad-workflow\./.test(file)),
'Task/tool command generation excludes internal workflow runner task command',
generated.join(', '),
);
await fs.remove(tmpRoot);
} catch (error) {
assert(false, 'Internal task exposure guard runs', error.message);
}
console.log('');
// ============================================================
// Test 14: Workflow Frontmatter web_bundle Strip Guard
// ============================================================
console.log(`${colors.yellow}Test Suite 14: web_bundle Frontmatter Strip Guard${colors.reset}\n`);
try {
const manager = new ModuleManager();
const content = `---
name: demo-workflow
description: Demo
web_bundle:
enabled: true
bundle:
mode: strict
---
# Demo
`;
const stripped = manager.stripWebBundleFromFrontmatter(content);
const frontmatterMatch = stripped.match(/^---\n([\s\S]*?)\n---/);
const parsed = frontmatterMatch ? yaml.parse(frontmatterMatch[1]) : {};
assert(!stripped.includes('web_bundle:'), 'web_bundle strip removes nested web_bundle block from frontmatter');
assert(parsed.name === 'demo-workflow' && parsed.description === 'Demo', 'web_bundle strip preserves other frontmatter keys');
} catch (error) {
assert(false, 'web_bundle strip guard runs', error.message);
}
console.log('');
// ============================================================
// Summary
// ============================================================


@ -701,60 +701,16 @@ class ManifestGenerator {
async writeWorkflowManifest(cfgDir) {
const csvPath = path.join(cfgDir, 'workflow-manifest.csv');
const escapeCsv = (value) => `"${String(value ?? '').replaceAll('"', '""')}"`;
const parseCsvLine = (line) => {
const columns = line.match(/(".*?"|[^",\s]+)(?=\s*,|\s*$)/g) || [];
return columns.map((c) => c.replaceAll(/^"|"$/g, ''));
};
// Read existing manifest to preserve entries
const existingEntries = new Map();
if (await fs.pathExists(csvPath)) {
const content = await fs.readFile(csvPath, 'utf8');
const lines = content.split('\n').filter((line) => line.trim());
// Skip header
for (let i = 1; i < lines.length; i++) {
const line = lines[i];
if (line) {
const parts = parseCsvLine(line);
if (parts.length >= 4) {
const [name, description, module, workflowPath] = parts;
existingEntries.set(`${module}:${name}`, {
name,
description,
module,
path: workflowPath,
});
}
}
}
}
// Create CSV header - standalone column removed, everything is canonicalized to 4 columns
let csv = 'name,description,module,path\n';
// Combine existing and new workflows
const allWorkflows = new Map();
// Add existing entries
for (const [key, value] of existingEntries) {
allWorkflows.set(key, value);
}
// Add/update new workflows
for (const workflow of this.workflows) {
const key = `${workflow.module}:${workflow.name}`;
allWorkflows.set(key, {
name: workflow.name,
description: workflow.description,
module: workflow.module,
path: workflow.path,
});
}
// Write all workflows
for (const [, value] of allWorkflows) {
const row = [escapeCsv(value.name), escapeCsv(value.description), escapeCsv(value.module), escapeCsv(value.path)].join(',');
// Regenerate from current install scan to avoid preserving stale/deleted entries
const sortedWorkflows = [...this.workflows].sort((a, b) => `${a.module}:${a.name}`.localeCompare(`${b.module}:${b.name}`));
for (const workflow of sortedWorkflows) {
const row = [escapeCsv(workflow.name), escapeCsv(workflow.description), escapeCsv(workflow.module), escapeCsv(workflow.path)].join(
',',
);
csv += row + '\n';
}
@ -769,50 +725,12 @@ class ManifestGenerator {
async writeAgentManifest(cfgDir) {
const csvPath = path.join(cfgDir, 'agent-manifest.csv');
// Read existing manifest to preserve entries
const existingEntries = new Map();
if (await fs.pathExists(csvPath)) {
const content = await fs.readFile(csvPath, 'utf8');
const lines = content.split('\n').filter((line) => line.trim());
// Skip header
for (let i = 1; i < lines.length; i++) {
const line = lines[i];
if (line) {
// Parse CSV (simple parsing assuming no commas in quoted fields)
const parts = line.split('","');
if (parts.length >= 10) {
const name = parts[0].replace(/^"/, '');
const module = parts[8];
existingEntries.set(`${module}:${name}`, line);
}
}
}
}
// Create CSV header with persona fields
let csv = 'name,displayName,title,icon,role,identity,communicationStyle,principles,module,path\n';
const sortedAgents = [...this.agents].sort((a, b) => `${a.module}:${a.name}`.localeCompare(`${b.module}:${b.name}`));
// Combine existing and new agents, preferring new data for duplicates
const allAgents = new Map();
// Add existing entries
for (const [key, value] of existingEntries) {
allAgents.set(key, value);
}
// Add/update new agents
for (const agent of this.agents) {
const key = `${agent.module}:${agent.name}`;
allAgents.set(
key,
`"${agent.name}","${agent.displayName}","${agent.title}","${agent.icon}","${agent.role}","${agent.identity}","${agent.communicationStyle}","${agent.principles}","${agent.module}","${agent.path}"`,
);
}
// Write all agents
for (const [, value] of allAgents) {
csv += value + '\n';
for (const agent of sortedAgents) {
csv += `"${agent.name}","${agent.displayName}","${agent.title}","${agent.icon}","${agent.role}","${agent.identity}","${agent.communicationStyle}","${agent.principles}","${agent.module}","${agent.path}"\n`;
}
await fs.writeFile(csvPath, csv);
@ -826,47 +744,11 @@ class ManifestGenerator {
async writeTaskManifest(cfgDir) {
const csvPath = path.join(cfgDir, 'task-manifest.csv');
// Read existing manifest to preserve entries
const existingEntries = new Map();
if (await fs.pathExists(csvPath)) {
const content = await fs.readFile(csvPath, 'utf8');
const lines = content.split('\n').filter((line) => line.trim());
// Skip header
for (let i = 1; i < lines.length; i++) {
const line = lines[i];
if (line) {
// Parse CSV (simple parsing assuming no commas in quoted fields)
const parts = line.split('","');
if (parts.length >= 6) {
const name = parts[0].replace(/^"/, '');
const module = parts[3];
existingEntries.set(`${module}:${name}`, line);
}
}
}
}
// Create CSV header with standalone column
let csv = 'name,displayName,description,module,path,standalone\n';
// Combine existing and new tasks
const allTasks = new Map();
// Add existing entries
for (const [key, value] of existingEntries) {
allTasks.set(key, value);
}
// Add/update new tasks
for (const task of this.tasks) {
const key = `${task.module}:${task.name}`;
allTasks.set(key, `"${task.name}","${task.displayName}","${task.description}","${task.module}","${task.path}","${task.standalone}"`);
}
// Write all tasks
for (const [, value] of allTasks) {
csv += value + '\n';
const sortedTasks = [...this.tasks].sort((a, b) => `${a.module}:${a.name}`.localeCompare(`${b.module}:${b.name}`));
for (const task of sortedTasks) {
csv += `"${task.name}","${task.displayName}","${task.description}","${task.module}","${task.path}","${task.standalone}"\n`;
}
await fs.writeFile(csvPath, csv);
@@ -880,47 +762,11 @@ class ManifestGenerator {
async writeToolManifest(cfgDir) {
const csvPath = path.join(cfgDir, 'tool-manifest.csv');
// Read existing manifest to preserve entries
const existingEntries = new Map();
if (await fs.pathExists(csvPath)) {
const content = await fs.readFile(csvPath, 'utf8');
const lines = content.split('\n').filter((line) => line.trim());
// Skip header
for (let i = 1; i < lines.length; i++) {
const line = lines[i];
if (line) {
// Parse CSV (simple parsing assuming no commas in quoted fields)
const parts = line.split('","');
if (parts.length >= 6) {
const name = parts[0].replace(/^"/, '');
const module = parts[3];
existingEntries.set(`${module}:${name}`, line);
}
}
}
}
// Create CSV header with standalone column
let csv = 'name,displayName,description,module,path,standalone\n';
// Combine existing and new tools
const allTools = new Map();
// Add existing entries
for (const [key, value] of existingEntries) {
allTools.set(key, value);
}
// Add/update new tools
for (const tool of this.tools) {
const key = `${tool.module}:${tool.name}`;
allTools.set(key, `"${tool.name}","${tool.displayName}","${tool.description}","${tool.module}","${tool.path}","${tool.standalone}"`);
}
// Write all tools
for (const [, value] of allTools) {
csv += value + '\n';
const sortedTools = [...this.tools].sort((a, b) => `${a.module}:${a.name}`.localeCompare(`${b.module}:${b.name}`));
for (const tool of sortedTools) {
csv += `"${tool.name}","${tool.displayName}","${tool.description}","${tool.module}","${tool.path}","${tool.standalone}"\n`;
}
await fs.writeFile(csvPath, csv);


@@ -89,9 +89,10 @@ class ConfigDrivenIdeSetup extends BaseIdeSetup {
// Install tasks and tools
if (!artifact_types || artifact_types.includes('tasks') || artifact_types.includes('tools')) {
const taskToolGen = new TaskToolCommandGenerator();
const taskToolResult = await taskToolGen.generateDashTaskToolCommands(projectDir, bmadDir, targetPath);
results.tasks = taskToolResult.tasks || 0;
results.tools = taskToolResult.tools || 0;
const { artifacts } = await taskToolGen.collectTaskToolArtifacts(bmadDir);
const taskToolResult = await this.writeTaskToolArtifacts(targetPath, artifacts, template_type, config, artifact_types);
results.tasks = taskToolResult.tasks;
results.tools = taskToolResult.tools;
}
await this.printSummary(results, target_dir, options);
@@ -132,12 +133,12 @@ class ConfigDrivenIdeSetup extends BaseIdeSetup {
*/
async writeAgentArtifacts(targetPath, artifacts, templateType, config = {}) {
// Try to load platform-specific template, fall back to default-agent
const template = await this.loadTemplate(templateType, 'agent', config, 'default-agent');
const { template, extension } = await this.loadTemplateWithMetadata(templateType, 'agent', config, 'default-agent');
let count = 0;
for (const artifact of artifacts) {
const content = this.renderTemplate(template, artifact);
const filename = this.generateFilename(artifact, 'agent');
const filename = this.generateFilename(artifact, 'agent', extension);
const filePath = path.join(targetPath, filename);
await this.writeFile(filePath, content);
count++;
@@ -164,9 +165,9 @@ class ConfigDrivenIdeSetup extends BaseIdeSetup {
// Fall back to default template if the requested one doesn't exist
const finalTemplateType = 'default-workflow';
const template = await this.loadTemplate(workflowTemplateType, 'workflow', config, finalTemplateType);
const { template, extension } = await this.loadTemplateWithMetadata(workflowTemplateType, 'workflow', config, finalTemplateType);
const content = this.renderTemplate(template, artifact);
const filename = this.generateFilename(artifact, 'workflow');
const filename = this.generateFilename(artifact, 'workflow', extension);
const filePath = path.join(targetPath, filename);
await this.writeFile(filePath, content);
count++;
@@ -176,6 +177,51 @@ class ConfigDrivenIdeSetup extends BaseIdeSetup {
return count;
}
/**
* Write task/tool artifacts to target directory
* @param {string} targetPath - Target directory path
* @param {Array} artifacts - Task/tool artifacts
* @param {string} templateType - Template type to use
* @param {Object} config - Installation configuration
* @param {Array<string>} artifactTypes - Optional include filter from installer config
* @returns {Promise<{tasks:number,tools:number}>} Count of artifacts written
*/
async writeTaskToolArtifacts(targetPath, artifacts, templateType, config = {}, artifactTypes = null) {
let tasks = 0;
let tools = 0;
const templateCache = new Map();
for (const artifact of artifacts) {
if (artifact.type !== 'task' && artifact.type !== 'tool') {
continue;
}
if (artifactTypes && !artifactTypes.includes(`${artifact.type}s`)) {
continue;
}
const cacheKey = `${templateType}:${artifact.type}`;
if (!templateCache.has(cacheKey)) {
const loaded = await this.loadTemplateWithMetadata(templateType, artifact.type, config, `default-${artifact.type}`);
templateCache.set(cacheKey, loaded);
}
const { template, extension } = templateCache.get(cacheKey);
const content = this.renderTemplate(template, artifact);
const filename = this.generateFilename(artifact, artifact.type, extension);
const filePath = path.join(targetPath, filename);
await this.writeFile(filePath, content);
if (artifact.type === 'task') {
tasks++;
} else {
tools++;
}
}
return { tasks, tools };
}
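The task/tool writer above memoizes template loads per `templateType:artifact.type` pair. The same caching pattern in isolation, with a stub loader standing in for `loadTemplateWithMetadata`:

```javascript
// Sketch of the template-cache pattern: one load per distinct
// (templateType, artifactType) key, so a run over many artifacts reads
// each template once. The loader is a stub, not the real implementation.
function createTemplateCache(loadFn) {
  const cache = new Map();
  let loads = 0;
  return {
    get(templateType, artifactType) {
      const key = `${templateType}:${artifactType}`;
      if (!cache.has(key)) {
        loads++;
        cache.set(key, loadFn(templateType, artifactType));
      }
      return cache.get(key);
    },
    loadCount: () => loads,
  };
}

const cache = createTemplateCache((t, a) => ({ template: `${t}/${a}`, extension: '.md' }));
cache.get('claude', 'task');
cache.get('claude', 'task'); // second lookup served from the Map
cache.get('claude', 'tool');
console.log(cache.loadCount()); // → 2
```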
/**
* Load template based on type and configuration
* @param {string} templateType - Template type (claude, windsurf, etc.)
@@ -185,31 +231,58 @@ class ConfigDrivenIdeSetup extends BaseIdeSetup {
* @returns {Promise<string>} Template content
*/
async loadTemplate(templateType, artifactType, config = {}, fallbackTemplateType = null) {
const { template } = await this.loadTemplateWithMetadata(templateType, artifactType, config, fallbackTemplateType);
return template;
}
/**
* Load template with file extension metadata for extension-aware command generation
* @param {string} templateType - Template type (claude, windsurf, etc.)
* @param {string} artifactType - Artifact type (agent, workflow, task, tool)
* @param {Object} config - Installation configuration
* @param {string} fallbackTemplateType - Fallback template type if requested template not found
* @returns {Promise<{template:string, extension:string}>} Template content and extension
*/
async loadTemplateWithMetadata(templateType, artifactType, config = {}, fallbackTemplateType = null) {
const { header_template, body_template } = config;
const supportedExtensions = ['.md', '.toml', '.yaml', '.yml', '.json', '.txt'];
// Check for separate header/body templates
if (header_template || body_template) {
return await this.loadSplitTemplates(templateType, artifactType, header_template, body_template);
const template = await this.loadSplitTemplates(templateType, artifactType, header_template, body_template);
return { template, extension: '.md' };
}
// Load combined template
const templateName = `${templateType}-${artifactType}.md`;
const templatePath = path.join(__dirname, 'templates', 'combined', templateName);
if (await fs.pathExists(templatePath)) {
return await fs.readFile(templatePath, 'utf8');
// Load combined template with extension detection
for (const extension of supportedExtensions) {
const templateName = `${templateType}-${artifactType}${extension}`;
const templatePath = path.join(__dirname, 'templates', 'combined', templateName);
if (await fs.pathExists(templatePath)) {
return {
template: await fs.readFile(templatePath, 'utf8'),
extension,
};
}
}
// Fall back to default template (if provided)
if (fallbackTemplateType) {
const fallbackPath = path.join(__dirname, 'templates', 'combined', `${fallbackTemplateType}.md`);
if (await fs.pathExists(fallbackPath)) {
return await fs.readFile(fallbackPath, 'utf8');
for (const extension of supportedExtensions) {
const fallbackPath = path.join(__dirname, 'templates', 'combined', `${fallbackTemplateType}${extension}`);
if (await fs.pathExists(fallbackPath)) {
return {
template: await fs.readFile(fallbackPath, 'utf8'),
extension,
};
}
}
}
// Ultimate fallback - minimal template
return this.getDefaultTemplate(artifactType);
return {
template: this.getDefaultTemplate(artifactType),
extension: '.md',
};
}
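The lookup order in `loadTemplateWithMetadata` (requested template across the supported extensions, then the fallback template, then the built-in default) can be sketched against an in-memory file map; the template filenames below are hypothetical:

```javascript
// Sketch of the resolution order, exercised against an in-memory
// "template directory" (a Map) so no files are needed. The extension
// order mirrors the installer's supportedExtensions list.
const SUPPORTED_EXTENSIONS = ['.md', '.toml', '.yaml', '.yml', '.json', '.txt'];

function resolveTemplate(files, templateType, artifactType, fallbackType) {
  const candidates = [`${templateType}-${artifactType}`];
  if (fallbackType) candidates.push(fallbackType);
  for (const base of candidates) {
    for (const extension of SUPPORTED_EXTENSIONS) {
      const name = `${base}${extension}`;
      if (files.has(name)) return { template: files.get(name), extension };
    }
  }
  // Ultimate fallback, standing in for getDefaultTemplate(artifactType)
  return { template: '(built-in default)', extension: '.md' };
}

// Hypothetical template files:
const files = new Map([
  ['gemini-agent.toml', '[agent]'],
  ['default-agent.md', '# Agent'],
]);

console.log(resolveTemplate(files, 'gemini', 'agent', 'default-agent'));
// → { template: '[agent]', extension: '.toml' }
console.log(resolveTemplate(files, 'windsurf', 'agent', 'default-agent'));
// → { template: '# Agent', extension: '.md' }
```

Returning the extension alongside the content is what lets later filename generation follow the template's format instead of hard-coding `.md`.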
/**
@@ -325,11 +398,15 @@ LOAD and execute from: {project-root}/{{bmadFolderName}}/{{path}}
* @param {string} artifactType - Artifact type (agent, workflow, task, tool)
* @returns {string} Generated filename
*/
generateFilename(artifact, artifactType) {
generateFilename(artifact, artifactType, extension = '.md') {
const { toDashPath } = require('./shared/path-utils');
// toDashPath already handles the .agent.md suffix for agents correctly
// No need to add it again here
return toDashPath(artifact.relativePath);
const dashName = toDashPath(artifact.relativePath);
if (extension === '.md') {
return dashName;
}
return dashName.replace(/\.md$/i, extension);
}
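`generateFilename` swaps the trailing `.md` only when the resolved template carries a different extension. A condensed sketch, with a simplified stand-in for the shared `toDashPath` utility:

```javascript
// Simplified stand-in for the shared path utility: flatten a relative
// path into a dash-joined name.
function toDashPath(relativePath) {
  return relativePath.replaceAll('/', '-');
}

// Extension-aware naming: keep .md as-is, otherwise replace the suffix.
function generateFilename(relativePath, extension = '.md') {
  const dashName = toDashPath(relativePath);
  return extension === '.md' ? dashName : dashName.replace(/\.md$/i, extension);
}

console.log(generateFilename('bmm/tasks/shard-doc.md'));          // → bmm-tasks-shard-doc.md
console.log(generateFilename('bmm/tasks/shard-doc.md', '.toml')); // → bmm-tasks-shard-doc.toml
```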
/**


@@ -18,6 +18,15 @@ class TaskToolCommandGenerator {
this.bmadFolderName = bmadFolderName;
}
/**
* Determine if manifest entry is standalone/user-facing.
* @param {Object} item - Manifest row
* @returns {boolean} True when item should be exposed as a command
*/
isStandalone(item) {
return item?.standalone === 'true' || item?.standalone === true;
}
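Because manifest rows arrive either parsed from CSV (string fields) or built in memory (boolean fields), the guard accepts both representations. A sketch of the resulting filtering; the row names below are illustrative only:

```javascript
// Sketch of the standalone guard: CSV-parsed rows carry 'true'/'false'
// strings, in-memory rows may carry real booleans, and a missing column
// excludes the row.
function isStandalone(item) {
  return item?.standalone === 'true' || item?.standalone === true;
}

const rows = [
  { name: 'shard-doc', standalone: 'true' },   // CSV-parsed row
  { name: 'internal-lint', standalone: 'false' },
  { name: 'adv-elicit', standalone: true },    // in-memory row
  { name: 'no-flag' },                         // missing column → excluded
];

const commands = rows.filter(isStandalone).map((r) => r.name);
console.log(commands); // → [ 'shard-doc', 'adv-elicit' ]
```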
/**
* Collect task and tool artifacts for IDE installation
* @param {string} bmadDir - BMAD installation directory
@@ -27,12 +36,14 @@
const tasks = await this.loadTaskManifest(bmadDir);
const tools = await this.loadToolManifest(bmadDir);
// Manifest rows may include internal entries; expose only standalone tasks/tools as commands
const standaloneTasks = (tasks || []).filter((task) => this.isStandalone(task));
const standaloneTools = (tools || []).filter((tool) => this.isStandalone(tool));
const artifacts = [];
const bmadPrefix = `${BMAD_FOLDER_NAME}/`;
// Collect task artifacts
for (const task of tasks || []) {
for (const task of standaloneTasks) {
let taskPath = (task.path || '').replaceAll('\\', '/');
// Convert absolute paths to relative paths
if (path.isAbsolute(taskPath)) {
@@ -57,7 +68,7 @@
}
// Collect tool artifacts
for (const tool of tools || []) {
for (const tool of standaloneTools) {
let toolPath = (tool.path || '').replaceAll('\\', '/');
// Convert absolute paths to relative paths
if (path.isAbsolute(toolPath)) {
@@ -84,8 +95,8 @@
return {
artifacts,
counts: {
tasks: (tasks || []).length,
tools: (tools || []).length,
tasks: standaloneTasks.length,
tools: standaloneTools.length,
},
};
}
@@ -99,6 +110,8 @@
async generateTaskToolCommands(projectDir, bmadDir, baseCommandsDir = null) {
const tasks = await this.loadTaskManifest(bmadDir);
const tools = await this.loadToolManifest(bmadDir);
const standaloneTasks = (tasks || []).filter((task) => this.isStandalone(task));
const standaloneTools = (tools || []).filter((tool) => this.isStandalone(tool));
// Base commands directory - use provided or default to Claude Code structure
const commandsDir = baseCommandsDir || path.join(projectDir, '.claude', 'commands', 'bmad');
@@ -106,7 +119,7 @@
let generatedCount = 0;
// Generate command files for tasks
for (const task of tasks || []) {
for (const task of standaloneTasks) {
const moduleTasksDir = path.join(commandsDir, task.module, 'tasks');
await fs.ensureDir(moduleTasksDir);
@@ -118,7 +131,7 @@
}
// Generate command files for tools
for (const tool of tools || []) {
for (const tool of standaloneTools) {
const moduleToolsDir = path.join(commandsDir, tool.module, 'tools');
await fs.ensureDir(moduleToolsDir);
@@ -131,8 +144,8 @@
return {
generated: generatedCount,
tasks: (tasks || []).length,
tools: (tools || []).length,
tasks: standaloneTasks.length,
tools: standaloneTools.length,
};
}
@@ -233,11 +246,13 @@ Follow all instructions in the ${type} file exactly as written.
async generateColonTaskToolCommands(projectDir, bmadDir, baseCommandsDir) {
const tasks = await this.loadTaskManifest(bmadDir);
const tools = await this.loadToolManifest(bmadDir);
const standaloneTasks = (tasks || []).filter((task) => this.isStandalone(task));
const standaloneTools = (tools || []).filter((tool) => this.isStandalone(tool));
let generatedCount = 0;
// Generate command files for tasks
for (const task of tasks || []) {
for (const task of standaloneTasks) {
const commandContent = this.generateCommandContent(task, 'task');
// Use underscore format: bmad_bmm_name.md
const flatName = toColonName(task.module, 'tasks', task.name);
@@ -248,7 +263,7 @@ Follow all instructions in the ${type} file exactly as written.
}
// Generate command files for tools
for (const tool of tools || []) {
for (const tool of standaloneTools) {
const commandContent = this.generateCommandContent(tool, 'tool');
// Use underscore format: bmad_bmm_name.md
const flatName = toColonName(tool.module, 'tools', tool.name);
@@ -260,8 +275,8 @@ Follow all instructions in the ${type} file exactly as written.
return {
generated: generatedCount,
tasks: (tasks || []).length,
tools: (tools || []).length,
tasks: standaloneTasks.length,
tools: standaloneTools.length,
};
}
@@ -277,11 +292,13 @@ Follow all instructions in the ${type} file exactly as written.
async generateDashTaskToolCommands(projectDir, bmadDir, baseCommandsDir) {
const tasks = await this.loadTaskManifest(bmadDir);
const tools = await this.loadToolManifest(bmadDir);
const standaloneTasks = (tasks || []).filter((task) => this.isStandalone(task));
const standaloneTools = (tools || []).filter((tool) => this.isStandalone(tool));
let generatedCount = 0;
// Generate command files for tasks
for (const task of tasks || []) {
for (const task of standaloneTasks) {
const commandContent = this.generateCommandContent(task, 'task');
// Use dash format: bmad-bmm-name.md
const flatName = toDashPath(`${task.module}/tasks/${task.name}.md`);
@@ -292,7 +309,7 @@ Follow all instructions in the ${type} file exactly as written.
}
// Generate command files for tools
for (const tool of tools || []) {
for (const tool of standaloneTools) {
const commandContent = this.generateCommandContent(tool, 'tool');
// Use dash format: bmad-bmm-name.md
const flatName = toDashPath(`${tool.module}/tools/${tool.name}.md`);
@@ -304,8 +321,8 @@ Follow all instructions in the ${type} file exactly as written.
return {
generated: generatedCount,
tasks: (tasks || []).length,
tools: (tools || []).length,
tasks: standaloneTasks.length,
tools: standaloneTools.length,
};
}


@@ -809,12 +809,28 @@ class ModuleManager {
return content;
}
const frontmatter = frontmatterMatch[1]
.split('\n')
.filter((line) => !line.trim().startsWith('web_bundle:'))
.join('\n');
try {
const yaml = require('yaml');
const parsed = yaml.parse(frontmatterMatch[1]);
return content.replace(frontmatterMatch[0], `---\n${frontmatter}\n---`);
if (!parsed || typeof parsed !== 'object' || !Object.prototype.hasOwnProperty.call(parsed, 'web_bundle')) {
return content;
}
delete parsed.web_bundle;
const serialized = yaml
.stringify(parsed, {
indent: 2,
lineWidth: 0,
sortMapEntries: false,
})
.trimEnd();
return content.replace(frontmatterMatch[0], `---\n${serialized}\n---`);
} catch (error) {
console.warn(`Warning: Failed to parse workflow frontmatter for web_bundle removal: ${error.message}`);
return content;
}
}
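The YAML round-trip above is the robust way to drop `web_bundle`, because the key may carry a nested block that plain line filtering would leave behind. For illustration, a dependency-free sketch that handles that nesting for simple block-style frontmatter; this is not the installer's implementation, which parses and re-serializes with the `yaml` package:

```javascript
// Strip one top-level key (and any nested block under it) from YAML
// frontmatter by dropping the key line plus subsequent more-indented
// lines. Assumes a regex-safe key and block-style (not flow-style) YAML.
function stripFrontmatterKey(content, key) {
  const match = content.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return content;
  const kept = [];
  let skipping = false;
  for (const line of match[1].split('\n')) {
    if (new RegExp(`^${key}:`).test(line)) {
      skipping = true; // drop the key line itself
      continue;
    }
    if (skipping && /^\s+\S/.test(line)) continue; // drop nested block lines
    skipping = false;
    kept.push(line);
  }
  return content.replace(match[0], `---\n${kept.join('\n')}\n---`);
}

// Hypothetical workflow file:
const doc = `---
name: create-prd
web_bundle:
  include:
    - data/prd-purpose.md
author: bmad
---
# Workflow`;

console.log(stripFrontmatterKey(doc, 'web_bundle'));
// The frontmatter keeps name/author; the whole web_bundle block is gone.
```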
/**