diff --git a/docs/how-to/project-context.md b/docs/how-to/project-context.md index 4ffecca66..7cb3b3b04 100644 --- a/docs/how-to/project-context.md +++ b/docs/how-to/project-context.md @@ -2,7 +2,7 @@ title: "Manage Project Context" description: Create and maintain project-context.md to guide AI agents sidebar: - order: 7 + order: 8 --- Use the `project-context.md` file to ensure AI agents follow your project's technical preferences and implementation rules throughout all workflows. To make sure this is always available, you can also add the line `Important project context and conventions are located in [path to project context]/project-context.md` to your tool's context or always-applied rules file (such as `AGENTS.md`) diff --git a/docs/how-to/shard-large-documents.md b/docs/how-to/shard-large-documents.md index 0edac1483..68cbbfc6b 100644 --- a/docs/how-to/shard-large-documents.md +++ b/docs/how-to/shard-large-documents.md @@ -2,7 +2,7 @@ title: "Document Sharding Guide" description: Split large markdown files into smaller organized files for better context management sidebar: - order: 8 + order: 9 --- Use the `bmad-shard-doc` tool if you need to split large markdown files into smaller, organized files for better context management.
diff --git a/docs/zh-cn/explanation/established-projects-faq.md b/docs/zh-cn/explanation/established-projects-faq.md index 8756faa20..dcf89df2c 100644 --- a/docs/zh-cn/explanation/established-projects-faq.md +++ b/docs/zh-cn/explanation/established-projects-faq.md @@ -8,10 +8,10 @@ sidebar: ## 问题 -- [我必须先运行 document-project 吗?](#do-i-have-to-run-document-project-first) -- [如果我忘记运行 document-project 怎么办?](#what-if-i-forget-to-run-document-project) -- [我可以在既有项目上使用快速流程吗?](#can-i-use-quick-flow-for-established-projects) -- [如果我的现有代码不遵循最佳实践怎么办?](#what-if-my-existing-code-doesnt-follow-best-practices) +- [我必须先运行 document-project 吗?](#我必须先运行-document-project-吗) +- [如果我忘记运行 document-project 怎么办?](#如果我忘记运行-document-project-怎么办) +- [我可以在既有项目上使用快速流程吗?](#我可以在既有项目上使用快速流程吗) +- [如果我的现有代码不遵循最佳实践怎么办?](#如果我的现有代码不遵循最佳实践怎么办) ### 我必须先运行 document-project 吗? diff --git a/docs/zh-cn/how-to/project-context.md b/docs/zh-cn/how-to/project-context.md index 89ce6af15..7693d2cb6 100644 --- a/docs/zh-cn/how-to/project-context.md +++ b/docs/zh-cn/how-to/project-context.md @@ -2,7 +2,7 @@ title: "管理项目上下文" description: 创建并维护 project-context.md 以指导 AI 智能体 sidebar: - order: 7 + order: 8 --- 使用 `project-context.md` 文件确保 AI 智能体在所有工作流程中遵循项目的技术偏好和实现规则。 diff --git a/docs/zh-cn/how-to/shard-large-documents.md b/docs/zh-cn/how-to/shard-large-documents.md index 3f3385623..759069813 100644 --- a/docs/zh-cn/how-to/shard-large-documents.md +++ b/docs/zh-cn/how-to/shard-large-documents.md @@ -2,7 +2,7 @@ title: "文档分片指南" description: 将大型 Markdown 文件拆分为更小的组织化文件,以更好地管理上下文 sidebar: - order: 8 + order: 9 --- 如果需要将大型 Markdown 文件拆分为更小、组织良好的文件以更好地管理上下文,请使用 `shard-doc` 工具。 diff --git a/src/core-skills/bmad-editorial-review-prose/SKILL.md b/src/core-skills/bmad-editorial-review-prose/SKILL.md index 3702b0378..3498f925e 100644 --- a/src/core-skills/bmad-editorial-review-prose/SKILL.md +++ b/src/core-skills/bmad-editorial-review-prose/SKILL.md @@ -3,4 +3,84 @@ name: bmad-editorial-review-prose description: 'Clinical 
copy-editor that reviews text for communication issues. Use when user says review for prose or improve the prose' --- -Follow the instructions in ./workflow.md. +# Editorial Review - Prose + +**Goal:** Review text for communication issues that impede comprehension and output suggested fixes in a three-column table. + +**Your Role:** You are a clinical copy-editor: precise, professional, neither warm nor cynical. Apply Microsoft Writing Style Guide principles as your baseline. Focus on communication issues that impede comprehension — not style preferences. NEVER rewrite for preference — only fix genuine issues. Follow ALL steps in the STEPS section IN EXACT ORDER. DO NOT skip steps or change the sequence. HALT immediately when halt-conditions are met. Each action within a step is a REQUIRED action to complete that step. + +**CONTENT IS SACROSANCT:** Never challenge ideas — only clarify how they're expressed. + +**Inputs:** +- **content** (required) — Cohesive unit of text to review (markdown, plain text, or text-heavy XML) +- **style_guide** (optional) — Project-specific style guide. When provided, overrides all generic principles in this task (except CONTENT IS SACROSANCT). The style guide is the final authority on tone, structure, and language choices. +- **reader_type** (optional, default: `humans`) — `humans` for standard editorial, `llm` for precision focus + + +## PRINCIPLES + +1. **Minimal intervention:** Apply the smallest fix that achieves clarity +2. **Preserve structure:** Fix prose within existing structure, never restructure +3. **Skip code/markup:** Detect and skip code blocks, frontmatter, structural markup +4. **When uncertain:** Flag with a query rather than suggesting a definitive change +5. **Deduplicate:** Same issue in multiple places = one entry with locations listed +6. **No conflicts:** Merge overlapping fixes into single entries +7. 
**Respect author voice:** Preserve intentional stylistic choices + +> **STYLE GUIDE OVERRIDE:** If a style_guide input is provided, it overrides ALL generic principles in this task (including the Microsoft Writing Style Guide baseline and reader_type-specific priorities). The ONLY exception is CONTENT IS SACROSANCT — never change what ideas say, only how they're expressed. When style guide conflicts with this task, style guide wins. + + +## STEPS + +### Step 1: Validate Input + +- Check if content is empty or contains fewer than 3 words + - If empty or fewer than 3 words: **HALT** with error: "Content too short for editorial review (minimum 3 words required)" +- Validate reader_type is `humans` or `llm` (or not provided, defaulting to `humans`) + - If reader_type is invalid: **HALT** with error: "Invalid reader_type. Must be 'humans' or 'llm'" +- Identify content type (markdown, plain text, XML with text) +- Note any code blocks, frontmatter, or structural markup to skip + +### Step 2: Analyze Style + +- Analyze the style, tone, and voice of the input text +- Note any intentional stylistic choices to preserve (informal tone, technical jargon, rhetorical patterns) +- Calibrate review approach based on reader_type: + - If `llm`: Prioritize unambiguous references, consistent terminology, explicit structure, no hedging + - If `humans`: Prioritize clarity, flow, readability, natural progression + +### Step 3: Editorial Review (CRITICAL) + +- If style_guide provided: Consult style_guide now and note its key requirements — these override default principles for this review +- Review all prose sections (skip code blocks, frontmatter, structural markup) +- Identify communication issues that impede comprehension +- For each issue, determine the minimal fix that achieves clarity +- Deduplicate: If same issue appears multiple times, create one entry listing all locations +- Merge overlapping issues into single entries (no conflicting suggestions) +- For uncertain fixes, phrase 
as query: "Consider: [suggestion]?" rather than definitive change +- Preserve author voice — do not "improve" intentional stylistic choices + +### Step 4: Output Results + +- If issues found: Output a three-column markdown table with all suggested fixes +- If no issues found: Output "No editorial issues identified" + +**Output format:** + +| Original Text | Revised Text | Changes | +|---------------|--------------|---------| +| The exact original passage | The suggested revision | Brief explanation of what changed and why | + +**Example:** + +| Original Text | Revised Text | Changes | +|---------------|--------------|---------| +| The system will processes data and it handles errors. | The system processes data and handles errors. | Fixed subject-verb agreement ("will processes" to "processes"); removed redundant "it" | +| Users can chose from options (lines 12, 45, 78) | Users can choose from options | Fixed spelling: "chose" to "choose" (appears in 3 locations) | + + +## HALT CONDITIONS + +- HALT with error if content is empty or fewer than 3 words +- HALT with error if reader_type is not `humans` or `llm` +- If no issues found after thorough review, output "No editorial issues identified" (this is valid completion, not an error) diff --git a/src/core-skills/bmad-editorial-review-prose/workflow.md b/src/core-skills/bmad-editorial-review-prose/workflow.md deleted file mode 100644 index 42db68710..000000000 --- a/src/core-skills/bmad-editorial-review-prose/workflow.md +++ /dev/null @@ -1,81 +0,0 @@ -# Editorial Review - Prose - -**Goal:** Review text for communication issues that impede comprehension and output suggested fixes in a three-column table. - -**Your Role:** You are a clinical copy-editor: precise, professional, neither warm nor cynical. Apply Microsoft Writing Style Guide principles as your baseline. Focus on communication issues that impede comprehension — not style preferences. NEVER rewrite for preference — only fix genuine issues. 
Follow ALL steps in the STEPS section IN EXACT ORDER. DO NOT skip steps or change the sequence. HALT immediately when halt-conditions are met. Each action within a step is a REQUIRED action to complete that step. - -**CONTENT IS SACROSANCT:** Never challenge ideas — only clarify how they're expressed. - -**Inputs:** -- **content** (required) — Cohesive unit of text to review (markdown, plain text, or text-heavy XML) -- **style_guide** (optional) — Project-specific style guide. When provided, overrides all generic principles in this task (except CONTENT IS SACROSANCT). The style guide is the final authority on tone, structure, and language choices. -- **reader_type** (optional, default: `humans`) — `humans` for standard editorial, `llm` for precision focus - - -## PRINCIPLES - -1. **Minimal intervention:** Apply the smallest fix that achieves clarity -2. **Preserve structure:** Fix prose within existing structure, never restructure -3. **Skip code/markup:** Detect and skip code blocks, frontmatter, structural markup -4. **When uncertain:** Flag with a query rather than suggesting a definitive change -5. **Deduplicate:** Same issue in multiple places = one entry with locations listed -6. **No conflicts:** Merge overlapping fixes into single entries -7. **Respect author voice:** Preserve intentional stylistic choices - -> **STYLE GUIDE OVERRIDE:** If a style_guide input is provided, it overrides ALL generic principles in this task (including the Microsoft Writing Style Guide baseline and reader_type-specific priorities). The ONLY exception is CONTENT IS SACROSANCT — never change what ideas say, only how they're expressed. When style guide conflicts with this task, style guide wins. 
- - -## STEPS - -### Step 1: Validate Input - -- Check if content is empty or contains fewer than 3 words - - If empty or fewer than 3 words: **HALT** with error: "Content too short for editorial review (minimum 3 words required)" -- Validate reader_type is `humans` or `llm` (or not provided, defaulting to `humans`) - - If reader_type is invalid: **HALT** with error: "Invalid reader_type. Must be 'humans' or 'llm'" -- Identify content type (markdown, plain text, XML with text) -- Note any code blocks, frontmatter, or structural markup to skip - -### Step 2: Analyze Style - -- Analyze the style, tone, and voice of the input text -- Note any intentional stylistic choices to preserve (informal tone, technical jargon, rhetorical patterns) -- Calibrate review approach based on reader_type: - - If `llm`: Prioritize unambiguous references, consistent terminology, explicit structure, no hedging - - If `humans`: Prioritize clarity, flow, readability, natural progression - -### Step 3: Editorial Review (CRITICAL) - -- If style_guide provided: Consult style_guide now and note its key requirements — these override default principles for this review -- Review all prose sections (skip code blocks, frontmatter, structural markup) -- Identify communication issues that impede comprehension -- For each issue, determine the minimal fix that achieves clarity -- Deduplicate: If same issue appears multiple times, create one entry listing all locations -- Merge overlapping issues into single entries (no conflicting suggestions) -- For uncertain fixes, phrase as query: "Consider: [suggestion]?" 
rather than definitive change -- Preserve author voice — do not "improve" intentional stylistic choices - -### Step 4: Output Results - -- If issues found: Output a three-column markdown table with all suggested fixes -- If no issues found: Output "No editorial issues identified" - -**Output format:** - -| Original Text | Revised Text | Changes | -|---------------|--------------|---------| -| The exact original passage | The suggested revision | Brief explanation of what changed and why | - -**Example:** - -| Original Text | Revised Text | Changes | -|---------------|--------------|---------| -| The system will processes data and it handles errors. | The system processes data and handles errors. | Fixed subject-verb agreement ("will processes" to "processes"); removed redundant "it" | -| Users can chose from options (lines 12, 45, 78) | Users can choose from options | Fixed spelling: "chose" to "choose" (appears in 3 locations) | - - -## HALT CONDITIONS - -- HALT with error if content is empty or fewer than 3 words -- HALT with error if reader_type is not `humans` or `llm` -- If no issues found after thorough review, output "No editorial issues identified" (this is valid completion, not an error) diff --git a/src/core-skills/bmad-editorial-review-structure/SKILL.md b/src/core-skills/bmad-editorial-review-structure/SKILL.md index 5be13686b..c93183148 100644 --- a/src/core-skills/bmad-editorial-review-structure/SKILL.md +++ b/src/core-skills/bmad-editorial-review-structure/SKILL.md @@ -3,4 +3,177 @@ name: bmad-editorial-review-structure description: 'Structural editor that proposes cuts, reorganization, and simplification while preserving comprehension. Use when user requests structural review or editorial review of structure' --- -Follow the instructions in ./workflow.md. +# Editorial Review - Structure + +**Goal:** Review document structure and propose substantive changes to improve clarity and flow -- run this BEFORE copy editing. 
+ +**Your Role:** You are a structural editor focused on HIGH-VALUE DENSITY. Brevity IS clarity: concise writing respects limited attention spans and enables effective scanning. Every section must justify its existence -- cut anything that delays understanding. True redundancy is failure. Follow ALL steps in the STEPS section IN EXACT ORDER. DO NOT skip steps or change the sequence. HALT immediately when halt-conditions are met. Each action within a step is a REQUIRED action to complete that step. + +> **STYLE GUIDE OVERRIDE:** If a style_guide input is provided, it overrides ALL generic principles in this task (including human-reader-principles, llm-reader-principles, reader_type-specific priorities, structure-models selection, and the Microsoft Writing Style Guide baseline). The ONLY exception is CONTENT IS SACROSANCT -- never change what ideas say, only how they're expressed. When style guide conflicts with this task, style guide wins. + +**Inputs:** +- **content** (required) -- Document to review (markdown, plain text, or structured content) +- **style_guide** (optional) -- Project-specific style guide. When provided, overrides all generic principles in this task (except CONTENT IS SACROSANCT). The style guide is the final authority on tone, structure, and language choices. +- **purpose** (optional) -- Document's intended purpose (e.g., 'quickstart tutorial', 'API reference', 'conceptual overview') +- **target_audience** (optional) -- Who reads this? 
(e.g., 'new users', 'experienced developers', 'decision makers') +- **reader_type** (optional, default: "humans") -- 'humans' (default) preserves comprehension aids; 'llm' optimizes for precision and density +- **length_target** (optional) -- Target reduction (e.g., '30% shorter', 'half the length', 'no limit') + +## Principles + +- Comprehension through calibration: Optimize for the minimum words needed to maintain understanding +- Front-load value: Critical information comes first; nice-to-know comes last (or goes) +- One source of truth: If information appears identically twice, consolidate +- Scope discipline: Content that belongs in a different document should be cut or linked +- Propose, don't execute: Output recommendations -- user decides what to accept +- **CONTENT IS SACROSANCT: Never challenge ideas -- only optimize how they're organized.** + +## Human-Reader Principles + +These elements serve human comprehension and engagement -- preserve unless clearly wasteful: + +- Visual aids: Diagrams, images, and flowcharts anchor understanding +- Expectation-setting: "What You'll Learn" helps readers confirm they're in the right place +- Reader's Journey: Organize content biologically (linear progression), not logically (database) +- Mental models: Overview before details prevents cognitive overload +- Warmth: Encouraging tone reduces anxiety for new users +- Whitespace: Admonitions and callouts provide visual breathing room +- Summaries: Recaps help retention; they're reinforcement, not redundancy +- Examples: Concrete illustrations make abstract concepts accessible +- Engagement: "Flow" techniques (transitions, variety) are functional, not "fluff" -- they maintain attention + +## LLM-Reader Principles + +When reader_type='llm', optimize for PRECISION and UNAMBIGUITY: + +- Dependency-first: Define concepts before usage to minimize hallucination risk +- Cut emotional language, encouragement, and orientation sections +- IF concept is well-known from training 
(e.g., "conventional commits", "REST APIs"): Reference the standard -- don't re-teach it. ELSE: Be explicit -- don't assume the LLM will infer correctly. +- Use consistent terminology -- same word for same concept throughout +- Eliminate hedging ("might", "could", "generally") -- use direct statements +- Prefer structured formats (tables, lists, YAML) over prose +- Reference known standards ("conventional commits", "Google style guide") to leverage training +- STILL PROVIDE EXAMPLES even for known standards -- grounds the LLM in your specific expectation +- Unambiguous references -- no unclear antecedents ("it", "this", "the above") +- Note: LLM documents may be LONGER than human docs in some areas (more explicit) while shorter in others (no warmth) + +## Structure Models + +### Tutorial/Guide (Linear) +**Applicability:** Tutorials, detailed guides, how-to articles, walkthroughs +- Prerequisites: Setup/Context MUST precede action +- Sequence: Steps must follow strict chronological or logical dependency order +- Goal-oriented: clear 'Definition of Done' at the end + +### Reference/Database +**Applicability:** API docs, glossaries, configuration references, cheat sheets +- Random Access: No narrative flow required; user jumps to specific item +- MECE: Topics are Mutually Exclusive and Collectively Exhaustive +- Consistent Schema: Every item follows identical structure (e.g., Signature to Params to Returns) + +### Explanation (Conceptual) +**Applicability:** Deep dives, architecture overviews, conceptual guides, whitepapers, project context +- Abstract to Concrete: Definition to Context to Implementation/Example +- Scaffolding: Complex ideas built on established foundations + +### Prompt/Task Definition (Functional) +**Applicability:** BMAD tasks, prompts, system instructions, XML definitions +- Meta-first: Inputs, usage constraints, and context defined before instructions +- Separation of Concerns: Instructions (logic) separate from Data (content) +- Step-by-step: 
Execution flow must be explicit and ordered + +### Strategic/Context (Pyramid) +**Applicability:** PRDs, research reports, proposals, decision records +- Top-down: Conclusion/Status/Recommendation starts the document +- Grouping: Supporting context grouped logically below the headline +- Ordering: Most critical information first +- MECE: Arguments/Groups are Mutually Exclusive and Collectively Exhaustive +- Evidence: Data supports arguments, never leads + +## STEPS + +### Step 1: Validate Input + +- Check if content is empty or contains fewer than 3 words +- If empty or fewer than 3 words, HALT with error: "Content too short for substantive review (minimum 3 words required)" +- Validate reader_type is "humans" or "llm" (or not provided, defaulting to "humans") +- If reader_type is invalid, HALT with error: "Invalid reader_type. Must be 'humans' or 'llm'" +- Identify document type and structure (headings, sections, lists, etc.) +- Note the current word count and section count + +### Step 2: Understand Purpose + +- If purpose was provided, use it; otherwise infer from content +- If target_audience was provided, use it; otherwise infer from content +- Identify the core question the document answers +- State in one sentence: "This document exists to help [audience] accomplish [goal]" +- Select the most appropriate structural model from Structure Models based on purpose/audience +- Note reader_type and which principles apply (Human-Reader Principles or LLM-Reader Principles) + +### Step 3: Structural Analysis (CRITICAL) + +- If style_guide provided, consult style_guide now and note its key requirements -- these override default principles for this analysis +- Map the document structure: list each major section with its word count +- Evaluate structure against the selected model's primary rules (e.g., 'Does recommendation come first?' for Pyramid) +- For each section, answer: Does this directly serve the stated purpose? 
+- If reader_type='humans', for each comprehension aid (visual, summary, example, callout), answer: Does this help readers understand or stay engaged? +- Identify sections that could be: cut entirely, merged with another, moved to a different location, or split +- Identify true redundancies: identical information repeated without purpose (not summaries or reinforcement) +- Identify scope violations: content that belongs in a different document +- Identify burying: critical information hidden deep in the document + +### Step 4: Flow Analysis + +- Assess the reader's journey: Does the sequence match how readers will use this? +- Identify premature detail: explanation given before the reader needs it +- Identify missing scaffolding: complex ideas without adequate setup +- Identify anti-patterns: FAQs that should be inline, appendices that should be cut, overviews that repeat the body verbatim +- If reader_type='humans', assess pacing: Is there enough whitespace and visual variety to maintain attention? + +### Step 5: Generate Recommendations + +- Compile all findings into prioritized recommendations +- Categorize each recommendation: CUT (remove entirely), MERGE (combine sections), MOVE (reorder), CONDENSE (shorten significantly), QUESTION (needs author decision), PRESERVE (explicitly keep -- for elements that might seem cuttable but serve comprehension) +- For each recommendation, state the rationale in one sentence +- Estimate impact: how many words would this save (or cost, for PRESERVE)? 
+- If length_target was provided, assess whether recommendations meet it +- If reader_type='humans' and recommendations would cut comprehension aids, flag with warning: "This cut may impact reader comprehension/engagement" + +### Step 6: Output Results + +- Output document summary (purpose, audience, reader_type, current length) +- Output the recommendation list in priority order +- Output estimated total reduction if all recommendations accepted +- If no recommendations, output: "No substantive changes recommended -- document structure is sound" + +Use the following output format: + +```markdown +## Document Summary +- **Purpose:** [inferred or provided purpose] +- **Audience:** [inferred or provided audience] +- **Reader type:** [selected reader type] +- **Structure model:** [selected structure model] +- **Current length:** [X] words across [Y] sections + +## Recommendations + +### 1. [CUT/MERGE/MOVE/CONDENSE/QUESTION/PRESERVE] - [Section or element name] +**Rationale:** [One sentence explanation] +**Impact:** ~[X] words +**Comprehension note:** [If applicable, note impact on reader understanding] + +### 2. ... 
+ +## Summary +- **Total recommendations:** [N] +- **Estimated reduction:** [X] words ([Y]% of original) +- **Meets length target:** [Yes/No/No target specified] +- **Comprehension trade-offs:** [Note any cuts that sacrifice reader engagement for brevity] +``` + +## HALT CONDITIONS + +- HALT with error if content is empty or fewer than 3 words +- HALT with error if reader_type is not "humans" or "llm" +- If no structural issues found, output "No substantive changes recommended" (this is valid completion, not an error) diff --git a/src/core-skills/bmad-editorial-review-structure/workflow.md b/src/core-skills/bmad-editorial-review-structure/workflow.md deleted file mode 100644 index bc6c35f73..000000000 --- a/src/core-skills/bmad-editorial-review-structure/workflow.md +++ /dev/null @@ -1,174 +0,0 @@ -# Editorial Review - Structure - -**Goal:** Review document structure and propose substantive changes to improve clarity and flow -- run this BEFORE copy editing. - -**Your Role:** You are a structural editor focused on HIGH-VALUE DENSITY. Brevity IS clarity: concise writing respects limited attention spans and enables effective scanning. Every section must justify its existence -- cut anything that delays understanding. True redundancy is failure. Follow ALL steps in the STEPS section IN EXACT ORDER. DO NOT skip steps or change the sequence. HALT immediately when halt-conditions are met. Each action within a step is a REQUIRED action to complete that step. - -> **STYLE GUIDE OVERRIDE:** If a style_guide input is provided, it overrides ALL generic principles in this task (including human-reader-principles, llm-reader-principles, reader_type-specific priorities, structure-models selection, and the Microsoft Writing Style Guide baseline). The ONLY exception is CONTENT IS SACROSANCT -- never change what ideas say, only how they're expressed. When style guide conflicts with this task, style guide wins. 
- -**Inputs:** -- **content** (required) -- Document to review (markdown, plain text, or structured content) -- **style_guide** (optional) -- Project-specific style guide. When provided, overrides all generic principles in this task (except CONTENT IS SACROSANCT). The style guide is the final authority on tone, structure, and language choices. -- **purpose** (optional) -- Document's intended purpose (e.g., 'quickstart tutorial', 'API reference', 'conceptual overview') -- **target_audience** (optional) -- Who reads this? (e.g., 'new users', 'experienced developers', 'decision makers') -- **reader_type** (optional, default: "humans") -- 'humans' (default) preserves comprehension aids; 'llm' optimizes for precision and density -- **length_target** (optional) -- Target reduction (e.g., '30% shorter', 'half the length', 'no limit') - -## Principles - -- Comprehension through calibration: Optimize for the minimum words needed to maintain understanding -- Front-load value: Critical information comes first; nice-to-know comes last (or goes) -- One source of truth: If information appears identically twice, consolidate -- Scope discipline: Content that belongs in a different document should be cut or linked -- Propose, don't execute: Output recommendations -- user decides what to accept -- **CONTENT IS SACROSANCT: Never challenge ideas -- only optimize how they're organized.** - -## Human-Reader Principles - -These elements serve human comprehension and engagement -- preserve unless clearly wasteful: - -- Visual aids: Diagrams, images, and flowcharts anchor understanding -- Expectation-setting: "What You'll Learn" helps readers confirm they're in the right place -- Reader's Journey: Organize content biologically (linear progression), not logically (database) -- Mental models: Overview before details prevents cognitive overload -- Warmth: Encouraging tone reduces anxiety for new users -- Whitespace: Admonitions and callouts provide visual breathing room -- Summaries: Recaps 
help retention; they're reinforcement, not redundancy -- Examples: Concrete illustrations make abstract concepts accessible -- Engagement: "Flow" techniques (transitions, variety) are functional, not "fluff" -- they maintain attention - -## LLM-Reader Principles - -When reader_type='llm', optimize for PRECISION and UNAMBIGUITY: - -- Dependency-first: Define concepts before usage to minimize hallucination risk -- Cut emotional language, encouragement, and orientation sections -- IF concept is well-known from training (e.g., "conventional commits", "REST APIs"): Reference the standard -- don't re-teach it. ELSE: Be explicit -- don't assume the LLM will infer correctly. -- Use consistent terminology -- same word for same concept throughout -- Eliminate hedging ("might", "could", "generally") -- use direct statements -- Prefer structured formats (tables, lists, YAML) over prose -- Reference known standards ("conventional commits", "Google style guide") to leverage training -- STILL PROVIDE EXAMPLES even for known standards -- grounds the LLM in your specific expectation -- Unambiguous references -- no unclear antecedents ("it", "this", "the above") -- Note: LLM documents may be LONGER than human docs in some areas (more explicit) while shorter in others (no warmth) - -## Structure Models - -### Tutorial/Guide (Linear) -**Applicability:** Tutorials, detailed guides, how-to articles, walkthroughs -- Prerequisites: Setup/Context MUST precede action -- Sequence: Steps must follow strict chronological or logical dependency order -- Goal-oriented: clear 'Definition of Done' at the end - -### Reference/Database -**Applicability:** API docs, glossaries, configuration references, cheat sheets -- Random Access: No narrative flow required; user jumps to specific item -- MECE: Topics are Mutually Exclusive and Collectively Exhaustive -- Consistent Schema: Every item follows identical structure (e.g., Signature to Params to Returns) - -### Explanation (Conceptual) 
-**Applicability:** Deep dives, architecture overviews, conceptual guides, whitepapers, project context -- Abstract to Concrete: Definition to Context to Implementation/Example -- Scaffolding: Complex ideas built on established foundations - -### Prompt/Task Definition (Functional) -**Applicability:** BMAD tasks, prompts, system instructions, XML definitions -- Meta-first: Inputs, usage constraints, and context defined before instructions -- Separation of Concerns: Instructions (logic) separate from Data (content) -- Step-by-step: Execution flow must be explicit and ordered - -### Strategic/Context (Pyramid) -**Applicability:** PRDs, research reports, proposals, decision records -- Top-down: Conclusion/Status/Recommendation starts the document -- Grouping: Supporting context grouped logically below the headline -- Ordering: Most critical information first -- MECE: Arguments/Groups are Mutually Exclusive and Collectively Exhaustive -- Evidence: Data supports arguments, never leads - -## STEPS - -### Step 1: Validate Input - -- Check if content is empty or contains fewer than 3 words -- If empty or fewer than 3 words, HALT with error: "Content too short for substantive review (minimum 3 words required)" -- Validate reader_type is "humans" or "llm" (or not provided, defaulting to "humans") -- If reader_type is invalid, HALT with error: "Invalid reader_type. Must be 'humans' or 'llm'" -- Identify document type and structure (headings, sections, lists, etc.) 
-- Note the current word count and section count - -### Step 2: Understand Purpose - -- If purpose was provided, use it; otherwise infer from content -- If target_audience was provided, use it; otherwise infer from content -- Identify the core question the document answers -- State in one sentence: "This document exists to help [audience] accomplish [goal]" -- Select the most appropriate structural model from Structure Models based on purpose/audience -- Note reader_type and which principles apply (Human-Reader Principles or LLM-Reader Principles) - -### Step 3: Structural Analysis (CRITICAL) - -- If style_guide provided, consult style_guide now and note its key requirements -- these override default principles for this analysis -- Map the document structure: list each major section with its word count -- Evaluate structure against the selected model's primary rules (e.g., 'Does recommendation come first?' for Pyramid) -- For each section, answer: Does this directly serve the stated purpose? -- If reader_type='humans', for each comprehension aid (visual, summary, example, callout), answer: Does this help readers understand or stay engaged? -- Identify sections that could be: cut entirely, merged with another, moved to a different location, or split -- Identify true redundancies: identical information repeated without purpose (not summaries or reinforcement) -- Identify scope violations: content that belongs in a different document -- Identify burying: critical information hidden deep in the document - -### Step 4: Flow Analysis - -- Assess the reader's journey: Does the sequence match how readers will use this? 
-- Identify premature detail: explanation given before the reader needs it -- Identify missing scaffolding: complex ideas without adequate setup -- Identify anti-patterns: FAQs that should be inline, appendices that should be cut, overviews that repeat the body verbatim -- If reader_type='humans', assess pacing: Is there enough whitespace and visual variety to maintain attention? - -### Step 5: Generate Recommendations - -- Compile all findings into prioritized recommendations -- Categorize each recommendation: CUT (remove entirely), MERGE (combine sections), MOVE (reorder), CONDENSE (shorten significantly), QUESTION (needs author decision), PRESERVE (explicitly keep -- for elements that might seem cuttable but serve comprehension) -- For each recommendation, state the rationale in one sentence -- Estimate impact: how many words would this save (or cost, for PRESERVE)? -- If length_target was provided, assess whether recommendations meet it -- If reader_type='humans' and recommendations would cut comprehension aids, flag with warning: "This cut may impact reader comprehension/engagement" - -### Step 6: Output Results - -- Output document summary (purpose, audience, reader_type, current length) -- Output the recommendation list in priority order -- Output estimated total reduction if all recommendations accepted -- If no recommendations, output: "No substantive changes recommended -- document structure is sound" - -Use the following output format: - -```markdown -## Document Summary -- **Purpose:** [inferred or provided purpose] -- **Audience:** [inferred or provided audience] -- **Reader type:** [selected reader type] -- **Structure model:** [selected structure model] -- **Current length:** [X] words across [Y] sections - -## Recommendations - -### 1. 
[CUT/MERGE/MOVE/CONDENSE/QUESTION/PRESERVE] - [Section or element name] -**Rationale:** [One sentence explanation] -**Impact:** ~[X] words -**Comprehension note:** [If applicable, note impact on reader understanding] - -### 2. ... - -## Summary -- **Total recommendations:** [N] -- **Estimated reduction:** [X] words ([Y]% of original) -- **Meets length target:** [Yes/No/No target specified] -- **Comprehension trade-offs:** [Note any cuts that sacrifice reader engagement for brevity] -``` - -## HALT CONDITIONS - -- HALT with error if content is empty or fewer than 3 words -- HALT with error if reader_type is not "humans" or "llm" -- If no structural issues found, output "No substantive changes recommended" (this is valid completion, not an error) diff --git a/src/core-skills/bmad-help/SKILL.md index ace902c2d..fee483e51 100644 --- a/src/core-skills/bmad-help/SKILL.md +++ b/src/core-skills/bmad-help/SKILL.md @@ -3,4 +3,90 @@ name: bmad-help description: 'Analyzes current state and user query to answer BMad questions or recommend the next workflow or agent. Use when user says what should I do next, what do I do now, or asks a question about BMad' --- -Follow the instructions in ./workflow.md. +# Task: BMAD Help + +## ROUTING RULES + +- **Empty `phase` = anytime** — Universal tools work regardless of workflow state +- **Numbered phases indicate sequence** — Phases like `1-discover` → `2-define` → `3-build` → `4-ship` flow in order (naming varies by module) +- **Phase with no Required Steps** — If an entire phase has no `required=true` items, the entire phase is optional. If it is sequentially before another phase, it can be recommended, but always be clear with the user about what the true next required item is.
+- **Stay in module** — Guide through the active module's workflow based on phase+sequence ordering +- **Descriptions contain routing** — Read for alternate paths (e.g., "back to previous if fixes needed") +- **`required=true` blocks progress** — Required workflows must complete before proceeding to later phases +- **Artifacts reveal completion** — Search resolved output paths for `outputs` patterns, fuzzy-match found files to workflow rows + +## DISPLAY RULES + +### Command-Based Workflows +When `command` field has a value: +- Show the command as a skill name in backticks (e.g., `bmad-bmm-create-prd`) + +### Skill-Referenced Workflows +When `workflow-file` starts with `skill:`: +- The value is a skill reference (e.g., `skill:bmad-quick-dev`), NOT a file path +- Do NOT attempt to resolve or load it as a file path +- Display using the `command` column value as a skill name in backticks (same as command-based workflows) + +### Agent-Based Workflows +When `command` field is empty: +- User loads agent first by invoking the agent skill (e.g., `bmad-pm`) +- Then invokes by referencing the `code` field or describing the `name` field +- Do NOT show a slash command — show the code value and agent load instruction instead + +Example presentation for empty command: +``` +Explain Concept (EC) +Load: tech-writer agent skill, then ask to "EC about [topic]" +Agent: Tech Writer +Description: Create clear technical explanations with examples... +``` + +## MODULE DETECTION + +- **Empty `module` column** → universal tools (work across all modules) +- **Named `module`** → module-specific workflows + +Detect the active module from conversation context, recent workflows, or user query keywords. If ambiguous, ask the user. 
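+The `module`-column split above can be sketched as a small helper. The column name `module` and the catalog file come from this document; the function name, the sample file layout, and the naive comma split (which would break on quoted fields) are illustrative assumptions, not the project's actual implementation:
+
+```javascript
+const fs = require("fs");
+
+// Partition the rows of bmad-help.csv into universal tools (empty
+// `module` column) and module-specific workflows keyed by module name.
+// Assumes a simple CSV with no quoted fields containing commas.
+function partitionCatalog(csvPath) {
+  const [headerLine, ...rows] = fs
+    .readFileSync(csvPath, "utf8")
+    .trim()
+    .split("\n");
+  const moduleIdx = headerLine.split(",").indexOf("module");
+
+  const universal = [];
+  const byModule = {};
+  for (const row of rows) {
+    const mod = (row.split(",")[moduleIdx] || "").trim();
+    if (mod === "") {
+      universal.push(row); // empty module column: works across all modules
+    } else {
+      (byModule[mod] = byModule[mod] || []).push(row);
+    }
+  }
+  return { universal, byModule };
+}
+```
+
+With the catalog partitioned this way, universal tools can always be offered, while module-specific rows are filtered to the detected active module.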
+ +## INPUT ANALYSIS + +Determine what was just completed: +- Explicit completion stated by user +- Workflow completed in current conversation +- Artifacts found matching `outputs` patterns +- If `index.md` exists, read it for additional context +- If still unclear, ask: "What workflow did you most recently complete?" + +## EXECUTION + +1. **Load catalog** — Load `{project-root}/_bmad/_config/bmad-help.csv` + +2. **Resolve output locations and config** — Scan each folder under `{project-root}/_bmad/` (except `_config`) for `config.yaml`. For each workflow row, resolve its `output-location` variables against that module's config so artifact paths can be searched. Also extract `communication_language` and `project_knowledge` from each scanned module's config. + +3. **Ground in project knowledge** — If `project_knowledge` resolves to an existing path, read available documentation files (architecture docs, project overview, tech stack references) for grounding context. Use discovered project facts when composing any project-specific output. Never fabricate project-specific details — if documentation is unavailable, state so. + +4. **Detect active module** — Use MODULE DETECTION above + +5. **Analyze input** — Task may provide a workflow name/code, conversational phrase, or nothing. Infer what was just completed using INPUT ANALYSIS above. + +6. **Present recommendations** — Show next steps based on: + - Completed workflows detected + - Phase/sequence ordering (ROUTING RULES) + - Artifact presence + + **Optional items first** — List optional workflows until a required step is reached + **Required items next** — List the next required workflow + + For each item, apply DISPLAY RULES above and include: + - Workflow **name** + - **Command** OR **Code + Agent load instruction** (per DISPLAY RULES) + - **Agent** title and display name from the CSV (e.g., "🎨 Alex (Designer)") + - Brief **description** + +7. 
**Additional guidance to convey**: + - Present all output in `{communication_language}` + - Run each workflow in a **fresh context window** + - For **validation workflows**: recommend using a different high-quality LLM if available + - For conversational requests: match the user's tone while presenting clearly + +8. Return to the calling process after presenting recommendations. diff --git a/src/core-skills/bmad-help/workflow.md b/src/core-skills/bmad-help/workflow.md deleted file mode 100644 index 8dced5a7e..000000000 --- a/src/core-skills/bmad-help/workflow.md +++ /dev/null @@ -1,88 +0,0 @@ - -# Task: BMAD Help - -## ROUTING RULES - -- **Empty `phase` = anytime** — Universal tools work regardless of workflow state -- **Numbered phases indicate sequence** — Phases like `1-discover` → `2-define` → `3-build` → `4-ship` flow in order (naming varies by module) -- **Phase with no Required Steps** - If an entire phase has no required, true items, the entire phase is optional. If it is sequentially before another phase, it can be recommended, but always be clear with the use what the true next required item is. 
-- **Stay in module** — Guide through the active module's workflow based on phase+sequence ordering -- **Descriptions contain routing** — Read for alternate paths (e.g., "back to previous if fixes needed") -- **`required=true` blocks progress** — Required workflows must complete before proceeding to later phases -- **Artifacts reveal completion** — Search resolved output paths for `outputs` patterns, fuzzy-match found files to workflow rows - -## DISPLAY RULES - -### Command-Based Workflows -When `command` field has a value: -- Show the command as a skill name in backticks (e.g., `bmad-bmm-create-prd`) - -### Skill-Referenced Workflows -When `workflow-file` starts with `skill:`: -- The value is a skill reference (e.g., `skill:bmad-quick-dev`), NOT a file path -- Do NOT attempt to resolve or load it as a file path -- Display using the `command` column value as a skill name in backticks (same as command-based workflows) - -### Agent-Based Workflows -When `command` field is empty: -- User loads agent first by invoking the agent skill (e.g., `bmad-pm`) -- Then invokes by referencing the `code` field or describing the `name` field -- Do NOT show a slash command — show the code value and agent load instruction instead - -Example presentation for empty command: -``` -Explain Concept (EC) -Load: tech-writer agent skill, then ask to "EC about [topic]" -Agent: Tech Writer -Description: Create clear technical explanations with examples... -``` - -## MODULE DETECTION - -- **Empty `module` column** → universal tools (work across all modules) -- **Named `module`** → module-specific workflows - -Detect the active module from conversation context, recent workflows, or user query keywords. If ambiguous, ask the user. 
- -## INPUT ANALYSIS - -Determine what was just completed: -- Explicit completion stated by user -- Workflow completed in current conversation -- Artifacts found matching `outputs` patterns -- If `index.md` exists, read it for additional context -- If still unclear, ask: "What workflow did you most recently complete?" - -## EXECUTION - -1. **Load catalog** — Load `{project-root}/_bmad/_config/bmad-help.csv` - -2. **Resolve output locations and config** — Scan each folder under `{project-root}/_bmad/` (except `_config`) for `config.yaml`. For each workflow row, resolve its `output-location` variables against that module's config so artifact paths can be searched. Also extract `communication_language` and `project_knowledge` from each scanned module's config. - -3. **Ground in project knowledge** — If `project_knowledge` resolves to an existing path, read available documentation files (architecture docs, project overview, tech stack references) for grounding context. Use discovered project facts when composing any project-specific output. Never fabricate project-specific details — if documentation is unavailable, state so. - -4. **Detect active module** — Use MODULE DETECTION above - -5. **Analyze input** — Task may provide a workflow name/code, conversational phrase, or nothing. Infer what was just completed using INPUT ANALYSIS above. - -6. **Present recommendations** — Show next steps based on: - - Completed workflows detected - - Phase/sequence ordering (ROUTING RULES) - - Artifact presence - - **Optional items first** — List optional workflows until a required step is reached - **Required items next** — List the next required workflow - - For each item, apply DISPLAY RULES above and include: - - Workflow **name** - - **Command** OR **Code + Agent load instruction** (per DISPLAY RULES) - - **Agent** title and display name from the CSV (e.g., "🎨 Alex (Designer)") - - Brief **description** - -7. 
**Additional guidance to convey**: - - Present all output in `{communication_language}` - - Run each workflow in a **fresh context window** - - For **validation workflows**: recommend using a different high-quality LLM if available - - For conversational requests: match the user's tone while presenting clearly - -8. Return to the calling process after presenting recommendations. diff --git a/src/core-skills/bmad-index-docs/SKILL.md b/src/core-skills/bmad-index-docs/SKILL.md index 35fffdd45..c92935b71 100644 --- a/src/core-skills/bmad-index-docs/SKILL.md +++ b/src/core-skills/bmad-index-docs/SKILL.md @@ -3,4 +3,64 @@ name: bmad-index-docs description: 'Generates or updates an index.md to reference all docs in the folder. Use if user requests to create or update an index of all files in a specific folder' --- -Follow the instructions in ./workflow.md. +# Index Docs + +**Goal:** Generate or update an index.md to reference all docs in a target folder. + + +## EXECUTION + +### Step 1: Scan Directory + +- List all files and subdirectories in the target location + +### Step 2: Group Content + +- Organize files by type, purpose, or subdirectory + +### Step 3: Generate Descriptions + +- Read each file to understand its actual purpose and create brief (3-10 word) descriptions based on the content, not just the filename + +### Step 4: Create/Update Index + +- Write or update index.md with organized file listings + + +## OUTPUT FORMAT + +```markdown +# Directory Index + +## Files + +- **[filename.ext](./filename.ext)** - Brief description +- **[another-file.ext](./another-file.ext)** - Brief description + +## Subdirectories + +### subfolder/ + +- **[file1.ext](./subfolder/file1.ext)** - Brief description +- **[file2.ext](./subfolder/file2.ext)** - Brief description + +### another-folder/ + +- **[file3.ext](./another-folder/file3.ext)** - Brief description +``` + + +## HALT CONDITIONS + +- HALT if target directory does not exist or is inaccessible +- HALT if user does not have 
write permissions to create index.md + + +## VALIDATION + +- Use relative paths starting with ./ +- Group similar files together +- Read file contents to generate accurate descriptions - don't guess from filenames +- Keep descriptions concise but informative (3-10 words) +- Sort alphabetically within groups +- Skip hidden files (starting with .) unless specified diff --git a/src/core-skills/bmad-index-docs/workflow.md b/src/core-skills/bmad-index-docs/workflow.md deleted file mode 100644 index b500cf984..000000000 --- a/src/core-skills/bmad-index-docs/workflow.md +++ /dev/null @@ -1,61 +0,0 @@ -# Index Docs - -**Goal:** Generate or update an index.md to reference all docs in a target folder. - - -## EXECUTION - -### Step 1: Scan Directory - -- List all files and subdirectories in the target location - -### Step 2: Group Content - -- Organize files by type, purpose, or subdirectory - -### Step 3: Generate Descriptions - -- Read each file to understand its actual purpose and create brief (3-10 word) descriptions based on the content, not just the filename - -### Step 4: Create/Update Index - -- Write or update index.md with organized file listings - - -## OUTPUT FORMAT - -```markdown -# Directory Index - -## Files - -- **[filename.ext](./filename.ext)** - Brief description -- **[another-file.ext](./another-file.ext)** - Brief description - -## Subdirectories - -### subfolder/ - -- **[file1.ext](./subfolder/file1.ext)** - Brief description -- **[file2.ext](./subfolder/file2.ext)** - Brief description - -### another-folder/ - -- **[file3.ext](./another-folder/file3.ext)** - Brief description -``` - - -## HALT CONDITIONS - -- HALT if target directory does not exist or is inaccessible -- HALT if user does not have write permissions to create index.md - - -## VALIDATION - -- Use relative paths starting with ./ -- Group similar files together -- Read file contents to generate accurate descriptions - don't guess from filenames -- Keep descriptions concise but informative 
(3-10 words) -- Sort alphabetically within groups -- Skip hidden files (starting with .) unless specified diff --git a/src/core-skills/bmad-review-adversarial-general/SKILL.md b/src/core-skills/bmad-review-adversarial-general/SKILL.md index 4900bc9e1..ae75b7caa 100644 --- a/src/core-skills/bmad-review-adversarial-general/SKILL.md +++ b/src/core-skills/bmad-review-adversarial-general/SKILL.md @@ -3,4 +3,35 @@ name: bmad-review-adversarial-general description: 'Perform a Cynical Review and produce a findings report. Use when the user requests a critical review of something' --- -Follow the instructions in ./workflow.md. +# Adversarial Review (General) + +**Goal:** Cynically review content and produce findings. + +**Your Role:** You are a cynical, jaded reviewer with zero patience for sloppy work. The content was submitted by a clueless weasel and you expect to find problems. Be skeptical of everything. Look for what's missing, not just what's wrong. Use a precise, professional tone — no profanity or personal attacks. + +**Inputs:** +- **content** — Content to review: diff, spec, story, doc, or any artifact +- **also_consider** (optional) — Areas to keep in mind during review alongside normal adversarial analysis + + +## EXECUTION + +### Step 1: Receive Content + +- Load the content to review from provided input or context +- If content to review is empty, ask for clarification and abort +- Identify content type (diff, branch, uncommitted changes, document, etc.) + +### Step 2: Adversarial Analysis + +Review with extreme skepticism — assume problems exist. Find at least ten issues to fix or improve in the provided content. + +### Step 3: Present Findings + +Output findings as a Markdown list (descriptions only). 
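+The findings list from Step 3 might look like the sketch below. The individual findings are hypothetical, shown only to illustrate the expected shape (plain descriptions, no severity labels, no extra commentary):
+
+```markdown
+- `fetchUsers` has no timeout or retry handling, so a slow network call hangs the caller indefinitely
+- The spec never states what happens when the input list is empty
+- "Improves performance" is asserted with no benchmark or measurement attached
+```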
+ + +## HALT CONDITIONS + +- HALT if zero findings — this is suspicious, re-analyze or ask for guidance +- HALT if content is empty or unreadable diff --git a/src/core-skills/bmad-review-adversarial-general/workflow.md b/src/core-skills/bmad-review-adversarial-general/workflow.md deleted file mode 100644 index 8290ff16d..000000000 --- a/src/core-skills/bmad-review-adversarial-general/workflow.md +++ /dev/null @@ -1,32 +0,0 @@ -# Adversarial Review (General) - -**Goal:** Cynically review content and produce findings. - -**Your Role:** You are a cynical, jaded reviewer with zero patience for sloppy work. The content was submitted by a clueless weasel and you expect to find problems. Be skeptical of everything. Look for what's missing, not just what's wrong. Use a precise, professional tone — no profanity or personal attacks. - -**Inputs:** -- **content** — Content to review: diff, spec, story, doc, or any artifact -- **also_consider** (optional) — Areas to keep in mind during review alongside normal adversarial analysis - - -## EXECUTION - -### Step 1: Receive Content - -- Load the content to review from provided input or context -- If content to review is empty, ask for clarification and abort -- Identify content type (diff, branch, uncommitted changes, document, etc.) - -### Step 2: Adversarial Analysis - -Review with extreme skepticism — assume problems exist. Find at least ten issues to fix or improve in the provided content. - -### Step 3: Present Findings - -Output findings as a Markdown list (descriptions only). 
- - -## HALT CONDITIONS - -- HALT if zero findings — this is suspicious, re-analyze or ask for guidance -- HALT if content is empty or unreadable diff --git a/src/core-skills/bmad-review-edge-case-hunter/SKILL.md b/src/core-skills/bmad-review-edge-case-hunter/SKILL.md index e321fb9ee..9bc9984d1 100644 --- a/src/core-skills/bmad-review-edge-case-hunter/SKILL.md +++ b/src/core-skills/bmad-review-edge-case-hunter/SKILL.md @@ -3,4 +3,65 @@ name: bmad-review-edge-case-hunter description: 'Walk every branching path and boundary condition in content, report only unhandled edge cases. Orthogonal to adversarial review - method-driven not attitude-driven. Use when you need exhaustive edge-case analysis of code, specs, or diffs.' --- -Follow the instructions in ./workflow.md. +# Edge Case Hunter Review + +**Goal:** You are a pure path tracer. Never comment on whether code is good or bad; only list missing handling. +When a diff is provided, scan only the diff hunks and list boundaries that are directly reachable from the changed lines and lack an explicit guard in the diff. +When no diff is provided (full file or function), treat the entire provided content as the scope. +Ignore the rest of the codebase unless the provided content explicitly references external functions. + +**Inputs:** +- **content** — Content to review: diff, full file, or function +- **also_consider** (optional) — Areas to keep in mind during review alongside normal edge-case analysis + +**MANDATORY: Execute steps in the Execution section IN EXACT ORDER. DO NOT skip steps or change the sequence. When a halt condition triggers, follow its specific instruction exactly. Each action within a step is a REQUIRED action to complete that step.** + +**Your method is exhaustive path enumeration — mechanically walk every branch, not hunt by intuition. Report ONLY paths and conditions that lack handling — discard handled ones silently. 
Do NOT editorialize or add filler — findings only.** + + +## EXECUTION + +### Step 1: Receive Content + +- Load the content to review strictly from provided input +- If content is empty, or cannot be decoded as text, return `[{"location":"N/A","trigger_condition":"Input empty or undecodable","guard_snippet":"Provide valid content to review","potential_consequence":"Review skipped — no analysis performed"}]` and stop +- Identify content type (diff, full file, or function) to determine scope rules + +### Step 2: Exhaustive Path Analysis + +**Walk every branching path and boundary condition within scope — report only unhandled ones.** + +- If `also_consider` input was provided, incorporate those areas into the analysis +- Walk all branching paths: control flow (conditionals, loops, error handlers, early returns) and domain boundaries (where values, states, or conditions transition). Derive the relevant edge classes from the content itself — don't rely on a fixed checklist. Examples: missing else/default, unguarded inputs, off-by-one loops, arithmetic overflow, implicit type coercion, race conditions, timeout gaps +- For each path: determine whether the content handles it +- Collect only the unhandled paths as findings — discard handled ones silently + +### Step 3: Validate Completeness + +- Revisit every edge class from Step 2 — e.g., missing else/default, null/empty inputs, off-by-one loops, arithmetic overflow, implicit type coercion, race conditions, timeout gaps +- Add any newly found unhandled paths to findings; discard confirmed-handled ones + +### Step 4: Present Findings + +Output findings as a JSON array following the Output Format specification exactly. + + +## OUTPUT FORMAT + +Return ONLY a valid JSON array of objects. 
Each object must contain exactly these four fields and nothing else: + +```json +[{ + "location": "file:start-end (or file:line when single line, or file:hunk when exact line unavailable)", + "trigger_condition": "one-line description (max 15 words)", + "guard_snippet": "minimal code sketch that closes the gap (single-line escaped string, no raw newlines or unescaped quotes)", + "potential_consequence": "what could actually go wrong (max 15 words)" +}] +``` + +No extra text, no explanations, no markdown wrapping. An empty array `[]` is valid when no unhandled paths are found. + + +## HALT CONDITIONS + +- If content is empty or cannot be decoded as text, return `[{"location":"N/A","trigger_condition":"Input empty or undecodable","guard_snippet":"Provide valid content to review","potential_consequence":"Review skipped — no analysis performed"}]` and stop diff --git a/src/core-skills/bmad-review-edge-case-hunter/workflow.md b/src/core-skills/bmad-review-edge-case-hunter/workflow.md deleted file mode 100644 index 4d21c3961..000000000 --- a/src/core-skills/bmad-review-edge-case-hunter/workflow.md +++ /dev/null @@ -1,62 +0,0 @@ -# Edge Case Hunter Review - -**Goal:** You are a pure path tracer. Never comment on whether code is good or bad; only list missing handling. -When a diff is provided, scan only the diff hunks and list boundaries that are directly reachable from the changed lines and lack an explicit guard in the diff. -When no diff is provided (full file or function), treat the entire provided content as the scope. -Ignore the rest of the codebase unless the provided content explicitly references external functions. - -**Inputs:** -- **content** — Content to review: diff, full file, or function -- **also_consider** (optional) — Areas to keep in mind during review alongside normal edge-case analysis - -**MANDATORY: Execute steps in the Execution section IN EXACT ORDER. DO NOT skip steps or change the sequence. 
When a halt condition triggers, follow its specific instruction exactly. Each action within a step is a REQUIRED action to complete that step.** - -**Your method is exhaustive path enumeration — mechanically walk every branch, not hunt by intuition. Report ONLY paths and conditions that lack handling — discard handled ones silently. Do NOT editorialize or add filler — findings only.** - - -## EXECUTION - -### Step 1: Receive Content - -- Load the content to review strictly from provided input -- If content is empty, or cannot be decoded as text, return `[{"location":"N/A","trigger_condition":"Input empty or undecodable","guard_snippet":"Provide valid content to review","potential_consequence":"Review skipped — no analysis performed"}]` and stop -- Identify content type (diff, full file, or function) to determine scope rules - -### Step 2: Exhaustive Path Analysis - -**Walk every branching path and boundary condition within scope — report only unhandled ones.** - -- If `also_consider` input was provided, incorporate those areas into the analysis -- Walk all branching paths: control flow (conditionals, loops, error handlers, early returns) and domain boundaries (where values, states, or conditions transition). Derive the relevant edge classes from the content itself — don't rely on a fixed checklist. 
Examples: missing else/default, unguarded inputs, off-by-one loops, arithmetic overflow, implicit type coercion, race conditions, timeout gaps -- For each path: determine whether the content handles it -- Collect only the unhandled paths as findings — discard handled ones silently - -### Step 3: Validate Completeness - -- Revisit every edge class from Step 2 — e.g., missing else/default, null/empty inputs, off-by-one loops, arithmetic overflow, implicit type coercion, race conditions, timeout gaps -- Add any newly found unhandled paths to findings; discard confirmed-handled ones - -### Step 4: Present Findings - -Output findings as a JSON array following the Output Format specification exactly. - - -## OUTPUT FORMAT - -Return ONLY a valid JSON array of objects. Each object must contain exactly these four fields and nothing else: - -```json -[{ - "location": "file:start-end (or file:line when single line, or file:hunk when exact line unavailable)", - "trigger_condition": "one-line description (max 15 words)", - "guard_snippet": "minimal code sketch that closes the gap (single-line escaped string, no raw newlines or unescaped quotes)", - "potential_consequence": "what could actually go wrong (max 15 words)" -}] -``` - -No extra text, no explanations, no markdown wrapping. An empty array `[]` is valid when no unhandled paths are found. 
- - -## HALT CONDITIONS - -- If content is empty or cannot be decoded as text, return `[{"location":"N/A","trigger_condition":"Input empty or undecodable","guard_snippet":"Provide valid content to review","potential_consequence":"Review skipped — no analysis performed"}]` and stop diff --git a/src/core-skills/bmad-shard-doc/SKILL.md b/src/core-skills/bmad-shard-doc/SKILL.md index 442af56e2..4945cff4c 100644 --- a/src/core-skills/bmad-shard-doc/SKILL.md +++ b/src/core-skills/bmad-shard-doc/SKILL.md @@ -3,4 +3,103 @@ name: bmad-shard-doc description: 'Splits large markdown documents into smaller, organized files based on level 2 (default) sections. Use if the user says perform shard document' --- -Follow the instructions in ./workflow.md. +# Shard Document + +**Goal:** Split large markdown documents into smaller, organized files based on level 2 sections using `npx @kayvan/markdown-tree-parser`. + +## CRITICAL RULES + +- MANDATORY: Execute ALL steps in the EXECUTION section IN EXACT ORDER +- DO NOT skip steps or change the sequence +- HALT immediately when halt-conditions are met +- Each action within a step is a REQUIRED action to complete that step + +## EXECUTION + +### Step 1: Get Source Document + +- Ask user for the source document path if not provided already +- Verify file exists and is accessible +- Verify file is markdown format (.md extension) +- If file not found or not markdown: HALT with error message + +### Step 2: Get Destination Folder + +- Determine default destination: same location as source file, folder named after source file without .md extension + - Example: `/path/to/architecture.md` --> `/path/to/architecture/` +- Ask user for the destination folder path (`[y]` to confirm use of default: `[suggested-path]`, else enter a new path) +- If user accepts default: use the suggested destination path +- If user provides custom path: use the custom destination path +- Verify destination folder exists or can be created +- Check write permissions for 
destination
+- If permission denied: HALT with error message
+
+### Step 3: Execute Sharding
+
+- Inform user that sharding is beginning
+- Execute command: `npx @kayvan/markdown-tree-parser explode [source-document] [destination-folder]`
+- Capture command output and any errors
+- If command fails: HALT and display error to user
+
+### Step 4: Verify Output
+
+- Check that destination folder contains sharded files
+- Verify index.md was created in destination folder
+- Count the number of files created
+- If no files created: HALT with error message
+
+### Step 5: Report Completion
+
+- Display completion report to user including:
+  - Source document path and name
+  - Destination folder path
+  - Number of section files created
+  - Confirmation that index.md was created
+  - Any tool output or warnings
+- Inform user that sharding completed successfully
+
+### Step 6: Handle Original Document
+
+> **Critical:** Keeping both the original and sharded versions defeats the purpose of sharding and can cause confusion.
+
+Present user with options for the original document:
+
+> What would you like to do with the original document `[source-document-name]`?
+>
+> Options:
+> - `[d]` Delete - Remove the original (recommended - shards can always be recombined)
+> - `[m]` Move to archive - Move original to a backup/archive location
+> - `[k]` Keep - Leave original in place (NOT recommended - defeats sharding purpose)
+>
+> Your choice (d/m/k):
+
+#### If user selects `d` (delete)
+
+- Delete the original source document file
+- Confirm deletion to user: "Original document deleted: [source-document-path]"
+- Note: The document can be reconstructed from shards by concatenating all section files in order
+
+#### If user selects `m` (move)
+
+- Determine default archive location: same directory as source, in an `archive` subfolder
+  - Example: `/path/to/architecture.md` --> `/path/to/archive/architecture.md`
+- Ask: Archive location (`[y]` to use default: `[default-archive-path]`, or provide custom path)
+- If user accepts default: use default archive path
+- If user provides custom path: use custom archive path
+- Create archive directory if it does not exist
+- Move original document to archive location
+- Confirm move to user: "Original document moved to: [archive-path]"
+
+#### If user selects `k` (keep)
+
+- Display warning to user:
+  - Keeping both original and sharded versions is NOT recommended
+  - The discover_inputs protocol may load the wrong version
+  - Updates to one will not reflect in the other
+  - Duplicate content takes up space
+  - Consider deleting or archiving the original document
+- Confirm user choice: "Original document kept at: [source-document-path]"
+
+## HALT CONDITIONS
+
+- HALT if npx command fails or produces no output files
diff --git a/src/core-skills/bmad-shard-doc/workflow.md b/src/core-skills/bmad-shard-doc/workflow.md
deleted file mode 100644
index 3304991db..000000000
--- a/src/core-skills/bmad-shard-doc/workflow.md
+++ /dev/null
@@ -1,100 +0,0 @@
-# Shard Document
-
-**Goal:** Split large markdown documents into smaller, organized files based on level 2 sections using `npx @kayvan/markdown-tree-parser`.
-
-## CRITICAL RULES
-
-- MANDATORY: Execute ALL steps in the EXECUTION section IN EXACT ORDER
-- DO NOT skip steps or change the sequence
-- HALT immediately when halt-conditions are met
-- Each action within a step is a REQUIRED action to complete that step
-
-## EXECUTION
-
-### Step 1: Get Source Document
-
-- Ask user for the source document path if not provided already
-- Verify file exists and is accessible
-- Verify file is markdown format (.md extension)
-- If file not found or not markdown: HALT with error message
-
-### Step 2: Get Destination Folder
-
-- Determine default destination: same location as source file, folder named after source file without .md extension
-  - Example: `/path/to/architecture.md` --> `/path/to/architecture/`
-- Ask user for the destination folder path (`[y]` to confirm use of default: `[suggested-path]`, else enter a new path)
-- If user accepts default: use the suggested destination path
-- If user provides custom path: use the custom destination path
-- Verify destination folder exists or can be created
-- Check write permissions for destination
-- If permission denied: HALT with error message
-
-### Step 3: Execute Sharding
-
-- Inform user that sharding is beginning
-- Execute command: `npx @kayvan/markdown-tree-parser explode [source-document] [destination-folder]`
-- Capture command output and any errors
-- If command fails: HALT and display error to user
-
-### Step 4: Verify Output
-
-- Check that destination folder contains sharded files
-- Verify index.md was created in destination folder
-- Count the number of files created
-- If no files created: HALT with error message
-
-### Step 5: Report Completion
-
-- Display completion report to user including:
-  - Source document path and name
-  - Destination folder path
-  - Number of section files created
-  - Confirmation that index.md was created
-  - Any tool output or warnings
-- Inform user that sharding completed successfully
-
-### Step 6: Handle Original Document
-
-> **Critical:** Keeping both the original and sharded versions defeats the purpose of sharding and can cause confusion.
-
-Present user with options for the original document:
-
-> What would you like to do with the original document `[source-document-name]`?
->
-> Options:
-> - `[d]` Delete - Remove the original (recommended - shards can always be recombined)
-> - `[m]` Move to archive - Move original to a backup/archive location
-> - `[k]` Keep - Leave original in place (NOT recommended - defeats sharding purpose)
->
-> Your choice (d/m/k):
-
-#### If user selects `d` (delete)
-
-- Delete the original source document file
-- Confirm deletion to user: "Original document deleted: [source-document-path]"
-- Note: The document can be reconstructed from shards by concatenating all section files in order
-
-#### If user selects `m` (move)
-
-- Determine default archive location: same directory as source, in an `archive` subfolder
-  - Example: `/path/to/architecture.md` --> `/path/to/archive/architecture.md`
-- Ask: Archive location (`[y]` to use default: `[default-archive-path]`, or provide custom path)
-- If user accepts default: use default archive path
-- If user provides custom path: use custom archive path
-- Create archive directory if it does not exist
-- Move original document to archive location
-- Confirm move to user: "Original document moved to: [archive-path]"
-
-#### If user selects `k` (keep)
-
-- Display warning to user:
-  - Keeping both original and sharded versions is NOT recommended
-  - The discover_inputs protocol may load the wrong version
-  - Updates to one will not reflect in the other
-  - Duplicate content taking up space
-  - Consider deleting or archiving the original document
-- Confirm user choice: "Original document kept at: [source-document-path]"
-
-## HALT CONDITIONS
-
-- HALT if npx command fails or produces no output files
diff --git a/test/test-installation-components.js
b/test/test-installation-components.js index 0b977884f..0442594e8 100644 --- a/test/test-installation-components.js +++ b/test/test-installation-components.js @@ -49,34 +49,38 @@ function assert(condition, testName, errorMessage = '') { } async function createTestBmadFixture() { - const fixtureDir = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-fixture-')); + const fixtureRoot = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-fixture-')); + const fixtureDir = path.join(fixtureRoot, '_bmad'); + await fs.ensureDir(fixtureDir); - // Minimal workflow manifest (generators check for this) + // Skill manifest CSV — the sole source of truth for IDE skill installation await fs.ensureDir(path.join(fixtureDir, '_config')); - await fs.writeFile(path.join(fixtureDir, '_config', 'workflow-manifest.csv'), ''); + await fs.writeFile( + path.join(fixtureDir, '_config', 'skill-manifest.csv'), + [ + 'canonicalId,name,description,module,path,install_to_bmad', + '"bmad-master","bmad-master","Minimal test agent fixture","core","_bmad/core/bmad-master/SKILL.md","true"', + '', + ].join('\n'), + ); - // Minimal compiled agent for core/agents (contains ', - 'Test persona', - '', - ].join('\n'); - - await fs.ensureDir(path.join(fixtureDir, 'core', 'agents')); - await fs.writeFile(path.join(fixtureDir, 'core', 'agents', 'bmad-master.md'), minimalAgent); - // Skill manifest so the installer uses 'bmad-master' as the canonical skill name - await fs.writeFile(path.join(fixtureDir, 'core', 'agents', 'bmad-skill-manifest.yaml'), 'bmad-master.md:\n canonicalId: bmad-master\n'); - - // Minimal compiled agent for bmm module (tests use selectedModules: ['bmm']) - await fs.ensureDir(path.join(fixtureDir, 'bmm', 'agents')); - await fs.writeFile(path.join(fixtureDir, 'bmm', 'agents', 'test-bmm-agent.md'), minimalAgent); + // Minimal SKILL.md for the skill entry + const skillDir = path.join(fixtureDir, 'core', 'bmad-master'); + await fs.ensureDir(skillDir); + await fs.writeFile( + path.join(skillDir, 
'SKILL.md'), + [ + '---', + 'name: bmad-master', + 'description: Minimal test agent fixture', + '---', + '', + '', + 'You are a test agent.', + ].join('\n'), + ); + await fs.writeFile(path.join(skillDir, 'bmad-skill-manifest.yaml'), 'SKILL.md:\n type: skill\n'); + await fs.writeFile(path.join(skillDir, 'workflow.md'), '# Test Workflow\nStep 1: Do the thing.\n'); return fixtureDir; } @@ -253,7 +257,7 @@ async function runTests() { assert(!(await fs.pathExists(path.join(tempProjectDir, '.windsurf', 'workflows'))), 'Windsurf setup removes legacy workflows dir'); await fs.remove(tempProjectDir); - await fs.remove(installedBmadDir); + await fs.remove(path.dirname(installedBmadDir)); } catch (error) { assert(false, 'Windsurf native skills migration test succeeds', error.message); } @@ -301,7 +305,7 @@ async function runTests() { assert(!(await fs.pathExists(path.join(tempProjectDir, '.kiro', 'steering'))), 'Kiro setup removes legacy steering dir'); await fs.remove(tempProjectDir); - await fs.remove(installedBmadDir); + await fs.remove(path.dirname(installedBmadDir)); } catch (error) { assert(false, 'Kiro native skills migration test succeeds', error.message); } @@ -349,7 +353,7 @@ async function runTests() { assert(!(await fs.pathExists(path.join(tempProjectDir, '.agent', 'workflows'))), 'Antigravity setup removes legacy workflows dir'); await fs.remove(tempProjectDir); - await fs.remove(installedBmadDir); + await fs.remove(path.dirname(installedBmadDir)); } catch (error) { assert(false, 'Antigravity native skills migration test succeeds', error.message); } @@ -402,7 +406,7 @@ async function runTests() { assert(!(await fs.pathExists(path.join(tempProjectDir, '.augment', 'commands'))), 'Auggie setup removes legacy commands dir'); await fs.remove(tempProjectDir); - await fs.remove(installedBmadDir); + await fs.remove(path.dirname(installedBmadDir)); } catch (error) { assert(false, 'Auggie native skills migration test succeeds', error.message); } @@ -468,7 +472,7 @@ async 
function runTests() { } await fs.remove(tempProjectDir); - await fs.remove(installedBmadDir); + await fs.remove(path.dirname(installedBmadDir)); } catch (error) { assert(false, 'OpenCode native skills migration test succeeds', error.message); } @@ -522,7 +526,7 @@ async function runTests() { assert(!(await fs.pathExists(legacyDir9)), 'Claude Code setup removes legacy commands dir'); await fs.remove(tempProjectDir9); - await fs.remove(installedBmadDir9); + await fs.remove(path.dirname(installedBmadDir9)); } catch (error) { assert(false, 'Claude Code native skills migration test succeeds', error.message); } @@ -561,7 +565,7 @@ async function runTests() { ); await fs.remove(tempRoot10); - await fs.remove(installedBmadDir10); + await fs.remove(path.dirname(installedBmadDir10)); } catch (error) { assert(false, 'Claude Code ancestor conflict protection test succeeds', error.message); } @@ -615,7 +619,7 @@ async function runTests() { assert(!(await fs.pathExists(legacyDir11)), 'Codex setup removes legacy prompts dir'); await fs.remove(tempProjectDir11); - await fs.remove(installedBmadDir11); + await fs.remove(path.dirname(installedBmadDir11)); } catch (error) { assert(false, 'Codex native skills migration test succeeds', error.message); } @@ -651,7 +655,7 @@ async function runTests() { assert(result12.handlerResult?.conflictDir === expectedConflictDir12, 'Codex ancestor rejection points at ancestor .agents/skills dir'); await fs.remove(tempRoot12); - await fs.remove(installedBmadDir12); + await fs.remove(path.dirname(installedBmadDir12)); } catch (error) { assert(false, 'Codex ancestor conflict protection test succeeds', error.message); } @@ -705,7 +709,7 @@ async function runTests() { assert(!(await fs.pathExists(legacyDir13c)), 'Cursor setup removes legacy commands dir'); await fs.remove(tempProjectDir13c); - await fs.remove(installedBmadDir13c); + await fs.remove(path.dirname(installedBmadDir13c)); } catch (error) { assert(false, 'Cursor native skills migration test 
succeeds', error.message); } @@ -770,7 +774,7 @@ async function runTests() { assert(await fs.pathExists(skillFile13), 'Roo reinstall preserves SKILL.md output'); await fs.remove(tempProjectDir13); - await fs.remove(installedBmadDir13); + await fs.remove(path.dirname(installedBmadDir13)); } catch (error) { assert(false, 'Roo native skills migration test succeeds', error.message); } @@ -809,7 +813,7 @@ async function runTests() { ); await fs.remove(tempRoot); - await fs.remove(installedBmadDir); + await fs.remove(path.dirname(installedBmadDir)); } catch (error) { assert(false, 'OpenCode ancestor conflict protection test succeeds', error.message); } @@ -895,7 +899,7 @@ async function runTests() { ); await fs.remove(tempProjectDir17); - await fs.remove(installedBmadDir17); + await fs.remove(path.dirname(installedBmadDir17)); } catch (error) { assert(false, 'GitHub Copilot native skills migration test succeeds', error.message); } @@ -957,7 +961,7 @@ async function runTests() { assert(await fs.pathExists(skillFile18), 'Cline reinstall preserves SKILL.md output'); await fs.remove(tempProjectDir18); - await fs.remove(installedBmadDir18); + await fs.remove(path.dirname(installedBmadDir18)); } catch (error) { assert(false, 'Cline native skills migration test succeeds', error.message); } @@ -1017,7 +1021,7 @@ async function runTests() { assert(await fs.pathExists(skillFile19), 'CodeBuddy reinstall preserves SKILL.md output'); await fs.remove(tempProjectDir19); - await fs.remove(installedBmadDir19); + await fs.remove(path.dirname(installedBmadDir19)); } catch (error) { assert(false, 'CodeBuddy native skills migration test succeeds', error.message); } @@ -1077,7 +1081,7 @@ async function runTests() { assert(await fs.pathExists(skillFile20), 'Crush reinstall preserves SKILL.md output'); await fs.remove(tempProjectDir20); - await fs.remove(installedBmadDir20); + await fs.remove(path.dirname(installedBmadDir20)); } catch (error) { assert(false, 'Crush native skills migration test 
succeeds', error.message); } @@ -1136,7 +1140,7 @@ async function runTests() { assert(await fs.pathExists(skillFile21), 'Trae reinstall preserves SKILL.md output'); await fs.remove(tempProjectDir21); - await fs.remove(installedBmadDir21); + await fs.remove(path.dirname(installedBmadDir21)); } catch (error) { assert(false, 'Trae native skills migration test succeeds', error.message); } @@ -1194,7 +1198,7 @@ async function runTests() { ); await fs.remove(tempProjectDir22); - await fs.remove(installedBmadDir22); + await fs.remove(path.dirname(installedBmadDir22)); } catch (error) { assert(false, 'KiloCoder suspended test succeeds', error.message); } @@ -1253,7 +1257,7 @@ async function runTests() { assert(await fs.pathExists(skillFile23), 'Gemini reinstall preserves SKILL.md output'); await fs.remove(tempProjectDir23); - await fs.remove(installedBmadDir23); + await fs.remove(path.dirname(installedBmadDir23)); } catch (error) { assert(false, 'Gemini native skills migration test succeeds', error.message); } @@ -1303,7 +1307,7 @@ async function runTests() { assert(!(await fs.pathExists(path.join(tempProjectDir24, '.iflow', 'commands'))), 'iFlow setup removes legacy commands dir'); await fs.remove(tempProjectDir24); - await fs.remove(installedBmadDir24); + await fs.remove(path.dirname(installedBmadDir24)); } catch (error) { assert(false, 'iFlow native skills migration test succeeds', error.message); } @@ -1353,7 +1357,7 @@ async function runTests() { assert(!(await fs.pathExists(path.join(tempProjectDir25, '.qwen', 'commands'))), 'QwenCoder setup removes legacy commands dir'); await fs.remove(tempProjectDir25); - await fs.remove(installedBmadDir25); + await fs.remove(path.dirname(installedBmadDir25)); } catch (error) { assert(false, 'QwenCoder native skills migration test succeeds', error.message); } @@ -1422,7 +1426,7 @@ async function runTests() { assert(cleanedPrompts26.prompts[0].name === 'my-custom-prompt', 'Rovo Dev cleanup preserves non-BMAD entries in 
prompts.yml'); await fs.remove(tempProjectDir26); - await fs.remove(installedBmadDir26); + await fs.remove(path.dirname(installedBmadDir26)); } catch (error) { assert(false, 'Rovo Dev native skills migration test succeeds', error.message); } @@ -1487,7 +1491,7 @@ async function runTests() { assert(!(await fs.pathExists(regularSkillDir27)), 'Cleanup removes stale non-bmad-os skills'); await fs.remove(tempProjectDir27); - await fs.remove(installedBmadDir27); + await fs.remove(path.dirname(installedBmadDir27)); } catch (error) { assert(false, 'bmad-os-* skill preservation test succeeds', error.message); } @@ -1579,7 +1583,7 @@ async function runTests() { assert(false, 'Pi native skills test succeeds', error.message); } finally { if (tempProjectDir28) await fs.remove(tempProjectDir28).catch(() => {}); - if (installedBmadDir28) await fs.remove(installedBmadDir28).catch(() => {}); + if (installedBmadDir28) await fs.remove(path.dirname(installedBmadDir28)).catch(() => {}); } console.log(''); @@ -1837,18 +1841,12 @@ async function runTests() { }); assert(result.success === true, 'Antigravity setup succeeds with overlapping skill names'); - assert(result.detail === '2 agents', 'Installer detail reports agents separately from skills'); - assert(result.handlerResult.results.skillDirectories === 2, 'Result exposes unique skill directory count'); - assert(result.handlerResult.results.agents === 2, 'Result retains generated agent write count'); - assert(result.handlerResult.results.workflows === 1, 'Result retains generated workflow count'); + assert(result.detail === '1 skills', 'Installer detail reports skill count'); + assert(result.handlerResult.results.skillDirectories === 1, 'Result exposes unique skill directory count'); assert(result.handlerResult.results.skills === 1, 'Result retains verbatim skill count'); - assert( - await fs.pathExists(path.join(collisionProjectDir, '.agent', 'skills', 'bmad-agent-bmad-master', 'SKILL.md')), - 'Agent skill directory is created', - ); 
assert( await fs.pathExists(path.join(collisionProjectDir, '.agent', 'skills', 'bmad-help', 'SKILL.md')), - 'Overlapping skill directory is created once', + 'Skill directory is created from skill-manifest', ); } catch (error) { assert(false, 'Skill-format unique count test succeeds', error.message); @@ -1906,6 +1904,9 @@ async function runTests() { const skillFile32 = path.join(tempProjectDir32, '.ona', 'skills', 'bmad-master', 'SKILL.md'); assert(await fs.pathExists(skillFile32), 'Ona install writes SKILL.md directory output'); + const workflowFile32 = path.join(tempProjectDir32, '.ona', 'skills', 'bmad-master', 'workflow.md'); + assert(await fs.pathExists(workflowFile32), 'Ona install copies non-SKILL.md files (workflow.md) verbatim'); + // Parse YAML frontmatter between --- markers const skillContent32 = await fs.readFile(skillFile32, 'utf8'); const fmMatch32 = skillContent32.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/); @@ -1944,7 +1945,7 @@ async function runTests() { assert(false, 'Ona native skills test succeeds', error.message); } finally { if (tempProjectDir32) await fs.remove(tempProjectDir32).catch(() => {}); - if (installedBmadDir32) await fs.remove(installedBmadDir32).catch(() => {}); + if (installedBmadDir32) await fs.remove(path.dirname(installedBmadDir32)).catch(() => {}); } console.log(''); diff --git a/tools/cli/installers/lib/ide/_config-driven.js b/tools/cli/installers/lib/ide/_config-driven.js index e94cb9edb..5fb4c595a 100644 --- a/tools/cli/installers/lib/ide/_config-driven.js +++ b/tools/cli/installers/lib/ide/_config-driven.js @@ -4,9 +4,6 @@ const fs = require('fs-extra'); const yaml = require('yaml'); const { BaseIdeSetup } = require('./_base-ide'); const prompts = require('../../../lib/prompts'); -const { AgentCommandGenerator } = require('./shared/agent-command-generator'); -const { WorkflowCommandGenerator } = require('./shared/workflow-command-generator'); -const { TaskToolCommandGenerator } = 
require('./shared/task-tool-command-generator'); const csv = require('csv-parse/sync'); /** @@ -115,53 +112,20 @@ class ConfigDrivenIdeSetup extends BaseIdeSetup { * @returns {Promise} Installation result */ async installToTarget(projectDir, bmadDir, config, options) { - const { target_dir, template_type, artifact_types } = config; + const { target_dir } = config; - // Skip targets with explicitly empty artifact_types and no verbatim skills - // This prevents creating empty directories when no artifacts will be written - const skipStandardArtifacts = Array.isArray(artifact_types) && artifact_types.length === 0; - if (skipStandardArtifacts && !config.skill_format) { - return { success: true, results: { agents: 0, workflows: 0, tasks: 0, tools: 0, skills: 0 } }; + if (!config.skill_format) { + return { success: false, reason: 'missing-skill-format', error: 'Installer config missing skill_format — cannot install skills' }; } const targetPath = path.join(projectDir, target_dir); await this.ensureDir(targetPath); - const selectedModules = options.selectedModules || []; - const results = { agents: 0, workflows: 0, tasks: 0, tools: 0, skills: 0 }; - this.skillWriteTracker = config.skill_format ? 
new Set() : null; + this.skillWriteTracker = new Set(); + const results = { skills: 0 }; - // Install standard artifacts (agents, workflows, tasks, tools) - if (!skipStandardArtifacts) { - // Install agents - if (!artifact_types || artifact_types.includes('agents')) { - const agentGen = new AgentCommandGenerator(this.bmadFolderName); - const { artifacts } = await agentGen.collectAgentArtifacts(bmadDir, selectedModules); - results.agents = await this.writeAgentArtifacts(targetPath, artifacts, template_type, config); - } - - // Install workflows - if (!artifact_types || artifact_types.includes('workflows')) { - const workflowGen = new WorkflowCommandGenerator(this.bmadFolderName); - const { artifacts } = await workflowGen.collectWorkflowArtifacts(bmadDir); - results.workflows = await this.writeWorkflowArtifacts(targetPath, artifacts, template_type, config); - } - - // Install tasks and tools using template system (supports TOML for Gemini, MD for others) - if (!artifact_types || artifact_types.includes('tasks') || artifact_types.includes('tools')) { - const taskToolGen = new TaskToolCommandGenerator(this.bmadFolderName); - const { artifacts } = await taskToolGen.collectTaskToolArtifacts(bmadDir); - const taskToolResult = await this.writeTaskToolArtifacts(targetPath, artifacts, template_type, config); - results.tasks = taskToolResult.tasks || 0; - results.tools = taskToolResult.tools || 0; - } - } - - // Install verbatim skills (type: skill) - if (config.skill_format) { - results.skills = await this.installVerbatimSkills(projectDir, bmadDir, targetPath, config); - results.skillDirectories = this.skillWriteTracker ? 
this.skillWriteTracker.size : 0; - } + results.skills = await this.installVerbatimSkills(projectDir, bmadDir, targetPath, config); + results.skillDirectories = this.skillWriteTracker.size; await this.printSummary(results, target_dir, options); this.skillWriteTracker = null; @@ -177,15 +141,11 @@ class ConfigDrivenIdeSetup extends BaseIdeSetup { * @returns {Promise} Installation result */ async installToMultipleTargets(projectDir, bmadDir, targets, options) { - const allResults = { agents: 0, workflows: 0, tasks: 0, tools: 0, skills: 0 }; + const allResults = { skills: 0 }; for (const target of targets) { const result = await this.installToTarget(projectDir, bmadDir, target, options); if (result.success) { - allResults.agents += result.results.agents || 0; - allResults.workflows += result.results.workflows || 0; - allResults.tasks += result.results.tasks || 0; - allResults.tools += result.results.tools || 0; allResults.skills += result.results.skills || 0; } } @@ -193,118 +153,6 @@ class ConfigDrivenIdeSetup extends BaseIdeSetup { return { success: true, results: allResults }; } - /** - * Write agent artifacts to target directory - * @param {string} targetPath - Target directory path - * @param {Array} artifacts - Agent artifacts - * @param {string} templateType - Template type to use - * @param {Object} config - Installation configuration - * @returns {Promise} Count of artifacts written - */ - async writeAgentArtifacts(targetPath, artifacts, templateType, config = {}) { - // Try to load platform-specific template, fall back to default-agent - const { content: template, extension } = await this.loadTemplate(templateType, 'agent', config, 'default-agent'); - let count = 0; - - for (const artifact of artifacts) { - const content = this.renderTemplate(template, artifact); - const filename = this.generateFilename(artifact, 'agent', extension); - - if (config.skill_format) { - await this.writeSkillFile(targetPath, artifact, content); - } else { - const filePath = 
path.join(targetPath, filename); - await this.writeFile(filePath, content); - } - count++; - } - - return count; - } - - /** - * Write workflow artifacts to target directory - * @param {string} targetPath - Target directory path - * @param {Array} artifacts - Workflow artifacts - * @param {string} templateType - Template type to use - * @param {Object} config - Installation configuration - * @returns {Promise} Count of artifacts written - */ - async writeWorkflowArtifacts(targetPath, artifacts, templateType, config = {}) { - let count = 0; - - for (const artifact of artifacts) { - if (artifact.type === 'workflow-command') { - const workflowTemplateType = config.md_workflow_template || `${templateType}-workflow`; - const { content: template, extension } = await this.loadTemplate(workflowTemplateType, '', config, 'default-workflow'); - const content = this.renderTemplate(template, artifact); - const filename = this.generateFilename(artifact, 'workflow', extension); - - if (config.skill_format) { - await this.writeSkillFile(targetPath, artifact, content); - } else { - const filePath = path.join(targetPath, filename); - await this.writeFile(filePath, content); - } - count++; - } - } - - return count; - } - - /** - * Write task/tool artifacts to target directory using templates - * @param {string} targetPath - Target directory path - * @param {Array} artifacts - Task/tool artifacts - * @param {string} templateType - Template type to use - * @param {Object} config - Installation configuration - * @returns {Promise} Counts of tasks and tools written - */ - async writeTaskToolArtifacts(targetPath, artifacts, templateType, config = {}) { - let taskCount = 0; - let toolCount = 0; - - // Pre-load templates to avoid repeated file I/O in the loop - const taskTemplate = await this.loadTemplate(templateType, 'task', config, 'default-task'); - const toolTemplate = await this.loadTemplate(templateType, 'tool', config, 'default-tool'); - - const { artifact_types } = config; - - for 
(const artifact of artifacts) { - if (artifact.type !== 'task' && artifact.type !== 'tool') { - continue; - } - - // Skip if the specific artifact type is not requested in config - if (artifact_types) { - if (artifact.type === 'task' && !artifact_types.includes('tasks')) continue; - if (artifact.type === 'tool' && !artifact_types.includes('tools')) continue; - } - - // Use pre-loaded template based on artifact type - const { content: template, extension } = artifact.type === 'task' ? taskTemplate : toolTemplate; - - const content = this.renderTemplate(template, artifact); - const filename = this.generateFilename(artifact, artifact.type, extension); - - if (config.skill_format) { - await this.writeSkillFile(targetPath, artifact, content); - } else { - const filePath = path.join(targetPath, filename); - await this.writeFile(filePath, content); - } - - if (artifact.type === 'task') { - taskCount++; - } else { - toolCount++; - } - } - - return { tasks: taskCount, tools: toolCount }; - } - /** * Load template based on type and configuration * @param {string} templateType - Template type (claude, windsurf, etc.) 
@@ -711,13 +559,10 @@ LOAD and execute from: {project-root}/{{bmadFolderName}}/{{path}} */ async printSummary(results, targetDir, options = {}) { if (options.silent) return; - const parts = []; - const totalDirs = - results.skillDirectories || (results.workflows || 0) + (results.tasks || 0) + (results.tools || 0) + (results.skills || 0); - const skillCount = totalDirs - (results.agents || 0); - if (skillCount > 0) parts.push(`${skillCount} skills`); - if (results.agents > 0) parts.push(`${results.agents} agents`); - await prompts.log.success(`${this.name} configured: ${parts.join(', ')} → ${targetDir}`); + const count = results.skillDirectories || results.skills || 0; + if (count > 0) { + await prompts.log.success(`${this.name} configured: ${count} skills → ${targetDir}`); + } } /** diff --git a/tools/cli/installers/lib/ide/manager.js b/tools/cli/installers/lib/ide/manager.js index d0dee4ae0..0d7f91209 100644 --- a/tools/cli/installers/lib/ide/manager.js +++ b/tools/cli/installers/lib/ide/manager.js @@ -159,14 +159,9 @@ class IdeManager { // Build detail string from handler-returned data let detail = ''; if (handlerResult && handlerResult.results) { - // Config-driven handlers return { success, results: { agents, workflows, tasks, tools } } const r = handlerResult.results; - const parts = []; - const totalDirs = r.skillDirectories || (r.workflows || 0) + (r.tasks || 0) + (r.tools || 0) + (r.skills || 0); - const skillCount = totalDirs - (r.agents || 0); - if (skillCount > 0) parts.push(`${skillCount} skills`); - if (r.agents > 0) parts.push(`${r.agents} agents`); - detail = parts.join(', '); + const count = r.skillDirectories || r.skills || 0; + if (count > 0) detail = `${count} skills`; } // Propagate handler's success status (default true for backward compat) const success = handlerResult?.success !== false; diff --git a/tools/cli/installers/lib/ide/shared/agent-command-generator.js b/tools/cli/installers/lib/ide/shared/agent-command-generator.js index 
37820992e..0fc1b04dc 100644 --- a/tools/cli/installers/lib/ide/shared/agent-command-generator.js +++ b/tools/cli/installers/lib/ide/shared/agent-command-generator.js @@ -4,7 +4,6 @@ const { toColonPath, toDashPath, customAgentColonName, customAgentDashName, BMAD /** * Generates launcher command files for each agent - * Similar to WorkflowCommandGenerator but for agents */ class AgentCommandGenerator { constructor(bmadFolderName = BMAD_FOLDER_NAME) { diff --git a/tools/cli/installers/lib/ide/shared/task-tool-command-generator.js b/tools/cli/installers/lib/ide/shared/task-tool-command-generator.js deleted file mode 100644 index f21a5d174..000000000 --- a/tools/cli/installers/lib/ide/shared/task-tool-command-generator.js +++ /dev/null @@ -1,368 +0,0 @@ -const path = require('node:path'); -const fs = require('fs-extra'); -const csv = require('csv-parse/sync'); -const { toColonName, toColonPath, toDashPath, BMAD_FOLDER_NAME } = require('./path-utils'); - -/** - * Generates command files for standalone tasks and tools - */ -class TaskToolCommandGenerator { - /** - * @param {string} bmadFolderName - Name of the BMAD folder for template rendering (default: '_bmad') - * Note: This parameter is accepted for API consistency with AgentCommandGenerator and - * WorkflowCommandGenerator, but is not used for path stripping. The manifest always stores - * filesystem paths with '_bmad/' prefix (the actual folder name), while bmadFolderName is - * used for template placeholder rendering ({{bmadFolderName}}). 
-   */
-  constructor(bmadFolderName = BMAD_FOLDER_NAME) {
-    this.bmadFolderName = bmadFolderName;
-  }
-
-  /**
-   * Collect task and tool artifacts for IDE installation
-   * @param {string} bmadDir - BMAD installation directory
-   * @returns {Promise} Artifacts array with metadata
-   */
-  async collectTaskToolArtifacts(bmadDir) {
-    const tasks = await this.loadTaskManifest(bmadDir);
-    const tools = await this.loadToolManifest(bmadDir);
-
-    // All tasks/tools in manifest are standalone (internal=true items are filtered during manifest generation)
-    const artifacts = [];
-    const bmadPrefix = `${BMAD_FOLDER_NAME}/`;
-
-    // Collect task artifacts
-    for (const task of tasks || []) {
-      let taskPath = (task.path || '').replaceAll('\\', '/');
-      // Convert absolute paths to relative paths
-      if (path.isAbsolute(taskPath)) {
-        taskPath = path.relative(bmadDir, taskPath).replaceAll('\\', '/');
-      }
-      // Remove _bmad/ prefix if present to get relative path within bmad folder
-      if (taskPath.startsWith(bmadPrefix)) {
-        taskPath = taskPath.slice(bmadPrefix.length);
-      }
-
-      const taskExt = path.extname(taskPath) || '.md';
-      artifacts.push({
-        type: 'task',
-        name: task.name,
-        displayName: task.displayName || task.name,
-        description: task.description || `Execute ${task.displayName || task.name}`,
-        module: task.module,
-        canonicalId: task.canonicalId || '',
-        // Use forward slashes for cross-platform consistency (not path.join which uses backslashes on Windows)
-        relativePath: `${task.module}/tasks/${task.name}${taskExt}`,
-        path: taskPath,
-      });
-    }
-
-    // Collect tool artifacts
-    for (const tool of tools || []) {
-      let toolPath = (tool.path || '').replaceAll('\\', '/');
-      // Convert absolute paths to relative paths
-      if (path.isAbsolute(toolPath)) {
-        toolPath = path.relative(bmadDir, toolPath).replaceAll('\\', '/');
-      }
-      // Remove _bmad/ prefix if present to get relative path within bmad folder
-      if (toolPath.startsWith(bmadPrefix)) {
-        toolPath = toolPath.slice(bmadPrefix.length);
-      }
-
-      const toolExt = path.extname(toolPath) || '.md';
-      artifacts.push({
-        type: 'tool',
-        name: tool.name,
-        displayName: tool.displayName || tool.name,
-        description: tool.description || `Execute ${tool.displayName || tool.name}`,
-        module: tool.module,
-        canonicalId: tool.canonicalId || '',
-        // Use forward slashes for cross-platform consistency (not path.join which uses backslashes on Windows)
-        relativePath: `${tool.module}/tools/${tool.name}${toolExt}`,
-        path: toolPath,
-      });
-    }
-
-    return {
-      artifacts,
-      counts: {
-        tasks: (tasks || []).length,
-        tools: (tools || []).length,
-      },
-    };
-  }
-
-  /**
-   * Generate task and tool commands from manifest CSVs
-   * @param {string} projectDir - Project directory
-   * @param {string} bmadDir - BMAD installation directory
-   * @param {string} baseCommandsDir - Optional base commands directory (defaults to .claude/commands/bmad)
-   */
-  async generateTaskToolCommands(projectDir, bmadDir, baseCommandsDir = null) {
-    const tasks = await this.loadTaskManifest(bmadDir);
-    const tools = await this.loadToolManifest(bmadDir);
-
-    // Base commands directory - use provided or default to Claude Code structure
-    const commandsDir = baseCommandsDir || path.join(projectDir, '.claude', 'commands', 'bmad');
-
-    let generatedCount = 0;
-
-    // Generate command files for tasks
-    for (const task of tasks || []) {
-      const moduleTasksDir = path.join(commandsDir, task.module, 'tasks');
-      await fs.ensureDir(moduleTasksDir);
-
-      const commandContent = this.generateCommandContent(task, 'task');
-      const commandPath = path.join(moduleTasksDir, `${task.name}.md`);
-
-      await fs.writeFile(commandPath, commandContent);
-      generatedCount++;
-    }
-
-    // Generate command files for tools
-    for (const tool of tools || []) {
-      const moduleToolsDir = path.join(commandsDir, tool.module, 'tools');
-      await fs.ensureDir(moduleToolsDir);
-
-      const commandContent = this.generateCommandContent(tool, 'tool');
-      const commandPath = path.join(moduleToolsDir, `${tool.name}.md`);
-
-      await fs.writeFile(commandPath, commandContent);
-      generatedCount++;
-    }
-
-    return {
-      generated: generatedCount,
-      tasks: (tasks || []).length,
-      tools: (tools || []).length,
-    };
-  }
-
-  /**
-   * Generate command content for a task or tool
-   */
-  generateCommandContent(item, type) {
-    const description = item.description || `Execute ${item.displayName || item.name}`;
-
-    // Convert path to use {project-root} placeholder
-    // Handle undefined/missing path by constructing from module and name
-    let itemPath = item.path;
-    if (!itemPath || typeof itemPath !== 'string') {
-      // Fallback: construct path from module and name if path is missing
-      const typePlural = type === 'task' ? 'tasks' : 'tools';
-      itemPath = `{project-root}/${this.bmadFolderName}/${item.module}/${typePlural}/${item.name}.md`;
-    } else {
-      // Normalize path separators to forward slashes
-      itemPath = itemPath.replaceAll('\\', '/');
-
-      // Extract relative path from absolute paths (Windows or Unix)
-      // Look for _bmad/ or bmad/ in the path and extract everything after it
-      // Match patterns like: /_bmad/core/tasks/... or /bmad/core/tasks/...
-      // Use [/\\] to handle both Unix forward slashes and Windows backslashes,
-      // and also paths without a leading separator (e.g., C:/_bmad/...)
-      const bmadMatch = itemPath.match(/[/\\]_bmad[/\\](.+)$/) || itemPath.match(/[/\\]bmad[/\\](.+)$/);
-      if (bmadMatch) {
-        // Found /_bmad/ or /bmad/ - use relative path after it
-        itemPath = `{project-root}/${this.bmadFolderName}/${bmadMatch[1]}`;
-      } else if (itemPath.startsWith(`${BMAD_FOLDER_NAME}/`)) {
-        // Relative path starting with _bmad/
-        itemPath = `{project-root}/${this.bmadFolderName}/${itemPath.slice(BMAD_FOLDER_NAME.length + 1)}`;
-      } else if (itemPath.startsWith('bmad/')) {
-        // Relative path starting with bmad/
-        itemPath = `{project-root}/${this.bmadFolderName}/${itemPath.slice(5)}`;
-      } else if (!itemPath.startsWith('{project-root}')) {
-        // For other relative paths, prefix with project root and bmad folder
-        itemPath = `{project-root}/${this.bmadFolderName}/${itemPath}`;
-      }
-    }
-
-    return `---
-description: '${description.replaceAll("'", "''")}'
----
-
-# ${item.displayName || item.name}
-
-Read the entire ${type} file at: ${itemPath}
-
-Follow all instructions in the ${type} file exactly as written.
-`;
-  }
-
-  /**
-   * Load task manifest CSV
-   */
-  async loadTaskManifest(bmadDir) {
-    const manifestPath = path.join(bmadDir, '_config', 'task-manifest.csv');
-
-    if (!(await fs.pathExists(manifestPath))) {
-      return null;
-    }
-
-    const csvContent = await fs.readFile(manifestPath, 'utf8');
-    return csv.parse(csvContent, {
-      columns: true,
-      skip_empty_lines: true,
-    });
-  }
-
-  /**
-   * Load tool manifest CSV
-   */
-  async loadToolManifest(bmadDir) {
-    const manifestPath = path.join(bmadDir, '_config', 'tool-manifest.csv');
-
-    if (!(await fs.pathExists(manifestPath))) {
-      return null;
-    }
-
-    const csvContent = await fs.readFile(manifestPath, 'utf8');
-    return csv.parse(csvContent, {
-      columns: true,
-      skip_empty_lines: true,
-    });
-  }
-
-  /**
-   * Generate task and tool commands using underscore format (Windows-compatible)
-   * Creates flat files like: bmad_bmm_help.md
-   *
-   * @param {string} projectDir - Project directory
-   * @param {string} bmadDir - BMAD installation directory
-   * @param {string} baseCommandsDir - Base commands directory for the IDE
-   * @returns {Object} Generation results
-   */
-  async generateColonTaskToolCommands(projectDir, bmadDir, baseCommandsDir) {
-    const tasks = await this.loadTaskManifest(bmadDir);
-    const tools = await this.loadToolManifest(bmadDir);
-
-    let generatedCount = 0;
-
-    // Generate command files for tasks
-    for (const task of tasks || []) {
-      const commandContent = this.generateCommandContent(task, 'task');
-      // Use underscore format: bmad_bmm_name.md
-      const flatName = toColonName(task.module, 'tasks', task.name);
-      const commandPath = path.join(baseCommandsDir, flatName);
-      await fs.ensureDir(path.dirname(commandPath));
-      await fs.writeFile(commandPath, commandContent);
-      generatedCount++;
-    }
-
-    // Generate command files for tools
-    for (const tool of tools || []) {
-      const commandContent = this.generateCommandContent(tool, 'tool');
-      // Use underscore format: bmad_bmm_name.md
-      const flatName = toColonName(tool.module, 'tools', tool.name);
-      const commandPath = path.join(baseCommandsDir, flatName);
-      await fs.ensureDir(path.dirname(commandPath));
-      await fs.writeFile(commandPath, commandContent);
-      generatedCount++;
-    }
-
-    return {
-      generated: generatedCount,
-      tasks: (tasks || []).length,
-      tools: (tools || []).length,
-    };
-  }
-
-  /**
-   * Generate task and tool commands using underscore format (Windows-compatible)
-   * Creates flat files like: bmad_bmm_help.md
-   *
-   * @param {string} projectDir - Project directory
-   * @param {string} bmadDir - BMAD installation directory
-   * @param {string} baseCommandsDir - Base commands directory for the IDE
-   * @returns {Object} Generation results
-   */
-  async generateDashTaskToolCommands(projectDir, bmadDir, baseCommandsDir) {
-    const tasks = await this.loadTaskManifest(bmadDir);
-    const tools = await this.loadToolManifest(bmadDir);
-
-    let generatedCount = 0;
-
-    // Generate command files for tasks
-    for (const task of tasks || []) {
-      const commandContent = this.generateCommandContent(task, 'task');
-      // Use dash format: bmad-bmm-name.md
-      const flatName = toDashPath(`${task.module}/tasks/${task.name}.md`);
-      const commandPath = path.join(baseCommandsDir, flatName);
-      await fs.ensureDir(path.dirname(commandPath));
-      await fs.writeFile(commandPath, commandContent);
-      generatedCount++;
-    }
-
-    // Generate command files for tools
-    for (const tool of tools || []) {
-      const commandContent = this.generateCommandContent(tool, 'tool');
-      // Use dash format: bmad-bmm-name.md
-      const flatName = toDashPath(`${tool.module}/tools/${tool.name}.md`);
-      const commandPath = path.join(baseCommandsDir, flatName);
-      await fs.ensureDir(path.dirname(commandPath));
-      await fs.writeFile(commandPath, commandContent);
-      generatedCount++;
-    }
-
-    return {
-      generated: generatedCount,
-      tasks: (tasks || []).length,
-      tools: (tools || []).length,
-    };
-  }
-
-  /**
-   * Write task/tool artifacts using underscore format (Windows-compatible)
-   * Creates flat files like: bmad_bmm_help.md
-   *
-   * @param {string} baseCommandsDir - Base commands directory for the IDE
-   * @param {Array} artifacts - Task/tool artifacts with relativePath
-   * @returns {number} Count of commands written
-   */
-  async writeColonArtifacts(baseCommandsDir, artifacts) {
-    let writtenCount = 0;
-
-    for (const artifact of artifacts) {
-      if (artifact.type === 'task' || artifact.type === 'tool') {
-        const commandContent = this.generateCommandContent(artifact, artifact.type);
-        // Use underscore format: bmad_module_name.md
-        const flatName = toColonPath(artifact.relativePath);
-        const commandPath = path.join(baseCommandsDir, flatName);
-        await fs.ensureDir(path.dirname(commandPath));
-        await fs.writeFile(commandPath, commandContent);
-        writtenCount++;
-      }
-    }
-
-    return writtenCount;
-  }
-
-  /**
-   * Write task/tool artifacts using dash format (NEW STANDARD)
-   * Creates flat files like: bmad-bmm-help.md
-   *
-   * Note: Tasks/tools do NOT have bmad-agent- prefix - only agents do.
-   *
-   * @param {string} baseCommandsDir - Base commands directory for the IDE
-   * @param {Array} artifacts - Task/tool artifacts with relativePath
-   * @returns {number} Count of commands written
-   */
-  async writeDashArtifacts(baseCommandsDir, artifacts) {
-    let writtenCount = 0;
-
-    for (const artifact of artifacts) {
-      if (artifact.type === 'task' || artifact.type === 'tool') {
-        const commandContent = this.generateCommandContent(artifact, artifact.type);
-        // Use dash format: bmad-module-name.md
-        const flatName = toDashPath(artifact.relativePath);
-        const commandPath = path.join(baseCommandsDir, flatName);
-        await fs.ensureDir(path.dirname(commandPath));
-        await fs.writeFile(commandPath, commandContent);
-        writtenCount++;
-      }
-    }
-
-    return writtenCount;
-  }
-}
-
-module.exports = { TaskToolCommandGenerator };
diff --git a/tools/cli/installers/lib/ide/shared/workflow-command-generator.js b/tools/cli/installers/lib/ide/shared/workflow-command-generator.js
deleted file mode 100644
index 996c8728d..000000000
--- a/tools/cli/installers/lib/ide/shared/workflow-command-generator.js
+++ /dev/null
@@ -1,179 +0,0 @@
-const path = require('node:path');
-const fs = require('fs-extra');
-const csv = require('csv-parse/sync');
-const { BMAD_FOLDER_NAME } = require('./path-utils');
-
-/**
- * Generates command files for each workflow in the manifest
- */
-class WorkflowCommandGenerator {
-  constructor(bmadFolderName = BMAD_FOLDER_NAME) {
-    this.bmadFolderName = bmadFolderName;
-  }
-
-  async collectWorkflowArtifacts(bmadDir) {
-    const workflows = await this.loadWorkflowManifest(bmadDir);
-
-    if (!workflows) {
-      return { artifacts: [], counts: { commands: 0, launchers: 0 } };
-    }
-
-    // ALL workflows now generate commands - no standalone filtering
-    const allWorkflows = workflows;
-
-    const artifacts = [];
-
-    for (const workflow of allWorkflows) {
-      // Calculate the relative workflow path (e.g., bmm/workflows/4-implementation/sprint-planning/workflow.md)
-      let workflowRelPath = workflow.path || '';
-      // Normalize path separators for cross-platform compatibility
-      workflowRelPath = workflowRelPath.replaceAll('\\', '/');
-      // Remove _bmad/ prefix if present to get relative path from project root
-      // Handle both absolute paths (/path/to/_bmad/...) and relative paths (_bmad/...)
-      if (workflowRelPath.includes('_bmad/')) {
-        const parts = workflowRelPath.split(/_bmad\//);
-        if (parts.length > 1) {
-          workflowRelPath = parts.slice(1).join('/');
-        }
-      } else if (workflowRelPath.includes('/src/')) {
-        // Normalize source paths (e.g. .../src/bmm/...) to relative module path (e.g. bmm/...)
-        const match = workflowRelPath.match(/\/src\/([^/]+)\/(.+)/);
-        if (match) {
-          workflowRelPath = `${match[1]}/${match[2]}`;
-        }
-      }
-      artifacts.push({
-        type: 'workflow-command',
-        name: workflow.name,
-        description: workflow.description || `${workflow.name} workflow`,
-        module: workflow.module,
-        canonicalId: workflow.canonicalId || '',
-        relativePath: path.join(workflow.module, 'workflows', `${workflow.name}.md`),
-        workflowPath: workflowRelPath, // Relative path to actual workflow file
-        sourcePath: workflow.path,
-      });
-    }
-
-    const groupedWorkflows = this.groupWorkflowsByModule(allWorkflows);
-    for (const [module, launcherContent] of Object.entries(this.buildModuleWorkflowLaunchers(groupedWorkflows))) {
-      artifacts.push({
-        type: 'workflow-launcher',
-        module,
-        relativePath: path.join(module, 'workflows', 'README.md'),
-        content: launcherContent,
-        sourcePath: null,
-      });
-    }
-
-    return {
-      artifacts,
-      counts: {
-        commands: allWorkflows.length,
-        launchers: Object.keys(groupedWorkflows).length,
-      },
-    };
-  }
-
-  /**
-   * Create workflow launcher files for each module
-   */
-  async createModuleWorkflowLaunchers(baseCommandsDir, workflowsByModule) {
-    for (const [module, moduleWorkflows] of Object.entries(workflowsByModule)) {
-      const content = this.buildLauncherContent(module, moduleWorkflows);
-      const moduleWorkflowsDir = path.join(baseCommandsDir, module, 'workflows');
-      await fs.ensureDir(moduleWorkflowsDir);
-      const launcherPath = path.join(moduleWorkflowsDir, 'README.md');
-      await fs.writeFile(launcherPath, content);
-    }
-  }
-
-  groupWorkflowsByModule(workflows) {
-    const workflowsByModule = {};
-
-    for (const workflow of workflows) {
-      if (!workflowsByModule[workflow.module]) {
-        workflowsByModule[workflow.module] = [];
-      }
-
-      workflowsByModule[workflow.module].push({
-        ...workflow,
-        displayPath: this.transformWorkflowPath(workflow.path),
-      });
-    }
-
-    return workflowsByModule;
-  }
-
-  buildModuleWorkflowLaunchers(groupedWorkflows) {
-    const launchers = {};
-
-    for (const [module, moduleWorkflows] of Object.entries(groupedWorkflows)) {
-      launchers[module] = this.buildLauncherContent(module, moduleWorkflows);
-    }
-
-    return launchers;
-  }
-
-  buildLauncherContent(module, moduleWorkflows) {
-    let content = `# ${module.toUpperCase()} Workflows
-
-## Available Workflows in ${module}
-
-`;
-
-    for (const workflow of moduleWorkflows) {
-      content += `**${workflow.name}**\n`;
-      content += `- Path: \`${workflow.displayPath}\`\n`;
-      content += `- ${workflow.description}\n\n`;
-    }
-
-    content += `
-## Execution
-
-When running any workflow:
-1. LOAD the workflow.md file at the path shown above
-2. READ its entire contents and follow its directions exactly
-3. Save outputs after EACH section
-
-## Modes
-- Normal: Full interaction
-- #yolo: Skip optional steps
-`;
-
-    return content;
-  }
-
-  transformWorkflowPath(workflowPath) {
-    let transformed = workflowPath;
-
-    if (workflowPath.includes('/src/bmm-skills/')) {
-      const match = workflowPath.match(/\/src\/bmm-skills\/(.+)/);
-      if (match) {
-        transformed = `{project-root}/${this.bmadFolderName}/bmm/${match[1]}`;
-      }
-    } else if (workflowPath.includes('/src/core-skills/')) {
-      const match = workflowPath.match(/\/src\/core-skills\/(.+)/);
-      if (match) {
-        transformed = `{project-root}/${this.bmadFolderName}/core/${match[1]}`;
-      }
-    }
-
-    return transformed;
-  }
-
-  async loadWorkflowManifest(bmadDir) {
-    const manifestPath = path.join(bmadDir, '_config', 'workflow-manifest.csv');
-
-    if (!(await fs.pathExists(manifestPath))) {
-      return null;
-    }
-
-    const csvContent = await fs.readFile(manifestPath, 'utf8');
-    return csv.parse(csvContent, {
-      columns: true,
-      skip_empty_lines: true,
-    });
-  }
-}
-
-module.exports = { WorkflowCommandGenerator };
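The deletions above remove, among other things, the `{project-root}` path-normalization logic in `generateCommandContent`. For reviewers auditing the removal, that rule can be sketched as a standalone function. This is a simplified illustration, not the removed API: the name `toProjectRootPath` and the hardcoded `'_bmad'` default are assumptions made for the example.

```javascript
// Hypothetical sketch of the path rule from the deleted generateCommandContent:
// re-root a task/tool path under the {project-root} placeholder.
function toProjectRootPath(itemPath, bmadFolderName = '_bmad') {
  // Normalize Windows backslashes to forward slashes
  const normalized = itemPath.replaceAll('\\', '/');

  // Absolute paths: keep only the part after /_bmad/ (or /bmad/)
  const m =
    normalized.match(/[/\\]_bmad[/\\](.+)$/) ||
    normalized.match(/[/\\]bmad[/\\](.+)$/);
  if (m) return `{project-root}/${bmadFolderName}/${m[1]}`;

  // Relative paths: strip a leading _bmad/ or bmad/ prefix before re-rooting
  if (normalized.startsWith('_bmad/')) {
    return `{project-root}/${bmadFolderName}/${normalized.slice(6)}`;
  }
  if (normalized.startsWith('bmad/')) {
    return `{project-root}/${bmadFolderName}/${normalized.slice(5)}`;
  }

  // Anything else gets prefixed, unless it is already placeholder-rooted
  if (!normalized.startsWith('{project-root}')) {
    return `{project-root}/${bmadFolderName}/${normalized}`;
  }
  return normalized;
}
```

Every input shape (Unix absolute, Windows absolute, or repo-relative) collapses to the same placeholder path, which is what kept the generated command files portable across machines and operating systems.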