diff --git a/docs/zh-cn/explanation/established-projects-faq.md b/docs/zh-cn/explanation/established-projects-faq.md index 8756faa20..dcf89df2c 100644 --- a/docs/zh-cn/explanation/established-projects-faq.md +++ b/docs/zh-cn/explanation/established-projects-faq.md @@ -8,10 +8,10 @@ sidebar: ## 问题 -- [我必须先运行 document-project 吗?](#do-i-have-to-run-document-project-first) -- [如果我忘记运行 document-project 怎么办?](#what-if-i-forget-to-run-document-project) -- [我可以在既有项目上使用快速流程吗?](#can-i-use-quick-flow-for-established-projects) -- [如果我的现有代码不遵循最佳实践怎么办?](#what-if-my-existing-code-doesnt-follow-best-practices) +- [我必须先运行 document-project 吗?](#我必须先运行-document-project-吗) +- [如果我忘记运行 document-project 怎么办?](#如果我忘记运行-document-project-怎么办) +- [我可以在既有项目上使用快速流程吗?](#我可以在既有项目上使用快速流程吗) +- [如果我的现有代码不遵循最佳实践怎么办?](#如果我的现有代码不遵循最佳实践怎么办) ### 我必须先运行 document-project 吗? diff --git a/package-lock.json b/package-lock.json index 7f889240f..bcbfedb40 100644 --- a/package-lock.json +++ b/package-lock.json @@ -7230,9 +7230,9 @@ "license": "ISC" }, "node_modules/h3": { - "version": "1.15.5", - "resolved": "https://registry.npmjs.org/h3/-/h3-1.15.5.tgz", - "integrity": "sha512-xEyq3rSl+dhGX2Lm0+eFQIAzlDN6Fs0EcC4f7BNUmzaRX/PTzeuM+Tr2lHB8FoXggsQIeXLj8EDVgs5ywxyxmg==", + "version": "1.15.8", + "resolved": "https://registry.npmjs.org/h3/-/h3-1.15.8.tgz", + "integrity": "sha512-iOH6Vl8mGd9nNfu9C0IZ+GuOAfJHcyf3VriQxWaSWIB76Fg4BnFuk4cxBxjmQSSxJS664+pgjP6e7VBnUzFfcg==", "dev": true, "license": "MIT", "dependencies": { diff --git a/src/bmm-skills/4-implementation/bmad-code-review/steps/step-01-gather-context.md b/src/bmm-skills/4-implementation/bmad-code-review/steps/step-01-gather-context.md index d00d4edb8..3678d069b 100644 --- a/src/bmm-skills/4-implementation/bmad-code-review/steps/step-01-gather-context.md +++ b/src/bmm-skills/4-implementation/bmad-code-review/steps/step-01-gather-context.md @@ -2,6 +2,7 @@ diff_output: '' # set at runtime spec_file: '' # set at runtime (path or empty) review_mode: '' # set at 
runtime: "full" or "no-spec" +story_key: '' # set at runtime when discovered from sprint status --- # Step 1: Gather Context @@ -23,8 +24,8 @@ review_mode: '' # set at runtime: "full" or "no-spec" - When multiple phrases match, prefer the most specific match (e.g., "branch diff" over bare "diff"). - **If a clear match is found:** Announce the detected mode (e.g., "Detected intent: review staged changes only") and proceed directly to constructing `{diff_output}` using the corresponding sub-case from instruction 3. Skip to instruction 4 (spec question). - **If no match from invocation text, check sprint tracking.** Look for a sprint status file (`*sprint-status*`) in `{implementation_artifacts}` or `{planning_artifacts}`. If found, scan for any story with status `review`. Handle as follows: - - **Exactly one `review` story:** Suggest it: "I found story {{story-id}} in `review` status. Would you like to review its changes? [Y] Yes / [N] No, let me choose". If confirmed, use the story context to determine the diff source (branch name derived from story slug, or uncommitted changes). If declined, fall through to instruction 2. - - **Multiple `review` stories:** Present them as numbered options alongside a manual choice option. Wait for user selection. Then use the selected story's context to determine the diff source as in the single-story case above, and proceed to instruction 3. + - **Exactly one `review` story:** Set `{story_key}` to the story's key (e.g., `1-2-user-auth`). Suggest it: "I found story {{story-id}} in `review` status. Would you like to review its changes? [Y] Yes / [N] No, let me choose". If confirmed, use the story context to determine the diff source (branch name derived from story slug, or uncommitted changes). If declined, clear `{story_key}` and fall through to instruction 2. + - **Multiple `review` stories:** Present them as numbered options alongside a manual choice option. Wait for user selection. 
If the user selects a story, set `{story_key}` to the selected story's key and use the selected story's context to determine the diff source as in the single-story case above, and proceed to instruction 3. If the user selects the manual choice, clear `{story_key}` and fall through to instruction 2. - **If no match and no sprint tracking:** Fall through to instruction 2. 2. HALT. Ask the user: **What do you want to review?** Present these options: diff --git a/src/bmm-skills/4-implementation/bmad-code-review/steps/step-02-review.md b/src/bmm-skills/4-implementation/bmad-code-review/steps/step-02-review.md index 306613014..c262a4971 100644 --- a/src/bmm-skills/4-implementation/bmad-code-review/steps/step-02-review.md +++ b/src/bmm-skills/4-implementation/bmad-code-review/steps/step-02-review.md @@ -13,27 +13,20 @@ failed_layers: '' # set at runtime: comma-separated list of layers that failed o ## INSTRUCTIONS -1. Launch parallel subagents. Each subagent gets NO conversation history from this session: +1. If `{review_mode}` = `"no-spec"`, note to the user: "Acceptance Auditor skipped — no spec file provided." - - **Blind Hunter** -- Invoke the `bmad-review-adversarial-general` skill in a subagent. Pass `content` = `{diff_output}` only. No spec, no project access. +2. Launch parallel subagents without conversation context. If subagents are not available, generate prompt files in `{implementation_artifacts}` — one per reviewer role below — and HALT. Ask the user to run each in a separate session (ideally a different LLM) and paste back the findings. When findings are pasted, resume from this point and proceed to step 3. - - **Edge Case Hunter** -- Invoke the `bmad-review-edge-case-hunter` skill in a subagent. Pass `content` = `{diff_output}`. This subagent has read access to the project. + - **Blind Hunter** — receives `{diff_output}` only. No spec, no context docs, no project access. Invoke via the `bmad-review-adversarial-general` skill. 
- - **Acceptance Auditor** (only if `{review_mode}` = `"full"`) -- A subagent that receives `{diff_output}`, the content of the file at `{spec_file}`, and any loaded context docs. Its prompt: - > You are an Acceptance Auditor. Review this diff against the spec and context docs. Check for: violations of acceptance criteria, deviations from spec intent, missing implementation of specified behavior, contradictions between spec constraints and actual code. Output findings as a markdown list. Each finding: one-line title, which AC/constraint it violates, and evidence from the diff. + - **Edge Case Hunter** — receives `{diff_output}` and read access to the project. Invoke via the `bmad-review-edge-case-hunter` skill. -2. **Subagent failure handling**: If any subagent fails, times out, or returns empty results, append the layer name to `{failed_layers}` (comma-separated) and proceed with findings from the remaining layers. + - **Acceptance Auditor** (only if `{review_mode}` = `"full"`) — receives `{diff_output}`, the content of the file at `{spec_file}`, and any loaded context docs. Its prompt: + > You are an Acceptance Auditor. Review this diff against the spec and context docs. Check for: violations of acceptance criteria, deviations from spec intent, missing implementation of specified behavior, contradictions between spec constraints and actual code. Output findings as a Markdown list. Each finding: one-line title, which AC/constraint it violates, and evidence from the diff. -3. If `{review_mode}` = `"no-spec"`, note to the user: "Acceptance Auditor skipped — no spec file provided." +3. **Subagent failure handling**: If any subagent fails, times out, or returns empty results, append the layer name to `{failed_layers}` (comma-separated) and proceed with findings from the remaining layers. -4. 
**Fallback** (if subagents are not available): Generate prompt files in `{implementation_artifacts}` -- one per active reviewer: - - `review-blind-hunter.md` (always) - - `review-edge-case-hunter.md` (always) - - `review-acceptance-auditor.md` (only if `{review_mode}` = `"full"`) - - HALT. Tell the user to run each prompt in a separate session and paste back findings. When findings are pasted, resume from this point and proceed to step 3. - -5. Collect all findings from the completed layers. +4. Collect all findings from the completed layers. ## NEXT diff --git a/src/bmm-skills/4-implementation/bmad-code-review/steps/step-03-triage.md b/src/bmm-skills/4-implementation/bmad-code-review/steps/step-03-triage.md index 3e1d21665..6bb2635db 100644 --- a/src/bmm-skills/4-implementation/bmad-code-review/steps/step-03-triage.md +++ b/src/bmm-skills/4-implementation/bmad-code-review/steps/step-03-triage.md @@ -30,19 +30,18 @@ - Set `source` to the merged sources (e.g., `blind+edge`). 3. **Classify** each finding into exactly one bucket: - - **intent_gap** -- The spec/intent is incomplete; cannot resolve from existing information. Only possible if `{review_mode}` = `"full"`. - - **bad_spec** -- The spec should have prevented this; spec is wrong or ambiguous. Only possible if `{review_mode}` = `"full"`. - - **patch** -- Code issue that is trivially fixable without human input. Just needs a code change. + - **decision_needed** -- There is an ambiguous choice that requires human input. The code cannot be correctly patched without knowing the user's intent. Only possible if `{review_mode}` = `"full"`. + - **patch** -- Code issue that is fixable without human input. The correct fix is unambiguous. - **defer** -- Pre-existing issue not caused by the current change. Real but not actionable now. - - **reject** -- Noise, false positive, or handled elsewhere. + - **dismiss** -- Noise, false positive, or handled elsewhere. 
- If `{review_mode}` = `"no-spec"` and a finding would otherwise be `intent_gap` or `bad_spec`, reclassify it as `patch` (if code-fixable) or `defer` (if not). + If `{review_mode}` = `"no-spec"` and a finding would otherwise be `decision_needed`, reclassify it as `patch` (if the fix is unambiguous) or `defer` (if not). -4. **Drop** all `reject` findings. Record the reject count for the summary. +4. **Drop** all `dismiss` findings. Record the dismissed count for the summary. -5. If `{failed_layers}` is non-empty, report which layers failed before announcing results. If zero findings remain after dropping rejects AND `{failed_layers}` is non-empty, warn the user that the review may be incomplete rather than announcing a clean review. +5. If `{failed_layers}` is non-empty, report which layers failed before announcing results. If zero findings remain after dropping dismissed findings AND `{failed_layers}` is non-empty, warn the user that the review may be incomplete rather than announcing a clean review. -6. If zero findings remain after dropping rejects and no layers failed, note clean review. +6. If zero findings remain after triage (all dismissed or none raised): state "✅ Clean review — all layers passed." (Instruction 5 already warned if any review layers failed via `{failed_layers}`.) ## NEXT diff --git a/src/bmm-skills/4-implementation/bmad-code-review/steps/step-04-present.md b/src/bmm-skills/4-implementation/bmad-code-review/steps/step-04-present.md index 73a6919e2..c495d4981 100644 --- a/src/bmm-skills/4-implementation/bmad-code-review/steps/step-04-present.md +++ b/src/bmm-skills/4-implementation/bmad-code-review/steps/step-04-present.md @@ -1,38 +1,129 @@ --- +deferred_work_file: '{implementation_artifacts}/deferred-work.md' --- -# Step 4: Present +# Step 4: Present and Act ## RULES - YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}` -- Do NOT auto-fix anything. Present findings and let the user decide next steps.
+- When `{spec_file}` is set, always write findings to the story file before offering action choices. +- `decision_needed` findings must be resolved before handling `patch` findings. ## INSTRUCTIONS -1. Group remaining findings by category. ### 1. Clean review shortcut -2. Present to the user in this order (include a section only if findings exist in that category): +If zero findings remain after triage (all dismissed or none raised): state that and proceed to section 6 (update story status and sync sprint tracking). - - **Intent Gaps**: "These findings suggest the captured intent is incomplete. Consider clarifying intent before proceeding." - - List each with title + detail. ### 2. Write findings to the story file - - **Bad Spec**: "These findings suggest the spec should be amended. Consider regenerating or amending the spec with this context:" - - List each with title + detail + suggested spec amendment. +If `{spec_file}` exists and contains a Tasks/Subtasks section, append a `### Review Findings` subsection. Write all findings in this order: - - **Patch**: "These are fixable code issues:" - - List each with title + detail + location (if available). +1. **`decision_needed`** findings (unchecked): + `- [ ] [Review][Decision] <Title> — <Detail>` - - **Defer**: "Pre-existing issues surfaced by this review (not caused by current changes):" - - List each with title + detail. +2. **`patch`** findings (unchecked): + `- [ ] [Review][Patch] <Title> [<file>:<line>]` -3. Summary line: **X** intent_gap, **Y** bad_spec, **Z** patch, **W** defer findings. **R** findings rejected as noise. +3. **`defer`** findings (checked off, marked deferred): + `- [x] [Review][Defer] <Title> [<file>:<line>] — deferred, pre-existing` -4. If clean review (zero findings across all layers after triage): state that N findings were raised but all were classified as noise, or that no findings were raised at all (as applicable).
+Also append each `defer` finding to `{deferred_work_file}` under a heading `## Deferred from: code review ({date})`. If `{spec_file}` is set, include its basename in the heading (e.g., `code review of story-3.3 (2026-03-18)`). One bullet per finding with description. ### 3. Present summary -5. Offer the user next steps (recommendations, not automated actions): - - If `patch` findings exist: "These can be addressed in a follow-up implementation pass or manually." - - If `intent_gap` or `bad_spec` findings exist: "Consider running the planning workflow to clarify intent or amend the spec before continuing." - - If only `defer` findings remain: "No action needed for this change. Deferred items are noted for future attention." +Announce what was written: -Workflow complete. + +> **Code review complete.** <D> `decision_needed`, <P> `patch`, <W> `defer`, <R> dismissed as noise. + +If `{spec_file}` is set, add: `Findings written to the review findings section in {spec_file}.` +Otherwise add: `Findings are listed above. No story file was provided, so nothing was persisted.` + +### 4. Resolve `decision_needed` findings + +If `decision_needed` findings exist, present each one with its detail and the options available. The user must decide — the correct fix is ambiguous without their input. Walk through each finding (or batch related ones) and get the user's call. Once resolved, each finding becomes a `patch`, becomes a `defer`, or is dismissed. + +If the user chooses to defer, ask: "Quick one-line reason for deferring this item? (helps future reviews)" — then append that reason to both the story file bullet and the `{deferred_work_file}` entry. + +**HALT** — I am waiting for your decision on each finding. Do not proceed until every `decision_needed` finding is resolved. + +### 5. Handle `patch` findings + +If `patch` findings exist (including any resolved from step 4), HALT.
Ask the user: + +If `{spec_file}` is set, present all three options (if >3 `patch` findings exist, also show option 0): + +> **How would you like to handle the <Z> `patch` findings?** +> 0. **Batch-apply all** — automatically fix every non-controversial patch (recommended when there are many) +> 1. **Fix them automatically** — I will apply fixes now +> 2. **Leave as action items** — they are already in the story file +> 3. **Walk through each** — let me show details before deciding + +If `{spec_file}` is **not** set, present only the automatic-fix and walk-through options, renumbered as 1 and 2 (omit "Leave as action items" — findings were not written to a file). If >3 `patch` findings exist, also show option 0: + +> **How would you like to handle the <Z> `patch` findings?** +> 0. **Batch-apply all** — automatically fix every non-controversial patch (recommended when there are many) +> 1. **Fix them automatically** — I will apply fixes now +> 2. **Walk through each** — let me show details before deciding + +**HALT** — I am waiting for your numbered choice. Reply with only the number (or "0" for batch). Do not proceed until you select an option. + +- **Batch-apply all** (only when >3 findings): Apply all non-controversial patches without per-finding confirmation. Skip any finding that requires judgment. Present a summary of changes made and any skipped findings. +- **Fix them automatically**: Apply each fix. After all patches are applied, present a summary of changes made. If `{spec_file}` is set, check off the items in the story file. +- **Leave as action items** (only when `{spec_file}` is set): Done — findings are already written to the story. +- **Walk through each**: Present each finding with full detail, diff context, and suggested fix. After walkthrough, re-offer the applicable options above.
+ +**✅ Code review actions complete** + +- Decision-needed resolved: <D> +- Patches handled: <P> +- Deferred: <W> +- Dismissed: <R> + +### 6. Update story status and sync sprint tracking + +Skip this section if `{spec_file}` is not set. + +#### Determine new status based on review outcome + +- If all `decision-needed` and `patch` findings were resolved (fixed or dismissed) AND no unresolved HIGH/MEDIUM issues remain: set `{new_status}` = `done`. Update the story file Status section to `done`. +- If `patch` findings were left as action items, or unresolved issues remain: set `{new_status}` = `in-progress`. Update the story file Status section to `in-progress`. + +Save the story file. + +#### Sync sprint-status.yaml + +If `{story_key}` is not set, skip this subsection and note that sprint status was not synced because no story key was available. + +If `{sprint_status}` file exists: + +1. Load the FULL `{sprint_status}` file. +2. Find the `development_status` entry matching `{story_key}`. +3. If found: update `development_status[{story_key}]` to `{new_status}`. Update `last_updated` to current date. Save the file, preserving ALL comments and structure including STATUS DEFINITIONS. +4. If `{story_key}` not found in sprint status: warn the user that the story file was updated but sprint-status sync failed. + +If `{sprint_status}` file does not exist, note that story status was updated in the story file only. + +#### Completion summary + +> **Review Complete!** +> +> **Story Status:** `{new_status}` +> **Issues Fixed:** <fixed_count> +> **Action Items Created:** <action_count> +> **Deferred:** <W> +> **Dismissed:** <R> + +### 7. Next steps + +Present the user with follow-up options: + +> **What would you like to do next?** +> 1. **Start the next story** — run `dev-story` to pick up the next `ready-for-dev` story +> 2. **Re-run code review** — address findings and review again +> 3. **Done** — end the workflow + +**HALT** — I am waiting for your choice. 
Do not proceed until the user selects an option. diff --git a/src/bmm-skills/4-implementation/bmad-code-review/workflow.md b/src/bmm-skills/4-implementation/bmad-code-review/workflow.md index 6653e3c8a..2cad2d870 100644 --- a/src/bmm-skills/4-implementation/bmad-code-review/workflow.md +++ b/src/bmm-skills/4-implementation/bmad-code-review/workflow.md @@ -44,6 +44,7 @@ Load and read full config from `{main_config}` and resolve: - `project_name`, `planning_artifacts`, `implementation_artifacts`, `user_name` - `communication_language`, `document_output_language`, `user_skill_level` - `date` as system-generated current datetime +- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml` - `project_context` = `**/project-context.md` (load if exists) - CLAUDE.md / memory files (load if exist) diff --git a/src/core-skills/bmad-advanced-elicitation/SKILL.md b/src/core-skills/bmad-advanced-elicitation/SKILL.md index d40bd5a8b..e7b60683e 100644 --- a/src/core-skills/bmad-advanced-elicitation/SKILL.md +++ b/src/core-skills/bmad-advanced-elicitation/SKILL.md @@ -1,6 +1,137 @@ --- name: bmad-advanced-elicitation description: 'Push the LLM to reconsider, refine, and improve its recent output. Use when user asks for deeper critique or mentions a known deeper critique method, e.g. socratic, first principles, pre-mortem, red team.' +agent_party: '{project-root}/_bmad/_config/agent-manifest.csv' --- -Follow the instructions in ./workflow.md. +# Advanced Elicitation + +**Goal:** Push the LLM to reconsider, refine, and improve its recent output. 
+ +--- + +## CRITICAL LLM INSTRUCTIONS + +- **MANDATORY:** Execute ALL steps in the flow section IN EXACT ORDER +- DO NOT skip steps or change the sequence +- HALT immediately when halt-conditions are met +- Each action within a step is a REQUIRED action to complete that step +- Sections outside flow (validation, output, critical-context) provide essential context - review and apply throughout execution +- **YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the `communication_language`** + +--- + +## INTEGRATION (When Invoked Indirectly) + +When invoked from another prompt or process: + +1. Receive or review the current section content that was just generated +2. Apply elicitation methods iteratively to enhance that specific content +3. Return the enhanced version back when user selects 'x' to proceed and return back +4. The enhanced content replaces the original section content in the output document + +--- + +## FLOW + +### Step 1: Method Registry Loading + +**Action:** Load and read `./methods.csv` and `{agent_party}` + +#### CSV Structure + +- **category:** Method grouping (core, structural, risk, etc.) +- **method_name:** Display name for the method +- **description:** Rich explanation of what the method does, when to use it, and why it's valuable +- **output_pattern:** Flexible flow guide using arrows (e.g., "analysis -> insights -> action") + +#### Context Analysis + +- Use conversation history +- Analyze: content type, complexity, stakeholder needs, risk level, and creative potential + +#### Smart Selection + +1. Analyze context: Content type, complexity, stakeholder needs, risk level, creative potential +2. Parse descriptions: Understand each method's purpose from the rich descriptions in CSV +3. Select 5 methods: Choose methods that best match the context based on their descriptions +4. 
Balance approach: Include a mix of foundational and specialized techniques as appropriate + +--- + +### Step 2: Present Options and Handle Responses + +#### Display Format + +``` +**Advanced Elicitation Options** +_If party mode is active, agents will join in._ +Choose a number (1-5), [r] to Reshuffle, [a] List All, or [x] to Proceed: + +1. [Method Name] +2. [Method Name] +3. [Method Name] +4. [Method Name] +5. [Method Name] +r. Reshuffle the list with 5 new options +a. List all methods with descriptions +x. Proceed / No Further Actions +``` + +#### Response Handling + +**Case 1-5 (User selects a numbered method):** + +- Execute the selected method using its description from the CSV +- Adapt the method's complexity and output format based on the current context +- Apply the method creatively to the current section content being enhanced +- Display the enhanced version showing what the method revealed or improved +- **CRITICAL:** Ask the user if they would like to apply the changes to the doc (y/n/other) and HALT to await response. +- **CRITICAL:** ONLY if Yes, apply the changes. IF No, discard your memory of the proposed changes. If any other reply, try your best to follow the instructions given by the user.
+- **CRITICAL:** Re-present the same 1-5,r,a,x prompt to allow additional elicitations + +**Case r (Reshuffle):** + +- Select 5 random methods from methods.csv, present new list with same prompt format +- When selecting, try to think and pick a diverse set of methods covering different categories and approaches, with 1 and 2 being potentially the most useful for the document or section being discussed + +**Case x (Proceed):** + +- Complete elicitation and proceed +- Return the fully enhanced content back to the invoking skill +- The enhanced content becomes the final version for that section +- Signal completion back to the invoking skill to continue with next section + +**Case a (List All):** + +- List all methods with their descriptions from the CSV in a compact table +- Allow user to select any method by name or number from the full list +- After selection, execute the method as described in Case 1-5 above + +**Case: Direct Feedback:** + +- Apply changes to current section content and re-present choices + +**Case: Multiple Numbers:** + +- Execute methods in sequence on the content, then re-offer choices + +--- + +### Step 3: Execution Guidelines + +- **Method execution:** Use the description from CSV to understand and apply each method +- **Output pattern:** Use the pattern as a flexible guide (e.g., "paths -> evaluation -> selection") +- **Dynamic adaptation:** Adjust complexity based on content needs (simple to sophisticated) +- **Creative application:** Interpret methods flexibly based on context while maintaining pattern consistency +- Focus on actionable insights +- **Stay relevant:** Tie elicitation to specific content being analyzed (the current section from the document being created unless user indicates otherwise) +- **Identify personas:** For single or multi-persona methods, clearly identify viewpoints, and use party members if available in memory already +- **Critical loop behavior:** Always re-offer the 1-5,r,a,x choices after each method
execution +- Continue until the user selects 'x' to proceed with the enhanced content; then confirm or ask the user what should be accepted from the session +- Each method application builds upon previous enhancements +- **Content preservation:** Track all enhancements made during elicitation +- **Iterative enhancement:** Each selected method (1-5) should: + 1. Apply to the current enhanced version of the content + 2. Show the improvements made + 3. Return to the prompt for additional elicitations or completion diff --git a/src/core-skills/bmad-advanced-elicitation/workflow.md b/src/core-skills/bmad-advanced-elicitation/workflow.md deleted file mode 100644 index ecb7f8391..000000000 --- a/src/core-skills/bmad-advanced-elicitation/workflow.md +++ /dev/null @@ -1,135 +0,0 @@ ---- -agent_party: '{project-root}/_bmad/_config/agent-manifest.csv' --- - -# Advanced Elicitation Workflow - -**Goal:** Push the LLM to reconsider, refine, and improve its recent output. - ---- - -## CRITICAL LLM INSTRUCTIONS - -- **MANDATORY:** Execute ALL steps in the flow section IN EXACT ORDER -- DO NOT skip steps or change the sequence -- HALT immediately when halt-conditions are met -- Each action within a step is a REQUIRED action to complete that step -- Sections outside flow (validation, output, critical-context) provide essential context - review and apply throughout execution -- **YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the `communication_language`** - ---- - -## INTEGRATION (When Invoked Indirectly) - -When invoked from another prompt or process: - -1. Receive or review the current section content that was just generated -2. Apply elicitation methods iteratively to enhance that specific content -3. Return the enhanced version back when user selects 'x' to proceed and return back -4.
The enhanced content replaces the original section content in the output document - ---- - -## FLOW - -### Step 1: Method Registry Loading - -**Action:** Load and read `./methods.csv` and `{agent_party}` - -#### CSV Structure - -- **category:** Method grouping (core, structural, risk, etc.) -- **method_name:** Display name for the method -- **description:** Rich explanation of what the method does, when to use it, and why it's valuable -- **output_pattern:** Flexible flow guide using arrows (e.g., "analysis -> insights -> action") - -#### Context Analysis - -- Use conversation history -- Analyze: content type, complexity, stakeholder needs, risk level, and creative potential - -#### Smart Selection - -1. Analyze context: Content type, complexity, stakeholder needs, risk level, creative potential -2. Parse descriptions: Understand each method's purpose from the rich descriptions in CSV -3. Select 5 methods: Choose methods that best match the context based on their descriptions -4. Balance approach: Include mix of foundational and specialized techniques as appropriate - ---- - -### Step 2: Present Options and Handle Responses - -#### Display Format - -``` -**Advanced Elicitation Options** -_If party mode is active, agents will join in._ -Choose a number (1-5), [r] to Reshuffle, [a] List All, or [x] to Proceed: - -1. [Method Name] -2. [Method Name] -3. [Method Name] -4. [Method Name] -5. [Method Name] -r. Reshuffle the list with 5 new options -a. List all methods with descriptions -x. 
Proceed / No Further Actions -``` - -#### Response Handling - -**Case 1-5 (User selects a numbered method):** - -- Execute the selected method using its description from the CSV -- Adapt the method's complexity and output format based on the current context -- Apply the method creatively to the current section content being enhanced -- Display the enhanced version showing what the method revealed or improved -- **CRITICAL:** Ask the user if they would like to apply the changes to the doc (y/n/other) and HALT to await response. -- **CRITICAL:** ONLY if Yes, apply the changes. IF No, discard your memory of the proposed changes. If any other reply, try best to follow the instructions given by the user. -- **CRITICAL:** Re-present the same 1-5,r,x prompt to allow additional elicitations - -**Case r (Reshuffle):** - -- Select 5 random methods from methods.csv, present new list with same prompt format -- When selecting, try to think and pick a diverse set of methods covering different categories and approaches, with 1 and 2 being potentially the most useful for the document or section being discovered - -**Case x (Proceed):** - -- Complete elicitation and proceed -- Return the fully enhanced content back to the invoking skill -- The enhanced content becomes the final version for that section -- Signal completion back to the invoking skill to continue with next section - -**Case a (List All):** - -- List all methods with their descriptions from the CSV in a compact table -- Allow user to select any method by name or number from the full list -- After selection, execute the method as described in the Case 1-5 above - -**Case: Direct Feedback:** - -- Apply changes to current section content and re-present choices - -**Case: Multiple Numbers:** - -- Execute methods in sequence on the content, then re-offer choices - ---- - -### Step 3: Execution Guidelines - -- **Method execution:** Use the description from CSV to understand and apply each method -- **Output pattern:** Use 
the pattern as a flexible guide (e.g., "paths -> evaluation -> selection") -- **Dynamic adaptation:** Adjust complexity based on content needs (simple to sophisticated) -- **Creative application:** Interpret methods flexibly based on context while maintaining pattern consistency -- Focus on actionable insights -- **Stay relevant:** Tie elicitation to specific content being analyzed (the current section from the document being created unless user indicates otherwise) -- **Identify personas:** For single or multi-persona methods, clearly identify viewpoints, and use party members if available in memory already -- **Critical loop behavior:** Always re-offer the 1-5,r,a,x choices after each method execution -- Continue until user selects 'x' to proceed with enhanced content, confirm or ask the user what should be accepted from the session -- Each method application builds upon previous enhancements -- **Content preservation:** Track all enhancements made during elicitation -- **Iterative enhancement:** Each selected method (1-5) should: - 1. Apply to the current enhanced version of the content - 2. Show the improvements made - 3. Return to the prompt for additional elicitations or completion diff --git a/src/core-skills/bmad-editorial-review-prose/SKILL.md b/src/core-skills/bmad-editorial-review-prose/SKILL.md index 3702b0378..3498f925e 100644 --- a/src/core-skills/bmad-editorial-review-prose/SKILL.md +++ b/src/core-skills/bmad-editorial-review-prose/SKILL.md @@ -3,4 +3,84 @@ name: bmad-editorial-review-prose description: 'Clinical copy-editor that reviews text for communication issues. Use when user says review for prose or improve the prose' --- -Follow the instructions in ./workflow.md. +# Editorial Review - Prose + +**Goal:** Review text for communication issues that impede comprehension and output suggested fixes in a three-column table. + +**Your Role:** You are a clinical copy-editor: precise, professional, neither warm nor cynical. 
Apply Microsoft Writing Style Guide principles as your baseline. Focus on communication issues that impede comprehension — not style preferences. NEVER rewrite for preference — only fix genuine issues. Follow ALL steps in the STEPS section IN EXACT ORDER. DO NOT skip steps or change the sequence. HALT immediately when halt-conditions are met. Each action within a step is a REQUIRED action to complete that step. + +**CONTENT IS SACROSANCT:** Never challenge ideas — only clarify how they're expressed. + +**Inputs:** +- **content** (required) — Cohesive unit of text to review (markdown, plain text, or text-heavy XML) +- **style_guide** (optional) — Project-specific style guide. When provided, overrides all generic principles in this task (except CONTENT IS SACROSANCT). The style guide is the final authority on tone, structure, and language choices. +- **reader_type** (optional, default: `humans`) — `humans` for standard editorial, `llm` for precision focus + + +## PRINCIPLES + +1. **Minimal intervention:** Apply the smallest fix that achieves clarity +2. **Preserve structure:** Fix prose within existing structure, never restructure +3. **Skip code/markup:** Detect and skip code blocks, frontmatter, structural markup +4. **When uncertain:** Flag with a query rather than suggesting a definitive change +5. **Deduplicate:** Same issue in multiple places = one entry with locations listed +6. **No conflicts:** Merge overlapping fixes into single entries +7. **Respect author voice:** Preserve intentional stylistic choices + +> **STYLE GUIDE OVERRIDE:** If a style_guide input is provided, it overrides ALL generic principles in this task (including the Microsoft Writing Style Guide baseline and reader_type-specific priorities). The ONLY exception is CONTENT IS SACROSANCT — never change what ideas say, only how they're expressed. When style guide conflicts with this task, style guide wins. 
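As an illustrative aside, the input contract described above can be sketched in code. This is a hypothetical helper, not part of the skill itself; the 3-word minimum and the error strings are assumptions drawn from the skill's halt conditions:

```python
# Hypothetical sketch of the Step 1 input contract -- illustration only.
def validate_inputs(content, reader_type="humans"):
    """Return the HALT error message for invalid inputs, or None if valid."""
    # The skill halts on empty content or fewer than 3 words.
    if not content or len(content.split()) < 3:
        return "Content too short for editorial review (minimum 3 words required)"
    # reader_type must be one of the two documented values.
    if reader_type not in ("humans", "llm"):
        return "Invalid reader_type. Must be 'humans' or 'llm'"
    return None
```

A caller would halt whenever this returns a non-None message and proceed to the style analysis otherwise.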
+ + +## STEPS + +### Step 1: Validate Input + +- Check if content is empty or contains fewer than 3 words + - If empty or fewer than 3 words: **HALT** with error: "Content too short for editorial review (minimum 3 words required)" +- Validate reader_type is `humans` or `llm` (or not provided, defaulting to `humans`) + - If reader_type is invalid: **HALT** with error: "Invalid reader_type. Must be 'humans' or 'llm'" +- Identify content type (markdown, plain text, XML with text) +- Note any code blocks, frontmatter, or structural markup to skip + +### Step 2: Analyze Style + +- Analyze the style, tone, and voice of the input text +- Note any intentional stylistic choices to preserve (informal tone, technical jargon, rhetorical patterns) +- Calibrate review approach based on reader_type: + - If `llm`: Prioritize unambiguous references, consistent terminology, explicit structure, no hedging + - If `humans`: Prioritize clarity, flow, readability, natural progression + +### Step 3: Editorial Review (CRITICAL) + +- If style_guide provided: Consult style_guide now and note its key requirements — these override default principles for this review +- Review all prose sections (skip code blocks, frontmatter, structural markup) +- Identify communication issues that impede comprehension +- For each issue, determine the minimal fix that achieves clarity +- Deduplicate: If same issue appears multiple times, create one entry listing all locations +- Merge overlapping issues into single entries (no conflicting suggestions) +- For uncertain fixes, phrase as query: "Consider: [suggestion]?" 
rather than definitive change +- Preserve author voice — do not "improve" intentional stylistic choices + +### Step 4: Output Results + +- If issues found: Output a three-column markdown table with all suggested fixes +- If no issues found: Output "No editorial issues identified" + +**Output format:** + +| Original Text | Revised Text | Changes | +|---------------|--------------|---------| +| The exact original passage | The suggested revision | Brief explanation of what changed and why | + +**Example:** + +| Original Text | Revised Text | Changes | +|---------------|--------------|---------| +| The system will processes data and it handles errors. | The system processes data and handles errors. | Fixed subject-verb agreement ("will processes" to "processes"); removed redundant "it" | +| Users can chose from options (lines 12, 45, 78) | Users can choose from options | Fixed spelling: "chose" to "choose" (appears in 3 locations) | + + +## HALT CONDITIONS + +- HALT with error if content is empty or fewer than 3 words +- HALT with error if reader_type is not `humans` or `llm` +- If no issues found after thorough review, output "No editorial issues identified" (this is valid completion, not an error) diff --git a/src/core-skills/bmad-editorial-review-prose/workflow.md b/src/core-skills/bmad-editorial-review-prose/workflow.md deleted file mode 100644 index 42db68710..000000000 --- a/src/core-skills/bmad-editorial-review-prose/workflow.md +++ /dev/null @@ -1,81 +0,0 @@ -# Editorial Review - Prose - -**Goal:** Review text for communication issues that impede comprehension and output suggested fixes in a three-column table. - -**Your Role:** You are a clinical copy-editor: precise, professional, neither warm nor cynical. Apply Microsoft Writing Style Guide principles as your baseline. Focus on communication issues that impede comprehension — not style preferences. NEVER rewrite for preference — only fix genuine issues. Follow ALL steps in the STEPS section IN EXACT ORDER. 
DO NOT skip steps or change the sequence. HALT immediately when halt-conditions are met. Each action within a step is a REQUIRED action to complete that step. - -**CONTENT IS SACROSANCT:** Never challenge ideas — only clarify how they're expressed. - -**Inputs:** -- **content** (required) — Cohesive unit of text to review (markdown, plain text, or text-heavy XML) -- **style_guide** (optional) — Project-specific style guide. When provided, overrides all generic principles in this task (except CONTENT IS SACROSANCT). The style guide is the final authority on tone, structure, and language choices. -- **reader_type** (optional, default: `humans`) — `humans` for standard editorial, `llm` for precision focus - - -## PRINCIPLES - -1. **Minimal intervention:** Apply the smallest fix that achieves clarity -2. **Preserve structure:** Fix prose within existing structure, never restructure -3. **Skip code/markup:** Detect and skip code blocks, frontmatter, structural markup -4. **When uncertain:** Flag with a query rather than suggesting a definitive change -5. **Deduplicate:** Same issue in multiple places = one entry with locations listed -6. **No conflicts:** Merge overlapping fixes into single entries -7. **Respect author voice:** Preserve intentional stylistic choices - -> **STYLE GUIDE OVERRIDE:** If a style_guide input is provided, it overrides ALL generic principles in this task (including the Microsoft Writing Style Guide baseline and reader_type-specific priorities). The ONLY exception is CONTENT IS SACROSANCT — never change what ideas say, only how they're expressed. When style guide conflicts with this task, style guide wins. 
- - -## STEPS - -### Step 1: Validate Input - -- Check if content is empty or contains fewer than 3 words - - If empty or fewer than 3 words: **HALT** with error: "Content too short for editorial review (minimum 3 words required)" -- Validate reader_type is `humans` or `llm` (or not provided, defaulting to `humans`) - - If reader_type is invalid: **HALT** with error: "Invalid reader_type. Must be 'humans' or 'llm'" -- Identify content type (markdown, plain text, XML with text) -- Note any code blocks, frontmatter, or structural markup to skip - -### Step 2: Analyze Style - -- Analyze the style, tone, and voice of the input text -- Note any intentional stylistic choices to preserve (informal tone, technical jargon, rhetorical patterns) -- Calibrate review approach based on reader_type: - - If `llm`: Prioritize unambiguous references, consistent terminology, explicit structure, no hedging - - If `humans`: Prioritize clarity, flow, readability, natural progression - -### Step 3: Editorial Review (CRITICAL) - -- If style_guide provided: Consult style_guide now and note its key requirements — these override default principles for this review -- Review all prose sections (skip code blocks, frontmatter, structural markup) -- Identify communication issues that impede comprehension -- For each issue, determine the minimal fix that achieves clarity -- Deduplicate: If same issue appears multiple times, create one entry listing all locations -- Merge overlapping issues into single entries (no conflicting suggestions) -- For uncertain fixes, phrase as query: "Consider: [suggestion]?" 
rather than definitive change -- Preserve author voice — do not "improve" intentional stylistic choices - -### Step 4: Output Results - -- If issues found: Output a three-column markdown table with all suggested fixes -- If no issues found: Output "No editorial issues identified" - -**Output format:** - -| Original Text | Revised Text | Changes | -|---------------|--------------|---------| -| The exact original passage | The suggested revision | Brief explanation of what changed and why | - -**Example:** - -| Original Text | Revised Text | Changes | -|---------------|--------------|---------| -| The system will processes data and it handles errors. | The system processes data and handles errors. | Fixed subject-verb agreement ("will processes" to "processes"); removed redundant "it" | -| Users can chose from options (lines 12, 45, 78) | Users can choose from options | Fixed spelling: "chose" to "choose" (appears in 3 locations) | - - -## HALT CONDITIONS - -- HALT with error if content is empty or fewer than 3 words -- HALT with error if reader_type is not `humans` or `llm` -- If no issues found after thorough review, output "No editorial issues identified" (this is valid completion, not an error) diff --git a/src/core-skills/bmad-editorial-review-structure/SKILL.md b/src/core-skills/bmad-editorial-review-structure/SKILL.md index 5be13686b..c93183148 100644 --- a/src/core-skills/bmad-editorial-review-structure/SKILL.md +++ b/src/core-skills/bmad-editorial-review-structure/SKILL.md @@ -3,4 +3,177 @@ name: bmad-editorial-review-structure description: 'Structural editor that proposes cuts, reorganization, and simplification while preserving comprehension. Use when user requests structural review or editorial review of structure' --- -Follow the instructions in ./workflow.md. +# Editorial Review - Structure + +**Goal:** Review document structure and propose substantive changes to improve clarity and flow -- run this BEFORE copy editing. 
+ +**Your Role:** You are a structural editor focused on HIGH-VALUE DENSITY. Brevity IS clarity: concise writing respects limited attention spans and enables effective scanning. Every section must justify its existence -- cut anything that delays understanding. True redundancy is failure. Follow ALL steps in the STEPS section IN EXACT ORDER. DO NOT skip steps or change the sequence. HALT immediately when halt-conditions are met. Each action within a step is a REQUIRED action to complete that step. + +> **STYLE GUIDE OVERRIDE:** If a style_guide input is provided, it overrides ALL generic principles in this task (including human-reader-principles, llm-reader-principles, reader_type-specific priorities, structure-models selection, and the Microsoft Writing Style Guide baseline). The ONLY exception is CONTENT IS SACROSANCT -- never change what ideas say, only how they're expressed. When style guide conflicts with this task, style guide wins. + +**Inputs:** +- **content** (required) -- Document to review (markdown, plain text, or structured content) +- **style_guide** (optional) -- Project-specific style guide. When provided, overrides all generic principles in this task (except CONTENT IS SACROSANCT). The style guide is the final authority on tone, structure, and language choices. +- **purpose** (optional) -- Document's intended purpose (e.g., 'quickstart tutorial', 'API reference', 'conceptual overview') +- **target_audience** (optional) -- Who reads this? 
(e.g., 'new users', 'experienced developers', 'decision makers') +- **reader_type** (optional, default: "humans") -- 'humans' (default) preserves comprehension aids; 'llm' optimizes for precision and density +- **length_target** (optional) -- Target reduction (e.g., '30% shorter', 'half the length', 'no limit') + +## Principles + +- Comprehension through calibration: Optimize for the minimum words needed to maintain understanding +- Front-load value: Critical information comes first; nice-to-know comes last (or goes) +- One source of truth: If information appears identically twice, consolidate +- Scope discipline: Content that belongs in a different document should be cut or linked +- Propose, don't execute: Output recommendations -- user decides what to accept +- **CONTENT IS SACROSANCT: Never challenge ideas -- only optimize how they're organized.** + +## Human-Reader Principles + +These elements serve human comprehension and engagement -- preserve unless clearly wasteful: + +- Visual aids: Diagrams, images, and flowcharts anchor understanding +- Expectation-setting: "What You'll Learn" helps readers confirm they're in the right place +- Reader's Journey: Organize content chronologically (linear progression), not logically (database) +- Mental models: Overview before details prevents cognitive overload +- Warmth: Encouraging tone reduces anxiety for new users +- Whitespace: Admonitions and callouts provide visual breathing room +- Summaries: Recaps help retention; they're reinforcement, not redundancy +- Examples: Concrete illustrations make abstract concepts accessible +- Engagement: "Flow" techniques (transitions, variety) are functional, not "fluff" -- they maintain attention + +## LLM-Reader Principles + +When reader_type='llm', optimize for PRECISION and UNAMBIGUITY: + +- Dependency-first: Define concepts before usage to minimize hallucination risk +- Cut emotional language, encouragement, and orientation sections +- IF concept is well-known from training
(e.g., "conventional commits", "REST APIs"): Reference the standard -- don't re-teach it. ELSE: Be explicit -- don't assume the LLM will infer correctly. +- Use consistent terminology -- same word for same concept throughout +- Eliminate hedging ("might", "could", "generally") -- use direct statements +- Prefer structured formats (tables, lists, YAML) over prose +- Reference known standards ("conventional commits", "Google style guide") to leverage training +- STILL PROVIDE EXAMPLES even for known standards -- grounds the LLM in your specific expectation +- Unambiguous references -- no unclear antecedents ("it", "this", "the above") +- Note: LLM documents may be LONGER than human docs in some areas (more explicit) while shorter in others (no warmth) + +## Structure Models + +### Tutorial/Guide (Linear) +**Applicability:** Tutorials, detailed guides, how-to articles, walkthroughs +- Prerequisites: Setup/Context MUST precede action +- Sequence: Steps must follow strict chronological or logical dependency order +- Goal-oriented: clear 'Definition of Done' at the end + +### Reference/Database +**Applicability:** API docs, glossaries, configuration references, cheat sheets +- Random Access: No narrative flow required; user jumps to specific item +- MECE: Topics are Mutually Exclusive and Collectively Exhaustive +- Consistent Schema: Every item follows identical structure (e.g., Signature to Params to Returns) + +### Explanation (Conceptual) +**Applicability:** Deep dives, architecture overviews, conceptual guides, whitepapers, project context +- Abstract to Concrete: Definition to Context to Implementation/Example +- Scaffolding: Complex ideas built on established foundations + +### Prompt/Task Definition (Functional) +**Applicability:** BMAD tasks, prompts, system instructions, XML definitions +- Meta-first: Inputs, usage constraints, and context defined before instructions +- Separation of Concerns: Instructions (logic) separate from Data (content) +- Step-by-step: 
Execution flow must be explicit and ordered + +### Strategic/Context (Pyramid) +**Applicability:** PRDs, research reports, proposals, decision records +- Top-down: Conclusion/Status/Recommendation starts the document +- Grouping: Supporting context grouped logically below the headline +- Ordering: Most critical information first +- MECE: Arguments/Groups are Mutually Exclusive and Collectively Exhaustive +- Evidence: Data supports arguments, never leads + +## STEPS + +### Step 1: Validate Input + +- Check if content is empty or contains fewer than 3 words +- If empty or fewer than 3 words, HALT with error: "Content too short for substantive review (minimum 3 words required)" +- Validate reader_type is "humans" or "llm" (or not provided, defaulting to "humans") +- If reader_type is invalid, HALT with error: "Invalid reader_type. Must be 'humans' or 'llm'" +- Identify document type and structure (headings, sections, lists, etc.) +- Note the current word count and section count + +### Step 2: Understand Purpose + +- If purpose was provided, use it; otherwise infer from content +- If target_audience was provided, use it; otherwise infer from content +- Identify the core question the document answers +- State in one sentence: "This document exists to help [audience] accomplish [goal]" +- Select the most appropriate structural model from Structure Models based on purpose/audience +- Note reader_type and which principles apply (Human-Reader Principles or LLM-Reader Principles) + +### Step 3: Structural Analysis (CRITICAL) + +- If style_guide provided, consult style_guide now and note its key requirements -- these override default principles for this analysis +- Map the document structure: list each major section with its word count +- Evaluate structure against the selected model's primary rules (e.g., 'Does recommendation come first?' for Pyramid) +- For each section, answer: Does this directly serve the stated purpose? 
+- If reader_type='humans', for each comprehension aid (visual, summary, example, callout), answer: Does this help readers understand or stay engaged? +- Identify sections that could be: cut entirely, merged with another, moved to a different location, or split +- Identify true redundancies: identical information repeated without purpose (not summaries or reinforcement) +- Identify scope violations: content that belongs in a different document +- Identify burying: critical information hidden deep in the document + +### Step 4: Flow Analysis + +- Assess the reader's journey: Does the sequence match how readers will use this? +- Identify premature detail: explanation given before the reader needs it +- Identify missing scaffolding: complex ideas without adequate setup +- Identify anti-patterns: FAQs that should be inline, appendices that should be cut, overviews that repeat the body verbatim +- If reader_type='humans', assess pacing: Is there enough whitespace and visual variety to maintain attention? + +### Step 5: Generate Recommendations + +- Compile all findings into prioritized recommendations +- Categorize each recommendation: CUT (remove entirely), MERGE (combine sections), MOVE (reorder), CONDENSE (shorten significantly), QUESTION (needs author decision), PRESERVE (explicitly keep -- for elements that might seem cuttable but serve comprehension) +- For each recommendation, state the rationale in one sentence +- Estimate impact: how many words would this save (or cost, for PRESERVE)? 
+- If length_target was provided, assess whether recommendations meet it +- If reader_type='humans' and recommendations would cut comprehension aids, flag with warning: "This cut may impact reader comprehension/engagement" + +### Step 6: Output Results + +- Output document summary (purpose, audience, reader_type, current length) +- Output the recommendation list in priority order +- Output estimated total reduction if all recommendations accepted +- If no recommendations, output: "No substantive changes recommended -- document structure is sound" + +Use the following output format: + +```markdown +## Document Summary +- **Purpose:** [inferred or provided purpose] +- **Audience:** [inferred or provided audience] +- **Reader type:** [selected reader type] +- **Structure model:** [selected structure model] +- **Current length:** [X] words across [Y] sections + +## Recommendations + +### 1. [CUT/MERGE/MOVE/CONDENSE/QUESTION/PRESERVE] - [Section or element name] +**Rationale:** [One sentence explanation] +**Impact:** ~[X] words +**Comprehension note:** [If applicable, note impact on reader understanding] + +### 2. ... 
+ +## Summary +- **Total recommendations:** [N] +- **Estimated reduction:** [X] words ([Y]% of original) +- **Meets length target:** [Yes/No/No target specified] +- **Comprehension trade-offs:** [Note any cuts that sacrifice reader engagement for brevity] +``` + +## HALT CONDITIONS + +- HALT with error if content is empty or fewer than 3 words +- HALT with error if reader_type is not "humans" or "llm" +- If no structural issues found, output "No substantive changes recommended" (this is valid completion, not an error) diff --git a/src/core-skills/bmad-editorial-review-structure/workflow.md b/src/core-skills/bmad-editorial-review-structure/workflow.md deleted file mode 100644 index bc6c35f73..000000000 --- a/src/core-skills/bmad-editorial-review-structure/workflow.md +++ /dev/null @@ -1,174 +0,0 @@ -# Editorial Review - Structure - -**Goal:** Review document structure and propose substantive changes to improve clarity and flow -- run this BEFORE copy editing. - -**Your Role:** You are a structural editor focused on HIGH-VALUE DENSITY. Brevity IS clarity: concise writing respects limited attention spans and enables effective scanning. Every section must justify its existence -- cut anything that delays understanding. True redundancy is failure. Follow ALL steps in the STEPS section IN EXACT ORDER. DO NOT skip steps or change the sequence. HALT immediately when halt-conditions are met. Each action within a step is a REQUIRED action to complete that step. - -> **STYLE GUIDE OVERRIDE:** If a style_guide input is provided, it overrides ALL generic principles in this task (including human-reader-principles, llm-reader-principles, reader_type-specific priorities, structure-models selection, and the Microsoft Writing Style Guide baseline). The ONLY exception is CONTENT IS SACROSANCT -- never change what ideas say, only how they're expressed. When style guide conflicts with this task, style guide wins. 
- -**Inputs:** -- **content** (required) -- Document to review (markdown, plain text, or structured content) -- **style_guide** (optional) -- Project-specific style guide. When provided, overrides all generic principles in this task (except CONTENT IS SACROSANCT). The style guide is the final authority on tone, structure, and language choices. -- **purpose** (optional) -- Document's intended purpose (e.g., 'quickstart tutorial', 'API reference', 'conceptual overview') -- **target_audience** (optional) -- Who reads this? (e.g., 'new users', 'experienced developers', 'decision makers') -- **reader_type** (optional, default: "humans") -- 'humans' (default) preserves comprehension aids; 'llm' optimizes for precision and density -- **length_target** (optional) -- Target reduction (e.g., '30% shorter', 'half the length', 'no limit') - -## Principles - -- Comprehension through calibration: Optimize for the minimum words needed to maintain understanding -- Front-load value: Critical information comes first; nice-to-know comes last (or goes) -- One source of truth: If information appears identically twice, consolidate -- Scope discipline: Content that belongs in a different document should be cut or linked -- Propose, don't execute: Output recommendations -- user decides what to accept -- **CONTENT IS SACROSANCT: Never challenge ideas -- only optimize how they're organized.** - -## Human-Reader Principles - -These elements serve human comprehension and engagement -- preserve unless clearly wasteful: - -- Visual aids: Diagrams, images, and flowcharts anchor understanding -- Expectation-setting: "What You'll Learn" helps readers confirm they're in the right place -- Reader's Journey: Organize content biologically (linear progression), not logically (database) -- Mental models: Overview before details prevents cognitive overload -- Warmth: Encouraging tone reduces anxiety for new users -- Whitespace: Admonitions and callouts provide visual breathing room -- Summaries: Recaps 
help retention; they're reinforcement, not redundancy -- Examples: Concrete illustrations make abstract concepts accessible -- Engagement: "Flow" techniques (transitions, variety) are functional, not "fluff" -- they maintain attention - -## LLM-Reader Principles - -When reader_type='llm', optimize for PRECISION and UNAMBIGUITY: - -- Dependency-first: Define concepts before usage to minimize hallucination risk -- Cut emotional language, encouragement, and orientation sections -- IF concept is well-known from training (e.g., "conventional commits", "REST APIs"): Reference the standard -- don't re-teach it. ELSE: Be explicit -- don't assume the LLM will infer correctly. -- Use consistent terminology -- same word for same concept throughout -- Eliminate hedging ("might", "could", "generally") -- use direct statements -- Prefer structured formats (tables, lists, YAML) over prose -- Reference known standards ("conventional commits", "Google style guide") to leverage training -- STILL PROVIDE EXAMPLES even for known standards -- grounds the LLM in your specific expectation -- Unambiguous references -- no unclear antecedents ("it", "this", "the above") -- Note: LLM documents may be LONGER than human docs in some areas (more explicit) while shorter in others (no warmth) - -## Structure Models - -### Tutorial/Guide (Linear) -**Applicability:** Tutorials, detailed guides, how-to articles, walkthroughs -- Prerequisites: Setup/Context MUST precede action -- Sequence: Steps must follow strict chronological or logical dependency order -- Goal-oriented: clear 'Definition of Done' at the end - -### Reference/Database -**Applicability:** API docs, glossaries, configuration references, cheat sheets -- Random Access: No narrative flow required; user jumps to specific item -- MECE: Topics are Mutually Exclusive and Collectively Exhaustive -- Consistent Schema: Every item follows identical structure (e.g., Signature to Params to Returns) - -### Explanation (Conceptual) 
-**Applicability:** Deep dives, architecture overviews, conceptual guides, whitepapers, project context -- Abstract to Concrete: Definition to Context to Implementation/Example -- Scaffolding: Complex ideas built on established foundations - -### Prompt/Task Definition (Functional) -**Applicability:** BMAD tasks, prompts, system instructions, XML definitions -- Meta-first: Inputs, usage constraints, and context defined before instructions -- Separation of Concerns: Instructions (logic) separate from Data (content) -- Step-by-step: Execution flow must be explicit and ordered - -### Strategic/Context (Pyramid) -**Applicability:** PRDs, research reports, proposals, decision records -- Top-down: Conclusion/Status/Recommendation starts the document -- Grouping: Supporting context grouped logically below the headline -- Ordering: Most critical information first -- MECE: Arguments/Groups are Mutually Exclusive and Collectively Exhaustive -- Evidence: Data supports arguments, never leads - -## STEPS - -### Step 1: Validate Input - -- Check if content is empty or contains fewer than 3 words -- If empty or fewer than 3 words, HALT with error: "Content too short for substantive review (minimum 3 words required)" -- Validate reader_type is "humans" or "llm" (or not provided, defaulting to "humans") -- If reader_type is invalid, HALT with error: "Invalid reader_type. Must be 'humans' or 'llm'" -- Identify document type and structure (headings, sections, lists, etc.) 
-- Note the current word count and section count - -### Step 2: Understand Purpose - -- If purpose was provided, use it; otherwise infer from content -- If target_audience was provided, use it; otherwise infer from content -- Identify the core question the document answers -- State in one sentence: "This document exists to help [audience] accomplish [goal]" -- Select the most appropriate structural model from Structure Models based on purpose/audience -- Note reader_type and which principles apply (Human-Reader Principles or LLM-Reader Principles) - -### Step 3: Structural Analysis (CRITICAL) - -- If style_guide provided, consult style_guide now and note its key requirements -- these override default principles for this analysis -- Map the document structure: list each major section with its word count -- Evaluate structure against the selected model's primary rules (e.g., 'Does recommendation come first?' for Pyramid) -- For each section, answer: Does this directly serve the stated purpose? -- If reader_type='humans', for each comprehension aid (visual, summary, example, callout), answer: Does this help readers understand or stay engaged? -- Identify sections that could be: cut entirely, merged with another, moved to a different location, or split -- Identify true redundancies: identical information repeated without purpose (not summaries or reinforcement) -- Identify scope violations: content that belongs in a different document -- Identify burying: critical information hidden deep in the document - -### Step 4: Flow Analysis - -- Assess the reader's journey: Does the sequence match how readers will use this? 
-- Identify premature detail: explanation given before the reader needs it -- Identify missing scaffolding: complex ideas without adequate setup -- Identify anti-patterns: FAQs that should be inline, appendices that should be cut, overviews that repeat the body verbatim -- If reader_type='humans', assess pacing: Is there enough whitespace and visual variety to maintain attention? - -### Step 5: Generate Recommendations - -- Compile all findings into prioritized recommendations -- Categorize each recommendation: CUT (remove entirely), MERGE (combine sections), MOVE (reorder), CONDENSE (shorten significantly), QUESTION (needs author decision), PRESERVE (explicitly keep -- for elements that might seem cuttable but serve comprehension) -- For each recommendation, state the rationale in one sentence -- Estimate impact: how many words would this save (or cost, for PRESERVE)? -- If length_target was provided, assess whether recommendations meet it -- If reader_type='humans' and recommendations would cut comprehension aids, flag with warning: "This cut may impact reader comprehension/engagement" - -### Step 6: Output Results - -- Output document summary (purpose, audience, reader_type, current length) -- Output the recommendation list in priority order -- Output estimated total reduction if all recommendations accepted -- If no recommendations, output: "No substantive changes recommended -- document structure is sound" - -Use the following output format: - -```markdown -## Document Summary -- **Purpose:** [inferred or provided purpose] -- **Audience:** [inferred or provided audience] -- **Reader type:** [selected reader type] -- **Structure model:** [selected structure model] -- **Current length:** [X] words across [Y] sections - -## Recommendations - -### 1. 
[CUT/MERGE/MOVE/CONDENSE/QUESTION/PRESERVE] - [Section or element name] -**Rationale:** [One sentence explanation] -**Impact:** ~[X] words -**Comprehension note:** [If applicable, note impact on reader understanding] - -### 2. ... - -## Summary -- **Total recommendations:** [N] -- **Estimated reduction:** [X] words ([Y]% of original) -- **Meets length target:** [Yes/No/No target specified] -- **Comprehension trade-offs:** [Note any cuts that sacrifice reader engagement for brevity] -``` - -## HALT CONDITIONS - -- HALT with error if content is empty or fewer than 3 words -- HALT with error if reader_type is not "humans" or "llm" -- If no structural issues found, output "No substantive changes recommended" (this is valid completion, not an error) diff --git a/src/core-skills/bmad-help/SKILL.md b/src/core-skills/bmad-help/SKILL.md index ace902c2d..fee483e51 100644 --- a/src/core-skills/bmad-help/SKILL.md +++ b/src/core-skills/bmad-help/SKILL.md @@ -3,4 +3,90 @@ name: bmad-help description: 'Analyzes current state and user query to answer BMad questions or recommend the next workflow or agent. Use when user says what should I do next, what do I do now, or asks a question about BMad' --- -Follow the instructions in ./workflow.md. +# Task: BMAD Help + +## ROUTING RULES + +- **Empty `phase` = anytime** — Universal tools work regardless of workflow state +- **Numbered phases indicate sequence** — Phases like `1-discover` → `2-define` → `3-build` → `4-ship` flow in order (naming varies by module) +- **Phase with no Required Steps** — If an entire phase has no `required=true` items, the entire phase is optional. If it is sequentially before another phase, it can be recommended, but always be clear with the user about what the true next required item is.
+- **Stay in module** — Guide through the active module's workflow based on phase+sequence ordering +- **Descriptions contain routing** — Read for alternate paths (e.g., "back to previous if fixes needed") +- **`required=true` blocks progress** — Required workflows must complete before proceeding to later phases +- **Artifacts reveal completion** — Search resolved output paths for `outputs` patterns, fuzzy-match found files to workflow rows + +## DISPLAY RULES + +### Command-Based Workflows +When `command` field has a value: +- Show the command as a skill name in backticks (e.g., `bmad-bmm-create-prd`) + +### Skill-Referenced Workflows +When `workflow-file` starts with `skill:`: +- The value is a skill reference (e.g., `skill:bmad-quick-dev`), NOT a file path +- Do NOT attempt to resolve or load it as a file path +- Display using the `command` column value as a skill name in backticks (same as command-based workflows) + +### Agent-Based Workflows +When `command` field is empty: +- User loads agent first by invoking the agent skill (e.g., `bmad-pm`) +- Then invokes by referencing the `code` field or describing the `name` field +- Do NOT show a slash command — show the code value and agent load instruction instead + +Example presentation for empty command: +``` +Explain Concept (EC) +Load: tech-writer agent skill, then ask to "EC about [topic]" +Agent: Tech Writer +Description: Create clear technical explanations with examples... +``` + +## MODULE DETECTION + +- **Empty `module` column** → universal tools (work across all modules) +- **Named `module`** → module-specific workflows + +Detect the active module from conversation context, recent workflows, or user query keywords. If ambiguous, ask the user. 
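+The module split above can be sketched in Python. This is a minimal illustration, not part of the skill: the `module` and `name` columns come from the rules above, while the sample rows and helper name are hypothetical.
+
+```python
+import csv
+import io
+
+# Hypothetical sample of bmad-help.csv: an empty `module` column marks a
+# universal tool; a named module marks a module-specific workflow.
+SAMPLE = """name,module,phase
+bmad-help,,
+create-prd,bmm,2-define
+bmad-index-docs,,
+"""
+
+def partition_catalog(csv_text):
+    """Split catalog rows into universal tools and per-module workflows."""
+    universal, by_module = [], {}
+    for row in csv.DictReader(io.StringIO(csv_text)):
+        if row["module"].strip():
+            by_module.setdefault(row["module"], []).append(row["name"])
+        else:
+            universal.append(row["name"])
+    return universal, by_module
+
+universal, by_module = partition_catalog(SAMPLE)
+print(universal)   # universal tools, shown for every module
+print(by_module)   # module-specific workflows, keyed by module
+```
+
+A real catalog has more columns (`command`, `required`, `outputs`, ...); the same split applies regardless.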
+ +## INPUT ANALYSIS + +Determine what was just completed: +- Explicit completion stated by user +- Workflow completed in current conversation +- Artifacts found matching `outputs` patterns +- If `index.md` exists, read it for additional context +- If still unclear, ask: "What workflow did you most recently complete?" + +## EXECUTION + +1. **Load catalog** — Load `{project-root}/_bmad/_config/bmad-help.csv` + +2. **Resolve output locations and config** — Scan each folder under `{project-root}/_bmad/` (except `_config`) for `config.yaml`. For each workflow row, resolve its `output-location` variables against that module's config so artifact paths can be searched. Also extract `communication_language` and `project_knowledge` from each scanned module's config. + +3. **Ground in project knowledge** — If `project_knowledge` resolves to an existing path, read available documentation files (architecture docs, project overview, tech stack references) for grounding context. Use discovered project facts when composing any project-specific output. Never fabricate project-specific details — if documentation is unavailable, state so. + +4. **Detect active module** — Use MODULE DETECTION above + +5. **Analyze input** — Task may provide a workflow name/code, conversational phrase, or nothing. Infer what was just completed using INPUT ANALYSIS above. + +6. **Present recommendations** — Show next steps based on: + - Completed workflows detected + - Phase/sequence ordering (ROUTING RULES) + - Artifact presence + + **Optional items first** — List optional workflows until a required step is reached + **Required items next** — List the next required workflow + + For each item, apply DISPLAY RULES above and include: + - Workflow **name** + - **Command** OR **Code + Agent load instruction** (per DISPLAY RULES) + - **Agent** title and display name from the CSV (e.g., "🎨 Alex (Designer)") + - Brief **description** + +7. 
**Additional guidance to convey**: + - Present all output in `{communication_language}` + - Run each workflow in a **fresh context window** + - For **validation workflows**: recommend using a different high-quality LLM if available + - For conversational requests: match the user's tone while presenting clearly + +8. Return to the calling process after presenting recommendations. diff --git a/src/core-skills/bmad-help/workflow.md b/src/core-skills/bmad-help/workflow.md deleted file mode 100644 index 8dced5a7e..000000000 --- a/src/core-skills/bmad-help/workflow.md +++ /dev/null @@ -1,88 +0,0 @@ - -# Task: BMAD Help - -## ROUTING RULES - -- **Empty `phase` = anytime** — Universal tools work regardless of workflow state -- **Numbered phases indicate sequence** — Phases like `1-discover` → `2-define` → `3-build` → `4-ship` flow in order (naming varies by module) -- **Phase with no Required Steps** - If an entire phase has no required, true items, the entire phase is optional. If it is sequentially before another phase, it can be recommended, but always be clear with the use what the true next required item is. 
-- **Stay in module** — Guide through the active module's workflow based on phase+sequence ordering -- **Descriptions contain routing** — Read for alternate paths (e.g., "back to previous if fixes needed") -- **`required=true` blocks progress** — Required workflows must complete before proceeding to later phases -- **Artifacts reveal completion** — Search resolved output paths for `outputs` patterns, fuzzy-match found files to workflow rows - -## DISPLAY RULES - -### Command-Based Workflows -When `command` field has a value: -- Show the command as a skill name in backticks (e.g., `bmad-bmm-create-prd`) - -### Skill-Referenced Workflows -When `workflow-file` starts with `skill:`: -- The value is a skill reference (e.g., `skill:bmad-quick-dev`), NOT a file path -- Do NOT attempt to resolve or load it as a file path -- Display using the `command` column value as a skill name in backticks (same as command-based workflows) - -### Agent-Based Workflows -When `command` field is empty: -- User loads agent first by invoking the agent skill (e.g., `bmad-pm`) -- Then invokes by referencing the `code` field or describing the `name` field -- Do NOT show a slash command — show the code value and agent load instruction instead - -Example presentation for empty command: -``` -Explain Concept (EC) -Load: tech-writer agent skill, then ask to "EC about [topic]" -Agent: Tech Writer -Description: Create clear technical explanations with examples... -``` - -## MODULE DETECTION - -- **Empty `module` column** → universal tools (work across all modules) -- **Named `module`** → module-specific workflows - -Detect the active module from conversation context, recent workflows, or user query keywords. If ambiguous, ask the user. 
- -## INPUT ANALYSIS - -Determine what was just completed: -- Explicit completion stated by user -- Workflow completed in current conversation -- Artifacts found matching `outputs` patterns -- If `index.md` exists, read it for additional context -- If still unclear, ask: "What workflow did you most recently complete?" - -## EXECUTION - -1. **Load catalog** — Load `{project-root}/_bmad/_config/bmad-help.csv` - -2. **Resolve output locations and config** — Scan each folder under `{project-root}/_bmad/` (except `_config`) for `config.yaml`. For each workflow row, resolve its `output-location` variables against that module's config so artifact paths can be searched. Also extract `communication_language` and `project_knowledge` from each scanned module's config. - -3. **Ground in project knowledge** — If `project_knowledge` resolves to an existing path, read available documentation files (architecture docs, project overview, tech stack references) for grounding context. Use discovered project facts when composing any project-specific output. Never fabricate project-specific details — if documentation is unavailable, state so. - -4. **Detect active module** — Use MODULE DETECTION above - -5. **Analyze input** — Task may provide a workflow name/code, conversational phrase, or nothing. Infer what was just completed using INPUT ANALYSIS above. - -6. **Present recommendations** — Show next steps based on: - - Completed workflows detected - - Phase/sequence ordering (ROUTING RULES) - - Artifact presence - - **Optional items first** — List optional workflows until a required step is reached - **Required items next** — List the next required workflow - - For each item, apply DISPLAY RULES above and include: - - Workflow **name** - - **Command** OR **Code + Agent load instruction** (per DISPLAY RULES) - - **Agent** title and display name from the CSV (e.g., "🎨 Alex (Designer)") - - Brief **description** - -7. 
**Additional guidance to convey**: - - Present all output in `{communication_language}` - - Run each workflow in a **fresh context window** - - For **validation workflows**: recommend using a different high-quality LLM if available - - For conversational requests: match the user's tone while presenting clearly - -8. Return to the calling process after presenting recommendations. diff --git a/src/core-skills/bmad-index-docs/SKILL.md b/src/core-skills/bmad-index-docs/SKILL.md index 35fffdd45..c92935b71 100644 --- a/src/core-skills/bmad-index-docs/SKILL.md +++ b/src/core-skills/bmad-index-docs/SKILL.md @@ -3,4 +3,64 @@ name: bmad-index-docs description: 'Generates or updates an index.md to reference all docs in the folder. Use if user requests to create or update an index of all files in a specific folder' --- -Follow the instructions in ./workflow.md. +# Index Docs + +**Goal:** Generate or update an index.md to reference all docs in a target folder. + + +## EXECUTION + +### Step 1: Scan Directory + +- List all files and subdirectories in the target location + +### Step 2: Group Content + +- Organize files by type, purpose, or subdirectory + +### Step 3: Generate Descriptions + +- Read each file to understand its actual purpose and create brief (3-10 word) descriptions based on the content, not just the filename + +### Step 4: Create/Update Index + +- Write or update index.md with organized file listings + + +## OUTPUT FORMAT + +```markdown +# Directory Index + +## Files + +- **[filename.ext](./filename.ext)** - Brief description +- **[another-file.ext](./another-file.ext)** - Brief description + +## Subdirectories + +### subfolder/ + +- **[file1.ext](./subfolder/file1.ext)** - Brief description +- **[file2.ext](./subfolder/file2.ext)** - Brief description + +### another-folder/ + +- **[file3.ext](./another-folder/file3.ext)** - Brief description +``` + + +## HALT CONDITIONS + +- HALT if target directory does not exist or is inaccessible +- HALT if user does not have 
write permissions to create index.md + + +## VALIDATION + +- Use relative paths starting with ./ +- Group similar files together +- Read file contents to generate accurate descriptions - don't guess from filenames +- Keep descriptions concise but informative (3-10 words) +- Sort alphabetically within groups +- Skip hidden files (starting with .) unless specified diff --git a/src/core-skills/bmad-index-docs/workflow.md b/src/core-skills/bmad-index-docs/workflow.md deleted file mode 100644 index b500cf984..000000000 --- a/src/core-skills/bmad-index-docs/workflow.md +++ /dev/null @@ -1,61 +0,0 @@ -# Index Docs - -**Goal:** Generate or update an index.md to reference all docs in a target folder. - - -## EXECUTION - -### Step 1: Scan Directory - -- List all files and subdirectories in the target location - -### Step 2: Group Content - -- Organize files by type, purpose, or subdirectory - -### Step 3: Generate Descriptions - -- Read each file to understand its actual purpose and create brief (3-10 word) descriptions based on the content, not just the filename - -### Step 4: Create/Update Index - -- Write or update index.md with organized file listings - - -## OUTPUT FORMAT - -```markdown -# Directory Index - -## Files - -- **[filename.ext](./filename.ext)** - Brief description -- **[another-file.ext](./another-file.ext)** - Brief description - -## Subdirectories - -### subfolder/ - -- **[file1.ext](./subfolder/file1.ext)** - Brief description -- **[file2.ext](./subfolder/file2.ext)** - Brief description - -### another-folder/ - -- **[file3.ext](./another-folder/file3.ext)** - Brief description -``` - - -## HALT CONDITIONS - -- HALT if target directory does not exist or is inaccessible -- HALT if user does not have write permissions to create index.md - - -## VALIDATION - -- Use relative paths starting with ./ -- Group similar files together -- Read file contents to generate accurate descriptions - don't guess from filenames -- Keep descriptions concise but informative 
(3-10 words) -- Sort alphabetically within groups -- Skip hidden files (starting with .) unless specified diff --git a/src/core-skills/bmad-review-adversarial-general/SKILL.md b/src/core-skills/bmad-review-adversarial-general/SKILL.md index 4900bc9e1..ae75b7caa 100644 --- a/src/core-skills/bmad-review-adversarial-general/SKILL.md +++ b/src/core-skills/bmad-review-adversarial-general/SKILL.md @@ -3,4 +3,35 @@ name: bmad-review-adversarial-general description: 'Perform a Cynical Review and produce a findings report. Use when the user requests a critical review of something' --- -Follow the instructions in ./workflow.md. +# Adversarial Review (General) + +**Goal:** Cynically review content and produce findings. + +**Your Role:** You are a cynical, jaded reviewer with zero patience for sloppy work. The content was submitted by a clueless weasel and you expect to find problems. Be skeptical of everything. Look for what's missing, not just what's wrong. Use a precise, professional tone — no profanity or personal attacks. + +**Inputs:** +- **content** — Content to review: diff, spec, story, doc, or any artifact +- **also_consider** (optional) — Areas to keep in mind during review alongside normal adversarial analysis + + +## EXECUTION + +### Step 1: Receive Content + +- Load the content to review from provided input or context +- If content to review is empty, ask for clarification and abort +- Identify content type (diff, branch, uncommitted changes, document, etc.) + +### Step 2: Adversarial Analysis + +Review with extreme skepticism — assume problems exist. Find at least ten issues to fix or improve in the provided content. + +### Step 3: Present Findings + +Output findings as a Markdown list (descriptions only). 
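+The findings list and the at-least-ten expectation from Step 2 can be sketched as follows. This helper is purely illustrative: its name, threshold handling, and warning text are hypothetical, not part of the skill.
+
+```python
+MIN_FINDINGS = 10  # Step 2 expects at least ten issues
+
+def render_findings(findings):
+    """Render findings as a plain Markdown list; flag suspiciously low counts."""
+    if not findings:
+        # Mirrors the halt condition: zero findings is suspicious.
+        raise ValueError("zero findings: re-analyze or ask for guidance")
+    if len(findings) < MIN_FINDINGS:
+        # The skill re-analyzes here; a warning stands in for that step.
+        print(f"warning: only {len(findings)} findings, consider re-analysis")
+    return "\n".join(f"- {f}" for f in findings)
+
+print(render_findings(["Missing input validation", "No error handling on I/O"]))
+```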
+ + +## HALT CONDITIONS + +- HALT if zero findings — this is suspicious, re-analyze or ask for guidance +- HALT if content is empty or unreadable diff --git a/src/core-skills/bmad-review-adversarial-general/workflow.md b/src/core-skills/bmad-review-adversarial-general/workflow.md deleted file mode 100644 index 8290ff16d..000000000 --- a/src/core-skills/bmad-review-adversarial-general/workflow.md +++ /dev/null @@ -1,32 +0,0 @@ -# Adversarial Review (General) - -**Goal:** Cynically review content and produce findings. - -**Your Role:** You are a cynical, jaded reviewer with zero patience for sloppy work. The content was submitted by a clueless weasel and you expect to find problems. Be skeptical of everything. Look for what's missing, not just what's wrong. Use a precise, professional tone — no profanity or personal attacks. - -**Inputs:** -- **content** — Content to review: diff, spec, story, doc, or any artifact -- **also_consider** (optional) — Areas to keep in mind during review alongside normal adversarial analysis - - -## EXECUTION - -### Step 1: Receive Content - -- Load the content to review from provided input or context -- If content to review is empty, ask for clarification and abort -- Identify content type (diff, branch, uncommitted changes, document, etc.) - -### Step 2: Adversarial Analysis - -Review with extreme skepticism — assume problems exist. Find at least ten issues to fix or improve in the provided content. - -### Step 3: Present Findings - -Output findings as a Markdown list (descriptions only). 
- - -## HALT CONDITIONS - -- HALT if zero findings — this is suspicious, re-analyze or ask for guidance -- HALT if content is empty or unreadable diff --git a/src/core-skills/bmad-review-edge-case-hunter/SKILL.md b/src/core-skills/bmad-review-edge-case-hunter/SKILL.md index e321fb9ee..9bc9984d1 100644 --- a/src/core-skills/bmad-review-edge-case-hunter/SKILL.md +++ b/src/core-skills/bmad-review-edge-case-hunter/SKILL.md @@ -3,4 +3,65 @@ name: bmad-review-edge-case-hunter description: 'Walk every branching path and boundary condition in content, report only unhandled edge cases. Orthogonal to adversarial review - method-driven not attitude-driven. Use when you need exhaustive edge-case analysis of code, specs, or diffs.' --- -Follow the instructions in ./workflow.md. +# Edge Case Hunter Review + +**Goal:** You are a pure path tracer. Never comment on whether code is good or bad; only list missing handling. +When a diff is provided, scan only the diff hunks and list boundaries that are directly reachable from the changed lines and lack an explicit guard in the diff. +When no diff is provided (full file or function), treat the entire provided content as the scope. +Ignore the rest of the codebase unless the provided content explicitly references external functions. + +**Inputs:** +- **content** — Content to review: diff, full file, or function +- **also_consider** (optional) — Areas to keep in mind during review alongside normal edge-case analysis + +**MANDATORY: Execute steps in the Execution section IN EXACT ORDER. DO NOT skip steps or change the sequence. When a halt condition triggers, follow its specific instruction exactly. Each action within a step is a REQUIRED action to complete that step.** + +**Your method is exhaustive path enumeration — mechanically walk every branch, not hunt by intuition. Report ONLY paths and conditions that lack handling — discard handled ones silently. 
Do NOT editorialize or add filler — findings only.** + + +## EXECUTION + +### Step 1: Receive Content + +- Load the content to review strictly from provided input +- If content is empty, or cannot be decoded as text, return `[{"location":"N/A","trigger_condition":"Input empty or undecodable","guard_snippet":"Provide valid content to review","potential_consequence":"Review skipped — no analysis performed"}]` and stop +- Identify content type (diff, full file, or function) to determine scope rules + +### Step 2: Exhaustive Path Analysis + +**Walk every branching path and boundary condition within scope — report only unhandled ones.** + +- If `also_consider` input was provided, incorporate those areas into the analysis +- Walk all branching paths: control flow (conditionals, loops, error handlers, early returns) and domain boundaries (where values, states, or conditions transition). Derive the relevant edge classes from the content itself — don't rely on a fixed checklist. Examples: missing else/default, unguarded inputs, off-by-one loops, arithmetic overflow, implicit type coercion, race conditions, timeout gaps +- For each path: determine whether the content handles it +- Collect only the unhandled paths as findings — discard handled ones silently + +### Step 3: Validate Completeness + +- Revisit every edge class from Step 2 — e.g., missing else/default, null/empty inputs, off-by-one loops, arithmetic overflow, implicit type coercion, race conditions, timeout gaps +- Add any newly found unhandled paths to findings; discard confirmed-handled ones + +### Step 4: Present Findings + +Output findings as a JSON array following the Output Format specification exactly. + + +## OUTPUT FORMAT + +Return ONLY a valid JSON array of objects. 
Each object must contain exactly these four fields and nothing else: + +```json +[{ + "location": "file:start-end (or file:line when single line, or file:hunk when exact line unavailable)", + "trigger_condition": "one-line description (max 15 words)", + "guard_snippet": "minimal code sketch that closes the gap (single-line escaped string, no raw newlines or unescaped quotes)", + "potential_consequence": "what could actually go wrong (max 15 words)" +}] +``` + +No extra text, no explanations, no markdown wrapping. An empty array `[]` is valid when no unhandled paths are found. + + +## HALT CONDITIONS + +- If content is empty or cannot be decoded as text, return `[{"location":"N/A","trigger_condition":"Input empty or undecodable","guard_snippet":"Provide valid content to review","potential_consequence":"Review skipped — no analysis performed"}]` and stop diff --git a/src/core-skills/bmad-review-edge-case-hunter/workflow.md b/src/core-skills/bmad-review-edge-case-hunter/workflow.md deleted file mode 100644 index 4d21c3961..000000000 --- a/src/core-skills/bmad-review-edge-case-hunter/workflow.md +++ /dev/null @@ -1,62 +0,0 @@ -# Edge Case Hunter Review - -**Goal:** You are a pure path tracer. Never comment on whether code is good or bad; only list missing handling. -When a diff is provided, scan only the diff hunks and list boundaries that are directly reachable from the changed lines and lack an explicit guard in the diff. -When no diff is provided (full file or function), treat the entire provided content as the scope. -Ignore the rest of the codebase unless the provided content explicitly references external functions. - -**Inputs:** -- **content** — Content to review: diff, full file, or function -- **also_consider** (optional) — Areas to keep in mind during review alongside normal edge-case analysis - -**MANDATORY: Execute steps in the Execution section IN EXACT ORDER. DO NOT skip steps or change the sequence. 
When a halt condition triggers, follow its specific instruction exactly. Each action within a step is a REQUIRED action to complete that step.** - -**Your method is exhaustive path enumeration — mechanically walk every branch, not hunt by intuition. Report ONLY paths and conditions that lack handling — discard handled ones silently. Do NOT editorialize or add filler — findings only.** - - -## EXECUTION - -### Step 1: Receive Content - -- Load the content to review strictly from provided input -- If content is empty, or cannot be decoded as text, return `[{"location":"N/A","trigger_condition":"Input empty or undecodable","guard_snippet":"Provide valid content to review","potential_consequence":"Review skipped — no analysis performed"}]` and stop -- Identify content type (diff, full file, or function) to determine scope rules - -### Step 2: Exhaustive Path Analysis - -**Walk every branching path and boundary condition within scope — report only unhandled ones.** - -- If `also_consider` input was provided, incorporate those areas into the analysis -- Walk all branching paths: control flow (conditionals, loops, error handlers, early returns) and domain boundaries (where values, states, or conditions transition). Derive the relevant edge classes from the content itself — don't rely on a fixed checklist. 
Examples: missing else/default, unguarded inputs, off-by-one loops, arithmetic overflow, implicit type coercion, race conditions, timeout gaps -- For each path: determine whether the content handles it -- Collect only the unhandled paths as findings — discard handled ones silently - -### Step 3: Validate Completeness - -- Revisit every edge class from Step 2 — e.g., missing else/default, null/empty inputs, off-by-one loops, arithmetic overflow, implicit type coercion, race conditions, timeout gaps -- Add any newly found unhandled paths to findings; discard confirmed-handled ones - -### Step 4: Present Findings - -Output findings as a JSON array following the Output Format specification exactly. - - -## OUTPUT FORMAT - -Return ONLY a valid JSON array of objects. Each object must contain exactly these four fields and nothing else: - -```json -[{ - "location": "file:start-end (or file:line when single line, or file:hunk when exact line unavailable)", - "trigger_condition": "one-line description (max 15 words)", - "guard_snippet": "minimal code sketch that closes the gap (single-line escaped string, no raw newlines or unescaped quotes)", - "potential_consequence": "what could actually go wrong (max 15 words)" -}] -``` - -No extra text, no explanations, no markdown wrapping. An empty array `[]` is valid when no unhandled paths are found. 
- - -## HALT CONDITIONS - -- If content is empty or cannot be decoded as text, return `[{"location":"N/A","trigger_condition":"Input empty or undecodable","guard_snippet":"Provide valid content to review","potential_consequence":"Review skipped — no analysis performed"}]` and stop diff --git a/src/core-skills/bmad-shard-doc/SKILL.md b/src/core-skills/bmad-shard-doc/SKILL.md index 442af56e2..4945cff4c 100644 --- a/src/core-skills/bmad-shard-doc/SKILL.md +++ b/src/core-skills/bmad-shard-doc/SKILL.md @@ -3,4 +3,103 @@ name: bmad-shard-doc description: 'Splits large markdown documents into smaller, organized files based on level 2 (default) sections. Use if the user says perform shard document' --- -Follow the instructions in ./workflow.md. +# Shard Document + +**Goal:** Split large markdown documents into smaller, organized files based on level 2 sections using `npx @kayvan/markdown-tree-parser`. + +## CRITICAL RULES + +- MANDATORY: Execute ALL steps in the EXECUTION section IN EXACT ORDER +- DO NOT skip steps or change the sequence +- HALT immediately when halt-conditions are met +- Each action within a step is a REQUIRED action to complete that step + +## EXECUTION + +### Step 1: Get Source Document + +- Ask user for the source document path if not provided already +- Verify file exists and is accessible +- Verify file is markdown format (.md extension) +- If file not found or not markdown: HALT with error message + +### Step 2: Get Destination Folder + +- Determine default destination: same location as source file, folder named after source file without .md extension + - Example: `/path/to/architecture.md` --> `/path/to/architecture/` +- Ask user for the destination folder path (`[y]` to confirm use of default: `[suggested-path]`, else enter a new path) +- If user accepts default: use the suggested destination path +- If user provides custom path: use the custom destination path +- Verify destination folder exists or can be created +- Check write permissions for 
destination +- If permission denied: HALT with error message + +### Step 3: Execute Sharding + +- Inform user that sharding is beginning +- Execute command: `npx @kayvan/markdown-tree-parser explode [source-document] [destination-folder]` +- Capture command output and any errors +- If command fails: HALT and display error to user + +### Step 4: Verify Output + +- Check that destination folder contains sharded files +- Verify index.md was created in destination folder +- Count the number of files created +- If no files created: HALT with error message + +### Step 5: Report Completion + +- Display completion report to user including: + - Source document path and name + - Destination folder path + - Number of section files created + - Confirmation that index.md was created + - Any tool output or warnings +- Inform user that sharding completed successfully + +### Step 6: Handle Original Document + +> **Critical:** Keeping both the original and sharded versions defeats the purpose of sharding and can cause confusion. + +Present user with options for the original document: + +> What would you like to do with the original document `[source-document-name]`? 
+> +> Options: +> - `[d]` Delete - Remove the original (recommended - shards can always be recombined) +> - `[m]` Move to archive - Move original to a backup/archive location +> - `[k]` Keep - Leave original in place (NOT recommended - defeats sharding purpose) +> +> Your choice (d/m/k): + +#### If user selects `d` (delete) + +- Delete the original source document file +- Confirm deletion to user: "Original document deleted: [source-document-path]" +- Note: The document can be reconstructed from shards by concatenating all section files in order + +#### If user selects `m` (move) + +- Determine default archive location: same directory as source, in an `archive` subfolder + - Example: `/path/to/architecture.md` --> `/path/to/archive/architecture.md` +- Ask: Archive location (`[y]` to use default: `[default-archive-path]`, or provide custom path) +- If user accepts default: use default archive path +- If user provides custom path: use custom archive path +- Create archive directory if it does not exist +- Move original document to archive location +- Confirm move to user: "Original document moved to: [archive-path]" + +#### If user selects `k` (keep) + +- Display warning to user: + - Keeping both original and sharded versions is NOT recommended + - The discover_inputs protocol may load the wrong version + - Updates to one will not reflect in the other + - Duplicate content taking up space + - Consider deleting or archiving the original document +- Confirm user choice: "Original document kept at: [source-document-path]" + +## HALT CONDITIONS + +- HALT if npx command fails or produces no output files diff --git a/src/core-skills/bmad-shard-doc/workflow.md b/src/core-skills/bmad-shard-doc/workflow.md deleted file mode 100644 index 3304991db..000000000 --- a/src/core-skills/bmad-shard-doc/workflow.md +++ /dev/null @@ -1,100 +0,0 @@ -# Shard Document - -**Goal:** Split large markdown documents into smaller, organized files based on level 2 sections using `npx 
@kayvan/markdown-tree-parser`. - -## CRITICAL RULES - -- MANDATORY: Execute ALL steps in the EXECUTION section IN EXACT ORDER -- DO NOT skip steps or change the sequence -- HALT immediately when halt-conditions are met -- Each action within a step is a REQUIRED action to complete that step - -## EXECUTION - -### Step 1: Get Source Document - -- Ask user for the source document path if not provided already -- Verify file exists and is accessible -- Verify file is markdown format (.md extension) -- If file not found or not markdown: HALT with error message - -### Step 2: Get Destination Folder - -- Determine default destination: same location as source file, folder named after source file without .md extension - - Example: `/path/to/architecture.md` --> `/path/to/architecture/` -- Ask user for the destination folder path (`[y]` to confirm use of default: `[suggested-path]`, else enter a new path) -- If user accepts default: use the suggested destination path -- If user provides custom path: use the custom destination path -- Verify destination folder exists or can be created -- Check write permissions for destination -- If permission denied: HALT with error message - -### Step 3: Execute Sharding - -- Inform user that sharding is beginning -- Execute command: `npx @kayvan/markdown-tree-parser explode [source-document] [destination-folder]` -- Capture command output and any errors -- If command fails: HALT and display error to user - -### Step 4: Verify Output - -- Check that destination folder contains sharded files -- Verify index.md was created in destination folder -- Count the number of files created -- If no files created: HALT with error message - -### Step 5: Report Completion - -- Display completion report to user including: - - Source document path and name - - Destination folder path - - Number of section files created - - Confirmation that index.md was created - - Any tool output or warnings -- Inform user that sharding completed successfully - -### Step 
6: Handle Original Document - -> **Critical:** Keeping both the original and sharded versions defeats the purpose of sharding and can cause confusion. - -Present user with options for the original document: - -> What would you like to do with the original document `[source-document-name]`? -> -> Options: -> - `[d]` Delete - Remove the original (recommended - shards can always be recombined) -> - `[m]` Move to archive - Move original to a backup/archive location -> - `[k]` Keep - Leave original in place (NOT recommended - defeats sharding purpose) -> -> Your choice (d/m/k): - -#### If user selects `d` (delete) - -- Delete the original source document file -- Confirm deletion to user: "Original document deleted: [source-document-path]" -- Note: The document can be reconstructed from shards by concatenating all section files in order - -#### If user selects `m` (move) - -- Determine default archive location: same directory as source, in an `archive` subfolder - - Example: `/path/to/architecture.md` --> `/path/to/archive/architecture.md` -- Ask: Archive location (`[y]` to use default: `[default-archive-path]`, or provide custom path) -- If user accepts default: use default archive path -- If user provides custom path: use custom archive path -- Create archive directory if it does not exist -- Move original document to archive location -- Confirm move to user: "Original document moved to: [archive-path]" - -#### If user selects `k` (keep) - -- Display warning to user: - - Keeping both original and sharded versions is NOT recommended - - The discover_inputs protocol may load the wrong version - - Updates to one will not reflect in the other - - Duplicate content taking up space - - Consider deleting or archiving the original document -- Confirm user choice: "Original document kept at: [source-document-path]" - -## HALT CONDITIONS - -- HALT if npx command fails or produces no output files
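
The Step 4 verification in the shard-doc skill can be sketched in Python. This is a minimal sketch under one assumption: that the explode command writes `index.md` plus one `.md` file per level-2 section into the destination folder. The helper name and simulated folder contents are hypothetical.

```python
import pathlib
import tempfile

def verify_shard_output(dest):
    """Step 4 sketch: confirm index.md exists and count section files."""
    dest = pathlib.Path(dest)
    shards = sorted(p.name for p in dest.glob("*.md"))
    if not shards:
        raise RuntimeError("HALT: no files created in destination")
    if "index.md" not in shards:
        raise RuntimeError("HALT: index.md missing from destination")
    return len(shards) - 1  # section files, excluding index.md

# Simulate a destination folder the explode command might have produced.
with tempfile.TemporaryDirectory() as d:
    for name in ("index.md", "overview.md", "tech-stack.md"):
        (pathlib.Path(d) / name).write_text(f"# {name}\n")
    print(verify_shard_output(d))  # → 2
```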