refactor(product-brief,bmad-prd): remove distillation from brief and PRD workflows

Drop bmad-distillator integration from bmad-product-brief (finalize step,
update mode, headless JSON, constraints) and clean up customize.toml comments.
Distillation is the wrong layer — story self-containment via epic solution
design docs is the right answer for downstream context.

Also commit pending bmad-prd changes: working mode selector (Express vs
Facilitative), open-items triage into phase-blocking/resolvable/deferred
buckets, persistence wording fix, and facilitation-guide reference.
Brian Madison 2026-05-12 22:17:56 -05:00
parent 3787e38662
commit 1a88f001d5
4 changed files with 98 additions and 12 deletions

View File

@@ -25,7 +25,7 @@ Briefs produced here are honest, right-sized to purpose, and built for what come
**Create.** A brief the user is proud of, that meets their needs, drawn out through real conversation — do not assume: instead converse and understand, and then help craft the best product brief for their needs. Begin in `## Discovery` before drafting; the brief comes after the picture is on the table. Shape follows the product and need. Treat `{workflow.brief_template}` as a starting structure, not a contract: drop sections that do not earn their place, add sections the product needs, reorder freely - create sections for specialized domains or concerns also as needed. The brief serves the product's story, not the template's shape. Bind `{doc_workspace}` to a fresh folder at `{workflow.output_dir}/{workflow.output_folder_name}/` and write `brief.md` there with YAML frontmatter (title, status, created, updated). For Update and Validate, `{doc_workspace}` is the existing folder of the brief being targeted.
-**Update.** Reconcile an existing brief with a change signal. Before proposing changes, read the brief, addendum, `decision-log.md`, and original inputs — and run the `## Discovery` posture against the change signal (a patch applied without context becomes drift). Surface conflicts with prior decisions before changing. Headless override: log the reversal to `decision-log.md`, then apply; halt `blocked` if intent is ambiguous. If the change is fundamental, offer Create instead of patching. Regenerate `distillate.md` via `bmad-distillator` if it exists; if the skill is unavailable, flag the distillate as stale in the JSON output.
+**Update.** Reconcile an existing brief with a change signal. Before proposing changes, read the brief, addendum, `decision-log.md`, and original inputs — and run the `## Discovery` posture against the change signal (a patch applied without context becomes drift). Surface conflicts with prior decisions before changing. Headless override: log the reversal to `decision-log.md`, then apply; halt `blocked` if intent is ambiguous. If the change is fundamental, offer Create instead of patching.
**Validate.** Honest critique against the brief's own purpose. Read the brief, the addendum if present, `decision-log.md`, and any original inputs first — a validation that ignores prior decisions, rejected ideas, or context the user supplied is shallow. Cite specific lines. Caveat what cannot be evaluated. Return inline — no separate file unless asked. Always offer to roll findings into an Update, even in headless mode — include `"offer_to_update": true` in the JSON status block.
@@ -39,7 +39,6 @@ When invoked headless, do not ask. Complete the intent using what is provided, w
"intent": "create",
"brief": "{doc_workspace}/brief.md",
"addendum": "{doc_workspace}/addendum.md",
-"distillate": "{doc_workspace}/distillate.md",
"decision_log": "{doc_workspace}/decision-log.md",
"open_questions": [],
"external_handoffs": [
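For headless runs, the status block fragment above (its tail is clipped by the hunk) might be assembled like so. A minimal sketch only: the helper name and workspace path are illustrative, and `distillate` is the key this commit removes.

```python
import json

def build_status_block(doc_workspace: str) -> str:
    """Assemble the headless JSON status block for a create run.

    Field names mirror the fragment above, with the distillate entry
    dropped per this commit. external_handoffs would hold whatever the
    handoff directives return (URLs, IDs).
    """
    status = {
        "intent": "create",
        "brief": f"{doc_workspace}/brief.md",
        "addendum": f"{doc_workspace}/addendum.md",
        "decision_log": f"{doc_workspace}/decision-log.md",
        "open_questions": [],
        "external_handoffs": [],
    }
    return json.dumps(status, indent=2)
```

Any JSON reader can consume the result; the point is simply that `distillate` no longer appears among the keys.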
@@ -69,13 +68,12 @@ Conversationally surface what the user brings, why this brief exists, and the do
- **File roles.** `decision-log.md` is canonical memory and audit trail — every decision, change, and override (including headless overrides) is recorded there as the conversation unfolds. `addendum.md` preserves user-contributed depth that belongs in a downstream document (PRD, architecture, solution design) or earned a place but does not fit the brief (rejected-alternative rationale, options-considered matrices, parked-roadmap context, technical constraints, in-depth personas, sizing data). Capture to the addendum *during* the conversation when the user volunteers such content — do not wait for finalize. Audit and override information never goes in the addendum.
- **Continuity across sessions.** If a prior in-progress draft for this project exists, the user is offered the chance to resume.
- **Extract, don't ingest.** Source artifacts (provided by the user or discovered during the run — transcripts, brainstorms, research reports, code, web results, prior briefs) enter the parent conversation as relevance-filtered extracts, not loaded wholesale. Subagents do the extraction against the user's stated focus; the parent context stays lean.
-- **Length and coherence.** Aim for 1-2 pages — if it is longer, the detail belongs in the addendum or distillate. Structure in service of the product; downstream consumers (PRD workflow, etc.) read this, so coherent shape matters.
+- **Length and coherence.** Aim for 1-2 pages — if it is longer, the detail belongs in the addendum. Structure in service of the product; downstream consumers (PRD workflow, etc.) read this, so coherent shape matters.
## Finalize
1. Decision log audit + addendum review: the user ends this step with an explicit, shared accounting of how the meaningful contents of `decision-log.md` were handled — captured in the brief, captured in `addendum.md` (which may already hold detail captured during the conversation — see `## Constraints` for what belongs there), or set aside as process noise.
2. Polish: apply each entry in `{workflow.doc_standards}` (a `skill:`, `file:`, or plain-text directive) to `brief.md` (and `addendum.md` if it exists). Run passes as parallel subagents, applying all doc standards to `brief.md` first, then `addendum.md`, so we present a high-quality draft for the user to review and finalize.
-3. Distillate: offer the user a lean, token-efficient distillate of the brief — frame why it matters (it becomes the primary input when downstream BMad workflows like PRD creation pull this brief in). If they want it, invoke `bmad-distillator` with `source_documents=[brief.md, addendum.md if produced]`, `downstream_consumer="PRD creation"`, `output_path={doc_workspace}/distillate.md`. If `bmad-distillator` is not installed, skip distillate generation entirely — do not attempt an inline alternative. Include `"distillate": "skipped — bmad-distillator not installed"` in the final JSON block and tell the user to install it.
-4. External handoffs: execute each entry in `{workflow.external_handoffs}` to route artifacts beyond local files (Confluence, Notion, ticket systems, etc.) — each directive names the MCP tool and the fields it needs. Invoke the tool, capture any URLs or IDs returned, and surface them in the user message. If a named tool is unavailable, skip that handoff and flag it (graceful degradation, same pattern as missing `bmad-distillator`); local files always exist regardless.
-5. Tell the user it is ready: local paths, external destinations (URLs returned from handoffs), distillate path. Use the `bmad-help` skill to suggest what next steps make sense in the bmad method ecosystem.
-6. Run `{workflow.on_complete}` if non-empty. Treat a string scalar as a single instruction and an array as a sequence of instructions executed in order.
+3. External handoffs: execute each entry in `{workflow.external_handoffs}` to route artifacts beyond local files (Confluence, Notion, ticket systems, etc.) — each directive names the MCP tool and the fields it needs. Invoke the tool, capture any URLs or IDs returned, and surface them in the user message. If a named tool is unavailable, skip that handoff and flag it; local files always exist regardless.
+4. Tell the user it is ready: local paths and external destinations (URLs returned from handoffs). Use the `bmad-help` skill to suggest what next steps make sense in the bmad method ecosystem.
+5. Run `{workflow.on_complete}` if non-empty. Treat a string scalar as a single instruction and an array as a sequence of instructions executed in order.
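The scalar-vs-array rule in the final step is small but easy to get wrong; a minimal sketch, with a helper name that is mine, not the workflow's:

```python
def normalize_on_complete(on_complete):
    """Normalize `{workflow.on_complete}` into an ordered instruction list.

    Per the rule above: empty means nothing to run, a string scalar is a
    single instruction, and an array is a sequence executed in order.
    """
    if not on_complete:
        return []
    if isinstance(on_complete, str):
        return [on_complete]
    return list(on_complete)
```

The caller then executes each returned instruction in sequence, so the empty-string default in `customize.toml` results in a no-op.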

View File

@@ -39,8 +39,7 @@ on_complete = ""
# to enforce a different structure (e.g. regulated-industry, investor-deck).
brief_template = "assets/brief-template.md"
-# Run folder location. The brief, optional addendum, and optional distillate
-# all land inside `{output_dir}/{output_folder_name}/`.
+# Run folder location. The brief and optional addendum land inside `{output_dir}/{output_folder_name}/`.
output_dir = "{planning_artifacts}/briefs"
output_folder_name = "brief-{project_name}-{date}"
@@ -91,6 +90,5 @@ external_sources = []
#
# Examples (set in team/user override TOML):
# "After finalize, upload brief.md and addendum.md to Confluence via corp:confluence_upload (space_key='PROD', parent_page='Product Briefs', label='brief', author={user_name})."
-# "Mirror the distillate to Notion via notion:create_page (database_id='abc123', title='Brief: '+{project_name})."
# "Post a ready-for-review ping to Slack via corp:slack_post (channel='#product', text='New brief: '+{confluence_url})."
external_handoffs = []
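Put together, a team override TOML might look like this; the tool names, space key, and channel are the illustrative values from the comments above, not real endpoints:

```toml
# Hypothetical team override — values are examples only.
external_handoffs = [
  "After finalize, upload brief.md and addendum.md to Confluence via corp:confluence_upload (space_key='PROD', parent_page='Product Briefs', label='brief', author={user_name}).",
  "Post a ready-for-review ping to Slack via corp:slack_post (channel='#product', text='New brief: '+{confluence_url}).",
]
```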

View File

@@ -49,6 +49,13 @@ Surface these honestly and let the user choose; if they prefer this skill anywa
Coach, do not quiz. Push hardest where PRD Discipline is at risk — unexamined assumptions, capability-vs-implementation confusion, term drift, silent scope creep, ambiguity for downstream readers. Drill into specifics only after the broad shape is on the table; premature granular questions interrupt the dump. Suggest research only when the stakes warrant it.
+**Working mode.** Once the situational read is complete, offer the user a choice before proceeding — one sentence per option:
+- **Express:** resolve any remaining critical gaps in a short batch, then draft the full PRD at once.
+- **Facilitative:** work through the sections that require PM thinking before drafting, using the techniques in `references/facilitation-guide.md`. Draft after the key sections are walked. The goal is that the PM has authored the thinking — not just answered intake questions.
+In both modes, resolve decisions conversationally rather than silently deferring them into `[ASSUMPTION]` tags. Only use `[ASSUMPTION]` when the answer requires research or external input the PM cannot provide in the moment.
## PRD Discipline
Patterns that hold the PRD together across every shape.
@@ -73,7 +80,7 @@ Patterns that hold the PRD together across every shape.
## Constraints
-- **Persistence is real-time.** Once Create intent is confirmed, the workspace (run folder, `prd.md` skeleton, `decision-log.md`) exists on disk and the user knows the path.
+- **Persistence is near real-time.** Once Create intent is confirmed, the workspace (run folder, `prd.md` skeleton, `decision-log.md`) must be created on disk and the user informed of the path.
- **File roles.** `decision-log.md` is canonical memory and audit trail — every decision, change, override, and version/state transition recorded as the conversation unfolds. `addendum.md` preserves user-contributed depth that belongs downstream or earned a place but does not fit the PRD shape (rejected alternatives, options matrices, deep technical detail, ops/cost mechanics, deep competitive analysis). When the user volunteers technical-how detail, capture it to addendum in real time. Audit and override information never goes there.
- **Continuity across sessions.** If a prior in-progress draft for this project exists in `{workflow.output_dir}`, the user is offered the chance to resume. On resume, surface the open items (Open Questions, `[ASSUMPTION]` tags, `[NOTE FOR PM]` callouts) first — they orient both the user and the agent to what was deferred.
- **Extract, don't ingest.** Source artifacts never enter the parent conversation wholesale. Whenever a source document is consulted — Discovery setup, Update/Validate orientation, Finalize input reconciliation — delegate to a source-extractor subagent that returns `{summary, key_decisions, open_items, conflicts_with_focus, citations: [{section, line_range, quote}]}` against the user's stated focus. Run one subagent per document in parallel when there are multiple. The parent assembles from extracts; it never re-opens the source.
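The extract contract in that constraint could be typed roughly as follows; the field names come from the text, while the concrete types are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    """One pointer back into the source document."""
    section: str
    line_range: str  # e.g. "L12-L30"
    quote: str

@dataclass
class Extract:
    """What a source-extractor subagent returns to the parent conversation."""
    summary: str
    key_decisions: list[str] = field(default_factory=list)
    open_items: list[str] = field(default_factory=list)
    conflicts_with_focus: list[str] = field(default_factory=list)
    citations: list[Citation] = field(default_factory=list)
```

One `Extract` per source document, produced in parallel; the parent assembles from these and never re-opens the sources themselves.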
@@ -85,7 +92,11 @@ Patterns that hold the PRD together across every shape.
1. Decision log audit + addendum review: walk `decision-log.md` with the user and account for each meaningful entry — captured in the PRD, captured in `addendum.md` (see `## Constraints` for what belongs there), or set aside as process noise.
2. Input reconciliation: fan out one source-extractor subagent per user-supplied input, each handed the current `prd.md` + `addendum.md` to compare against, returning `{coverage, gaps, soft_ideas_not_landed}`. Aggregate and present — especially soft or qualitative ideas (tone, voice, feel) the feature-and-FR shape silently drops. Ask whether any should be incorporated. Must happen before polish.
3. Discipline pass: spawn the validator subagent against `prd.md` with `{workflow.validation_checklist}`; produce findings + HTML report per `references/validation-render.md`. Surface failures and warnings conversationally; resolve before polish.
-4. Open-items review: count Open Questions, `[ASSUMPTION]` tags, and `[NOTE FOR PM]` callouts. Summarize totals to the user and walk them by category. For each: resolve now, accept-and-ship (log rationale to `decision-log.md`), or escalate. Flag high density relative to the agreed stakes as a red flag before continuing.
+4. Open-items review: triage all Open Questions, `[ASSUMPTION]` tags, and `[NOTE FOR PM]` callouts into three buckets before presenting anything to the user:
+- **Phase-blocking** — the next BMad workflow (UX, architecture) cannot start without this decision. Surface these explicitly and conversationally, one at a time. Resolve before calling the PRD ready.
+- **Resolvable now** — answerable in under a minute. Offer to close them; do not force it.
+- **Safely deferred** — will surface naturally in a downstream phase (architecture, UX, post-launch) or is a v2 decision. Log to `decision-log.md` as accepted-at-draft with a one-line note on where they belong. Do not make the user walk these.
+Never recite the full list. Lead with: "There are N things to decide before [next phase] can start" — then work through only those. If the phase-blocking count is high relative to the agreed stakes, raise it as a red flag before continuing.
5. Polish: apply each entry in `{workflow.doc_standards}` (a `skill:`, `file:`, or plain-text directive) to `prd.md` (and `addendum.md` if it exists). Run passes as parallel subagents — apply all doc standards to `prd.md` first, then to `addendum.md`.
6. External handoffs: execute each entry in `{workflow.external_handoffs}` — each directive names the MCP tool and the fields it needs. Invoke, capture any URLs or IDs returned, surface them. If a named tool is unavailable, skip and flag (graceful degradation); local files always exist.
7. Record finalization to `decision-log.md` (version label, accepted-at-ship open items, external destinations). Tell the user it is ready, share artifact paths (PRD, addendum, decision log, external destinations, validation report). Invoke the `bmad-help` skill to surface BMad-ecosystem next steps.
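The step-4 triage can be sketched as code; the `blocking` and `quick` flags and both function names are assumptions for illustration, not part of the workflow spec:

```python
def triage(items):
    """Sort open items into the three buckets from step 4.

    Each item is assumed to carry a `blocking` flag (the next phase
    cannot start without it) and a `quick` flag (answerable in under
    a minute); everything else is safely deferred.
    """
    buckets = {"phase_blocking": [], "resolvable_now": [], "safely_deferred": []}
    for item in items:
        if item.get("blocking"):
            buckets["phase_blocking"].append(item)
        elif item.get("quick"):
            buckets["resolvable_now"].append(item)
        else:
            buckets["safely_deferred"].append(item)
    return buckets

def lead_sentence(buckets, next_phase):
    """Never recite the full list: lead with the blocking count only."""
    n = len(buckets["phase_blocking"])
    return f"There are {n} things to decide before {next_phase} can start."
```

Only the phase-blocking bucket drives the conversation; the other two are offered or logged, never walked item by item.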

View File

@@ -0,0 +1,79 @@
# PRD Facilitation Guide
Per-section conversation techniques for facilitative mode. Each entry names the coaching move that makes the section's conversation productive — not a checklist, a posture. Skip sections the PM has already resolved; spend more time where thinking is thin.
---
## Users and Personas
**The move:** Ground personas in real people, not archetypes.
Ask the PM to describe a specific person they have observed or talked to — not a type, an actual human. "Who is the clerk at your store? Tell me about them." Invented detail (name, age, backstory from nowhere) is persona theater — the team builds for a fiction. If the PM says "someone like..." push gently: "Is there a real person you're thinking of?"
Once grounded: what does that person want to accomplish in the time they interact with this product? What would make them say this is easier than what they do today? What would make them abandon it?
For the remote user or secondary persona: same grounding, different question — what question do they need answered in under ten seconds, and what do they do if they can't get it?
Mark anything the PM could not ground in observation as `[ILLUSTRATIVE]` — and note it's a hypothesis to validate, not a spec to build for.
---
## Core User Journeys
**The move:** Story structure, not use-case list.
For each primary journey, walk through four beats:
- **Opening scene** — where do we meet this person, what is their situation right now, what pain or need is present?
- **Rising action** — what steps do they take, what do they discover or decide along the way?
- **Climax** — the moment the product delivers real value; the thing they could not do before
- **Resolution** — what is their new reality; how is their situation different?
After each journey: what could go wrong at the climax? What is the recovery path? This is where edge cases that matter surface — not invented error states, but real failure modes for this person.
Explicitly name what capability each journey reveals. "This journey requires the operator to log an entry with no internet — which means we need a decision on whether that's in or out of MVP." Journeys produce capability requirements; make the link visible.
---
## Key Feature Decisions
**The move:** Surface the assumptions that would otherwise be silent.
Before the draft exists, there are decisions the agent would silently make and the PM would never know were made. These are the ones worth a thirty-second conversation:
- Decisions that drive the core UX model (e.g., one record per day vs. many; who can edit vs. view; what happens when the expected input doesn't exist)
- Decisions where the "obvious" choice has real consequences the PM may not have considered
- Decisions that, if wrong, require structural changes to fix later
For each: state what you inferred, name the alternative, ask which is right. Do not present options as a quiz — present your inference and invite correction. "I'm assuming one sales tally per day replaces rather than adds. Is that right, or should the operator be able to log multiple?" Resolve and move on. Only tag as `[ASSUMPTION]` when the answer requires external input or research the PM cannot provide now.
---
## Scope Boundary
**The move:** Establish MVP philosophy before listing features.
Before asking what is in or out, ask what kind of MVP this is:
- **Problem-solving MVP** — the minimum that proves the core problem is solved; rough edges acceptable
- **Experience MVP** — the minimum that proves the interaction model works; quality matters
- **Platform MVP** — the minimum infrastructure other things can build on; completeness of the base matters
- **Revenue MVP** — the minimum someone will pay for; business viability is the test
The answer changes what "minimum" means. A problem-solving MVP for a personal-use tool has different scope logic than an experience MVP aimed at non-tech-savvy users who will bounce at the first confusion.
Once the philosophy is named, non-goals do as much work as in-scope items. Probe for the things the PM is tempted to add. "What keeps almost making it onto the list?" For each: is it truly out of MVP, or does it need to be in because the MVP fails without it?
---
## Success Metrics
**The move:** Push every adjective to a measurement.
"Users will love it" — what does that mean in behavior? "It'll be fast" — fast at what, for whom, measured how? "Good adoption" — what percentage, by when, doing what? Every quality claim needs a measurement or it is not a success criterion, it is a wish.
For each metric surfaced: connect it back to the product's differentiator. If the differentiator is simplicity for non-tech users, the primary metric should measure whether non-tech users successfully complete the core action without help — not session count or feature usage breadth.
Name counter-metrics explicitly — what this product should *not* optimize for. These prevent the wrong thing being built: more entries per day is not better if the goal is accurate daily records; longer dashboard sessions may indicate a broken IA, not high engagement. Counter-metrics are as load-bearing as primary metrics for downstream readers.
For low-stakes or personal-use products: one sentence is enough. "Success: I use this daily and it replaces the notebook within a month." Do not impose metric rigor where the stakes do not warrant it.