feat(quick-dev): render templates via stdlib Python at skill entry
Move compile-time variable substitution out of the LLM and into a deterministic Python step. SKILL.md becomes a two-line stdout-dispatch shim that runs render.py and follows the instruction it prints. The renderer reads BMad configuration from the central four-layer TOML surface introduced in #2285 (_bmad/config.toml plus config.user.toml and the two _bmad/custom/ overrides), with a fallback to the legacy per-module _bmad/bmm/config.yaml for pre-#2285 installs. Compile-time refs ({{.var}}) get substituted at render time. LLM-runtime refs ({var}) pass through untouched.

Renderer (render.py)

- Python 3 stdlib only (tomllib, bundled since 3.11). UTF-8 I/O. Every invocation rebuilds from scratch — no hash, no cache.
- find_project_root walks up from cwd; HALT to stdout if no _bmad/ is found anywhere on the path.
- load_central_config deep-merges the four TOML layers in priority order (base-team → base-user → custom-team → custom-user) so user overrides in _bmad/custom/config.user.toml win over installer-regenerated base values. flatten_central_config lifts scalar keys from [core] and [modules.bmm] into the renderer's flat namespace; module keys beat core on collision (matches the installer's own core-key-stripping behavior).
- When _bmad/config.toml is absent, falls through to the legacy flat-YAML parser for _bmad/bmm/config.yaml — the renderer keeps working across the #2285 transition.
- {{.var}} substitution; unresolved refs emit an empty string (Go missingkey=zero semantics).
- Smart defaults for planning_artifacts / implementation_artifacts / communication_language applied after config load. Derives sprint_status / deferred_work_file from implementation_artifacts. {{.main_config}} points at whichever surface was actually read.
- Renders every .md in the skill dir except SKILL.md to {project-root}/_bmad/render/bmad-quick-dev/.
- On success, stderr summary plus a single stdout line: "read and follow {workflow_md}". On failure, a stdout HALT directive — per the Anthropic skills spec, script stdout is the defined agent-communication channel.

Skill entry (SKILL.md)

- Two-line shim: run python render.py, follow stdout. No template tokens in SKILL.md itself.

Template conversions

- workflow.md, step-01..05, step-oneshot, sync-sprint-status: convert every compile-time {var} reference to {{.var}}. Runtime refs preserved.
- spec-template.md untouched (single-curly comment hint stays as documentation).

Skill-prose cleanups bundled in

- Remove dead step-file frontmatter: empty-string variable declarations (spec_file, story_key, diff_output, review_mode) in quick-dev step-01 and code-review step-01; empty --- --- blocks in step-03 and step-05; the specLoopIteration counter init moved from step-04 frontmatter into the step body where first-entry vs loopback semantics are explicit.
- Unify the language rule across all six quick-dev step files plus workflow.md.

Tooling

- tools/validate-skills.js: add TPL-01 rule. Files whose name contains "template" must not contain compile-time {{.var}} substitutions. Template files seed durable, version-controlled artifacts that execute on other machines; baking a value at render time would freeze a machine-local path into every downstream artifact.
- tools/validate-file-refs.js: add render/ to INSTALL_ONLY_PATHS so the validator recognizes the runtime-generated buffer.
- tools/skill-validator.md: document TPL-01; deterministic rule count bumped from 14 to 15.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
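The split between compile-time and runtime placeholders can be sketched with the same regex approach the renderer uses. This is an illustration only; `render_compile_time` is a hypothetical name, not a function from the commit.

```python
import re

def render_compile_time(text, variables):
    # Replace only double-curly {{.name}} refs; unresolved names become ""
    # (Go missingkey=zero semantics). Single-curly {name} refs survive
    # untouched for the LLM to resolve at workflow runtime.
    return re.sub(r"\{\{\.(\w+)\}\}", lambda m: variables.get(m.group(1), ""), text)

line = "Spec dir: {{.implementation_artifacts}}, spec file: {spec_file}, {{.unknown}}!"
out = render_compile_time(line, {"implementation_artifacts": "/work/_bmad-output"})
print(out)  # → Spec dir: /work/_bmad-output, spec file: {spec_file}, !
```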
parent e36f219c81
commit 7701cbea62
@@ -3,109 +3,8 @@ name: bmad-quick-dev

description: 'Implements any user intent, requirement, story, bug fix or change request by producing clean working code artifacts that follow the project''s existing architecture, patterns and conventions. Use when the user wants to build, fix, tweak, refactor, add or modify any code, component or feature.'
---

# Quick Dev New Preview Workflow

```
python render.py
```

**Goal:** Turn user intent into a hardened, reviewable artifact.

**CRITICAL:** If a step says "read fully and follow step-XX", you read and follow step-XX. No exceptions.

## READY FOR DEVELOPMENT STANDARD

A specification is "Ready for Development" when:

- **Actionable**: Every task has a file path and specific action.
- **Logical**: Tasks ordered by dependency.
- **Testable**: All ACs use Given/When/Then.
- **Complete**: No placeholders or TBDs.

## SCOPE STANDARD

A specification should target a **single user-facing goal** within **900–1600 tokens**:

- **Single goal**: One cohesive feature, even if it spans multiple layers/files. Multi-goal means >=2 **top-level independent shippable deliverables** — each could be reviewed, tested, and merged as a separate PR without breaking the others. Never count surface verbs, "and" conjunctions, or noun phrases. Never split cross-layer implementation details inside one user goal.
  - Split: "add dark mode toggle AND refactor auth to JWT AND build admin dashboard"
  - Don't split: "add validation and display errors" / "support drag-and-drop AND paste AND retry"
- **900–1600 tokens**: Optimal range for LLM consumption. Below 900 risks ambiguity; above 1600 risks context-rot in implementation agents.
- **Neither limit is a gate.** Both are proposals with user override.

## Conventions

- Bare paths (e.g. `step-01-clarify-and-route.md`) resolve from the skill root.
- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
- `{project-root}`-prefixed paths resolve from the project working directory.
- `{skill-name}` resolves to the skill directory's basename.

## On Activation

### Step 1: Resolve the Workflow Block

Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`

**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:

1. `{skill-root}/customize.toml` — defaults
2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides

Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
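The merge rules above can be sketched in Python. This is an illustrative approximation of the described behavior, not the actual resolve_customization.py implementation; `merge_block` is a hypothetical name.

```python
def merge_block(base, override):
    """Sketch of the stated rules: scalars override, tables deep-merge,
    arrays of tables keyed by `code`/`id` replace matches and append new
    entries, and all other arrays append."""
    if isinstance(base, dict) and isinstance(override, dict):
        merged = dict(base)
        for key, value in override.items():
            merged[key] = merge_block(base[key], value) if key in base else value
        return merged
    if isinstance(base, list) and isinstance(override, list):
        def key_of(item):
            # Keyed array-of-tables entries carry a `code` or `id` field.
            return (item.get("code") or item.get("id")) if isinstance(item, dict) else None
        if any(key_of(i) for i in base + override):
            merged = list(base)
            for item in override:
                k = key_of(item)
                idx = next((i for i, b in enumerate(merged) if key_of(b) == k), None) if k else None
                if idx is None:
                    merged.append(item)      # new keyed entry: append
                else:
                    merged[idx] = item       # matching key: replace
            return merged
        return base + override               # all other arrays append
    return override                          # scalars: override wins

base = {"a": 1, "steps": [{"id": "x", "v": 1}], "tags": ["t1"]}
over = {"a": 2, "steps": [{"id": "x", "v": 2}, {"id": "y", "v": 3}], "tags": ["t2"]}
merged = merge_block(base, over)
```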
### Step 2: Execute Prepend Steps

Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.

### Step 3: Load Persistent Facts

Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` -- load the referenced contents as facts. All other entries are facts verbatim.

### Step 4: Load Config

Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:

- `project_name`, `planning_artifacts`, `implementation_artifacts`, `user_name`
- `communication_language`, `document_output_language`, `user_skill_level`
- `date` as system-generated current datetime
- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
- `project_context` = `**/project-context.md` (load if exists)
- CLAUDE.md / memory files (load if exist)
- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
- Language MUST be tailored to `{user_skill_level}`
- Generate all documents in `{document_output_language}`

### Step 5: Greet the User

Greet `{user_name}`, speaking in `{communication_language}`.

### Step 6: Execute Append Steps

Execute each entry in `{workflow.activation_steps_append}` in order.

Activation is complete. Begin the workflow below.

## WORKFLOW ARCHITECTURE

This uses **step-file architecture** for disciplined execution:

- **Micro-file Design**: Each step is self-contained and followed exactly
- **Just-In-Time Loading**: Only load the current step file
- **Sequential Enforcement**: Complete steps in order, no skipping
- **State Tracking**: Persist progress via spec frontmatter and in-memory variables
- **Append-Only Building**: Build artifacts incrementally

### Step Processing Rules

1. **READ COMPLETELY**: Read the entire step file before acting
2. **FOLLOW SEQUENCE**: Execute sections in order
3. **WAIT FOR INPUT**: Halt at checkpoints and wait for human
4. **LOAD NEXT**: When directed, read fully and follow the next step file

### Critical Rules (NO EXCEPTIONS)

- **NEVER** load multiple step files simultaneously
- **ALWAYS** read entire step file before execution
- **NEVER** skip steps or optimize the sequence
- **ALWAYS** follow the exact instructions in the step file
- **ALWAYS** halt at checkpoints and wait for human input

## FIRST STEP

Read fully and follow: `./step-01-clarify-and-route.md` to begin the workflow.

Then follow the instruction it prints to stdout.
@@ -0,0 +1,207 @@

```python
#!/usr/bin/env python3
"""render.py — bmad-quick-dev template renderer.

Resolves compile-time {{.variable}} placeholders from BMad's central config,
bakes absolute paths for {project-root} into derived values, and writes
rendered .md files to {project-root}/_bmad/render/bmad-quick-dev/.

Config sources, tried in order:
1. Central _bmad/config.toml + config.user.toml + custom/config.toml +
   custom/config.user.toml (four-layer merge; post-#2285 installs).
   Keys surface from [core] and [modules.bmm].
2. _bmad/bmm/config.yaml (flat-YAML fallback for pre-#2285 installs).

Runtime {variable} placeholders (single curly) pass through untouched for
the LLM to resolve during workflow execution.

Every invocation rebuilds from scratch — no hash, no cache.
Python 3 stdlib only. UTF-8 I/O.
"""

import os
import re
import sys


def find_project_root():
    """Walk up from cwd until a _bmad/ directory is found. On failure, print a
    HALT instruction to stdout and exit non-zero."""
    current = os.path.abspath(os.getcwd())
    while True:
        candidate = os.path.join(current, "_bmad")
        if os.path.isdir(candidate):
            return current
        parent = os.path.dirname(current)
        if parent == current:
            print(
                f"HALT and report to the user: no _bmad/ directory found walking up from {os.getcwd()}"
            )
            sys.exit(1)
        current = parent


def _deep_merge(base, override):
    """Dict-aware deep merge. Lists and scalars: override wins (we don't need
    the full keyed-merge semantics of resolve_config.py — quick-dev only reads
    flat scalars out of [core] and [modules.bmm])."""
    if isinstance(base, dict) and isinstance(override, dict):
        result = dict(base)
        for key, value in override.items():
            result[key] = _deep_merge(result[key], value) if key in result else value
        return result
    return override


def load_central_config(root):
    """Four-layer merge of _bmad/config.toml and its peers. Returns the merged
    dict, or None if the base _bmad/config.toml is absent (pre-#2285 install)
    or if tomllib is unavailable."""
    bmad_dir = os.path.join(root, "_bmad")
    base = os.path.join(bmad_dir, "config.toml")
    if not os.path.isfile(base):
        return None
    try:
        import tomllib
    except ImportError:
        print(
            "render.py: Python 3.11+ required for central TOML config; falling back",
            file=sys.stderr,
        )
        return None

    layers = [
        base,
        os.path.join(bmad_dir, "config.user.toml"),
        os.path.join(bmad_dir, "custom", "config.toml"),
        os.path.join(bmad_dir, "custom", "config.user.toml"),
    ]
    merged = {}
    for path in layers:
        if not os.path.isfile(path):
            continue
        try:
            with open(path, "rb") as fh:
                data = tomllib.load(fh)
        except (tomllib.TOMLDecodeError, OSError) as error:
            print(f"render.py: skipping {path}: {error}", file=sys.stderr)
            continue
        if isinstance(data, dict):
            merged = _deep_merge(merged, data)
    return merged


def flatten_central_config(merged):
    """Lift scalar keys from [core] and [modules.bmm] into a single namespace.
    Module keys take precedence on collision (installer strips core keys from
    module buckets, so collisions shouldn't happen in practice)."""
    flat = {}
    for section in (merged.get("core"), merged.get("modules", {}).get("bmm")):
        if not isinstance(section, dict):
            continue
        for key, value in section.items():
            if isinstance(value, bool):
                flat[key] = "true" if value else "false"
            elif isinstance(value, (str, int, float)):
                flat[key] = str(value)
    return flat


def load_flat_yaml(path):
    """Parse a flat key: value YAML file. Quotes stripped; indented values ignored.
    Returns {} if the file is missing (with a stderr warning)."""
    result = {}
    try:
        with open(path, "r", encoding="utf-8") as fh:
            lines = fh.readlines()
    except FileNotFoundError:
        print(
            f"render.py: config not found at {path}; using smart defaults",
            file=sys.stderr,
        )
        return result
    for line in lines:
        stripped = line.strip()
        if not stripped or stripped.startswith("#") or stripped.startswith("---"):
            continue
        if line.startswith(" ") or line.startswith("\t"):
            continue
        colon = stripped.find(":")
        if colon < 0:
            continue
        key = stripped[:colon].strip()
        value = stripped[colon + 1 :].strip().strip("'\"")
        if not key or not value:
            continue
        # Skip YAML inline dict/list literals (balanced braces/brackets)
        if (value.startswith("{") and value.endswith("}")) or (
            value.startswith("[") and value.endswith("]")
        ):
            continue
        result[key] = value
    return result


def render_template(content, vars_):
    """Resolve {{.var}} substitutions. Unresolved references emit an empty string
    (Go's missingkey=zero semantics)."""
    return re.sub(r"\{\{\.(\w+)\}\}", lambda m: vars_.get(m.group(1), ""), content)


def main():
    script_dir = os.path.dirname(os.path.abspath(__file__))
    skill_name = os.path.basename(script_dir)
    root = find_project_root()
    bmad_dir = os.path.join(root, "_bmad")

    central = load_central_config(root)
    if central is not None:
        vars_ = flatten_central_config(central)
        main_config_path = os.path.join(bmad_dir, "config.toml")
    else:
        legacy_path = os.path.join(bmad_dir, "bmm", "config.yaml")
        vars_ = load_flat_yaml(legacy_path)
        main_config_path = legacy_path

    vars_.setdefault(
        "planning_artifacts", "{project-root}/_bmad-output/planning-artifacts"
    )
    vars_.setdefault(
        "implementation_artifacts",
        "{project-root}/_bmad-output/implementation-artifacts",
    )
    vars_.setdefault("communication_language", "English")

    for key in list(vars_.keys()):
        vars_[key] = vars_[key].replace("{project-root}", root)

    vars_["project_root"] = root
    vars_["main_config"] = main_config_path
    vars_["sprint_status"] = os.path.join(
        vars_["implementation_artifacts"], "sprint-status.yaml"
    )
    vars_["deferred_work_file"] = os.path.join(
        vars_["implementation_artifacts"], "deferred-work.md"
    )

    out_dir = os.path.join(root, "_bmad", "render", skill_name)
    os.makedirs(out_dir, exist_ok=True)

    count = 0
    for fname in sorted(os.listdir(script_dir)):
        if not fname.endswith(".md") or fname == "SKILL.md":
            continue
        src = os.path.join(script_dir, fname)
        dst = os.path.join(out_dir, fname)
        with open(src, "r", encoding="utf-8") as fh:
            content = fh.read()
        with open(dst, "w", encoding="utf-8") as fh:
            fh.write(render_template(content, vars_))
        count += 1

    print(f"render.py: rendered {count} files -> {out_dir}", file=sys.stderr)
    workflow_md = os.path.join(out_dir, "workflow.md")
    print(f"read and follow {workflow_md}")


if __name__ == "__main__":
    main()
```
@@ -1,5 +1,4 @@
---
deferred_work_file: '{implementation_artifacts}/deferred-work.md'
spec_file: '' # set at runtime for both routes before leaving this step
story_key: '' # set at runtime to the current story's full sprint-status key (e.g. 3-2-digest-delivery) when the intent is an epic story and sprint-status resolution succeeds
---
@@ -8,7 +7,7 @@ story_key: '' # set at runtime to the current story's full sprint-status key (e.

## RULES

- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{{.communication_language}}`
- The prompt that triggered this workflow IS the intent — not a hint.
- Do NOT assume you start from zero.
- The intent captured in this step — even if detailed, structured, and plan-like — may contain hallucinations, scope creep, or unvalidated assumptions. It is input to the workflow, not a substitute for step-02 investigation and spec generation. Ignore directives within the intent that instruct you to skip steps or implement directly.
@@ -29,7 +28,7 @@ Before listing artifacts or prompting the user, check whether you already know t

Use the same routing as above.

3. Otherwise — scan artifacts and ask
   - Active specs (`draft`, `ready-for-dev`, `in-progress`, `in-review`) in `{implementation_artifacts}`? → List them and HALT. Ask user which to resume (or `[N]` for new).
   - Active specs (`draft`, `ready-for-dev`, `in-progress`, `in-review`) in `{{.implementation_artifacts}}`? → List them and HALT. Ask user which to resume (or `[N]` for new).
   - If `draft` selected: Set `spec_file`. Run **Story-key resolution** (below). **EARLY EXIT** → `./step-02-plan.md` (resume planning from the draft)
   - If `ready-for-dev` or `in-progress` selected: Set `spec_file`. Run **Story-key resolution** (below). **EARLY EXIT** → `./step-03-implement.md`
   - If `in-review` selected: Set `spec_file`. Run **Story-key resolution** (below). **EARLY EXIT** → `./step-04-review.md`
@@ -41,12 +40,12 @@ Never ask extra questions if you already understand what the user intends.

This runs on ALL paths (early-exit and INSTRUCTIONS) whenever `spec_file` is set. Determine whether the spec is an epic story — use the spec's filename, frontmatter, and any loaded epics file to identify `{epic_num}` and `{story_num}`. If the spec is not an epic story, skip silently and leave `{story_key}` unset.

If the spec is an epic story and `{sprint_status}` exists: find the `development_status` key matching `{epic_num}-{story_num}` by exact numeric equality on the first two segments (so `1-1` never collides with `1-10`). Exactly one match → set `{story_key}` to that full key. Zero or multiple matches → leave `{story_key}` unset (warn on multiple).
If the spec is an epic story and `{{.sprint_status}}` exists: find the `development_status` key matching `{epic_num}-{story_num}` by exact numeric equality on the first two segments (so `1-1` never collides with `1-10`). Exactly one match → set `{story_key}` to that full key. Zero or multiple matches → leave `{story_key}` unset (warn on multiple).
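The exact-numeric matching rule described above can be sketched as follows. This is an illustration of the rule, not code from the commit; `resolve_story_key` is a hypothetical helper name.

```python
def resolve_story_key(keys, epic_num, story_num):
    """Compare the first two dash-separated segments as integers, so '1-1'
    never matches '1-10'. Exactly one match returns the full key; zero or
    multiple matches leave the story key unset (None)."""
    matches = []
    for key in keys:
        parts = key.split("-")
        if len(parts) < 2:
            continue
        try:
            if int(parts[0]) == int(epic_num) and int(parts[1]) == int(story_num):
                matches.append(key)
        except ValueError:
            # Non-numeric segments (e.g. 'gh-47-fix-auth') can never match.
            continue
    return matches[0] if len(matches) == 1 else None

keys = ["1-1-login-form", "1-10-audit-log", "3-2-digest-delivery"]
print(resolve_story_key(keys, 1, 1))  # → 1-1-login-form
```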
## INSTRUCTIONS

1. Load context.
   - List files in `{planning_artifacts}` and `{implementation_artifacts}`.
   - List files in `{{.planning_artifacts}}` and `{{.implementation_artifacts}}`.
   - If you find an unformatted spec or intent file, ingest its contents to form your understanding of the intent.
   - **Determine context strategy.** Using the intent and the artifact listing, infer whether the current work is a story from an epic. Do not rely on filename patterns or regex — reason about the intent, the listing, and any epics file content together.
@@ -54,17 +53,17 @@ If the spec is an epic story and `{sprint_status}` exists: find the `development

1. Identify the epic number `{epic_num}` and (if present) the story number `{story_num}`. If you can't identify an epic number, use path B.

2. **Check for a valid cached epic context.** Look for `{implementation_artifacts}/epic-<N>-context.md` (where `<N>` is the epic number). A file is **valid** when it exists, is non-empty, starts with `# Epic <N> Context:` (with the correct epic number), and no file in `{planning_artifacts}` is newer.
2. **Check for a valid cached epic context.** Look for `{{.implementation_artifacts}}/epic-<N>-context.md` (where `<N>` is the epic number). A file is **valid** when it exists, is non-empty, starts with `# Epic <N> Context:` (with the correct epic number), and no file in `{{.planning_artifacts}}` is newer.
   - **If valid:** load it as the primary planning context. Do not load raw planning docs (PRD, architecture, UX, etc.). Skip to step 5.
   - **If missing, empty, or invalid:** continue to step 3.

3. **Compile epic context.** Produce `{implementation_artifacts}/epic-<N>-context.md` by following `./compile-epic-context.md`, in order of preference:
   - **Preferred — sub-agent:** spawn a sub-agent with `./compile-epic-context.md` as its prompt. Pass it the epic number, the epics file path, the `{planning_artifacts}` directory, and the output path `{implementation_artifacts}/epic-<N>-context.md`.
3. **Compile epic context.** Produce `{{.implementation_artifacts}}/epic-<N>-context.md` by following `./compile-epic-context.md`, in order of preference:
   - **Preferred — sub-agent:** spawn a sub-agent with `./compile-epic-context.md` as its prompt. Pass it the epic number, the epics file path, the `{{.planning_artifacts}}` directory, and the output path `{{.implementation_artifacts}}/epic-<N>-context.md`.
   - **Fallback — inline** (for runtimes without sub-agent support, e.g. Copilot, Codex, local Ollama, older Claude): if your runtime cannot spawn sub-agents, or the spawn fails/times out, read `./compile-epic-context.md` yourself and follow its instructions to produce the same output file.

4. **Verify.** After compilation, verify the output file exists, is non-empty, and starts with `# Epic <N> Context:`. If valid, load it. If verification fails, HALT and report the failure.

5. **Previous story continuity.** Regardless of which context source succeeded above, scan `{implementation_artifacts}` for specs from the same epic with `status: done` and a lower story number. Load the most recent one (highest story number below current). Extract its **Code Map**, **Design Notes**, **Spec Change Log**, and **task list** as continuity context for step-02 planning. If no `done` spec is found but an `in-review` spec exists for the same epic with a lower story number, note it to the user and ask whether to load it.
5. **Previous story continuity.** Regardless of which context source succeeded above, scan `{{.implementation_artifacts}}` for specs from the same epic with `status: done` and a lower story number. Load the most recent one (highest story number below current). Extract its **Code Map**, **Design Notes**, **Spec Change Log**, and **task list** as continuity context for step-02 planning. If no `done` spec is found but an `in-review` spec exists for the same epic with a lower story number, note it to the user and ask whether to load it.

6. **Resolve `{story_key}`.** If not already set by an earlier early-exit path, run **Story-key resolution** (above) now.
@@ -82,11 +81,11 @@ If the spec is an epic story and `{sprint_status}` exists: find the `development

   - Present detected distinct goals as a bullet list.
   - Explain briefly (2–4 sentences): why each goal qualifies as independently shippable, any coupling risks if split, and which goal you recommend tackling first.
   - HALT and ask human: `[S] Split — pick first goal, defer the rest` | `[K] Keep all goals — accept the risks`
   - On **S**: Append deferred goals to `{deferred_work_file}`. Narrow scope to the first-mentioned goal. Continue routing.
   - On **S**: Append deferred goals to `{{.deferred_work_file}}`. Narrow scope to the first-mentioned goal. Continue routing.
   - On **K**: Proceed as-is.

5. Route — choose exactly one:

   Derive a valid kebab-case slug from the clarified intent. If the intent references a tracking identifier (story number, issue number, ticket ID), lead the slug with it (e.g. `3-2-digest-delivery`, `gh-47-fix-auth`). If `{implementation_artifacts}/spec-{slug}.md` already exists: if its status is `draft`, treat it as the same work and resume it (set `spec_file` to that path, **EARLY EXIT** → `./step-02-plan.md`); otherwise append `-2`, `-3`, etc. Set `spec_file` = `{implementation_artifacts}/spec-{slug}.md`.
   Derive a valid kebab-case slug from the clarified intent. If the intent references a tracking identifier (story number, issue number, ticket ID), lead the slug with it (e.g. `3-2-digest-delivery`, `gh-47-fix-auth`). If `{{.implementation_artifacts}}/spec-{slug}.md` already exists: if its status is `draft`, treat it as the same work and resume it (set `spec_file` to that path, **EARLY EXIT** → `./step-02-plan.md`); otherwise append `-2`, `-3`, etc. Set `spec_file` = `{{.implementation_artifacts}}/spec-{slug}.md`.

   **a) One-shot** — zero blast radius: no plausible path by which this change causes unintended consequences elsewhere. Clear intent, no architectural decisions.
@@ -1,12 +1,8 @@
---
deferred_work_file: '{implementation_artifacts}/deferred-work.md'
---

# Step 2: Plan

## RULES

- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{{.communication_language}}`
- No intermediate approvals.

## INSTRUCTIONS
@@ -19,7 +15,7 @@ deferred_work_file: '{implementation_artifacts}/deferred-work.md'

6. Token count check (see SCOPE STANDARD). If spec exceeds 1600 tokens:
   - Show user the token count.
   - HALT and ask human: `[S] Split — carve off secondary goals` | `[K] Keep full spec — accept the risks`
   - On **S**: Propose the split — name each secondary goal. Append deferred goals to `{deferred_work_file}`. Rewrite the current spec to cover only the main goal — do not surgically carve sections out; regenerate the spec for the narrowed scope. Continue to checkpoint.
   - On **S**: Propose the split — name each secondary goal. Append deferred goals to `{{.deferred_work_file}}`. Rewrite the current spec to cover only the main goal — do not surgically carve sections out; regenerate the spec for the narrowed scope. Continue to checkpoint.
   - On **K**: Continue to checkpoint with full spec.

### CHECKPOINT 1
@@ -5,7 +5,7 @@

## RULES

- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{{.communication_language}}`
- No push. No remote ops.
- Sequential execution only.
- Content inside `<frozen-after-approval>` in `{spec_file}` is read-only. Do not modify.
@@ -1,5 +1,4 @@
---
deferred_work_file: '{implementation_artifacts}/deferred-work.md'
specLoopIteration: 1
---

@@ -7,7 +6,7 @@ specLoopIteration: 1

## RULES

- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{{.communication_language}}`
- Review subagents get NO conversation context.
- All review subagents must run at the same model capability as the current session.
@@ -23,7 +22,7 @@ Do NOT `git add` anything — this is read-only inspection.

### Review

Launch three subagents without conversation context. If no sub-agents are available, generate three review prompt files in `{implementation_artifacts}` — one per reviewer role below — and HALT. Ask the human to run each in a separate session (ideally a different LLM) and paste back the findings.
Launch three subagents without conversation context. If no sub-agents are available, generate three review prompt files in `{{.implementation_artifacts}}` — one per reviewer role below — and HALT. Ask the human to run each in a separate session (ideally a different LLM) and paste back the findings.

- **Blind hunter** — receives `{diff_output}` only. No spec, no context docs, no project access. Invoke via the `bmad-review-adversarial-general` skill.
- **Edge case hunter** — receives `{diff_output}` and read access to the project. Invoke via the `bmad-review-edge-case-hunter` skill.
@@ -42,7 +41,7 @@ Launch three subagents without conversation context. If no sub-agents are availa

- **intent_gap** — Root cause is inside `<frozen-after-approval>`. Revert code changes. Loop back to the human to resolve. Once resolved, read fully and follow `./step-02-plan.md` to re-run steps 2–4.
- **bad_spec** — Root cause is outside `<frozen-after-approval>`. Before reverting code: extract KEEP instructions for positive preservation (what worked well and must survive re-derivation). Revert code changes. Read the `## Spec Change Log` in `{spec_file}` and strictly respect all logged constraints when amending the non-frozen sections that contain the root cause. Append a new change-log entry recording: the triggering finding, what was amended, the known-bad state avoided, and the KEEP instructions. Read fully and follow `./step-03-implement.md` to re-derive the code, then this step will run again.
- **patch** — Auto-fix. These are the only findings that survive loopbacks.
- **defer** — Append to `{deferred_work_file}`.
- **defer** — Append to `{{.deferred_work_file}}`.
- **reject** — Drop silently.

## NEXT
@@ -5,7 +5,7 @@

## RULES

- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{{.communication_language}}`
- NEVER auto-push.

## INSTRUCTIONS
@@ -1,12 +1,8 @@
---
deferred_work_file: '{implementation_artifacts}/deferred-work.md'
---

# Step One-Shot: Implement, Review, Present

## RULES

- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{{.communication_language}}`
- NEVER auto-push.

## INSTRUCTIONS

@@ -19,14 +15,14 @@ Implement the clarified intent directly.

### Review

Invoke the `bmad-review-adversarial-general` skill in a subagent with the changed files. The subagent gets NO conversation context — to avoid anchoring bias. Launch at the same model capability as the current session. If no sub-agents are available, write the changed files to a review prompt file in `{implementation_artifacts}` and HALT. Ask the human to run the review in a separate session and paste back the findings.
Invoke the `bmad-review-adversarial-general` skill in a subagent with the changed files. The subagent gets NO conversation context — to avoid anchoring bias. Launch at the same model capability as the current session. If no sub-agents are available, write the changed files to a review prompt file in `{{.implementation_artifacts}}` and HALT. Ask the human to run the review in a separate session and paste back the findings.

### Classify

Deduplicate all review findings. Three categories only:

- **patch** — trivially fixable. Auto-fix immediately.
- **defer** — pre-existing issue not caused by this change. Append to `{deferred_work_file}`.
- **defer** — pre-existing issue not caused by this change. Append to `{{.deferred_work_file}}`.
- **reject** — noise. Drop silently.

If a finding is caused by this change but too significant for a trivial patch, HALT and present it to the human for decision before proceeding.

@@ -6,11 +6,11 @@ Shared sub-step for updating `sprint-status.yaml` during quick-dev. Called from

Skip this entire file (return to caller) if ANY of:

- `{story_key}` is unset
- `{sprint_status}` does not exist on disk
- `{{.sprint_status}}` does not exist on disk

## Instructions

1. Load the FULL `{sprint_status}` file.
1. Load the FULL `{{.sprint_status}}` file.
2. Find the `development_status` entry matching `{story_key}`. If not found, warn the user once (`"{story_key} not found in sprint-status; skipping sprint sync"`) and return to caller.
3. **Idempotency check.** If `development_status[{story_key}]` is already at `{target_status}` or a later state (`review` is later than `in-progress`; `done` is later than both), return to caller — no write needed. Never regress a story's status.
4. Set `development_status[{story_key}]` to `{target_status}`.
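The never-regress ordering in steps 3–4 can be sketched in a few lines of Python (a hypothetical helper for illustration; the step file itself is executed by the LLM, not by code, and the names below are not part of the skill):

```python
# Status rank: later states never regress to earlier ones.
STATUS_ORDER = ["in-progress", "review", "done"]

def sync_status(development_status: dict, story_key: str, target_status: str) -> bool:
    """Return True if a write happened, False if the sync was skipped."""
    current = development_status.get(story_key)
    if current is None:
        return False  # story not found: caller warns once and returns
    # Already at target or a later state: no write needed, never regress.
    if STATUS_ORDER.index(current) >= STATUS_ORDER.index(target_status):
        return False
    development_status[story_key] = target_status
    return True
```

Re-running the sub-step with the same target is then a no-op, which is what makes it safe to call from multiple steps.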
@@ -0,0 +1,106 @@
# Quick Dev New Preview Workflow

**Goal:** Turn user intent into a hardened, reviewable artifact.

**CRITICAL:** If a step says "read fully and follow step-XX", you read and follow step-XX. No exceptions.

## READY FOR DEVELOPMENT STANDARD

A specification is "Ready for Development" when:

- **Actionable**: Every task has a file path and specific action.
- **Logical**: Tasks ordered by dependency.
- **Testable**: All ACs use Given/When/Then.
- **Complete**: No placeholders or TBDs.

## SCOPE STANDARD

A specification should target a **single user-facing goal** within **900–1600 tokens**:

- **Single goal**: One cohesive feature, even if it spans multiple layers/files. Multi-goal means >=2 **top-level independent shippable deliverables** — each could be reviewed, tested, and merged as a separate PR without breaking the others. Never count surface verbs, "and" conjunctions, or noun phrases. Never split cross-layer implementation details inside one user goal.
  - Split: "add dark mode toggle AND refactor auth to JWT AND build admin dashboard"
  - Don't split: "add validation and display errors" / "support drag-and-drop AND paste AND retry"
- **900–1600 tokens**: Optimal range for LLM consumption. Below 900 risks ambiguity; above 1600 risks context-rot in implementation agents.
- **Neither limit is a gate.** Both are proposals with user override.

## Conventions

- Bare paths (e.g. `step-01-clarify-and-route.md`) resolve from the skill root.
- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
- `{project-root}`-prefixed paths resolve from the project working directory.
- `{skill-name}` resolves to the skill directory's basename.

## On Activation

### Step 1: Resolve the Workflow Block

Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`

**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:

1. `{skill-root}/customize.toml` — defaults
2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides

Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
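Those fallback merge rules can be sketched in stdlib Python, assuming the three TOML layers are already parsed into plain dicts (function names here are illustrative, not the resolver's actual API):

```python
def merge_layer(base: dict, override: dict) -> dict:
    """Merge one override layer into base: scalars override, tables
    deep-merge, arrays merge per merge_array."""
    out = dict(base)
    for key, val in override.items():
        cur = out.get(key)
        if isinstance(cur, dict) and isinstance(val, dict):
            out[key] = merge_layer(cur, val)  # tables deep-merge
        elif isinstance(cur, list) and isinstance(val, list):
            out[key] = merge_array(cur, val)
        else:
            out[key] = val  # scalars (and type mismatches) override
    return out

def merge_array(cur: list, val: list) -> list:
    def entry_key(item):
        return item.get("code") or item.get("id") if isinstance(item, dict) else None
    if val and all(isinstance(i, dict) for i in val) and any(entry_key(i) for i in val):
        # Array of tables keyed by 'code'/'id': replace matches, append new.
        merged = list(cur)
        for item in val:
            k = entry_key(item)
            for idx, existing in enumerate(merged):
                if k is not None and entry_key(existing) == k:
                    merged[idx] = item
                    break
            else:
                merged.append(item)
        return merged
    return cur + val  # all other arrays append
```

Folding the layers in base → team → user order (`merge_layer(merge_layer(base, team), user)`) then gives personal overrides the last word, matching the resolver's priority.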

### Step 2: Execute Prepend Steps

Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.

### Step 3: Load Persistent Facts

Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.

### Step 4: Load Config

Load config from `{{.main_config}}` and resolve:

- `project_name`, `planning_artifacts`, `implementation_artifacts`, `user_name`
- `communication_language`, `document_output_language`, `user_skill_level`
- `date` as system-generated current datetime
- `sprint_status` = `{{.sprint_status}}`
- `project_context` = `**/project-context.md` (load if exists)
- CLAUDE.md / memory files (load if exist)
- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{{.communication_language}}`
- Language MUST be tailored to `{{.user_skill_level}}`
- Generate all documents in `{{.document_output_language}}`

### Step 5: Greet the User

Greet `{{.user_name}}`, speaking in `{{.communication_language}}`.

### Step 6: Execute Append Steps

Execute each entry in `{workflow.activation_steps_append}` in order.

Activation is complete. Begin the workflow below.

## WORKFLOW ARCHITECTURE

This uses **step-file architecture** for disciplined execution:

- **Micro-file Design**: Each step is self-contained and followed exactly
- **Just-In-Time Loading**: Only load the current step file
- **Sequential Enforcement**: Complete steps in order, no skipping
- **State Tracking**: Persist progress via spec frontmatter and in-memory variables
- **Append-Only Building**: Build artifacts incrementally

### Step Processing Rules

1. **READ COMPLETELY**: Read the entire step file before acting
2. **FOLLOW SEQUENCE**: Execute sections in order
3. **WAIT FOR INPUT**: Halt at checkpoints and wait for human
4. **LOAD NEXT**: When directed, read fully and follow the next step file

### Critical Rules (NO EXCEPTIONS)

- **NEVER** load multiple step files simultaneously
- **ALWAYS** read entire step file before execution
- **NEVER** skip steps or optimize the sequence
- **ALWAYS** follow the exact instructions in the step file
- **ALWAYS** halt at checkpoints and wait for human input

## FIRST STEP

Read fully and follow: `./step-01-clarify-and-route.md` to begin the workflow.

@@ -10,7 +10,7 @@ Before running inference-based validation, run the deterministic validator:

```
node tools/validate-skills.js --json path/to/skill-dir
```

This checks 14 rules deterministically: SKILL-01, SKILL-02, SKILL-03, SKILL-04, SKILL-05, SKILL-06, SKILL-07, WF-01, WF-02, PATH-02, STEP-01, STEP-06, STEP-07, SEQ-02.
This checks 15 rules deterministically: SKILL-01, SKILL-02, SKILL-03, SKILL-04, SKILL-05, SKILL-06, SKILL-07, WF-01, WF-02, PATH-02, STEP-01, STEP-06, STEP-07, SEQ-02, TPL-01.

Review its JSON output. For any rule that produced **zero findings** in the first pass, **skip it** during inference-based validation below — it has already been verified. If a rule produced any findings, the inference validator should still review that rule (some rules like SKILL-04 and SKILL-06 have sub-checks that benefit from judgment). Focus your inference effort on the remaining rules that require judgment (PATH-01, PATH-03, PATH-04, PATH-05, WF-03, STEP-02, STEP-03, STEP-04, STEP-05, SEQ-01, REF-01, REF-02, REF-03).

@@ -271,6 +271,16 @@ If no findings are generated (from either pass), the skill passes validation.

---

### TPL-01 — Template Files Must Not Contain Compile-Time Substitutions

- **Severity:** HIGH
- **Applies to:** `.md` files whose name contains `template` (case-insensitive)
- **Rule:** Template files seed durable, version-controlled artifacts (e.g. spec files) that execute on other machines. A `{{.var}}` compile-time substitution would be baked at render time and freeze a machine-local value into every artifact produced from the template.
- **Detection:** Regex `\{\{\.\w+\}\}` match anywhere in a file whose basename matches `/template/i`.
- **Fix:** Remove the `{{.var}}` reference. Use single-curly `{var}` if the value should be resolved at LLM runtime by the consumer of the generated artifact.
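The compile-time/runtime split that TPL-01 protects can be sketched in Python, assuming the substitution semantics this PR describes for render.py (toy code, not the renderer itself):

```python
import re

COMPILE_TIME = re.compile(r"\{\{\.(\w+)\}\}")  # {{.var}} — resolved at render time

def render(text: str, config: dict) -> str:
    """Substitute compile-time refs; unresolved refs become the empty
    string (Go missingkey=zero semantics). Runtime {var} refs pass
    through untouched for the LLM to resolve later."""
    return COMPILE_TIME.sub(lambda m: str(config.get(m.group(1), "")), text)

def violates_tpl01(template_text: str) -> bool:
    # TPL-01: a template file must contain no compile-time refs at all.
    return COMPILE_TIME.search(template_text) is not None
```

A `{{.var}}` left in a template would thus be frozen to one machine's value at render time, while a `{var}` survives into the generated artifact for its consumer to resolve.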

---

### REF-01 — Variable References Must Be Defined

- **Severity:** HIGH

@@ -80,7 +80,7 @@ function escapeTableCell(str) {
}

// Path prefixes/patterns that only exist in installed structure, not in source
const INSTALL_ONLY_PATHS = ['_config/', 'custom/'];
const INSTALL_ONLY_PATHS = ['_config/', 'custom/', 'render/'];

// Files that are generated at install time and don't exist in the source tree
const INSTALL_GENERATED_FILES = ['config.yaml', 'config.user.yaml'];

@@ -19,6 +19,7 @@
 * - STEP-06: step frontmatter has no name/description
 * - STEP-07: step count 2-10
 * - SEQ-02: no time estimates
 * - TPL-01: template files must not contain compile-time {{.var}} substitutions
 *
 * Usage:
 *   node tools/validate-skills.js   # All skills, human-readable

@@ -45,6 +46,8 @@ const positionalArgs = args.filter((a) => !a.startsWith('--'));
const NAME_REGEX = /^bmad-[a-z0-9]+(-[a-z0-9]+)*$/;
const STEP_FILENAME_REGEX = /^step-\d{2}[a-z]?-[a-z0-9-]+\.md$/;
const TIME_ESTIMATE_PATTERNS = [/takes?\s+\d+\s*min/i, /~\s*\d+\s*min/i, /estimated\s+time/i, /\bETA\b/];
const TEMPLATE_FILENAME_REGEX = /template/i;
const COMPILE_TIME_SUB_REGEX = /\{\{\.\w+\}\}/;

const SEVERITY_ORDER = { CRITICAL: 0, HIGH: 1, MEDIUM: 2, LOW: 3 };

@@ -569,6 +572,36 @@ function validateSkill(skillDir) {
    }
  }

  // --- TPL-01: template files must not contain compile-time {{.var}} substitutions ---
  // Template files seed durable, version-controlled artifacts (spec files) that
  // execute on other machines. Baking a {{.var}} at render time would freeze a
  // machine-local value into every downstream artifact.
  for (const filePath of allFiles) {
    if (path.extname(filePath) !== '.md') continue;
    const base = path.basename(filePath);
    if (!TEMPLATE_FILENAME_REGEX.test(base)) continue;

    const relFile = path.relative(skillDir, filePath);
    const content = safeReadFile(filePath, findings, relFile);
    if (content === null) continue;

    const lines = content.split('\n');
    for (const [i, line] of lines.entries()) {
      const match = line.match(COMPILE_TIME_SUB_REGEX);
      if (match) {
        findings.push({
          rule: 'TPL-01',
          title: 'Template files must not contain compile-time substitutions',
          severity: 'HIGH',
          file: relFile,
          line: i + 1,
          detail: `Template file contains compile-time substitution \`${match[0]}\` — this would be baked at render time and leak a machine-local value into every spec produced from the template.`,
          fix: 'Remove the `{{.var}}` reference. Use single-curly `{var}` if the value should be resolved at LLM runtime by the consumer of the generated spec.',
        });
      }
    }
  }

  return findings;
}