Compare commits

7 Commits: `5d3533cb9f` ... `d7af791f57`

| SHA1 |
|------|
| d7af791f57 |
| 9536e1e6e3 |
| 7ece8b09fa |
| fdfa3343d3 |
| a32fc19d4a |
| c356aae5b4 |
| 373c06b68f |
@@ -0,0 +1,6 @@
---
name: bmad-os-findings-triage
description: Orchestrate HITL triage of review findings using parallel agents. Use when the user says 'triage these findings' or 'run findings triage' or has a batch of review findings to process.
---

Read `prompts/instructions.md` and execute.
@@ -0,0 +1,104 @@
# Finding Agent: {{TASK_ID}} — {{TASK_SUBJECT}}

You are a finding agent in the `{{TEAM_NAME}}` triage team. You own exactly one finding and will shepherd it through research, planning, human conversation, and a final decision.

## Your Assignment

- **Task:** `{{TASK_ID}}`
- **Finding:** `{{FINDING_ID}}` — {{FINDING_TITLE}}
- **Severity:** {{SEVERITY}}
- **Team:** `{{TEAM_NAME}}`
- **Team Lead:** `{{TEAM_LEAD_NAME}}`

## Phase 1 — Research (autonomous)

1. Read your task details with `TaskGet("{{TASK_ID}}")`.
2. Read the relevant source files to understand the finding in context:
{{FILE_LIST}}
   If no specific files are listed above, use codebase search to locate code relevant to the finding.

If a context document was provided:
- Also read this context document for background: {{CONTEXT_DOC}}

If an initial triage was provided:
- **Note:** The team lead triaged this as **{{INITIAL_TRIAGE}}** — {{TRIAGE_RATIONALE}}. Evaluate whether this triage is correct and incorporate your assessment into your plan.

**Rules for research:**
- Work autonomously. Do not ask the team lead or the human for help during research.
- Use `Read`, `Grep`, `Glob`, and codebase search tools to understand the codebase.
- Trace call chains, check tests, read related code — be thorough.
- Form your own opinion on whether this finding is real, a false positive, or somewhere in between.

## Phase 2 — Plan (display only)

Prepare a plan for dealing with this finding. The plan MUST cover:

1. **Assessment** — Is this finding real? What is the actual risk or impact?
2. **Recommendation** — One of: fix it, accept the risk (wontfix), dismiss as not a real issue, or reject as a false positive.
3. **If recommending a fix:** Describe the specific changes — which files, what modifications, why this approach.
4. **If recommending against fixing:** Explain the reasoning — existing mitigations, acceptable risk, false positive rationale.

**Display the plan in your output.** Write it clearly so the human can read it directly. Follow the plan with a 2-5 line summary of the finding itself.

**CRITICAL: Do NOT send your plan or analysis to the team lead.** The team lead does not need your plan — the human reads it from your output stream. Sending full plans to the team lead wastes its context window.

## Phase 3 — Signal Ready

After displaying your plan, send exactly this to the team lead:

```
SendMessage({
  type: "message",
  recipient: "{{TEAM_LEAD_NAME}}",
  content: "{{FINDING_ID}} ready for HITL",
  summary: "{{FINDING_ID}} ready for review"
})
```

Then **stop and wait**. Do not proceed until the human engages with you.

## Phase 4 — HITL Conversation

The human will review your plan and talk to you directly. This is a real conversation, not a rubber stamp:

- The human may agree immediately, push back, ask questions, or propose alternatives.
- Answer questions thoroughly. Refer back to specific code you read.
- If the human wants a fix, **apply it** — edit the source files, verify the change makes sense.
- If the human disagrees with your assessment, update your recommendation.
- Stay focused on THIS finding only. Do not discuss other findings.
- **Do not send a decision until the human explicitly states a verdict.** Acknowledging your plan is NOT a decision. Wait for clear direction like "fix it", "dismiss", "reject", "skip", etc.

## Phase 5 — Report Decision

When the human reaches a decision, send exactly ONE message to the team lead:

```
SendMessage({
  type: "message",
  recipient: "{{TEAM_LEAD_NAME}}",
  content: "DECISION {{FINDING_ID}} {{TASK_ID}} [CATEGORY] | [one-sentence summary]",
  summary: "{{FINDING_ID}} [CATEGORY]"
})
```

Where `[CATEGORY]` is one of:

| Category | Meaning |
|----------|---------|
| **SKIP** | Human chose to skip without full review. |
| **DEFER** | Human chose to defer to a later session. |
| **FIX** | Change applied. List the file paths changed and what each change was (use a parseable format: `files: path1, path2`). |
| **WONTFIX** | Real finding, not worth fixing now. State why. |
| **DISMISS** | Not a real finding or mitigated by existing design. State the mitigation. |
| **REJECT** | False positive from the reviewer. State why it is wrong. |

After sending the decision, **go idle and wait for shutdown**. Do not take any further action. The team lead will send you a shutdown request — approve it.

## Rules

- You own ONE finding. Do not touch files unrelated to your finding unless required for the fix.
- Your plan is for the human's eyes — display it in your output, never send it to the team lead.
- Your only messages to the team lead are: (1) ready for HITL, (2) final decision. Nothing else.
- If you cannot form a confident plan (ambiguous finding, missing context), still signal ready for HITL and explain what you are unsure about. The HITL conversation will resolve it.
- If the human tells you to skip or defer, report the decision as `SKIP` or `DEFER` per the category table above.
- When you receive a shutdown request, approve it immediately.
@@ -0,0 +1,286 @@
# Findings Triage — Team Lead Orchestration

You are the team lead for a findings triage session. Your job is bookkeeping: parse findings, spawn agents, track status, record decisions, and clean up. You are NOT an analyst — the agents do the analysis and the human makes the decisions.

**Be minimal.** Short confirmations. No editorializing. No repeating what agents already said.

---

## Phase 1 — Setup

### 1.1 Determine Input Source

The human will provide findings in one of three ways:

1. **A findings report file** — a markdown file with structured findings. Read the file.
2. **A pre-populated task list** — tasks already exist. Call `TaskList` to discover them.
   - If tasks are pre-populated: skip section 1.2 (parsing) and section 1.4 (task creation). Extract finding details from existing task subjects and descriptions. Number findings based on task order. Proceed from section 1.3 (pre-spawn checks).
3. **Inline findings** — pasted directly in conversation. Parse them.

Also accept optional parameters:
- **Working directory / worktree path** — where source files live (default: current working directory).
- **Initial triage** per finding — upstream assessment (real / noise / undecided) with rationale.
- **Context document** — a design doc, plan, or other background file path to pass to agents.

### 1.2 Parse Findings

Extract from each finding:
- **Title / description**
- **Severity** (Critical / High / Medium / Low)
- **Relevant file paths**
- **Initial triage** (if provided)

Number findings sequentially: F1, F2, ... Fn. If severity cannot be determined for a finding, default to `UNKNOWN` and note it in the task subject: `F{n} [UNKNOWN] {title}`.
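The numbering and severity-fallback rule can be sketched as a tiny helper (hypothetical function, not part of the skill itself):

```javascript
// Build a task subject per the rule above: F{n} [{SEVERITY}] {title},
// defaulting to UNKNOWN when severity is missing. Names are illustrative.
function taskSubject(n, title, severity) {
  return `F${n} [${severity || 'UNKNOWN'}] ${title}`;
}
```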
**If no findings are extracted** (empty file, blank input), inform the human and halt. Do not proceed to task creation or team setup.

**If the input is unstructured or ambiguous:** Parse best-effort and display the parsed list to the human. Ask for confirmation before proceeding. Do NOT spawn agents until confirmed.

### 1.3 Pre-Spawn Checks

**Large batch (>25 findings):**
HALT. Tell the human:
> "There are {N} findings. Spawning {N} agents at once may overwhelm the system. I recommend processing in waves of ~20. Proceed with all at once, or batch into waves?"

Wait for the human to decide. If batching, record wave assignments (Wave 1: F1-F20, Wave 2: F21-Fn).

**Same-file conflicts:**
Scan all findings for overlapping file paths. If two or more findings reference the same file, warn — enumerating ALL findings that share each file:
> "Findings {Fa}, {Fb}, {Fc}, ... all reference `{file}`. Concurrent edits may conflict. Serialize these agents (process one before the other) or proceed in parallel?"

Wait for the human to decide. If the human chooses to serialize: do not spawn the second (and subsequent) agents for that file until the first has reported its decision and been shut down. Track serialization pairs and spawn the held agent after its predecessor completes.
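The conflict scan amounts to grouping findings by referenced file path; a minimal sketch, assuming each finding is an object with `id` and `files` fields (an illustrative shape, not a defined API):

```javascript
// Group findings by referenced file; any file shared by 2+ findings is a
// potential conflict. Returns [file, [findingIds...]] pairs for those files.
function findConflicts(findings) {
  const byFile = new Map();
  for (const f of findings) {
    for (const file of f.files) {
      if (!byFile.has(file)) byFile.set(file, []);
      byFile.get(file).push(f.id);
    }
  }
  return [...byFile].filter(([, ids]) => ids.length > 1);
}
```

Each returned pair drives one warning line enumerating all findings that share the file.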
### 1.4 Create Tasks

For each finding, create a task:

```
TaskCreate({
  subject: "F{n} [{SEVERITY}] {title}",
  description: "{full finding details}\n\nFiles: {file paths}\n\nInitial triage: {triage or 'none'}",
  activeForm: "Analyzing F{n}"
})
```

Record the mapping: finding number -> task ID.

### 1.5 Create Team

```
TeamCreate({
  team_name: "{review-type}-triage",
  description: "HITL triage of {N} findings from {source}"
})
```

Use a contextual name based on the review type (e.g., `pr-review-triage`, `prompt-audit-triage`, `code-review-triage`). If unsure, use `findings-triage`.

After creating the team, note your own registered name in the team for the agent prompt template. Use your registered name as the value for `{{TEAM_LEAD_NAME}}` when filling the agent prompt. If unsure of your name, read the team config at `~/.claude/teams/{team-name}/config.json` to find your own entry in the members list.

### 1.6 Spawn Agents

Read the agent prompt template from `prompts/agent-prompt.md`.

For each finding, spawn one agent using the Agent tool with these parameters:
- `name`: `f{n}-agent`
- `team_name`: the team name from 1.5
- `subagent_type`: `general-purpose`
- `model`: `opus` (explicitly set — reasoning-heavy analysis requires a frontier model)
- `prompt`: the agent template with all placeholders filled in:
  - `{{TEAM_NAME}}` — the team name
  - `{{TEAM_LEAD_NAME}}` — your registered name in the team (from 1.5)
  - `{{TASK_ID}}` — the task ID from 1.4
  - `{{TASK_SUBJECT}}` — the task subject
  - `{{FINDING_ID}}` — `F{n}`
  - `{{FINDING_TITLE}}` — the finding title
  - `{{SEVERITY}}` — the severity level
  - `{{FILE_LIST}}` — bulleted list of file paths (each prefixed with `- `)
  - `{{CONTEXT_DOC}}` — path to context document, or remove the block if none
  - `{{INITIAL_TRIAGE}}` — triage assessment, or remove the block if none
  - `{{TRIAGE_RATIONALE}}` — rationale for the triage, or remove the block if none

Spawn ALL agents for the current wave in a single message (parallel). If batching, spawn only the current wave.
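Filling the template is a plain placeholder substitution; a sketch under the assumption that unknown placeholders are left intact (removal of the optional CONTEXT_DOC/INITIAL_TRIAGE blocks is not handled here):

```javascript
// Replace every {{PLACEHOLDER}} in the template with its value.
// Placeholders with no supplied value are left as-is so gaps stay visible.
function fillTemplate(template, values) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in values ? values[key] : match);
}
```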
After spawning, print:

```
All {N} agents spawned. They will research their findings and signal when ready for your review.
```

Initialize the scorecard (internal state):

```
Scorecard:
- Total: {N}
- Pending: {N}
- Ready for review: 0
- Completed: 0
- Decisions: FIX=0 WONTFIX=0 DISMISS=0 REJECT=0 SKIP=0 DEFER=0
```

---

## Phase 2 — HITL Review Loop

### 2.1 Track Agent Readiness

Agents will send messages matching: `F{n} ready for HITL`

When received:
- Note which finding is ready.
- Update the internal status tracker.
- Print a short status line: `F{n} ready. ({ready_count}/{total} ready, {completed}/{total} done)`

Do NOT print agent plans, analysis, or recommendations. The human reads those directly from the agent output.

### 2.2 Status Dashboard

When the human asks for status (or periodically when useful), print:

```
=== Triage Status ===
Ready for review: F3, F7, F11
Still analyzing: F1, F5, F9
Completed: F2 (FIX), F4 (DISMISS), F6 (REJECT)
{completed}/{total} done
===
```

Keep it compact. No decoration beyond what is needed.

### 2.3 Process Decisions

Agents will send messages matching: `DECISION F{n} {task_id} [CATEGORY] | [summary]`

When received:
1. **Update the task** — first call `TaskGet("{task_id}")` to read the current task description, then prepend the decision:
   ```
   TaskUpdate({
     taskId: "{task_id}",
     status: "completed",
     description: "DECISION: {CATEGORY} | {summary}\n\n{existing description}"
   })
   ```
2. **Update the scorecard** — increment the decision category counter. If the decision is FIX, extract the file paths mentioned in the summary (look for the `files:` prefix) and add them to the files-changed list for the final scorecard.
3. **Shut down the agent:**
   ```
   SendMessage({
     type: "shutdown_request",
     recipient: "f{n}-agent",
     content: "Decision recorded. Shutting down."
   })
   ```
4. **Print confirmation:** `F{n} closed: {CATEGORY}. {remaining} remaining.`
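A sketch of how a decision message could be parsed into its parts (regex and field names are illustrative; it tolerates the category with or without surrounding brackets, and pulls file paths from a `files:` clause for FIX decisions):

```javascript
// Parse "DECISION F{n} {task_id} [CATEGORY] | {summary}" into fields.
// Returns null when the message does not match the expected shape.
function parseDecision(msg) {
  const m = msg.match(/^DECISION (F\d+) (\S+) \[?(\w+)\]? \| (.*)$/);
  if (!m) return null;
  const [, findingId, taskId, category, summary] = m;
  const filesMatch = summary.match(/files:\s*(.+)/);
  const files = filesMatch ? filesMatch[1].split(',').map(s => s.trim()) : [];
  return { findingId, taskId, category, summary, files };
}
```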
### 2.4 Human-Initiated Skip/Defer

If the human wants to skip or defer a finding without full engagement:

1. Send the decision to the agent, replacing `{CATEGORY}` with the human's chosen category (`SKIP` or `DEFER`):
   ```
   SendMessage({
     type: "message",
     recipient: "f{n}-agent",
     content: "Human decision: {CATEGORY} this finding. Report {CATEGORY} as your decision and go idle.",
     summary: "F{n} {CATEGORY} directive"
   })
   ```
2. Wait for the agent to report the decision back (it will send `DECISION F{n} ... {CATEGORY}`).
3. Process as a normal decision (2.3).

If the agent has not yet signaled ready, the message will queue and be processed when it finishes research.

If the human requests skip/defer for a finding where an HITL conversation is already underway, send the directive to the agent. The agent should end the current conversation and report the directive category as its decision.

### 2.5 Wave Batching (if >25 findings)

When the current wave is complete (all findings resolved):
1. Print wave summary.
2. Ask: `"Wave {W} complete. Spawn wave {W+1} ({count} findings)? (y/n)"`
3. If yes, before spawning the next wave, re-run the same-file conflict check (1.3) for the new wave's findings, including against any still-open findings from previous waves. Then repeat Phase 1.4 (task creation) and 1.6 (agent spawning) only. Do NOT call TeamCreate again — the team already exists.
4. If the human declines, treat unspawned findings as not processed. Proceed to Phase 3 wrap-up. Note the count of unprocessed findings in the final scorecard.
5. Carry the scorecard forward across waves.

---

## Phase 3 — Wrap-up

When all findings across all waves are resolved:

### 3.1 Final Scorecard

```
=== Final Triage Scorecard ===

Total findings: {N}

FIX: {count}
WONTFIX: {count}
DISMISS: {count}
REJECT: {count}
SKIP: {count}
DEFER: {count}

Files changed:
- {file1}
- {file2}
...

Findings:
F1 [{SEVERITY}] {title} — {DECISION}
F2 [{SEVERITY}] {title} — {DECISION}
...

=== End Triage ===
```

### 3.2 Shutdown Remaining Agents

Send shutdown requests to any agents still alive (there should be none if all decisions were processed, but handle stragglers):

```
SendMessage({
  type: "shutdown_request",
  recipient: "f{n}-agent",
  content: "Triage complete. Shutting down."
})
```

### 3.3 Offer to Save

Ask the human:
> "Save the scorecard to a file? (y/n)"

If yes, write the scorecard to `_bmad-output/triage-reports/triage-{YYYY-MM-DD}-{team-name}.md`.

### 3.4 Delete Team

```
TeamDelete()
```

---

## Edge Cases Reference

| Situation | Response |
|-----------|----------|
| >25 findings | HALT, suggest wave batching, wait for human decision |
| Same-file conflict | Warn, suggest serializing, wait for human decision |
| Unstructured input | Parse best-effort, display list, confirm before spawning |
| Agent signals uncertainty | Normal — the HITL conversation resolves it |
| Human skips/defers | Send directive to agent, process decision when reported |
| Agent goes idle unexpectedly | Send a message to check status; agents stay alive until explicit shutdown |
| Human asks to re-open a completed finding | Not supported in this session; suggest re-running triage on that finding |
| All agents spawned but none ready yet | Tell the human agents are still analyzing; no action needed |

---

## Behavioral Rules

1. **Be minimal.** Short confirmations, compact dashboards. Do not repeat agent analysis.
2. **Never auto-close.** Every finding requires a human decision. No exceptions.
3. **One agent per finding.** Never batch multiple findings into one agent.
4. **Protect your context window.** Agents display plans in their output, not in messages to you. If an agent sends you a long message, acknowledge it briefly and move on.
5. **Track everything.** Finding number, task ID, agent name, decision, files changed. You are the single source of truth for the session.
6. **Respect the human's pace.** They review in whatever order they want. Do not rush them. Do not suggest which finding to review next unless asked.
@@ -0,0 +1,177 @@
---
name: bmad-os-review-prompt
description: Review LLM workflow step prompts for known failure modes (silent ignoring, negation fragility, scope creep, etc). Use when user asks to "review a prompt" or "audit a workflow step".
---

# Prompt Review Skill: PromptSentinel v1.2

**Version:** v1.2
**Date:** March 2026
**Target Models:** Frontier LLMs (Claude 4.6, GPT-5.3, Gemini 3.1 Pro and equivalents) executing autonomous multi-step workflows at million-executions-per-day scale
**Purpose:** Detect and eliminate LLM-specific failure modes that survive generic editing, few-shot examples, and even multi-layer prompting. Output is always actionable, quoted, risk-quantified, and mitigation-ready.

---

### System Role (copy verbatim into reviewer agent)

You are **PromptSentinel v1.2**, a Prompt Auditor for production-grade LLM agent systems.

Your sole objective is to prevent silent, non-deterministic, or cascading failures in prompts that will be executed millions of times daily across heterogeneous models, tool stacks, and sub-agent contexts.

**Core Principles (required for every finding)**
- Every finding must populate all columns of the output table defined in the Strict Output Format section.
- Every finding must include: exact quote/location, failure mode ID or "ADV" (adversarial) / "PATH" (path-trace), production-calibrated risk, and a concrete mitigation with positive, deterministic rewritten example.
- Assume independent sub-agent contexts, variable context-window pressure, and model variance.

---

### Mandatory Review Procedure

Execute steps in order. Steps 0-1 run sequentially. Steps 2A/2B/2C run in parallel. Steps 3-4 run sequentially after all parallel tracks complete.

---

**Step 0: Input Validation**
If the input is not a clear LLM instruction prompt (raw code, data table, empty, or fewer than 50 tokens), output exactly:
`INPUT_NOT_A_PROMPT: [one-sentence reason]. Review aborted.`
and stop.

**Step 1: Context & Dependency Inventory**
Parse the entire prompt. Derive the **Prompt Title** as follows:
- First # or ## heading if present, OR
- Filename if provided, OR
- First complete sentence (truncated to 80 characters).
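The fallback chain above can be sketched as a small helper (function name and input shape are hypothetical):

```javascript
// Derive the Prompt Title: first #/## heading, else filename,
// else the first sentence truncated to 80 characters.
function deriveTitle(promptText, filename) {
  const heading = promptText.match(/^#{1,2}\s+(.+)$/m);
  if (heading) return heading[1].trim();
  if (filename) return filename;
  const sentence = promptText.trim().split(/(?<=[.!?])\s/)[0] || '';
  return sentence.slice(0, 80);
}
```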
Build an explicit inventory table listing:
- All numbered/bulleted steps
- All variables, placeholders, file references, prior-step outputs
- All conditionals, loops, halts, tool calls
- All assumptions about persistent memory or ordering

Flag any unresolved dependencies.
Step 1 is complete when the full inventory table is populated.

This inventory is shared context for all three parallel tracks below.

---

### Step 2: Three Parallel Review Tracks

Launch all three tracks concurrently. Each track produces findings in the same table format. Tracks are independent — no track reads another track's output.

---

**Track A: Adversarial Review (sub-agent)**

Spawn a sub-agent with the following brief and the full prompt text. Give it the Step 1 inventory for reference. Give it NO catalog, NO checklist, and NO further instructions beyond this brief:

> You are reviewing an LLM prompt that will execute millions of times daily across different models. Find every way this prompt could fail, produce wrong results, or behave inconsistently. For each issue found, provide: exact quote or location, what goes wrong at scale, and a concrete fix. Use only training knowledge — rely on your own judgment, not any external checklist.

Track A is complete when the sub-agent returns its findings.

---

**Track B: Catalog Scan + Execution Simulation (main agent)**

**B.1 — Failure Mode Audit**
Scan the prompt against all 17 failure modes in the catalog below. Quote every relevant instance. For modes with zero findings, list them in a single summary line (e.g., "Modes 3, 7, 10, 12: no instances found").
B.1 is complete when every mode has been explicitly checked.

**B.2 — Execution Simulation**
Simulate the prompt under 3 scenarios:
- Scenario A: Small-context model (32k window) under load
- Scenario B: Large-context model (200k window), fresh session
- Scenario C: Different model vendor with weaker instruction-following

For each scenario, produce one row in this table:

| Scenario | Likely Failure Location | Failure Mode | Expected Symptom |
|----------|-------------------------|--------------|------------------|

B.2 is complete when the table contains 3 fully populated rows.

Track B is complete when both B.1 and B.2 are finished.

---

**Track C: Prompt Path Tracer (sub-agent)**

Spawn a sub-agent with the following brief, the full prompt text, and the Step 1 inventory:

> You are a mechanical path tracer for LLM prompts. Walk every execution path through this prompt — every conditional, branch, loop, halt, optional step, tool call, and error path. For each path, determine: is the entry condition unambiguous? Is there a defined done-state? Are all required inputs guaranteed to be available? Report only paths with gaps — discard clean paths silently.
>
> For each finding, provide:
> - **Location**: step/section reference
> - **Path**: the specific conditional or branch
> - **Gap**: what is missing (unclear entry, no done-state, unresolved input)
> - **Fix**: concrete rewrite that closes the gap

Track C is complete when the sub-agent returns its findings.

---

**Step 3: Merge & Deduplicate**

Collect all findings from Tracks A, B, and C. Tag each finding with its source (ADV, catalog mode number, or PATH). Deduplicate by exact quote — when multiple tracks flag the same issue, keep the finding with the most specific mitigation and note all sources.

Assign severity to each finding: Critical / High / Medium / Low.

Step 3 is complete when the merged, deduplicated, severity-scored findings table is populated.

**Step 4: Final Synthesis**

Format the entire review using the Strict Output Format below. Emit the complete review only after Step 3 is finished.

---

### Complete Failure Mode Catalog (Track B — scan all 17)

1. **Silent Ignoring** — Instructions buried mid-paragraph, nested >2-deep conditionals, parentheticals, or "also remember to..." after long text.
2. **Ambiguous Completion** — Steps with no observable done-state or verification criterion ("think about it", "finalize").
3. **Context Window Assumptions** — References to "previous step output", "the file we created earlier", or variables not re-passed.
4. **Over-specification vs Under-specification** — Wall-of-text detail causing selective attention OR vague verbs inviting hallucination.
5. **Non-deterministic Phrasing** — "Consider", "you may", "if appropriate", "best way", "optionally", "try to".
6. **Negation Fragility** — "Do NOT", "avoid", "never" (especially multiple or under load).
7. **Implicit Ordering** — Step B assumes Step A completed without explicit sequencing or guardrails.
8. **Variable Resolution Gaps** — `{{VAR}}` or "the result from tool X" never initialized upstream.
9. **Scope Creep Invitation** — "Explore", "improve", "make it better", open-ended goals without hard boundaries.
10. **Halt / Checkpoint Gaps** — Human-in-loop required but no explicit `STOP_AND_WAIT_FOR_HUMAN` or output format that forces pause.
11. **Teaching Known Knowledge** — Re-explaining basic facts, tool usage, or reasoning patterns frontier models already know (2026 cutoff).
12. **Obsolete Prompting Techniques** — Outdated patterns (vanilla "think step by step" without modern scaffolding, deprecated few-shot styles).
13. **Missing Strict Output Schema** — No enforced JSON mode or structured output format.
14. **Missing Error Handling** — No recovery instructions for tool failures, timeouts, or malformed inputs.
15. **Missing Success Criteria** — No quality gates or measurable completion standards.
16. **Monolithic Prompt Anti-pattern** — Single large prompt that should be split into specialized sub-agents.
17. **Missing Grounding Instructions** — Factual claims required without explicit requirement to base them on retrieved evidence.

---

### Strict Output Format (use this template exactly as shown)

```markdown
# PromptSentinel Review: [Derived Prompt Title]

**Overall Risk Level:** Critical / High / Medium / Low
**Critical Issues:** X | **High:** Y | **Medium:** Z | **Low:** W
**Estimated Production Failure Rate if Unfixed:** ~XX% of runs

## Critical & High Findings
| # | Source | Failure Mode | Exact Quote / Location | Risk (High-Volume) | Mitigation & Rewritten Example |
|---|--------|--------------|------------------------|--------------------|-------------------------------|
| | | | | | |

## Medium & Low Findings
(same table format)

## Positive Observations
(only practices that actively mitigate known failure modes)

## Recommended Refactor Summary
- Highest-leverage changes (bullets)

## Revised Prompt Sections (Critical/High items only)
Provide full rewritten paragraphs/sections with changes clearly marked.

**Reviewer Confidence:** XX/100
**Review Complete** – ready for re-submission or automated patching.
```
@@ -7,6 +7,7 @@ name: Quality & Validation
#   - Schema validation (YAML structure)
#   - Agent schema tests (fixture-based validation)
#   - Installation component tests (compilation)
+#   - fs wrapper tests (native fs replacement)
#   - Bundle validation (web bundle integrity)

"on":
@@ -112,5 +113,8 @@ jobs:
      - name: Test agent compilation components
        run: npm run test:install

+      - name: Test fs wrapper
+        run: npm run test:fs
+
      - name: Validate file references
        run: npm run validate:refs
@@ -40,8 +40,9 @@
    "lint:md": "markdownlint-cli2 \"**/*.md\"",
    "prepare": "command -v husky >/dev/null 2>&1 && husky || exit 0",
    "rebundle": "node tools/cli/bundlers/bundle-web.js rebundle",
-    "test": "npm run test:schemas && npm run test:refs && npm run test:install && npm run validate:schemas && npm run lint && npm run lint:md && npm run format:check",
+    "test": "npm run test:schemas && npm run test:refs && npm run test:install && npm run test:fs && npm run validate:schemas && npm run lint && npm run lint:md && npm run format:check",
    "test:coverage": "c8 --reporter=text --reporter=html npm run test:schemas",
+    "test:fs": "node test/test-fs-wrapper.js",
    "test:install": "node test/test-installation-components.js",
    "test:refs": "node test/test-file-refs-csv.js",
    "test:schemas": "node test/test-agent-schema.js",
@@ -71,7 +72,6 @@
    "chalk": "^4.1.2",
    "commander": "^14.0.0",
    "csv-parse": "^6.1.0",
-    "fs-extra": "^11.3.0",
    "glob": "^11.0.3",
    "ignore": "^7.0.5",
    "js-yaml": "^4.1.0",
@@ -0,0 +1,489 @@
+/**
+ * Native fs Wrapper Tests
+ *
+ * Validates that tools/cli/lib/fs.js provides the same API surface
+ * as fs-extra but backed entirely by native node:fs. Exercises every
+ * exported method the CLI codebase relies on.
+ *
+ * Usage: node test/test-fs-wrapper.js
+ * Exit codes: 0 = all tests pass, 1 = test failures
+ */
+
+const nativeFs = require('node:fs');
+const path = require('node:path');
+const fs = require('../tools/cli/lib/fs');
+
+// ANSI color codes
+const colors = {
+  reset: '\u001B[0m',
+  green: '\u001B[32m',
+  red: '\u001B[31m',
+  yellow: '\u001B[33m',
+  cyan: '\u001B[36m',
+  dim: '\u001B[2m',
+};
+
+let totalTests = 0;
+let passedTests = 0;
+const failures = [];
+
+function test(name, fn) {
+  totalTests++;
+  try {
+    fn();
+    passedTests++;
+    console.log(`  ${colors.green}\u2713${colors.reset} ${name}`);
+  } catch (error) {
+    console.log(`  ${colors.red}\u2717${colors.reset} ${name} ${colors.red}${error.message}${colors.reset}`);
+    failures.push({ name, message: error.message });
+  }
+}
+
+async function asyncTest(name, fn) {
+  totalTests++;
+  try {
+    await fn();
+    passedTests++;
+    console.log(`  ${colors.green}\u2713${colors.reset} ${name}`);
+  } catch (error) {
+    console.log(`  ${colors.red}\u2717${colors.reset} ${name} ${colors.red}${error.message}${colors.reset}`);
+    failures.push({ name, message: error.message });
+  }
+}
+
+function assert(condition, message) {
+  if (!condition) throw new Error(message);
+}
+
+function assertEqual(actual, expected, message) {
+  if (actual !== expected) {
+    throw new Error(`${message}: expected ${JSON.stringify(expected)}, got ${JSON.stringify(actual)}`);
+  }
+}
+
+// ── Test fixtures ───────────────────────────────────────────────────────────
+
+const TMP = path.join(__dirname, '.tmp-fs-wrapper-test');
+
+function setup() {
+  nativeFs.rmSync(TMP, { recursive: true, force: true });
+  nativeFs.mkdirSync(TMP, { recursive: true });
+}
+
+function teardown() {
+  nativeFs.rmSync(TMP, { recursive: true, force: true });
+}
+
+// ── Tests ───────────────────────────────────────────────────────────────────
+
+async function runTests() {
+  console.log(`${colors.cyan}========================================`);
+  console.log('Native fs Wrapper Tests');
+  console.log(`========================================${colors.reset}\n`);
+
+  setup();
+
+  // ── Re-exported native members ──────────────────────────────────────────
+
+  console.log(`${colors.yellow}Re-exported native fs members${colors.reset}`);
+
+  test('exports fs.constants', () => {
+    assert(fs.constants !== undefined, 'fs.constants is undefined');
+    assert(typeof fs.constants.F_OK === 'number', 'fs.constants.F_OK is not a number');
+  });
+
+  test('exports fs.existsSync', () => {
+    assert(typeof fs.existsSync === 'function', 'fs.existsSync is not a function');
+    assert(fs.existsSync(__dirname), 'existsSync returns false for existing dir');
+    assert(!fs.existsSync(path.join(TMP, 'nonexistent')), 'existsSync returns true for missing path');
+  });
+
+  test('exports fs.readFileSync', () => {
+    const content = fs.readFileSync(__filename, 'utf8');
+    assert(content.includes('Native fs Wrapper Tests'), 'readFileSync did not return expected content');
+  });
+
+  test('exports fs.writeFileSync', () => {
+    const p = path.join(TMP, 'write-sync.txt');
+    fs.writeFileSync(p, 'hello sync');
+    assertEqual(nativeFs.readFileSync(p, 'utf8'), 'hello sync', 'writeFileSync content mismatch');
+  });
+
+  test('exports fs.mkdirSync', () => {
+    const p = path.join(TMP, 'mkdir-sync');
+    fs.mkdirSync(p);
+    assert(nativeFs.statSync(p).isDirectory(), 'mkdirSync did not create directory');
+  });
+
+  test('exports fs.readdirSync', () => {
+    const entries = fs.readdirSync(TMP);
+    assert(Array.isArray(entries), 'readdirSync did not return array');
+  });
+
+  test('exports fs.statSync', () => {
+    const stat = fs.statSync(__dirname);
+    assert(stat.isDirectory(), 'statSync did not return directory stat');
+  });
+
+  test('exports fs.copyFileSync', () => {
+    const src = path.join(TMP, 'copy-src.txt');
+    const dest = path.join(TMP, 'copy-dest.txt');
+    nativeFs.writeFileSync(src, 'copy me');
+    fs.copyFileSync(src, dest);
+    assertEqual(nativeFs.readFileSync(dest, 'utf8'), 'copy me', 'copyFileSync content mismatch');
+  });
+
+  test('exports fs.accessSync', () => {
+    // Should not throw for existing file
+    fs.accessSync(__filename);
+    let threw = false;
+    try {
+      fs.accessSync(path.join(TMP, 'nonexistent'));
+    } catch {
+      threw = true;
+    }
+    assert(threw, 'accessSync did not throw for missing file');
+  });
+
+  test('exports fs.createReadStream', () => {
+    assert(typeof fs.createReadStream === 'function', 'createReadStream is not a function');
+  });
+
+  console.log('');
+
+  // ── Async promise-based methods ─────────────────────────────────────────
+
+  console.log(`${colors.yellow}Async promise-based methods${colors.reset}`);
+
+  await asyncTest('readFile returns promise with content', async () => {
+    const content = await fs.readFile(__filename, 'utf8');
+    assert(content.includes('Native fs Wrapper Tests'), 'readFile did not return expected content');
+  });
+
+  await asyncTest('writeFile writes content asynchronously', async () => {
+    const p = path.join(TMP, 'write-async.txt');
+    await fs.writeFile(p, 'hello async');
+    assertEqual(nativeFs.readFileSync(p, 'utf8'), 'hello async', 'writeFile content mismatch');
+  });
+
+  await asyncTest('readdir returns directory entries', async () => {
+    const dir = path.join(TMP, 'readdir-test');
+    nativeFs.mkdirSync(dir, { recursive: true });
+    nativeFs.writeFileSync(path.join(dir, 'a.txt'), 'a');
+    const entries = await fs.readdir(dir);
+    assert(Array.isArray(entries), 'readdir did not return array');
+    assert(entries.length > 0, 'readdir returned empty array for non-empty dir');
+  });
+
+  await asyncTest('readdir with withFileTypes returns Dirent objects', async () => {
+    const dir = path.join(TMP, 'dirent-test');
+    nativeFs.mkdirSync(dir, { recursive: true });
+    nativeFs.writeFileSync(path.join(dir, 'file.txt'), 'content');
+    nativeFs.mkdirSync(path.join(dir, 'subdir'));
+
+    const entries = await fs.readdir(dir, { withFileTypes: true });
+    assert(Array.isArray(entries), 'should return array');
+
+    const fileEntry = entries.find((e) => e.name === 'file.txt');
+    const dirEntry = entries.find((e) => e.name === 'subdir');
+
+    assert(fileEntry && typeof fileEntry.isFile === 'function', 'entry should have isFile method');
+    assert(dirEntry && typeof dirEntry.isDirectory === 'function', 'entry should have isDirectory method');
+    assert(fileEntry.isFile(), 'file entry should return true for isFile()');
+    assert(dirEntry.isDirectory(), 'dir entry should return true for isDirectory()');
+  });
+
+  await asyncTest('stat returns file stats', async () => {
+    const stat = await fs.stat(__dirname);
+    assert(stat.isDirectory(), 'stat did not return directory stat');
+  });
+
+  await asyncTest('access resolves for existing file', async () => {
+    await fs.access(__filename); // should not throw
+  });
+
+  await asyncTest('access rejects for missing file', async () => {
+    let threw = false;
+    try {
+      await fs.access(path.join(TMP, 'nonexistent'));
+    } catch {
+      threw = true;
+    }
+    assert(threw, 'access did not reject for missing file');
+  });
+
+  await asyncTest('rename moves a file', async () => {
+    const src = path.join(TMP, 'rename-src.txt');
+    const dest = path.join(TMP, 'rename-dest.txt');
+    nativeFs.writeFileSync(src, 'rename me');
+    await fs.rename(src, dest);
+    assert(!nativeFs.existsSync(src), 'rename did not remove source');
+    assertEqual(nativeFs.readFileSync(dest, 'utf8'), 'rename me', 'rename content mismatch');
+  });
+
+  await asyncTest('realpath resolves path', async () => {
+    const resolved = await fs.realpath(__dirname);
+    assert(typeof resolved === 'string', 'realpath did not return string');
+    assert(resolved.length > 0, 'realpath returned empty string');
+  });
+
+  console.log('');
+
+  // ── fs-extra compatible methods ─────────────────────────────────────────
+
+  console.log(`${colors.yellow}fs-extra compatible methods${colors.reset}`);
+
+  await asyncTest('ensureDir creates nested directories', async () => {
+    const p = path.join(TMP, 'ensure', 'deep', 'nested');
+    await fs.ensureDir(p);
+    assert(nativeFs.statSync(p).isDirectory(), 'ensureDir did not create nested dirs');
+  });
+
+  await asyncTest('ensureDir is idempotent on existing directory', async () => {
+    const p = path.join(TMP, 'ensure', 'deep', 'nested');
+    await fs.ensureDir(p); // should not throw
+    assert(nativeFs.statSync(p).isDirectory(), 'ensureDir failed on existing dir');
+  });
+
+  await asyncTest('pathExists returns true for existing path', async () => {
+    assertEqual(await fs.pathExists(__filename), true, 'pathExists returned false for existing file');
+  });
+
+  await asyncTest('pathExists returns false for missing path', async () => {
+    assertEqual(await fs.pathExists(path.join(TMP, 'nonexistent')), false, 'pathExists returned true for missing path');
+  });
+
+  test('pathExistsSync returns true for existing path', () => {
+    assertEqual(fs.pathExistsSync(__filename), true, 'pathExistsSync returned false for existing file');
+  });
+
+  test('pathExistsSync returns false for missing path', () => {
+    assertEqual(fs.pathExistsSync(path.join(TMP, 'nonexistent')), false, 'pathExistsSync returned true for missing path');
+  });
+
+  await asyncTest('copy copies a single file', async () => {
+    const src = path.join(TMP, 'copy-file-src.txt');
+    const dest = path.join(TMP, 'copy-file-dest.txt');
+    nativeFs.writeFileSync(src, 'copy file');
+    await fs.copy(src, dest);
+    assertEqual(nativeFs.readFileSync(dest, 'utf8'), 'copy file', 'copy file content mismatch');
+  });
+
+  await asyncTest('copy creates parent directories for dest', async () => {
+    const src = path.join(TMP, 'copy-mkdir-src.txt');
+    nativeFs.writeFileSync(src, 'copy mkdir');
+    const dest = path.join(TMP, 'copy-deep', 'nested', 'dest.txt');
+    await fs.copy(src, dest);
+    assertEqual(nativeFs.readFileSync(dest, 'utf8'), 'copy mkdir', 'copy with mkdir content mismatch');
+  });
+
+  await asyncTest('copy copies a directory recursively', async () => {
+    const srcDir = path.join(TMP, 'copy-dir-src');
+    nativeFs.mkdirSync(path.join(srcDir, 'sub'), { recursive: true });
+    nativeFs.writeFileSync(path.join(srcDir, 'a.txt'), 'file a');
+    nativeFs.writeFileSync(path.join(srcDir, 'sub', 'b.txt'), 'file b');
+
+    const destDir = path.join(TMP, 'copy-dir-dest');
+    await fs.copy(srcDir, destDir);
+
+    assertEqual(nativeFs.readFileSync(path.join(destDir, 'a.txt'), 'utf8'), 'file a', 'copy dir: top-level file mismatch');
+    assertEqual(nativeFs.readFileSync(path.join(destDir, 'sub', 'b.txt'), 'utf8'), 'file b', 'copy dir: nested file mismatch');
+  });
+
+  await asyncTest('copy respects overwrite: false for files', async () => {
+    const src = path.join(TMP, 'overwrite-src.txt');
+    const dest = path.join(TMP, 'overwrite-dest.txt');
+    nativeFs.writeFileSync(src, 'new content');
+    nativeFs.writeFileSync(dest, 'original content');
+    await fs.copy(src, dest, { overwrite: false });
+    assertEqual(nativeFs.readFileSync(dest, 'utf8'), 'original content', 'copy overwrote file when overwrite: false');
+  });
+
+  await asyncTest('copy respects overwrite: false for directories', async () => {
+    const srcDir = path.join(TMP, 'ow-dir-src');
+    nativeFs.mkdirSync(srcDir, { recursive: true });
+    nativeFs.writeFileSync(path.join(srcDir, 'file.txt'), 'new');
+
+    const destDir = path.join(TMP, 'ow-dir-dest');
+    nativeFs.mkdirSync(destDir, { recursive: true });
+    nativeFs.writeFileSync(path.join(destDir, 'file.txt'), 'original');
+
+    await fs.copy(srcDir, destDir, { overwrite: false });
+    assertEqual(nativeFs.readFileSync(path.join(destDir, 'file.txt'), 'utf8'), 'original', 'copy dir overwrote file when overwrite: false');
+  });
+
+  await asyncTest('copy respects filter option for files', async () => {
+    const srcDir = path.join(TMP, 'filter-src');
+    nativeFs.mkdirSync(srcDir, { recursive: true });
+    nativeFs.writeFileSync(path.join(srcDir, 'keep.txt'), 'keep me');
+    nativeFs.writeFileSync(path.join(srcDir, 'skip.log'), 'skip me');
+
+    const destDir = path.join(TMP, 'filter-dest');
+    await fs.copy(srcDir, destDir, {
+      filter: (src) => !src.endsWith('.log'),
+    });
+
+    assert(nativeFs.existsSync(path.join(destDir, 'keep.txt')), 'filter: kept file is missing');
+    assert(!nativeFs.existsSync(path.join(destDir, 'skip.log')), 'filter: skipped file was copied');
+  });
+
+  await asyncTest('copy respects filter option for directories', async () => {
+    const srcDir = path.join(TMP, 'filter-dir-src');
+    nativeFs.mkdirSync(path.join(srcDir, 'include'), { recursive: true });
+    nativeFs.mkdirSync(path.join(srcDir, 'node_modules'), { recursive: true });
+    nativeFs.writeFileSync(path.join(srcDir, 'include', 'a.txt'), 'included');
+    nativeFs.writeFileSync(path.join(srcDir, 'node_modules', 'b.txt'), 'excluded');
+
+    const destDir = path.join(TMP, 'filter-dir-dest');
+    await fs.copy(srcDir, destDir, {
+      filter: (src) => !src.includes('node_modules'),
+    });
+
+    assert(nativeFs.existsSync(path.join(destDir, 'include', 'a.txt')), 'filter: included dir file is missing');
+    assert(!nativeFs.existsSync(path.join(destDir, 'node_modules')), 'filter: excluded dir was copied');
+  });
+
+  await asyncTest('copy filter skips top-level src when filter returns false', async () => {
+    const src = path.join(TMP, 'filter-skip-src.txt');
+    const dest = path.join(TMP, 'filter-skip-dest.txt');
+    nativeFs.writeFileSync(src, 'should not be copied');
+    await fs.copy(src, dest, {
+      filter: () => false,
+    });
+    assert(!nativeFs.existsSync(dest), 'filter: file was copied despite filter returning false');
+  });
+
+  await asyncTest('remove deletes a file', async () => {
+    const p = path.join(TMP, 'remove-file.txt');
+    nativeFs.writeFileSync(p, 'delete me');
+    await fs.remove(p);
+    assert(!nativeFs.existsSync(p), 'remove did not delete file');
+  });
+
+  await asyncTest('remove deletes a directory recursively', async () => {
+    const dir = path.join(TMP, 'remove-dir');
+    nativeFs.mkdirSync(path.join(dir, 'sub'), { recursive: true });
+    nativeFs.writeFileSync(path.join(dir, 'sub', 'file.txt'), 'nested');
+    await fs.remove(dir);
+    assert(!nativeFs.existsSync(dir), 'remove did not delete directory');
+  });
+
+  await asyncTest('remove does not throw for missing path', async () => {
+    await fs.remove(path.join(TMP, 'nonexistent-remove-target'));
+    // should not throw — force: true
+  });
+
+  await asyncTest('move renames a file', async () => {
+    const src = path.join(TMP, 'move-src.txt');
+    const dest = path.join(TMP, 'move-dest.txt');
+    nativeFs.writeFileSync(src, 'move me');
+    await fs.move(src, dest);
+    assert(!nativeFs.existsSync(src), 'move did not remove source');
+    assertEqual(nativeFs.readFileSync(dest, 'utf8'), 'move me', 'move content mismatch');
+  });
+
+  await asyncTest('move renames a directory', async () => {
+    const srcDir = path.join(TMP, 'move-dir-src');
+    nativeFs.mkdirSync(srcDir, { recursive: true });
+    nativeFs.writeFileSync(path.join(srcDir, 'file.txt'), 'dir move');
+
+    const destDir = path.join(TMP, 'move-dir-dest');
+    await fs.move(srcDir, destDir);
+    assert(!nativeFs.existsSync(srcDir), 'move did not remove source dir');
+    assertEqual(nativeFs.readFileSync(path.join(destDir, 'file.txt'), 'utf8'), 'dir move', 'move dir content mismatch');
+  });
+
+  test('readJsonSync parses JSON file', () => {
+    const p = path.join(TMP, 'test.json');
+    nativeFs.writeFileSync(p, JSON.stringify({ key: 'value', num: 42 }));
+    const result = fs.readJsonSync(p);
+    assertEqual(result.key, 'value', 'readJsonSync key mismatch');
+    assertEqual(result.num, 42, 'readJsonSync num mismatch');
+  });
+
+  test('readJsonSync throws on invalid JSON', () => {
+    const p = path.join(TMP, 'bad.json');
+    nativeFs.writeFileSync(p, '{ invalid json }');
+    let threw = false;
+    try {
+      fs.readJsonSync(p);
+    } catch {
+      threw = true;
+    }
+    assert(threw, 'readJsonSync did not throw on invalid JSON');
+  });
+
+  test('readJsonSync strips UTF-8 BOM', () => {
+    const p = path.join(TMP, 'bom.json');
+    nativeFs.writeFileSync(p, '\uFEFF{"bom": true}');
+    const result = fs.readJsonSync(p);
+    assertEqual(result.bom, true, 'readJsonSync failed to parse BOM-prefixed JSON');
+  });
+
+  console.log('');
+
+  // ── Bulk copy stress test ───────────────────────────────────────────────
+
+  console.log(`${colors.yellow}Bulk copy determinism${colors.reset}`);
+
+  await asyncTest('copy preserves all files in a large directory tree', async () => {
+    // Create a tree with 200+ files to verify no silent loss
+    const srcDir = path.join(TMP, 'bulk-src');
+    const fileCount = 250;
+
+    for (let i = 0; i < fileCount; i++) {
+      const subDir = path.join(srcDir, `dir-${String(Math.floor(i / 10)).padStart(2, '0')}`);
+      nativeFs.mkdirSync(subDir, { recursive: true });
+      nativeFs.writeFileSync(path.join(subDir, `file-${i}.txt`), `content-${i}`);
+    }
+
+    const destDir = path.join(TMP, 'bulk-dest');
+    await fs.copy(srcDir, destDir);
+
+    // Count all files in destination
+    let destCount = 0;
+    const countFiles = (dir) => {
+      const entries = nativeFs.readdirSync(dir, { withFileTypes: true });
+      for (const entry of entries) {
+        if (entry.isDirectory()) {
+          countFiles(path.join(dir, entry.name));
+        } else {
+          destCount++;
+        }
+      }
+    };
+    countFiles(destDir);
+
+    assertEqual(destCount, fileCount, `bulk copy lost files: expected ${fileCount}, got ${destCount}`);
+  });
+
+  console.log('');
+
+  // ── Cleanup ─────────────────────────────────────────────────────────────
+
+  teardown();
+
+  // ── Summary ─────────────────────────────────────────────────────────────
+  console.log(`${colors.cyan}========================================`);
+  console.log('Test Results:');
+  console.log(`  Total: ${totalTests}`);
+  console.log(`  Passed: ${colors.green}${passedTests}${colors.reset}`);
+  console.log(`  Failed: ${colors.red}${totalTests - passedTests}${colors.reset}`);
+  console.log(`========================================${colors.reset}\n`);
+
+  if (failures.length === 0) {
+    console.log(`${colors.green}\u2728 All fs wrapper tests passed!${colors.reset}\n`);
+    process.exit(0);
+  } else {
+    console.log(`${colors.red}\u274C Some fs wrapper tests failed${colors.reset}\n`);
+    process.exit(1);
+  }
+}
+
+// Run tests
+runTests().catch((error) => {
+  teardown();
+  console.error(`${colors.red}Test runner failed:${colors.reset}`, error.message);
+  console.error(error.stack);
+  process.exit(1);
+});
@@ -12,7 +12,7 @@
  */

 const path = require('node:path');
-const fs = require('fs-extra');
+const fs = require('../tools/cli/lib/fs');
 const { YamlXmlBuilder } = require('../tools/cli/lib/yaml-xml-builder');
 const { ManifestGenerator } = require('../tools/cli/installers/lib/core/manifest-generator');

@@ -68,7 +68,9 @@ async function runTests() {
   const tempOutput = path.join(__dirname, 'temp-pm-agent.md');

   try {
-    const result = await builder.buildAgent(pmAgentPath, null, tempOutput, { includeMetadata: true });
+    const result = await builder.buildAgent(pmAgentPath, null, tempOutput, {
+      includeMetadata: true,
+    });

     assert(result && result.outputPath === tempOutput, 'Agent compilation returns result object with outputPath');

@@ -168,7 +170,9 @@ async function runTests() {
   const tempOutput = path.join(__dirname, 'temp-qa-agent.md');

   try {
-    const result = await builder.buildAgent(qaAgentPath, null, tempOutput, { includeMetadata: true });
+    const result = await builder.buildAgent(qaAgentPath, null, tempOutput, {
+      includeMetadata: true,
+    });
     const compiled = await fs.readFile(tempOutput, 'utf8');

     assert(compiled.includes('QA Engineer'), 'QA agent compilation includes agent title');
@@ -19,7 +19,7 @@ module.exports = {
     const { bmadDir } = await installer.findBmadDir(projectDir);

     // Check if bmad directory exists
-    const fs = require('fs-extra');
+    const fs = require('../lib/fs');
     if (!(await fs.pathExists(bmadDir))) {
       await prompts.log.warn('No BMAD installation found in the current directory.');
       await prompts.log.message(`Expected location: ${bmadDir}`);
@@ -1,5 +1,5 @@
 const path = require('node:path');
-const fs = require('fs-extra');
+const fs = require('../lib/fs');
 const prompts = require('../lib/prompts');
 const { Installer } = require('../installers/lib/core/installer');

@@ -1,5 +1,5 @@
 const path = require('node:path');
-const fs = require('fs-extra');
+const fs = require('../../../lib/fs');
 const yaml = require('yaml');
 const { getProjectRoot, getModulePath } = require('../../../lib/project-root');
 const { CLIUtils } = require('../../../lib/cli-utils');
@@ -4,7 +4,7 @@
  * and can be checked into source control
  */

-const fs = require('fs-extra');
+const fs = require('../../../lib/fs');
 const path = require('node:path');
 const crypto = require('node:crypto');
 const prompts = require('../../../lib/prompts');
@@ -1,4 +1,4 @@
-const fs = require('fs-extra');
+const fs = require('../../../lib/fs');
 const path = require('node:path');
 const glob = require('glob');
 const yaml = require('yaml');
@@ -1,5 +1,5 @@
 const path = require('node:path');
-const fs = require('fs-extra');
+const fs = require('../../../lib/fs');
 const yaml = require('yaml');
 const { Manifest } = require('./manifest');

@@ -1,5 +1,5 @@
 const path = require('node:path');
-const fs = require('fs-extra');
+const fs = require('../../../lib/fs');
 const yaml = require('yaml');
 const prompts = require('../../../lib/prompts');

@@ -1,5 +1,5 @@
 const path = require('node:path');
-const fs = require('fs-extra');
+const fs = require('../../../lib/fs');
 const { Detector } = require('./detector');
 const { Manifest } = require('./manifest');
 const { ModuleManager } = require('../modules/manager');

@@ -87,7 +87,7 @@ class Installer {
       if (textExtensions.includes(ext)) {
         try {
           // Read the file content
-          let content = await fs.readFile(sourcePath, 'utf8');
+          const content = await fs.readFile(sourcePath, 'utf8');

           // Write to target with replaced content
           await fs.ensureDir(path.dirname(targetPath));

@@ -260,7 +260,7 @@ class Installer {

     // Collect configurations for modules (skip if quick update already collected them)
     let moduleConfigs;
-    let customModulePaths = new Map();
+    const customModulePaths = new Map();

     if (config._quickUpdate) {
       // Quick update already collected all configs, use them directly

@@ -524,7 +524,9 @@ class Installer {
     // Also check cache directory for custom modules (like quick update does)
     const cacheDir = path.join(bmadDir, '_config', 'custom');
     if (await fs.pathExists(cacheDir)) {
-      const cachedModules = await fs.readdir(cacheDir, { withFileTypes: true });
+      const cachedModules = await fs.readdir(cacheDir, {
+        withFileTypes: true,
+      });

       for (const cachedModule of cachedModules) {
         const moduleId = cachedModule.name;

@@ -585,7 +587,9 @@ class Installer {
         const relativePath = path.relative(bmadDir, modifiedFile.path);
         const tempBackupPath = path.join(tempModifiedBackupDir, relativePath);
         await fs.ensureDir(path.dirname(tempBackupPath));
-        await fs.copy(modifiedFile.path, tempBackupPath, { overwrite: true });
+        await fs.copy(modifiedFile.path, tempBackupPath, {
+          overwrite: true,
+        });
       }
       spinner.stop(`Backed up ${modifiedFiles.length} modified files`);

@@ -608,7 +612,9 @@ class Installer {
     // Also check cache directory for custom modules (like quick update does)
     const cacheDir = path.join(bmadDir, '_config', 'custom');
     if (await fs.pathExists(cacheDir)) {
-      const cachedModules = await fs.readdir(cacheDir, { withFileTypes: true });
+      const cachedModules = await fs.readdir(cacheDir, {
+        withFileTypes: true,
+      });

       for (const cachedModule of cachedModules) {
         const moduleId = cachedModule.name;

@@ -668,7 +674,9 @@ class Installer {
         const relativePath = path.relative(bmadDir, modifiedFile.path);
         const tempBackupPath = path.join(tempModifiedBackupDir, relativePath);
         await fs.ensureDir(path.dirname(tempBackupPath));
-        await fs.copy(modifiedFile.path, tempBackupPath, { overwrite: true });
+        await fs.copy(modifiedFile.path, tempBackupPath, {
+          overwrite: true,
+        });
       }
       spinner.stop(`Backed up ${modifiedFiles.length} modified files`);
       config._tempModifiedBackupDir = tempModifiedBackupDir;

@@ -887,7 +895,11 @@ class Installer {
     let taskResolution;

     // Collect directory creation results for output after tasks() completes
-    const dirResults = { createdDirs: [], movedDirs: [], createdWdsFolders: [] };
+    const dirResults = {
+      createdDirs: [],
+      movedDirs: [],
+      createdWdsFolders: [],
+    };

     // Build task list conditionally
     const installTasks = [];

@@ -899,7 +911,9 @@ class Installer {
       task: async (message) => {
         await this.installCoreWithDependencies(bmadDir, { core: {} });
         addResult('Core', 'ok', isQuickUpdate ? 'updated' : 'installed');
-        await this.generateModuleConfigs(bmadDir, { core: config.coreConfig || {} });
+        await this.generateModuleConfigs(bmadDir, {
+          core: config.coreConfig || {},
+        });
         return isQuickUpdate ? 'Core updated' : 'Core installed';
       },
     });

@@ -945,7 +959,11 @@ class Installer {
       const cachedModule = finalCustomContent.cachedModules.find((m) => m.id === moduleName);
       if (cachedModule) {
         isCustomModule = true;
-        customInfo = { id: moduleName, path: cachedModule.cachePath, config: {} };
+        customInfo = {
+          id: moduleName,
+          path: cachedModule.cachePath,
+          config: {},
+        };
       }
     }

@@ -995,7 +1013,11 @@ class Installer {
         },
       );
       await this.generateModuleConfigs(bmadDir, {
-        [moduleName]: { ...config.coreConfig, ...customInfo.config, ...collectedModuleConfig },
+        [moduleName]: {
+          ...config.coreConfig,
+          ...customInfo.config,
+          ...collectedModuleConfig,
+        },
       });
     } else {
       if (!resolution || !resolution.byModule) {

@@ -1424,7 +1446,9 @@ class Installer {
     // Also check cache directory
     const cacheDir = path.join(bmadDir, '_config', 'custom');
     if (await fs.pathExists(cacheDir)) {
-      const cachedModules = await fs.readdir(cacheDir, { withFileTypes: true });
+      const cachedModules = await fs.readdir(cacheDir, {
+        withFileTypes: true,
+      });

       for (const cachedModule of cachedModules) {
         if (cachedModule.isDirectory()) {

@@ -1499,7 +1523,9 @@ class Installer {

     for (const module of existingInstall.modules) {
       spinner.message(`Updating module: ${module.id}...`);
-      await this.moduleManager.update(module.id, bmadDir, config.force, { installer: this });
+      await this.moduleManager.update(module.id, bmadDir, config.force, {
+        installer: this,
+      });
     }

     // Update manifest

@@ -1558,7 +1584,9 @@ class Installer {

     // 2. IDE CLEANUP (before _bmad/ deletion so configs are accessible)
     if (options.removeIdeConfigs !== false) {
-      await this.uninstallIdeConfigs(projectDir, existingInstall, { silent: options.silent });
+      await this.uninstallIdeConfigs(projectDir, existingInstall, {
+        silent: options.silent,
+      });
       removed.ideConfigs = true;
     }

@@ -1797,7 +1825,11 @@ class Installer {

       // Lookup agent info
       const cleanAgentName = agentName ? agentName.trim() : '';
-      const agentData = agentInfo.get(cleanAgentName) || { command: '', displayName: '', title: '' };
+      const agentData = agentInfo.get(cleanAgentName) || {
+        command: '',
+        displayName: '',
+        title: '',
+      };

       // Build new row with agent info
       const newRow = [

@@ -1852,8 +1884,8 @@ class Installer {
       }

       // Sequence comparison
-      const seqA = parseInt(colsA[4] || '0', 10);
-      const seqB = parseInt(colsB[4] || '0', 10);
+      const seqA = Number.parseInt(colsA[4] || '0', 10);
+      const seqB = Number.parseInt(colsB[4] || '0', 10);
       return seqA - seqB;
     });

@@ -2395,7 +2427,9 @@ class Installer {
     }
     const cacheDir = path.join(bmadDir, '_config', 'custom');
     if (await fs.pathExists(cacheDir)) {
-      const cachedModules = await fs.readdir(cacheDir, { withFileTypes: true });
+      const cachedModules = await fs.readdir(cacheDir, {
+        withFileTypes: true,
+      });

       for (const cachedModule of cachedModules) {
         const moduleId = cachedModule.name;

@@ -2630,7 +2664,9 @@ class Installer {
     const customModuleSources = new Map();
     const cacheDir = path.join(bmadDir, '_config', 'custom');
     if (await fs.pathExists(cacheDir)) {
-      const cachedModules = await fs.readdir(cacheDir, { withFileTypes: true });
+      const cachedModules = await fs.readdir(cacheDir, {
+        withFileTypes: true,
+      });

       for (const cachedModule of cachedModules) {
         if (cachedModule.isDirectory()) {

@@ -3102,8 +3138,7 @@ class Installer {
         // Remove the module from filesystem and manifest
         const modulePath = path.join(bmadDir, missing.id);
         if (await fs.pathExists(modulePath)) {
-          const fsExtra = require('fs-extra');
-          await fsExtra.remove(modulePath);
+          await fs.remove(modulePath);
           await prompts.log.warn(`Deleted module directory: ${path.relative(projectRoot, modulePath)}`);
         }

@@ -1,5 +1,5 @@
 const path = require('node:path');
-const fs = require('fs-extra');
+const fs = require('../../../lib/fs');
 const yaml = require('yaml');
 const crypto = require('node:crypto');
 const csv = require('csv-parse/sync');

@@ -1,5 +1,5 @@
 const path = require('node:path');
-const fs = require('fs-extra');
+const fs = require('../../../lib/fs');
 const crypto = require('node:crypto');
 const { getProjectRoot } = require('../../../lib/project-root');
 const prompts = require('../../../lib/prompts');

@@ -1,5 +1,5 @@
 const path = require('node:path');
-const fs = require('fs-extra');
+const fs = require('../../../lib/fs');
 const yaml = require('yaml');
 const prompts = require('../../../lib/prompts');
 const { FileOps } = require('../../../lib/file-ops');

@@ -1,5 +1,5 @@
 const path = require('node:path');
-const fs = require('fs-extra');
+const fs = require('../../../lib/fs');
 const { XmlHandler } = require('../../../lib/xml-handler');
 const prompts = require('../../../lib/prompts');
 const { getSourcePath } = require('../../../lib/project-root');

@@ -1,5 +1,5 @@
 const path = require('node:path');
-const fs = require('fs-extra');
+const fs = require('../../../lib/fs');
 const { BaseIdeSetup } = require('./_base-ide');
 const prompts = require('../../../lib/prompts');
 const { AgentCommandGenerator } = require('./shared/agent-command-generator');

@@ -1,6 +1,6 @@
 const path = require('node:path');
 const os = require('node:os');
-const fs = require('fs-extra');
+const fs = require('../../../lib/fs');
 const yaml = require('yaml');
 const { BaseIdeSetup } = require('./_base-ide');
 const { WorkflowCommandGenerator } = require('./shared/workflow-command-generator');

@@ -3,7 +3,7 @@ const { BaseIdeSetup } = require('./_base-ide');
 const prompts = require('../../../lib/prompts');
 const { AgentCommandGenerator } = require('./shared/agent-command-generator');
 const { BMAD_FOLDER_NAME, toDashPath } = require('./shared/path-utils');
-const fs = require('fs-extra');
+const fs = require('../../../lib/fs');
 const csv = require('csv-parse/sync');
 const yaml = require('yaml');
@@ -157,7 +157,7 @@ class KiloSetup extends BaseIdeSetup {
   * @param {string} workflowsDir - Workflows directory path
   */
  async clearBmadWorkflows(workflowsDir) {
-    const fs = require('fs-extra');
+    const fs = require('../../../lib/fs');
    if (!(await fs.pathExists(workflowsDir))) return;

    const entries = await fs.readdir(workflowsDir);

@@ -172,7 +172,7 @@ class KiloSetup extends BaseIdeSetup {
   * Cleanup KiloCode configuration
   */
  async cleanup(projectDir, options = {}) {
-    const fs = require('fs-extra');
+    const fs = require('../../../lib/fs');
    const kiloModesPath = path.join(projectDir, this.configFile);

    if (await fs.pathExists(kiloModesPath)) {
@@ -1,4 +1,4 @@
-const fs = require('fs-extra');
+const fs = require('../../../lib/fs');
 const path = require('node:path');
 const { BMAD_FOLDER_NAME } = require('./shared/path-utils');
 const prompts = require('../../../lib/prompts');

@@ -1,4 +1,4 @@
-const fs = require('fs-extra');
+const fs = require('../../../lib/fs');
 const path = require('node:path');
 const yaml = require('yaml');

@@ -1,5 +1,5 @@
 const path = require('node:path');
-const fs = require('fs-extra');
+const fs = require('../../../lib/fs');
 const yaml = require('yaml');
 const { BaseIdeSetup } = require('./_base-ide');
 const prompts = require('../../../lib/prompts');

@@ -1,5 +1,5 @@
 const path = require('node:path');
-const fs = require('fs-extra');
+const fs = require('../../../../lib/fs');
 const { toColonPath, toDashPath, customAgentColonName, customAgentDashName, BMAD_FOLDER_NAME } = require('./path-utils');

 /**

@@ -1,5 +1,5 @@
 const path = require('node:path');
-const fs = require('fs-extra');
+const fs = require('../../../../lib/fs');

 /**
  * Helpers for gathering BMAD agents/tasks from the installed tree.

@@ -1,5 +1,5 @@
 const path = require('node:path');
-const fs = require('fs-extra');
+const fs = require('../../../../lib/fs');
 const yaml = require('yaml');
 const { glob } = require('glob');
 const { getSourcePath } = require('../../../../lib/project-root');

@@ -1,5 +1,5 @@
 const path = require('node:path');
-const fs = require('fs-extra');
+const fs = require('../../../../lib/fs');
 const csv = require('csv-parse/sync');
 const { toColonName, toColonPath, toDashPath, BMAD_FOLDER_NAME } = require('./path-utils');

@@ -1,5 +1,5 @@
 const path = require('node:path');
-const fs = require('fs-extra');
+const fs = require('../../../../lib/fs');
 const csv = require('csv-parse/sync');
 const prompts = require('../../../../lib/prompts');
 const { toColonPath, toDashPath, customAgentColonName, customAgentDashName, BMAD_FOLDER_NAME } = require('./path-utils');

@@ -1,4 +1,4 @@
-const fs = require('fs-extra');
+const fs = require('../../lib/fs');
 const path = require('node:path');
 const yaml = require('yaml');
 const prompts = require('../../lib/prompts');
@@ -1,4 +1,4 @@
-const fs = require('fs-extra');
+const fs = require('../../../lib/fs');
 const path = require('node:path');
 const yaml = require('yaml');
 const prompts = require('../../../lib/prompts');

@@ -1,5 +1,5 @@
 const path = require('node:path');
-const fs = require('fs-extra');
+const fs = require('../../../lib/fs');
 const yaml = require('yaml');
 const prompts = require('../../../lib/prompts');
 const { XmlHandler } = require('../../../lib/xml-handler');
@@ -14,7 +14,7 @@ const { BMAD_FOLDER_NAME } = require('../ide/shared/path-utils');
  * and agent file management including XML activation block injection.
  *
  * @class ModuleManager
- * @requires fs-extra
+ * @requires lib/fs
  * @requires yaml
  * @requires prompts
  * @requires XmlHandler
@@ -208,7 +208,9 @@ class ModuleManager {
     if (this.bmadDir) {
       const customCacheDir = path.join(this.bmadDir, '_config', 'custom');
       if (await fs.pathExists(customCacheDir)) {
-        const cacheEntries = await fs.readdir(customCacheDir, { withFileTypes: true });
+        const cacheEntries = await fs.readdir(customCacheDir, {
+          withFileTypes: true,
+        });
         for (const entry of cacheEntries) {
           if (entry.isDirectory()) {
             const cachePath = path.join(customCacheDir, entry.name);
@@ -387,7 +389,12 @@ class ModuleManager {
     const fetchSpinner = await createSpinner();
     fetchSpinner.start(`Fetching ${moduleInfo.name}...`);
     try {
-      const currentRef = execSync('git rev-parse HEAD', { cwd: moduleCacheDir, stdio: 'pipe' }).toString().trim();
+      const currentRef = execSync('git rev-parse HEAD', {
+        cwd: moduleCacheDir,
+        stdio: 'pipe',
+      })
+        .toString()
+        .trim();
       // Fetch and reset to remote - works better with shallow clones than pull
       execSync('git fetch origin --depth 1', {
         cwd: moduleCacheDir,

@@ -399,7 +406,12 @@ class ModuleManager {
         stdio: ['ignore', 'pipe', 'pipe'],
         env: { ...process.env, GIT_TERMINAL_PROMPT: '0' },
       });
-      const newRef = execSync('git rev-parse HEAD', { cwd: moduleCacheDir, stdio: 'pipe' }).toString().trim();
+      const newRef = execSync('git rev-parse HEAD', {
+        cwd: moduleCacheDir,
+        stdio: 'pipe',
+      })
+        .toString()
+        .trim();

       fetchSpinner.stop(`Fetched ${moduleInfo.name}`);
       // Force dependency install if we got new code
@@ -521,7 +533,9 @@ class ModuleManager {
   * @param {Object} options.logger - Logger instance for output
   */
  async install(moduleName, bmadDir, fileTrackingCallback = null, options = {}) {
-    const sourcePath = await this.findModuleSource(moduleName, { silent: options.silent });
+    const sourcePath = await this.findModuleSource(moduleName, {
+      silent: options.silent,
+    });
    const targetPath = path.join(bmadDir, moduleName);

    // Check if source module exists

@@ -619,7 +633,9 @@ class ModuleManager {
    if (force) {
      // Force update - remove and reinstall
      await fs.remove(targetPath);
-      return await this.install(moduleName, bmadDir, null, { installer: options.installer });
+      return await this.install(moduleName, bmadDir, null, {
+        installer: options.installer,
+      });
    } else {
      // Selective update - preserve user modifications
      await this.syncModule(sourcePath, targetPath);

@@ -947,7 +963,7 @@ class ModuleManager {

    // Check for customizations and build answers object
    let customizedFields = [];
-    let answers = {};
+    const answers = {};
    if (await fs.pathExists(customizePath)) {
      const customizeContent = await fs.readFile(customizePath, 'utf8');
      const customizeData = yaml.parse(customizeContent);

@@ -1020,7 +1036,9 @@ class ModuleManager {

    // Copy any non-sidecar files from agent directory (e.g., foo.md)
    const agentDir = path.dirname(agentFile);
-    const agentEntries = await fs.readdir(agentDir, { withFileTypes: true });
+    const agentEntries = await fs.readdir(agentDir, {
+      withFileTypes: true,
+    });

    for (const entry of agentEntries) {
      if (entry.isFile() && !entry.name.endsWith('.agent.yaml') && !entry.name.endsWith('.md')) {

@@ -1230,7 +1248,7 @@ class ModuleManager {
   * @param {string} newModuleName - New module name to reference
   */
  async updateWorkflowConfigSource(workflowYamlPath, newModuleName) {
-    let yamlContent = await fs.readFile(workflowYamlPath, 'utf8');
+    const yamlContent = await fs.readFile(workflowYamlPath, 'utf8');

    // Replace config_source: "{project-root}/_bmad/OLD_MODULE/config.yaml"
    // with config_source: "{project-root}/_bmad/NEW_MODULE/config.yaml"

@@ -1262,7 +1280,11 @@ class ModuleManager {
    const moduleConfig = options.moduleConfig || {};
    const existingModuleConfig = options.existingModuleConfig || {};
    const projectRoot = path.dirname(bmadDir);
-    const emptyResult = { createdDirs: [], movedDirs: [], createdWdsFolders: [] };
+    const emptyResult = {
+      createdDirs: [],
+      movedDirs: [],
+      createdWdsFolders: [],
+    };

    // Special handling for core module - it's in src/core not src/modules
    let sourcePath;
@@ -1,4 +1,4 @@
-const fs = require('fs-extra');
+const fs = require('./fs');
 const path = require('node:path');
 const { getSourcePath } = require('./project-root');

@@ -1,5 +1,5 @@
 const yaml = require('yaml');
-const fs = require('fs-extra');
+const fs = require('./fs');

 /**
  * Analyzes agent YAML files to detect which handlers are needed

@@ -1,5 +1,5 @@
 const path = require('node:path');
-const fs = require('fs-extra');
+const fs = require('./fs');
 const { escapeXml } = require('../../lib/xml-utils');

 const AgentPartyGenerator = {

@@ -1,4 +1,4 @@
-const fs = require('fs-extra');
+const fs = require('./fs');
 const yaml = require('yaml');
 const path = require('node:path');
 const packageJson = require('../../../package.json');

@@ -1,4 +1,4 @@
-const fs = require('fs-extra');
+const fs = require('./fs');
 const path = require('node:path');
 const crypto = require('node:crypto');
@@ -0,0 +1,169 @@
+/**
+ * Drop-in replacement for fs-extra that uses only native Node.js fs.
+ *
+ * fs-extra routes every call through graceful-fs, whose EMFILE retry queue
+ * causes non-deterministic file loss on macOS during bulk copy operations.
+ * This module provides the same API surface used by the CLI codebase but
+ * backed entirely by `node:fs` and `node:fs/promises` — no third-party
+ * wrappers, no retry queues, no silent data loss.
+ *
+ * Async methods return native promises (from `node:fs/promises`).
+ * Sync methods delegate directly to `node:fs`.
+ */
+
+const fs = require('node:fs');
+const fsp = require('node:fs/promises');
+const path = require('node:path');
+
+// ── Re-export every native fs member ────────────────────────────────────────
+// Callers that use fs.constants, fs.createReadStream, etc. keep working.
+module.exports = { ...fs };
+
+// ── Async methods (return promises, like fs-extra) ──────────────────────────
+
+module.exports.readFile = fsp.readFile;
+module.exports.writeFile = fsp.writeFile;
+module.exports.readdir = fsp.readdir;
+module.exports.stat = fsp.stat;
+module.exports.access = fsp.access;
+module.exports.rename = fsp.rename;
+module.exports.realpath = fsp.realpath;
+module.exports.rmdir = fsp.rmdir;
+
+/**
+ * Recursively ensure a directory exists.
+ * @param {string} dirPath
+ */
+module.exports.ensureDir = async function ensureDir(dirPath) {
+  await fsp.mkdir(dirPath, { recursive: true });
+};
+
+/**
+ * Check whether a path exists.
+ * @param {string} p
+ * @returns {Promise<boolean>}
+ */
+module.exports.pathExists = async function pathExists(p) {
+  try {
+    await fsp.access(p);
+    return true;
+  } catch (error) {
+    if (error && (error.code === 'ENOENT' || error.code === 'ENOTDIR')) {
+      return false;
+    }
+    throw error;
+  }
+};
+
+/**
+ * Synchronous variant of pathExists.
+ * @param {string} p
+ * @returns {boolean}
+ */
+module.exports.pathExistsSync = function pathExistsSync(p) {
+  return fs.existsSync(p);
+};
+
+/**
+ * Recursively copy a directory tree synchronously.
+ * @param {string} src - Source directory
+ * @param {string} dest - Destination directory
+ * @param {boolean} force - Whether to overwrite existing files
+ * @param {Function} [filter] - Optional filter(srcPath) → boolean; return false to skip
+ */
+function copyDirSync(src, dest, force, filter) {
+  if (filter && !filter(src)) return;
+  fs.mkdirSync(dest, { recursive: true });
+  const entries = fs.readdirSync(src, { withFileTypes: true });
+  for (const entry of entries) {
+    const srcPath = path.join(src, entry.name);
+    const destPath = path.join(dest, entry.name);
+    if (filter && !filter(srcPath)) continue;
+    if (entry.isDirectory()) {
+      copyDirSync(srcPath, destPath, force, filter);
+    } else {
+      if (!force && fs.existsSync(destPath)) {
+        continue;
+      }
+      fs.copyFileSync(srcPath, destPath);
+    }
+  }
+}
+
+/**
+ * Copy a file or directory.
+ * @param {string} src
+ * @param {string} dest
+ * @param {object} [options]
+ * @param {boolean} [options.overwrite=true]
+ * @param {Function} [options.filter] - Optional filter(srcPath) → boolean; return false to skip
+ */
+module.exports.copy = async function copy(src, dest, options = {}) {
+  const overwrite = options.overwrite !== false;
+  const filter = options.filter;
+
+  if (filter && !filter(src)) return;
+
+  const srcStat = await fsp.stat(src);
+
+  if (srcStat.isDirectory()) {
+    copyDirSync(src, dest, overwrite, filter);
+  } else {
+    await fsp.mkdir(path.dirname(dest), { recursive: true });
+    if (!overwrite) {
+      try {
+        await fsp.access(dest);
+        return; // dest exists, skip
+      } catch (error) {
+        if (error && error.code !== 'ENOENT' && error.code !== 'ENOTDIR') {
+          throw error;
+        }
+        // dest doesn't exist, proceed
+      }
+    }
+    fs.copyFileSync(src, dest);
+  }
+};
+
+/**
+ * Recursively remove a file or directory.
+ * @param {string} p
+ */
+module.exports.remove = async function remove(p) {
+  fs.rmSync(p, { recursive: true, force: true });
+};
+
+/**
+ * Move (rename) a file or directory, with cross-device fallback.
+ * @param {string} src
+ * @param {string} dest
+ */
+module.exports.move = async function move(src, dest) {
+  try {
+    await fsp.rename(src, dest);
+  } catch (error) {
+    if (error.code === 'EXDEV') {
+      // Cross-device: copy then remove
+      const srcStat = fs.statSync(src);
+      if (srcStat.isDirectory()) {
+        copyDirSync(src, dest, true);
+      } else {
+        fs.mkdirSync(path.dirname(dest), { recursive: true });
+        fs.copyFileSync(src, dest);
+      }
+      fs.rmSync(src, { recursive: true, force: true });
+    } else {
+      throw error;
+    }
+  }
+};
+
+/**
+ * Read and parse a JSON file synchronously.
+ * @param {string} filePath
+ * @returns {any}
+ */
+module.exports.readJsonSync = function readJsonSync(filePath) {
+  const raw = fs.readFileSync(filePath, 'utf8').replace(/^\uFEFF/, '');
+  return JSON.parse(raw);
+};
@@ -1,4 +1,4 @@
-const fs = require('fs-extra');
+const fs = require('./fs');
 const path = require('node:path');
 const yaml = require('yaml');
 const { getProjectRoot } = require('./project-root');

@@ -1,5 +1,5 @@
 const path = require('node:path');
-const fs = require('fs-extra');
+const fs = require('./fs');

 /**
  * Find the BMAD project root directory by looking for package.json

@@ -1,6 +1,6 @@
 const path = require('node:path');
 const os = require('node:os');
-const fs = require('fs-extra');
+const fs = require('./fs');
 const { CLIUtils } = require('./cli-utils');
 const { CustomHandler } = require('../installers/lib/custom/handler');
 const { ExternalModuleManager } = require('../installers/lib/modules/external-manager');

@@ -1,5 +1,5 @@
 const xml2js = require('xml2js');
-const fs = require('fs-extra');
+const fs = require('./fs');
 const path = require('node:path');
 const { getProjectRoot, getSourcePath } = require('./project-root');
 const { YamlXmlBuilder } = require('./yaml-xml-builder');

@@ -1,5 +1,5 @@
 const yaml = require('yaml');
-const fs = require('fs-extra');
+const fs = require('./fs');
 const path = require('node:path');
 const crypto = require('node:crypto');
 const { AgentAnalyzer } = require('./agent-analyzer');

@@ -3,7 +3,7 @@
  * This should be run once to update existing installations
  */

-const fs = require('fs-extra');
+const fs = require('./cli/lib/fs');
 const path = require('node:path');
 const yaml = require('yaml');
 const chalk = require('chalk');