Compare commits

5 commits: b8e3213fe9...a395df0459

| Author | SHA1 | Date |
|---|---|---|
| | a395df0459 | |
| | 8bdf21f65b | |
| | 7d63dcd6a0 | |
| | 999ece33a9 | |
| | 5e841f9cac | |
README.md (36)
@@ -5,7 +5,7 @@
 [](https://nodejs.org)
 [](https://discord.gg/gk8jAdXWmj)
 
-**Build More, Architect Dreams** — An AI-driven agile development framework with 21 specialized agents, 50+ guided workflows, and scale-adaptive intelligence that adjusts from bug fixes to enterprise systems.
+**Breakthrough Method of Agile AI Driven Development** — An AI-driven agile development framework with 21 specialized agents, 50+ guided workflows, and scale-adaptive intelligence that adjusts from bug fixes to enterprise systems.
 
 **100% free and open source.** No paywalls. No gated content. No gated Discord. We believe in empowering everyone, not just those who can pay.
 
@@ -16,6 +16,7 @@ Traditional AI tools do the thinking for you, producing average results. BMad ag
 - **Scale-Adaptive**: Automatically adjusts planning depth based on project complexity (Level 0-4)
 - **Structured Workflows**: Grounded in agile best practices across analysis, planning, architecture, and implementation
 - **Specialized Agents**: 12+ domain experts (PM, Architect, Developer, UX, Scrum Master, and more)
+- **Party Mode**: Bring multiple agent personas into one session to plan, troubleshoot, or discuss your project collaboratively
 - **Complete Lifecycle**: From brainstorming to deployment, with just-in-time documentation
 
 ## Quick Start
 
@@ -26,30 +27,39 @@ Traditional AI tools do the thinking for you, producing average results. BMad ag
 npx bmad-method@alpha install
 ```
 
-Follow the installer prompts to configure your project.
+Follow the installer prompts, then open your AI IDE (Claude Code, Cursor, Windsurf, etc.) in the project folder.
 
-Once you have installed BMad to a folder, launch your tool of choice from where you installed BMad. (We really like Claude Code and Cursor - but there are any that work great with BMad!)
+> **Not sure what to do?** Run `/bmad-help` — it tells you exactly what's next and what's optional. You can also ask it questions like `/bmad-help How should I build a web app for XYZ?`
 
-Then its simple as running the command: `/bmad-help` if you do not know what to do. Depending on which modules you have installed, you will have different choices.
+The workflows below show the fastest path to working code. You can also load agents directly for a more structured process, extensive planning, or to learn about agile development practices — the agents guide you with menus, explanations, and elicitation at each step.
 
-To make the help more applicable you can even run the `/bmad-help What do you suggest I do to get started building a brand new web application for XYZ`.
+### Simple Path (Quick Flow)
 
-The results from BMad Help will be able to suggest and constantly guide you on what to do next - along with the workflows upon completion also making suggestions on what to do next.
+Bug fixes, small features, clear scope — 3 commands:
 
-This analyzes your project and recommends a track:
+1. `/quick-spec` — analyzes your codebase and produces a tech-spec with stories
+2. `/dev-story` — implements each story
+3. `/code-review` — validates quality
 
-| Track | Best For | Time to First Story Coding |
-| --------------- | ------------------------- | -------------------------- |
-| **Quick Flow** | Bug fixes, small features | ~10-30 minutes |
-| **BMad Method** | Products and platforms | ~30 minutes - 2 hours |
-| **Enterprise** | Compliance-heavy systems | ~1-3 hours |
+### Full Planning Path (BMad Method)
+
+Products, platforms, complex features — structured planning then build:
+
+1. `/product-brief` — define problem, users, and MVP scope
+2. `/create-prd` — full requirements with personas, metrics, and risks
+3. `/create-architecture` — technical decisions and system design
+4. `/create-epics-and-stories` — break work into prioritized stories
+5. `/sprint-planning` — initialize sprint tracking
+6. **Repeat per story:** `/create-story` → `/dev-story` → `/code-review`
+
+Every step tells you what's next. Optional phases (brainstorming, research, UX design) are available when you need them — ask `/bmad-help` anytime. For a detailed walkthrough, see the [Getting Started Tutorial](http://docs.bmad-method.org/tutorials/getting-started/getting-started-bmadv6/).
 
 ## Modules
 
 BMad Method extends with official modules for specialized domains. Modules are available during installation and can be added to your project at any time.
 
 | Module | GitHub | NPM | Purpose |
-|--------|--------|-----|---------|
+| ------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------- |
 | **BMad Method (BMM)** | [bmad-code-org/BMAD-METHOD](https://github.com/bmad-code-org/BMAD-METHOD) | [bmad-method](https://www.npmjs.com/package/bmad-method) | Core framework with 34+ workflows across 4 development phases |
 | **BMad Builder (BMB)** | [bmad-code-org/bmad-builder](https://github.com/bmad-code-org/bmad-builder) | [bmad-builder](https://www.npmjs.com/package/bmad-builder) | Create custom BMad agents, workflows, and domain-specific modules |
 | **Game Dev Studio (BMGD)** | [bmad-code-org/bmad-module-game-dev-studio](https://github.com/bmad-code-org/bmad-module-game-dev-studio) | [bmad-game-dev-studio](https://www.npmjs.com/package/bmad-game-dev-studio) | Game development workflows for Unity, Unreal, and Godot |
 
@@ -13,6 +13,7 @@ If you're comfortable working with AI coding assistants like Claude, Cursor, or
 The fastest way to understand BMad is to try it. Choose a tutorial to walk through your first project in about 10 minutes.
 
 - **[Get Started with BMad](/docs/tutorials/getting-started/getting-started-bmadv6.md)** — Latest features, still in active development
+- **[Workflow Guide](/workflow-guide)** — A simple visual overview of the various BMad tracks that get you going quickly.
 
 :::tip[Already familiar with AI-assisted development?]
 Feel free to skip around. Use the sidebar to jump to any topic, or check out [What Are Agents?](/docs/explanation/core-concepts/what-are-agents.md) to understand how BMad organizes its AI personas.
 
@@ -42,9 +42,7 @@ BMad helps you build software through guided workflows with specialized AI agent
 | 3 | Solutioning | Design architecture *(BMad Method/Enterprise only)* |
 | 4 | Implementation | Build epic by epic, story by story |
 
-
+**[Open the Interactive Workflow Guide](/workflow-guide)** to explore phases, agents, and outputs for your chosen track.
 
-*Complete visual flowchart showing all phases, workflows, and agents for the standard greenfield track.*
 
 Based on your project's complexity, BMad offers three planning tracks:
 
@@ -86,6 +84,8 @@ your-project/
 Having issues? See [Install BMad](/docs/how-to/installation/install-bmad.md) for common solutions.
 :::
 
+Open your AI IDE in the project folder. From here, you can run `/bmad-help` anytime to see what to do next — or ask it a question like `/bmad-help How should I build a web app for XYZ?`
+
 ## Step 1: Initialize Your Project
 
 Load the **Analyst agent** in your IDE, wait for the menu, then run `workflow-init`.
 
@@ -81,6 +81,21 @@ export default [
 },
 },
+
+// Test files using Vitest (ES modules)
+{
+files: ['test/unit/**/*.js', 'test/integration/**/*.js', 'test/helpers/**/*.js', 'test/setup.js', 'vitest.config.js'],
+languageOptions: {
+sourceType: 'module',
+ecmaVersion: 'latest',
+},
+rules: {
+// Allow dev dependencies in test files
+'n/no-unpublished-import': 'off',
+'unicorn/prefer-module': 'off',
+'no-unused-vars': 'off',
+},
+},
+
 // CLI scripts under tools/** and test/**
 {
 files: ['tools/**/*.js', 'tools/**/*.mjs', 'test/**/*.js'],
 
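The ESLint override above references a `vitest.config.js` that is not itself part of this diff. As a rough sketch only, a minimal config consistent with those test globs and the `@vitest/coverage-v8` dev dependency added below might look like this (the file name, globs, and options here are assumptions, not the repository's actual config):

```javascript
// Hypothetical vitest.config.js — globs mirror the ESLint override above.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    // Match the unit and integration test locations the lint override covers
    include: ['test/unit/**/*.js', 'test/integration/**/*.js'],
    // Shared setup file, also listed in the override
    setupFiles: ['test/setup.js'],
    coverage: {
      provider: 'v8', // corresponds to the @vitest/coverage-v8 dev dependency
    },
  },
});
```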
@@ -35,6 +35,8 @@
 "@astrojs/sitemap": "^3.6.0",
 "@astrojs/starlight": "^0.37.0",
 "@eslint/js": "^9.33.0",
+"@vitest/coverage-v8": "^4.0.16",
+"@vitest/ui": "^4.0.16",
 "archiver": "^7.0.1",
 "astro": "^5.16.0",
 "c8": "^10.1.3",

@@ -50,6 +52,7 @@
 "prettier": "^3.7.4",
 "prettier-plugin-packagejson": "^2.5.19",
 "sharp": "^0.33.5",
+"vitest": "^4.0.16",
 "yaml-eslint-parser": "^1.2.3",
 "yaml-lint": "^1.7.0"
 },

@@ -244,7 +247,6 @@
 "integrity": "sha512-e7jT4DxYvIDLk1ZHmU/m/mB19rex9sv0c2ftBtjSBv+kVM/902eh0fINUzD7UwLLNR+jU585GxUJ8/EBfAM5fw==",
 "dev": true,
 "license": "MIT",
-"peer": true,
 "dependencies": {
 "@babel/code-frame": "^7.27.1",
 "@babel/generator": "^7.28.5",

@@ -2984,6 +2986,13 @@
 "url": "https://opencollective.com/pkgr"
 }
 },
+"node_modules/@polka/url": {
+"version": "1.0.0-next.29",
+"resolved": "https://registry.npmjs.org/@polka/url/-/url-1.0.0-next.29.tgz",
+"integrity": "sha512-wwQAWhWSuHaag8c4q/KN/vCoeOJYshAIvMQwD4GpSb3OiZklFfvAgmj0VCBBImRpuF/aFgIRzllXlVX93Jevww==",
+"dev": true,
+"license": "MIT"
+},
 "node_modules/@rollup/pluginutils": {
 "version": "5.3.0",
 "resolved": "https://registry.npmjs.org/@rollup/pluginutils/-/pluginutils-5.3.0.tgz",

@@ -3436,6 +3445,13 @@
 "@sinonjs/commons": "^3.0.1"
 }
 },
+"node_modules/@standard-schema/spec": {
+"version": "1.1.0",
+"resolved": "https://registry.npmjs.org/@standard-schema/spec/-/spec-1.1.0.tgz",
+"integrity": "sha512-l2aFy5jALhniG5HgqrD6jXLi/rUWrKvqN/qJx6yoJsgKhblVd+iqqU4RCXavm/jPityDo5TCvKMnpjKnOriy0w==",
+"dev": true,
+"license": "MIT"
+},
 "node_modules/@swc/helpers": {
 "version": "0.5.18",
 "resolved": "https://registry.npmjs.org/@swc/helpers/-/helpers-0.5.18.tgz",

@@ -3502,6 +3518,17 @@
 "@babel/types": "^7.28.2"
 }
 },
+"node_modules/@types/chai": {
+"version": "5.2.3",
+"resolved": "https://registry.npmjs.org/@types/chai/-/chai-5.2.3.tgz",
+"integrity": "sha512-Mw558oeA9fFbv65/y4mHtXDs9bPnFMZAL/jxdPFUpOHHIXX91mcgEHbS5Lahr+pwZFR8A7GQleRWeI6cGFC2UA==",
+"dev": true,
+"license": "MIT",
+"dependencies": {
+"@types/deep-eql": "*",
+"assertion-error": "^2.0.1"
+}
+},
 "node_modules/@types/debug": {
 "version": "4.1.12",
 "resolved": "https://registry.npmjs.org/@types/debug/-/debug-4.1.12.tgz",

@@ -3511,6 +3538,13 @@
 "@types/ms": "*"
 }
 },
+"node_modules/@types/deep-eql": {
+"version": "4.0.2",
+"resolved": "https://registry.npmjs.org/@types/deep-eql/-/deep-eql-4.0.2.tgz",
+"integrity": "sha512-c9h9dVVMigMPc4bwTvC5dxqtqJZwQPePsWjPlpSOnojbor6pGqdk541lfA7AqFQr5pB1BRdq0juY9db81BwyFw==",
+"dev": true,
+"license": "MIT"
+},
 "node_modules/@types/estree": {
 "version": "1.0.8",
 "resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.8.tgz",

@@ -3954,6 +3988,171 @@
 "win32"
 ]
 },
+"node_modules/@vitest/coverage-v8": {
+"version": "4.0.16",
+"resolved": "https://registry.npmjs.org/@vitest/coverage-v8/-/coverage-v8-4.0.16.tgz",
+"integrity": "sha512-2rNdjEIsPRzsdu6/9Eq0AYAzYdpP6Bx9cje9tL3FE5XzXRQF1fNU9pe/1yE8fCrS0HD+fBtt6gLPh6LI57tX7A==",
+"dev": true,
+"license": "MIT",
+"dependencies": {
+"@bcoe/v8-coverage": "^1.0.2",
+"@vitest/utils": "4.0.16",
+"ast-v8-to-istanbul": "^0.3.8",
+"istanbul-lib-coverage": "^3.2.2",
+"istanbul-lib-report": "^3.0.1",
+"istanbul-lib-source-maps": "^5.0.6",
+"istanbul-reports": "^3.2.0",
+"magicast": "^0.5.1",
+"obug": "^2.1.1",
+"std-env": "^3.10.0",
+"tinyrainbow": "^3.0.3"
+},
+"funding": {
+"url": "https://opencollective.com/vitest"
+},
+"peerDependencies": {
+"@vitest/browser": "4.0.16",
+"vitest": "4.0.16"
+},
+"peerDependenciesMeta": {
+"@vitest/browser": {
+"optional": true
+}
+}
+},
+"node_modules/@vitest/expect": {
+"version": "4.0.16",
+"resolved": "https://registry.npmjs.org/@vitest/expect/-/expect-4.0.16.tgz",
+"integrity": "sha512-eshqULT2It7McaJkQGLkPjPjNph+uevROGuIMJdG3V+0BSR2w9u6J9Lwu+E8cK5TETlfou8GRijhafIMhXsimA==",
+"dev": true,
+"license": "MIT",
+"dependencies": {
+"@standard-schema/spec": "^1.0.0",
+"@types/chai": "^5.2.2",
+"@vitest/spy": "4.0.16",
+"@vitest/utils": "4.0.16",
+"chai": "^6.2.1",
+"tinyrainbow": "^3.0.3"
+},
+"funding": {
+"url": "https://opencollective.com/vitest"
+}
+},
+"node_modules/@vitest/mocker": {
+"version": "4.0.16",
+"resolved": "https://registry.npmjs.org/@vitest/mocker/-/mocker-4.0.16.tgz",
+"integrity": "sha512-yb6k4AZxJTB+q9ycAvsoxGn+j/po0UaPgajllBgt1PzoMAAmJGYFdDk0uCcRcxb3BrME34I6u8gHZTQlkqSZpg==",
+"dev": true,
+"license": "MIT",
+"dependencies": {
+"@vitest/spy": "4.0.16",
+"estree-walker": "^3.0.3",
+"magic-string": "^0.30.21"
+},
+"funding": {
+"url": "https://opencollective.com/vitest"
+},
+"peerDependencies": {
+"msw": "^2.4.9",
+"vite": "^6.0.0 || ^7.0.0-0"
+},
+"peerDependenciesMeta": {
+"msw": {
+"optional": true
+},
+"vite": {
+"optional": true
+}
+}
+},
+"node_modules/@vitest/pretty-format": {
+"version": "4.0.16",
+"resolved": "https://registry.npmjs.org/@vitest/pretty-format/-/pretty-format-4.0.16.tgz",
+"integrity": "sha512-eNCYNsSty9xJKi/UdVD8Ou16alu7AYiS2fCPRs0b1OdhJiV89buAXQLpTbe+X8V9L6qrs9CqyvU7OaAopJYPsA==",
+"dev": true,
+"license": "MIT",
+"dependencies": {
+"tinyrainbow": "^3.0.3"
+},
+"funding": {
+"url": "https://opencollective.com/vitest"
+}
+},
+"node_modules/@vitest/runner": {
+"version": "4.0.16",
+"resolved": "https://registry.npmjs.org/@vitest/runner/-/runner-4.0.16.tgz",
+"integrity": "sha512-VWEDm5Wv9xEo80ctjORcTQRJ539EGPB3Pb9ApvVRAY1U/WkHXmmYISqU5E79uCwcW7xYUV38gwZD+RV755fu3Q==",
+"dev": true,
+"license": "MIT",
+"dependencies": {
+"@vitest/utils": "4.0.16",
+"pathe": "^2.0.3"
+},
+"funding": {
+"url": "https://opencollective.com/vitest"
+}
+},
+"node_modules/@vitest/snapshot": {
+"version": "4.0.16",
+"resolved": "https://registry.npmjs.org/@vitest/snapshot/-/snapshot-4.0.16.tgz",
+"integrity": "sha512-sf6NcrYhYBsSYefxnry+DR8n3UV4xWZwWxYbCJUt2YdvtqzSPR7VfGrY0zsv090DAbjFZsi7ZaMi1KnSRyK1XA==",
+"dev": true,
+"license": "MIT",
+"dependencies": {
+"@vitest/pretty-format": "4.0.16",
+"magic-string": "^0.30.21",
+"pathe": "^2.0.3"
+},
+"funding": {
+"url": "https://opencollective.com/vitest"
+}
+},
+"node_modules/@vitest/spy": {
+"version": "4.0.16",
+"resolved": "https://registry.npmjs.org/@vitest/spy/-/spy-4.0.16.tgz",
+"integrity": "sha512-4jIOWjKP0ZUaEmJm00E0cOBLU+5WE0BpeNr3XN6TEF05ltro6NJqHWxXD0kA8/Zc8Nh23AT8WQxwNG+WeROupw==",
+"dev": true,
+"license": "MIT",
+"funding": {
+"url": "https://opencollective.com/vitest"
+}
+},
+"node_modules/@vitest/ui": {
+"version": "4.0.16",
+"resolved": "https://registry.npmjs.org/@vitest/ui/-/ui-4.0.16.tgz",
+"integrity": "sha512-rkoPH+RqWopVxDnCBE/ysIdfQ2A7j1eDmW8tCxxrR9nnFBa9jKf86VgsSAzxBd1x+ny0GC4JgiD3SNfRHv3pOg==",
+"dev": true,
+"license": "MIT",
+"dependencies": {
+"@vitest/utils": "4.0.16",
+"fflate": "^0.8.2",
+"flatted": "^3.3.3",
+"pathe": "^2.0.3",
+"sirv": "^3.0.2",
+"tinyglobby": "^0.2.15",
+"tinyrainbow": "^3.0.3"
+},
+"funding": {
+"url": "https://opencollective.com/vitest"
+},
+"peerDependencies": {
+"vitest": "4.0.16"
+}
+},
+"node_modules/@vitest/utils": {
+"version": "4.0.16",
+"resolved": "https://registry.npmjs.org/@vitest/utils/-/utils-4.0.16.tgz",
+"integrity": "sha512-h8z9yYhV3e1LEfaQ3zdypIrnAg/9hguReGZoS7Gl0aBG5xgA410zBqECqmaF/+RkTggRsfnzc1XaAHA6bmUufA==",
+"dev": true,
+"license": "MIT",
+"dependencies": {
+"@vitest/pretty-format": "4.0.16",
+"tinyrainbow": "^3.0.3"
+},
+"funding": {
+"url": "https://opencollective.com/vitest"
+}
+},
 "node_modules/abort-controller": {
 "version": "3.0.0",
 "resolved": "https://registry.npmjs.org/abort-controller/-/abort-controller-3.0.0.tgz",

@@ -3973,7 +4172,6 @@
 "integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==",
 "dev": true,
 "license": "MIT",
-"peer": true,
 "bin": {
 "acorn": "bin/acorn"
 },

@@ -4266,6 +4464,35 @@
 "node": ">=8"
 }
 },
+"node_modules/assertion-error": {
+"version": "2.0.1",
+"resolved": "https://registry.npmjs.org/assertion-error/-/assertion-error-2.0.1.tgz",
+"integrity": "sha512-Izi8RQcffqCeNVgFigKli1ssklIbpHnCYc6AknXGYoB6grJqyeby7jv12JUQgmTAnIDnbck1uxksT4dzN3PWBA==",
+"dev": true,
+"license": "MIT",
+"engines": {
+"node": ">=12"
+}
+},
+"node_modules/ast-v8-to-istanbul": {
+"version": "0.3.10",
+"resolved": "https://registry.npmjs.org/ast-v8-to-istanbul/-/ast-v8-to-istanbul-0.3.10.tgz",
+"integrity": "sha512-p4K7vMz2ZSk3wN8l5o3y2bJAoZXT3VuJI5OLTATY/01CYWumWvwkUw0SqDBnNq6IiTO3qDa1eSQDibAV8g7XOQ==",
+"dev": true,
+"license": "MIT",
+"dependencies": {
+"@jridgewell/trace-mapping": "^0.3.31",
+"estree-walker": "^3.0.3",
+"js-tokens": "^9.0.1"
+}
+},
+"node_modules/ast-v8-to-istanbul/node_modules/js-tokens": {
+"version": "9.0.1",
+"resolved": "https://registry.npmjs.org/js-tokens/-/js-tokens-9.0.1.tgz",
+"integrity": "sha512-mxa9E9ITFOt0ban3j6L5MpjwegGz6lBQmM1IJkWeBZGcMxto50+eWdjC/52xDbS2vy0k7vIMK0Fe2wfL9OQSpQ==",
+"dev": true,
+"license": "MIT"
+},
 "node_modules/astring": {
 "version": "1.9.0",
 "resolved": "https://registry.npmjs.org/astring/-/astring-1.9.0.tgz",

@@ -4282,7 +4509,6 @@
 "integrity": "sha512-6mF/YrvwwRxLTu+aMEa5pwzKUNl5ZetWbTyZCs9Um0F12HUmxUiF5UHiZPy4rifzU3gtpM3xP2DfdmkNX9eZRg==",
 "dev": true,
 "license": "MIT",
-"peer": true,
 "dependencies": {
 "@astrojs/compiler": "^2.13.0",
 "@astrojs/internal-helpers": "0.7.5",

@@ -5350,7 +5576,6 @@
 }
 ],
 "license": "MIT",
-"peer": true,
 "dependencies": {
 "baseline-browser-mapping": "^2.9.0",
 "caniuse-lite": "^1.0.30001759",

@@ -5517,6 +5742,16 @@
 "url": "https://github.com/sponsors/wooorm"
 }
 },
+"node_modules/chai": {
+"version": "6.2.2",
+"resolved": "https://registry.npmjs.org/chai/-/chai-6.2.2.tgz",
+"integrity": "sha512-NUPRluOfOiTKBKvWPtSD4PhFvWCqOi0BGStNWs57X9js7XGTprSmFoz5F0tWhR4WPjNeR9jXqdC7/UpSJTnlRg==",
+"dev": true,
+"license": "MIT",
+"engines": {
+"node": ">=18"
+}
+},
 "node_modules/chalk": {
 "version": "4.1.2",
 "resolved": "https://registry.npmjs.org/chalk/-/chalk-4.1.2.tgz",

@@ -6666,7 +6901,6 @@
 "integrity": "sha512-LEyamqS7W5HB3ujJyvi0HQK/dtVINZvd5mAAp9eT5S/ujByGjiZLCzPcHVzuXbpJDJF/cxwHlfceVUDZ2lnSTw==",
 "dev": true,
 "license": "MIT",
-"peer": true,
 "dependencies": {
 "@eslint-community/eslint-utils": "^4.8.0",
 "@eslint-community/regexpp": "^4.12.1",

@@ -7253,6 +7487,16 @@
 "node": "^18.14.0 || ^20.0.0 || ^22.0.0 || >=24.0.0"
 }
 },
+"node_modules/expect-type": {
+"version": "1.3.0",
+"resolved": "https://registry.npmjs.org/expect-type/-/expect-type-1.3.0.tgz",
+"integrity": "sha512-knvyeauYhqjOYvQ66MznSMs83wmHrCycNEN6Ao+2AeYEfxUIkuiVxdEa1qlGEPK+We3n0THiDciYSsCcgW/DoA==",
+"dev": true,
+"license": "Apache-2.0",
+"engines": {
+"node": ">=12.0.0"
+}
+},
 "node_modules/expressive-code": {
 "version": "0.41.5",
 "resolved": "https://registry.npmjs.org/expressive-code/-/expressive-code-0.41.5.tgz",

@@ -7368,6 +7612,13 @@
 }
 }
 },
+"node_modules/fflate": {
+"version": "0.8.2",
+"resolved": "https://registry.npmjs.org/fflate/-/fflate-0.8.2.tgz",
+"integrity": "sha512-cPJU47OaAoCbg0pBvzsgpTPhmhqI5eJjh/JIu8tPj5q+T7iLvW/JAYUqmE7KOB4R1ZyEhzBaIQpQpardBF5z8A==",
+"dev": true,
+"license": "MIT"
+},
 "node_modules/figlet": {
 "version": "1.9.4",
 "resolved": "https://registry.npmjs.org/figlet/-/figlet-1.9.4.tgz",

@@ -10228,7 +10479,6 @@
 "integrity": "sha512-p3JTemJJbkiMjXEMiFwgm0v6ym5g8K+b2oDny+6xdl300tUKySxvilJQLSea48C6OaYNmO30kH9KxpiAg5bWJw==",
 "dev": true,
 "license": "MIT",
-"peer": true,
 "dependencies": {
 "globby": "15.0.0",
 "js-yaml": "4.1.1",

@@ -11699,6 +11949,17 @@
 "url": "https://github.com/fb55/nth-check?sponsor=1"
 }
 },
+"node_modules/obug": {
+"version": "2.1.1",
+"resolved": "https://registry.npmjs.org/obug/-/obug-2.1.1.tgz",
+"integrity": "sha512-uTqF9MuPraAQ+IsnPf366RG4cP9RtUi7MLO1N3KEc+wb0a6yKpeL0lmk2IB1jY5KHPAlTc6T/JRdC/YqxHNwkQ==",
+"dev": true,
+"funding": [
+"https://github.com/sponsors/sxzz",
+"https://opencollective.com/debug"
+],
+"license": "MIT"
+},
 "node_modules/ofetch": {
 "version": "1.5.1",
 "resolved": "https://registry.npmjs.org/ofetch/-/ofetch-1.5.1.tgz",

@@ -12144,6 +12405,13 @@
 "url": "https://github.com/sponsors/sindresorhus"
 }
 },
+"node_modules/pathe": {
+"version": "2.0.3",
+"resolved": "https://registry.npmjs.org/pathe/-/pathe-2.0.3.tgz",
+"integrity": "sha512-WUjGcAqP1gQacoQe+OBJsFA7Ld4DyXuUIjZ5cc75cLHvJ7dtNsTugphxIADwspS+AraAUePCKrSVtPLFj/F88w==",
+"dev": true,
+"license": "MIT"
+},
 "node_modules/piccolore": {
 "version": "0.1.3",
 "resolved": "https://registry.npmjs.org/piccolore/-/piccolore-0.1.3.tgz",

@@ -12292,7 +12560,6 @@
 }
 ],
 "license": "MIT",
-"peer": true,
 "dependencies": {
 "nanoid": "^3.3.11",
 "picocolors": "^1.1.1",

@@ -12358,7 +12625,6 @@
 "integrity": "sha512-v6UNi1+3hSlVvv8fSaoUbggEM5VErKmmpGA7Pl3HF8V6uKY7rvClBOJlH6yNwQtfTueNkGVpOv/mtWL9L4bgRA==",
 "dev": true,
 "license": "MIT",
-"peer": true,
 "bin": {
 "prettier": "bin/prettier.cjs"
 },

@ -13187,7 +13453,6 @@
|
||||||
"integrity": "sha512-3nk8Y3a9Ea8szgKhinMlGMhGMw89mqule3KWczxhIzqudyHdCIOHw8WJlj/r329fACjKLEh13ZSk7oE22kyeIw==",
|
"integrity": "sha512-3nk8Y3a9Ea8szgKhinMlGMhGMw89mqule3KWczxhIzqudyHdCIOHw8WJlj/r329fACjKLEh13ZSk7oE22kyeIw==",
|
||||||
"dev": true,
|
"dev": true,
|
||||||
"license": "MIT",
|
"license": "MIT",
|
||||||
"peer": true,
|
|
||||||
"dependencies": {
|
"dependencies": {
|
||||||
"@types/estree": "1.0.8"
|
"@types/estree": "1.0.8"
|
||||||
},
|
},
|
||||||
|
|
@ -13371,6 +13636,13 @@
|
||||||
"@types/hast": "^3.0.4"
|
"@types/hast": "^3.0.4"
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
|
"node_modules/siginfo": {
|
||||||
|
"version": "2.0.0",
|
||||||
|
"resolved": "https://registry.npmjs.org/siginfo/-/siginfo-2.0.0.tgz",
|
||||||
|
"integrity": "sha512-ybx0WO1/8bSBLEWXZvEd7gMW3Sn3JFlW3TvX1nREbDLRNQNaeNN8WK0meBwPdAaOI7TtRRRJn/Es1zhrrCHu7g==",
|
||||||
|
"dev": true,
|
||||||
|
"license": "ISC"
|
||||||
|
},
|
||||||
"node_modules/signal-exit": {
|
"node_modules/signal-exit": {
|
||||||
"version": "4.1.0",
|
"version": "4.1.0",
|
||||||
"resolved": "https://registry.npmjs.org/signal-exit/-/signal-exit-4.1.0.tgz",
|
"resolved": "https://registry.npmjs.org/signal-exit/-/signal-exit-4.1.0.tgz",
|
||||||
|
|
@ -13400,6 +13672,21 @@
|
||||||
"dev": true,
|
"dev": true,
|
||||||
"license": "MIT"
|
"license": "MIT"
|
||||||
},
|
},
|
||||||
|
"node_modules/sirv": {
|
||||||
|
"version": "3.0.2",
|
||||||
|
"resolved": "https://registry.npmjs.org/sirv/-/sirv-3.0.2.tgz",
|
||||||
|
"integrity": "sha512-2wcC/oGxHis/BoHkkPwldgiPSYcpZK3JU28WoMVv55yHJgcZ8rlXvuG9iZggz+sU1d4bRgIGASwyWqjxu3FM0g==",
|
||||||
|
"dev": true,
|
||||||
|
"license": "MIT",
|
||||||
|
"dependencies": {
|
||||||
|
"@polka/url": "^1.0.0-next.24",
|
||||||
|
"mrmime": "^2.0.0",
|
||||||
|
"totalist": "^3.0.0"
|
||||||
|
},
|
||||||
|
"engines": {
|
||||||
|
"node": ">=18"
|
||||||
|
}
|
||||||
|
},
|
||||||
"node_modules/sisteransi": {
|
"node_modules/sisteransi": {
|
||||||
"version": "1.0.5",
|
"version": "1.0.5",
|
||||||
"resolved": "https://registry.npmjs.org/sisteransi/-/sisteransi-1.0.5.tgz",
|
"resolved": "https://registry.npmjs.org/sisteransi/-/sisteransi-1.0.5.tgz",
|
||||||
|
|
@ -13610,6 +13897,20 @@
|
||||||
"node": ">=8"
|
"node": ">=8"
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
|
"node_modules/stackback": {
|
||||||
|
"version": "0.0.2",
|
||||||
|
"resolved": "https://registry.npmjs.org/stackback/-/stackback-0.0.2.tgz",
|
||||||
|
"integrity": "sha512-1XMJE5fQo1jGH6Y/7ebnwPOBEkIEnT4QF32d5R1+VXdXveM0IBMJt8zfaxX1P3QhVwrYe+576+jkANtSS2mBbw==",
|
||||||
|
"dev": true,
|
||||||
|
"license": "MIT"
|
||||||
|
},
|
||||||
|
"node_modules/std-env": {
|
||||||
|
"version": "3.10.0",
|
||||||
|
"resolved": "https://registry.npmjs.org/std-env/-/std-env-3.10.0.tgz",
|
||||||
|
"integrity": "sha512-5GS12FdOZNliM5mAOxFRg7Ir0pWz8MdpYm6AY6VPkGpbA7ZzmbzNcBJQ0GPvvyWgcY7QAhCgf9Uy89I03faLkg==",
|
||||||
|
"dev": true,
|
||||||
|
"license": "MIT"
|
||||||
|
},
|
||||||
"node_modules/stream-replace-string": {
|
"node_modules/stream-replace-string": {
|
||||||
"version": "2.0.0",
|
"version": "2.0.0",
|
||||||
"resolved": "https://registry.npmjs.org/stream-replace-string/-/stream-replace-string-2.0.0.tgz",
|
"resolved": "https://registry.npmjs.org/stream-replace-string/-/stream-replace-string-2.0.0.tgz",
|
||||||
|
|
@ -14024,6 +14325,13 @@
|
||||||
"dev": true,
|
"dev": true,
|
||||||
"license": "MIT"
|
"license": "MIT"
|
||||||
},
|
},
|
||||||
|
"node_modules/tinybench": {
|
||||||
|
"version": "2.9.0",
|
||||||
|
"resolved": "https://registry.npmjs.org/tinybench/-/tinybench-2.9.0.tgz",
|
||||||
|
"integrity": "sha512-0+DUvqWMValLmha6lr4kD8iAMK1HzV0/aKnCtWb9v9641TnP/MFb7Pc2bxoxQjTXAErryXVgUOfv2YqNllqGeg==",
|
||||||
|
"dev": true,
|
||||||
|
"license": "MIT"
|
||||||
|
},
|
||||||
"node_modules/tinyexec": {
|
"node_modules/tinyexec": {
|
||||||
"version": "1.0.2",
|
"version": "1.0.2",
|
||||||
"resolved": "https://registry.npmjs.org/tinyexec/-/tinyexec-1.0.2.tgz",
|
"resolved": "https://registry.npmjs.org/tinyexec/-/tinyexec-1.0.2.tgz",
|
||||||
|
|
@ -14051,6 +14359,16 @@
|
||||||
"url": "https://github.com/sponsors/SuperchupuDev"
|
"url": "https://github.com/sponsors/SuperchupuDev"
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
|
"node_modules/tinyrainbow": {
|
||||||
|
"version": "3.0.3",
|
||||||
|
"resolved": "https://registry.npmjs.org/tinyrainbow/-/tinyrainbow-3.0.3.tgz",
|
||||||
|
"integrity": "sha512-PSkbLUoxOFRzJYjjxHJt9xro7D+iilgMX/C9lawzVuYiIdcihh9DXmVibBe8lmcFrRi/VzlPjBxbN7rH24q8/Q==",
|
||||||
|
"dev": true,
|
||||||
|
"license": "MIT",
|
||||||
|
"engines": {
|
||||||
|
"node": ">=14.0.0"
|
||||||
|
}
|
||||||
|
},
|
||||||
"node_modules/tmpl": {
|
"node_modules/tmpl": {
|
||||||
"version": "1.0.5",
|
"version": "1.0.5",
|
||||||
"resolved": "https://registry.npmjs.org/tmpl/-/tmpl-1.0.5.tgz",
|
"resolved": "https://registry.npmjs.org/tmpl/-/tmpl-1.0.5.tgz",
|
||||||
|
|
@ -14071,6 +14389,16 @@
|
||||||
"node": ">=8.0"
|
"node": ">=8.0"
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
|
"node_modules/totalist": {
|
||||||
|
"version": "3.0.1",
|
||||||
|
"resolved": "https://registry.npmjs.org/totalist/-/totalist-3.0.1.tgz",
|
||||||
|
"integrity": "sha512-sf4i37nQ2LBx4m3wB74y+ubopq6W/dIzXg0FDGjsYnZHVa1Da8FH853wlL2gtUhg+xJXjfk3kUZS3BRoQeoQBQ==",
|
||||||
|
"dev": true,
|
||||||
|
"license": "MIT",
|
||||||
|
"engines": {
|
||||||
|
"node": ">=6"
|
||||||
|
}
|
||||||
|
},
|
||||||
"node_modules/trim-lines": {
|
"node_modules/trim-lines": {
|
||||||
"version": "3.0.1",
|
"version": "3.0.1",
|
||||||
"resolved": "https://registry.npmjs.org/trim-lines/-/trim-lines-3.0.1.tgz",
|
"resolved": "https://registry.npmjs.org/trim-lines/-/trim-lines-3.0.1.tgz",
|
||||||
|
|
@ -14727,7 +15055,6 @@
|
||||||
"integrity": "sha512-+Oxm7q9hDoLMyJOYfUYBuHQo+dkAloi33apOPP56pzj+vsdJDzr+j1NISE5pyaAuKL4A3UD34qd0lx5+kfKp2g==",
|
"integrity": "sha512-+Oxm7q9hDoLMyJOYfUYBuHQo+dkAloi33apOPP56pzj+vsdJDzr+j1NISE5pyaAuKL4A3UD34qd0lx5+kfKp2g==",
|
||||||
"dev": true,
|
"dev": true,
|
||||||
"license": "MIT",
|
"license": "MIT",
|
||||||
"peer": true,
|
|
||||||
"dependencies": {
|
"dependencies": {
|
||||||
"esbuild": "^0.25.0",
|
"esbuild": "^0.25.0",
|
||||||
"fdir": "^6.4.4",
|
"fdir": "^6.4.4",
|
||||||
|
|
@ -14817,6 +15144,84 @@
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
|
"node_modules/vitest": {
|
||||||
|
"version": "4.0.16",
|
||||||
|
"resolved": "https://registry.npmjs.org/vitest/-/vitest-4.0.16.tgz",
|
||||||
|
"integrity": "sha512-E4t7DJ9pESL6E3I8nFjPa4xGUd3PmiWDLsDztS2qXSJWfHtbQnwAWylaBvSNY48I3vr8PTqIZlyK8TE3V3CA4Q==",
|
||||||
|
"dev": true,
|
||||||
|
"license": "MIT",
|
||||||
|
"dependencies": {
|
||||||
|
"@vitest/expect": "4.0.16",
|
||||||
|
"@vitest/mocker": "4.0.16",
|
||||||
|
"@vitest/pretty-format": "4.0.16",
|
||||||
|
"@vitest/runner": "4.0.16",
|
||||||
|
"@vitest/snapshot": "4.0.16",
|
||||||
|
"@vitest/spy": "4.0.16",
|
||||||
|
"@vitest/utils": "4.0.16",
|
||||||
|
"es-module-lexer": "^1.7.0",
|
||||||
|
"expect-type": "^1.2.2",
|
||||||
|
"magic-string": "^0.30.21",
|
||||||
|
"obug": "^2.1.1",
|
||||||
|
"pathe": "^2.0.3",
|
||||||
|
"picomatch": "^4.0.3",
|
||||||
|
"std-env": "^3.10.0",
|
||||||
|
"tinybench": "^2.9.0",
|
||||||
|
"tinyexec": "^1.0.2",
|
||||||
|
"tinyglobby": "^0.2.15",
|
||||||
|
"tinyrainbow": "^3.0.3",
|
||||||
|
"vite": "^6.0.0 || ^7.0.0",
|
||||||
|
"why-is-node-running": "^2.3.0"
|
||||||
|
},
|
||||||
|
"bin": {
|
||||||
|
"vitest": "vitest.mjs"
|
||||||
|
},
|
||||||
|
"engines": {
|
||||||
|
"node": "^20.0.0 || ^22.0.0 || >=24.0.0"
|
||||||
|
},
|
||||||
|
"funding": {
|
||||||
|
"url": "https://opencollective.com/vitest"
|
||||||
|
},
|
||||||
|
"peerDependencies": {
|
||||||
|
"@edge-runtime/vm": "*",
|
||||||
|
"@opentelemetry/api": "^1.9.0",
|
||||||
|
"@types/node": "^20.0.0 || ^22.0.0 || >=24.0.0",
|
||||||
|
"@vitest/browser-playwright": "4.0.16",
|
||||||
|
"@vitest/browser-preview": "4.0.16",
|
||||||
|
"@vitest/browser-webdriverio": "4.0.16",
|
||||||
|
"@vitest/ui": "4.0.16",
|
||||||
|
"happy-dom": "*",
|
||||||
|
"jsdom": "*"
|
||||||
|
},
|
||||||
|
"peerDependenciesMeta": {
|
||||||
|
"@edge-runtime/vm": {
|
||||||
|
"optional": true
|
||||||
|
},
|
||||||
|
"@opentelemetry/api": {
|
||||||
|
"optional": true
|
||||||
|
},
|
||||||
|
"@types/node": {
|
||||||
|
"optional": true
|
||||||
|
},
|
||||||
|
"@vitest/browser-playwright": {
|
||||||
|
"optional": true
|
||||||
|
},
|
||||||
|
"@vitest/browser-preview": {
|
||||||
|
"optional": true
|
||||||
|
},
|
||||||
|
"@vitest/browser-webdriverio": {
|
||||||
|
"optional": true
|
||||||
|
},
|
||||||
|
"@vitest/ui": {
|
||||||
|
"optional": true
|
||||||
|
},
|
||||||
|
"happy-dom": {
|
||||||
|
"optional": true
|
||||||
|
},
|
||||||
|
"jsdom": {
|
||||||
|
"optional": true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
"node_modules/walker": {
|
"node_modules/walker": {
|
||||||
"version": "1.0.8",
|
"version": "1.0.8",
|
||||||
"resolved": "https://registry.npmjs.org/walker/-/walker-1.0.8.tgz",
|
"resolved": "https://registry.npmjs.org/walker/-/walker-1.0.8.tgz",
|
||||||
|
|
@ -14872,6 +15277,23 @@
|
||||||
"node": ">=4"
|
"node": ">=4"
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
|
"node_modules/why-is-node-running": {
|
||||||
|
"version": "2.3.0",
|
||||||
|
"resolved": "https://registry.npmjs.org/why-is-node-running/-/why-is-node-running-2.3.0.tgz",
|
||||||
|
"integrity": "sha512-hUrmaWBdVDcxvYqnyh09zunKzROWjbZTiNy8dBEjkS7ehEDQibXJ7XvlmtbwuTclUiIyN+CyXQD4Vmko8fNm8w==",
|
||||||
|
"dev": true,
|
||||||
|
"license": "MIT",
|
||||||
|
"dependencies": {
|
||||||
|
"siginfo": "^2.0.0",
|
||||||
|
"stackback": "0.0.2"
|
||||||
|
},
|
||||||
|
"bin": {
|
||||||
|
"why-is-node-running": "cli.js"
|
||||||
|
},
|
||||||
|
"engines": {
|
||||||
|
"node": ">=8"
|
||||||
|
}
|
||||||
|
},
|
||||||
"node_modules/widest-line": {
|
"node_modules/widest-line": {
|
||||||
"version": "3.1.0",
|
"version": "3.1.0",
|
||||||
"resolved": "https://registry.npmjs.org/widest-line/-/widest-line-3.1.0.tgz",
|
"resolved": "https://registry.npmjs.org/widest-line/-/widest-line-3.1.0.tgz",
|
||||||
|
|
@ -15001,7 +15423,6 @@
|
||||||
"resolved": "https://registry.npmjs.org/yaml/-/yaml-2.8.2.tgz",
|
"resolved": "https://registry.npmjs.org/yaml/-/yaml-2.8.2.tgz",
|
||||||
"integrity": "sha512-mplynKqc1C2hTVYxd0PU2xQAc22TI1vShAYGksCCfxbn/dFwnHTNi1bvYsBTkhdUNtGIf5xNOg938rrSSYvS9A==",
|
"integrity": "sha512-mplynKqc1C2hTVYxd0PU2xQAc22TI1vShAYGksCCfxbn/dFwnHTNi1bvYsBTkhdUNtGIf5xNOg938rrSSYvS9A==",
|
||||||
"license": "ISC",
|
"license": "ISC",
|
||||||
"peer": true,
|
|
||||||
"bin": {
|
"bin": {
|
||||||
"yaml": "bin.mjs"
|
"yaml": "bin.mjs"
|
||||||
},
|
},
|
||||||
|
|
@ -15181,7 +15602,6 @@
|
||||||
"integrity": "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ==",
|
"integrity": "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ==",
|
||||||
"dev": true,
|
"dev": true,
|
||||||
"license": "MIT",
|
"license": "MIT",
|
||||||
"peer": true,
|
|
||||||
"funding": {
|
"funding": {
|
||||||
"url": "https://github.com/sponsors/colinhacks"
|
"url": "https://github.com/sponsors/colinhacks"
|
||||||
}
|
}
|
||||||
|
|
|
||||||
12
package.json
```diff
@@ -45,10 +45,15 @@
     "release:minor": "gh workflow run \"Manual Release\" -f version_bump=minor",
     "release:patch": "gh workflow run \"Manual Release\" -f version_bump=patch",
     "release:watch": "gh run watch",
-    "test": "npm run test:schemas && npm run test:install && npm run validate:schemas && npm run lint && npm run lint:md && npm run format:check",
+    "test": "npm run test:schemas && npm run test:install && npm run test:unit && npm run validate:schemas && npm run lint && npm run lint:md && npm run format:check",
-    "test:coverage": "c8 --reporter=text --reporter=html npm run test:schemas",
+    "test:coverage": "vitest run --coverage",
     "test:install": "node test/test-installation-components.js",
+    "test:integration": "vitest run test/integration",
+    "test:quick": "vitest run --changed",
     "test:schemas": "node test/test-agent-schema.js",
+    "test:ui": "vitest --ui",
+    "test:unit": "vitest run",
+    "test:unit:watch": "vitest",
     "validate:schemas": "node tools/validate-agent-schema.js"
   },
   "lint-staged": {
@@ -90,6 +95,8 @@
     "@astrojs/sitemap": "^3.6.0",
     "@astrojs/starlight": "^0.37.0",
     "@eslint/js": "^9.33.0",
+    "@vitest/coverage-v8": "^4.0.16",
+    "@vitest/ui": "^4.0.16",
     "archiver": "^7.0.1",
     "astro": "^5.16.0",
     "c8": "^10.1.3",
@@ -105,6 +112,7 @@
     "prettier": "^3.7.4",
     "prettier-plugin-packagejson": "^2.5.19",
     "sharp": "^0.33.5",
+    "vitest": "^4.0.16",
     "yaml-eslint-parser": "^1.2.3",
     "yaml-lint": "^1.7.0"
   },
```
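The script changes above wire Vitest into the existing composite `test` pipeline. As a rough illustration of how that composite script fans out into concrete commands, here is a small Node sketch. The `expand` helper is hypothetical, and the `lint`, `lint:md`, and `format:check` bodies are assumptions (they are not shown in this diff); only the other entries mirror the `package.json` changes.

```javascript
// Sketch: expand the composite "test" script into the commands it actually runs.
const scripts = {
  test: "npm run test:schemas && npm run test:install && npm run test:unit && npm run validate:schemas && npm run lint && npm run lint:md && npm run format:check",
  "test:schemas": "node test/test-agent-schema.js",
  "test:install": "node test/test-installation-components.js",
  "test:unit": "vitest run",
  "validate:schemas": "node tools/validate-agent-schema.js",
  lint: "eslint .", // assumed; not shown in the diff
  "lint:md": "markdownlint .", // assumed; not shown in the diff
  "format:check": "prettier --check .", // assumed; not shown in the diff
};

function expand(name) {
  // Split on "&&" and recursively resolve any "npm run <script>" references.
  return scripts[name]
    .split("&&")
    .map((part) => part.trim())
    .map((cmd) => {
      const m = cmd.match(/^npm run (\S+)$/);
      return m && scripts[m[1]] ? expand(m[1]) : [cmd];
    })
    .flat();
}

console.log(expand("test")); // third entry is "vitest run" — the new unit-test stage
```

The practical effect of the diff is that a plain `npm test` now fails if any Vitest unit test fails, in addition to the existing schema, install, lint, and format checks.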
````diff
@@ -52,7 +52,7 @@ Analyze the user's input to determine mode:
 - Load the spec, extract tasks/context/AC
 - Set `{execution_mode}` = "tech-spec"
 - Set `{tech_spec_path}` = provided path
-- **NEXT:** Load `step-03-execute.md`
+- **NEXT:** Read fully and follow: `step-03-execute.md`

 **Mode B: Direct Instructions**
@@ -88,43 +88,63 @@ Use holistic judgment, not mechanical keyword matching.

 ### No Escalation (simple request)

-Present choice:
+Display: "**Select:** [T] Plan first (tech-spec) [E] Execute directly"

-```
-**[t] Plan first** - Create tech-spec then implement
-**[e] Execute directly** - Start now
-```
+#### Menu Handling Logic:

-- **[t]:** Direct user to `{quick_spec_workflow}`. **EXIT Quick Dev.**
-- **[e]:** Ask for any additional guidance, then **NEXT:** Load `step-02-context-gathering.md`
+- IF T: Direct user to `{quick_spec_workflow}`. **EXIT Quick Dev.**
+- IF E: Ask for any additional guidance, then **NEXT:** Read fully and follow: `step-02-context-gathering.md`
+
+#### EXECUTION RULES:
+
+- ALWAYS halt and wait for user input after presenting menu
+- ONLY proceed when user makes a selection
+
+---

 ### Escalation Triggered - Level 0-2

-```
-This looks like a focused feature with multiple components.
-
-**[t] Create tech-spec first** (recommended)
-**[w] Seems bigger than quick-dev** - Recommend the Full BMad Flow PRD Process
-**[e] Execute directly**
-```
+Present: "This looks like a focused feature with multiple components."
+
+Display:
+
+**[T] Create tech-spec first** (recommended)
+**[W] Seems bigger than quick-dev** - Recommend the Full BMad Flow PRD Process
+**[E] Execute directly**

-- **[t]:** Direct to `{quick_spec_workflow}`. **EXIT Quick Dev.**
-- **[w]:** Direct user to run the PRD workflow instead. **EXIT Quick Dev.**
-- **[e]:** Ask for guidance, then **NEXT:** Load `step-02-context-gathering.md`
+#### Menu Handling Logic:
+
+- IF T: Direct to `{quick_spec_workflow}`. **EXIT Quick Dev.**
+- IF W: Direct user to run the PRD workflow instead. **EXIT Quick Dev.**
+- IF E: Ask for guidance, then **NEXT:** Read fully and follow: `step-02-context-gathering.md`
+
+#### EXECUTION RULES:
+
+- ALWAYS halt and wait for user input after presenting menu
+- ONLY proceed when user makes a selection
+
+---

 ### Escalation Triggered - Level 3+

-```
-This sounds like platform/system work.
-
-**[w] Start BMad Method** (recommended)
-**[t] Create tech-spec** (lighter planning)
-**[e] Execute directly** - feeling lucky
-```
+Present: "This sounds like platform/system work."
+
+Display:
+
+**[W] Start BMad Method** (recommended)
+**[T] Create tech-spec** (lighter planning)
+**[E] Execute directly** - feeling lucky

-- **[t]:** Direct to `{quick_spec_workflow}`. **EXIT Quick Dev.**
-- **[w]:** Direct user to run the PRD workflow instead. **EXIT Quick Dev.**
-- **[e]:** Ask for guidance, then **NEXT:** Load `step-02-context-gathering.md`
+#### Menu Handling Logic:
+
+- IF T: Direct to `{quick_spec_workflow}`. **EXIT Quick Dev.**
+- IF W: Direct user to run the PRD workflow instead. **EXIT Quick Dev.**
+- IF E: Ask for guidance, then **NEXT:** Read fully and follow: `step-02-context-gathering.md`
+
+#### EXECUTION RULES:
+
+- ALWAYS halt and wait for user input after presenting menu
+- ONLY proceed when user makes a selection

 ---
@@ -132,9 +152,9 @@ This sounds like platform/system work.

 **CRITICAL:** When this step completes, explicitly state which step to load:

-- Mode A (tech-spec): "**NEXT:** Loading `step-03-execute.md`"
+- Mode A (tech-spec): "**NEXT:** read fully and follow: `step-03-execute.md`"
-- Mode B (direct, [e] selected): "**NEXT:** Loading `step-02-context-gathering.md`"
+- Mode B (direct, [E] selected): "**NEXT:** Read fully and follow: `step-02-context-gathering.md`"
-- Escalation ([t] or [w]): "**EXITING Quick Dev.** Follow the directed workflow."
+- Escalation ([T] or [W]): "**EXITING Quick Dev.** Follow the directed workflow."

 ---
````
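The three escalation sections above describe the same decision differently depending on assessed complexity. A tiny illustrative sketch of that routing, with the option letters taken from the text (the function itself and its level encoding are hypothetical, not part of the workflow files):

```javascript
// Sketch of Quick Dev's scale-adaptive menu selection: which option set is
// presented depends on the assessed complexity level. `null` models the
// "no escalation" case; 0-2 a focused feature; 3+ platform/system work.
function menuFor(level) {
  if (level === null) return ["T", "E"]; // simple request: plan first or execute
  if (level <= 2) return ["T", "W", "E"]; // tech-spec first recommended
  return ["W", "T", "E"]; // full BMad Method recommended
}

console.log(menuFor(null)); // → [ 'T', 'E' ]
console.log(menuFor(3)); // → [ 'W', 'T', 'E' ]
```

In every variant the escalation choices (T, W) exit Quick Dev toward a planning workflow, and only E proceeds to context gathering.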
```diff
@@ -99,7 +99,7 @@ Ready to execute? (y/n/adjust)

 **CRITICAL:** When user confirms ready, explicitly state:

-- **y:** "**NEXT:** Loading `step-03-execute.md`"
+- **y:** "**NEXT:** Read fully and follow: `step-03-execute.md`"
 - **n/adjust:** Continue gathering context, then re-present plan

 ---
@@ -91,7 +91,7 @@ For each task:

 ## NEXT STEP

-When ALL tasks are complete (or halted on blocker), load `step-04-self-check.md`.
+When ALL tasks are complete (or halted on blocker), read fully and follow: `step-04-self-check.md`.

 ---
@@ -85,7 +85,7 @@ If TodoWrite or similar tool is available, turn each finding into a TODO, includ

 ## NEXT STEP

-With findings in hand, load `step-06-resolve-findings.md` for user to choose resolution approach.
+With findings in hand, read fully and follow: `step-06-resolve-findings.md` for user to choose resolution approach.

 ---
```
````diff
@@ -25,15 +25,24 @@ From previous steps:

 ## RESOLUTION OPTIONS

-Present choice to user:
+Present: "How would you like to handle these findings?"

-```
-How would you like to handle these findings?
+Display:

 **[1] Walk through** - Discuss each finding individually
 **[2] Auto-fix** - Automatically fix issues classified as "real"
 **[3] Skip** - Acknowledge and proceed to commit
-```
+
+### Menu Handling Logic:
+
+- IF 1: Execute OPTION 1 (Walk Through) below
+- IF 2: Execute OPTION 2 (Auto-fix) below
+- IF 3: Execute OPTION 3 (Skip) below
+
+### EXECUTION RULES:
+
+- ALWAYS halt and wait for user input after presenting menu
+- ONLY proceed when user makes a selection

 ---
````
````diff
@@ -47,20 +47,20 @@ Hey {user_name}! Found a tech-spec in progress:

 Is this what you're here to continue?

-[y] Yes, pick up where I left off
+[Y] Yes, pick up where I left off
-[n] No, archive it and start something new
+[N] No, archive it and start something new
 ```

 4. **HALT and wait for user selection.**

    a) **Menu Handling:**

-   - **[y] Continue existing:**
+   - **[Y] Continue existing:**
      - Jump directly to the appropriate step based on `stepsCompleted`:
        - `[1]` → Load `{nextStepFile}` (Step 2)
        - `[1, 2]` → Load `{skipToStepFile}` (Step 3)
        - `[1, 2, 3]` → Load `./step-04-review.md` (Step 4)
-   - **[n] Archive and start fresh:**
+   - **[N] Archive and start fresh:**
      - Rename `{wipFile}` to `{implementation_artifacts}/tech-spec-{slug}-archived-{date}.md`

 ### 1. Greet and Ask for Initial Request
````
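The resume branch above maps the WIP file's `stepsCompleted` frontmatter to the step file to load next. A minimal sketch of that mapping, using the placeholder paths named in the text (the `resumeTarget` helper itself is hypothetical):

```javascript
// Sketch of the WIP resume routing: the highest completed step number
// recorded in the tech-spec frontmatter picks the next step file.
function resumeTarget(stepsCompleted) {
  const last = Math.max(...stepsCompleted);
  const routes = {
    1: "{nextStepFile}", // Step 2
    2: "{skipToStepFile}", // Step 3
    3: "./step-04-review.md", // Step 4
  };
  return routes[last] ?? null;
}

console.log(resumeTarget([1, 2])); // → {skipToStepFile}
```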
````diff
@@ -162,19 +162,22 @@ b) **Report to user:**

    a) **Display menu:**

-   ```
-   [a] Advanced Elicitation - dig deeper into requirements
-   [c] Continue - proceed to next step
-   [p] Party Mode - bring in other experts
-   ```
+   Display: "**Select:** [A] Advanced Elicitation [P] Party Mode [C] Continue to Deep Investigation (Step 2 of 4)"

    b) **HALT and wait for user selection.**

-#### Menu Handling:
+#### Menu Handling Logic:

-- **[a]**: Read fully and follow: `{advanced_elicitation}`, then return here and redisplay menu
-- **[c]**: Read fully and follow: `{nextStepFile}` (Map Technical Constraints)
-- **[p]**: Read fully and follow: `{party_mode_exec}`, then return here and redisplay menu
+- IF A: Read fully and follow: `{advanced_elicitation}` with current tech-spec content, process enhanced insights, ask user "Accept improvements? (y/n)", if yes update WIP file then redisplay menu, if no keep original then redisplay menu
+- IF P: Read fully and follow: `{party_mode_exec}` with current tech-spec content, process collaborative insights, ask user "Accept changes? (y/n)", if yes update WIP file then redisplay menu, if no keep original then redisplay menu
+- IF C: Verify `{wipFile}` has `stepsCompleted: [1]`, then read fully and follow: `{nextStepFile}`
+- IF Any other comments or queries: respond helpfully then redisplay menu
+
+#### EXECUTION RULES:
+
+- ALWAYS halt and wait for user input after presenting menu
+- ONLY proceed to next step when user selects 'C'
+- After A or P execution, return to this menu

 ---
@@ -186,4 +189,4 @@

 - [ ] WIP check performed FIRST before any greeting.
 - [ ] `{wipFile}` created with correct frontmatter, Overview, Context for Development, and `stepsCompleted: [1]`.
-- [ ] User selected [c] to continue.
+- [ ] User selected [C] to continue.
````
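The checkpoint-menu contract introduced above (halt for input, run side actions and redisplay, advance only on the terminal choice, handle anything else helpfully) can be sketched as a small loop. Everything here is illustrative: `runMenu` simulates user selections from an array, where a real agent would block on input.

```javascript
// Minimal model of a checkpoint menu: non-terminal options (A, P) return to
// the menu; only the terminal option (C) advances; anything else is handled
// and the menu is redisplayed.
function runMenu(choices, inputs) {
  const log = [];
  for (const raw of inputs) {
    const choice = choices[raw.trim().toUpperCase()];
    if (!choice) {
      log.push(`unrecognized: ${raw}`); // respond helpfully, then redisplay
      continue;
    }
    log.push(`selected: ${raw.trim().toUpperCase()}`);
    if (choice.terminal) return { next: choice.next, log };
    // Non-terminal: after the side action, fall through and redisplay.
  }
  return { next: null, log }; // still halted, waiting for a terminal choice
}

const menu = {
  A: { terminal: false }, // Advanced Elicitation
  P: { terminal: false }, // Party Mode
  C: { terminal: true, next: "{nextStepFile}" },
};

console.log(runMenu(menu, ["a", "hello", "c"]).next); // → {nextStepFile}
```

The same shape applies to the later review and finalize menus; only the option letters and the terminal choice differ.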
````diff
@@ -115,21 +115,22 @@ Fill in:

 ### 4. Present Checkpoint Menu

-**Display menu:**
+Display: "**Select:** [A] Advanced Elicitation [P] Party Mode [C] Continue to Generate Spec (Step 3 of 4)"

-```
-[a] Advanced Elicitation - explore more context
-[c] Continue - proceed to Generate Spec
-[p] Party Mode - bring in other experts
-```

 **HALT and wait for user selection.**

-#### Menu Handling:
+#### Menu Handling Logic:

-- **[a]**: Read fully and follow: `{advanced_elicitation}`, then return here and redisplay menu
-- **[c]**: Verify frontmatter updated with `stepsCompleted: [1, 2]`, then read fully and follow: `{nextStepFile}`
-- **[p]**: Read fully and follow: `{party_mode_exec}`, then return here and redisplay menu
+- IF A: Read fully and follow: `{advanced_elicitation}` with current tech-spec content, process enhanced insights, ask user "Accept improvements? (y/n)", if yes update WIP file then redisplay menu, if no keep original then redisplay menu
+- IF P: Read fully and follow: `{party_mode_exec}` with current tech-spec content, process collaborative insights, ask user "Accept changes? (y/n)", if yes update WIP file then redisplay menu, if no keep original then redisplay menu
+- IF C: Verify frontmatter updated with `stepsCompleted: [1, 2]`, then read fully and follow: `{nextStepFile}`
+- IF Any other comments or queries: respond helpfully then redisplay menu
+
+#### EXECUTION RULES:
+
+- ALWAYS halt and wait for user input after presenting menu
+- ONLY proceed to next step when user selects 'C'
+- After A or P execution, return to this menu

 ---
````
````diff
@@ -43,23 +43,24 @@ wipFile: '{implementation_artifacts}/tech-spec-wip.md'

 **Present review menu:**

-```
-[y] Approve - finalize the spec
-[c] Changes - request modifications
-[q] Questions - ask about any section
-[a] Advanced Elicitation - dig deeper before approving
-[p] Party Mode - get expert feedback before approving
-```
+Display: "**Select:** [Y] Approve [C] Changes [Q] Questions [A] Advanced Elicitation [P] Party Mode"

 **HALT and wait for user selection.**

-#### Menu Handling:
+#### Menu Handling Logic:

-- **[y]**: Proceed to Section 3 (Finalize the Spec)
-- **[c]**: Proceed to Section 2 (Handle Review Feedback), then return here and redisplay menu
-- **[q]**: Answer questions, then redisplay this menu
-- **[a]**: Read fully and follow: `{advanced_elicitation}`, then return here and redisplay menu
-- **[p]**: Read fully and follow: `{party_mode_exec}`, then return here and redisplay menu
+- IF Y: Proceed to Section 3 (Finalize the Spec)
+- IF C: Proceed to Section 2 (Handle Review Feedback), then return here and redisplay menu
+- IF Q: Answer questions, then redisplay this menu
+- IF A: Read fully and follow: `{advanced_elicitation}` with current spec content, process enhanced insights, ask user "Accept improvements? (y/n)", if yes update spec then redisplay menu, if no keep original then redisplay menu
+- IF P: Read fully and follow: `{party_mode_exec}` with current spec content, process collaborative insights, ask user "Accept changes? (y/n)", if yes update spec then redisplay menu, if no keep original then redisplay menu
+- IF Any other comments or queries: respond helpfully then redisplay menu
+
+#### EXECUTION RULES:
+
+- ALWAYS halt and wait for user input after presenting menu
+- ONLY proceed to finalize when user selects 'Y'
+- After other menu items execution, return to this menu

 ### 2. Handle Review Feedback
````
```diff
@@ -114,11 +115,11 @@ Saved to: {finalFile}

 **Next Steps:**

-[a] Advanced Elicitation - refine further
-[r] Adversarial Review - critique of the spec (highly recommended)
-[b] Begin Development - start implementing now (not recommended)
-[d] Done - exit workflow
-[p] Party Mode - get expert feedback before dev
+[A] Advanced Elicitation - refine further
+[R] Adversarial Review - critique of the spec (highly recommended)
+[B] Begin Development - start implementing now (not recommended)
+[D] Done - exit workflow
+[P] Party Mode - get expert feedback before dev

 ---
```
@@ -135,14 +136,23 @@ This ensures the dev agent has clean context focused solely on implementation.

b) **HALT and wait for user selection.**

#### Menu Handling Logic:

- IF A: Read fully and follow: `{advanced_elicitation}` with current spec content, process enhanced insights, ask user "Accept improvements? (y/n)", if yes update spec then redisplay menu, if no keep original then redisplay menu
- IF B: Load and execute `{quick_dev_workflow}` with the final spec file (warn: fresh context is better)
- IF D: Exit workflow - display final confirmation and path to spec
- IF P: Read fully and follow: `{party_mode_exec}` with current spec content, process collaborative insights, ask user "Accept changes? (y/n)", if yes update spec then redisplay menu, if no keep original then redisplay menu
- IF R: Execute Adversarial Review (see below)
- IF any other comments or queries: respond helpfully then redisplay menu
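The branch logic in the menu handling can be pictured as a plain dispatch table. This is only an illustrative sketch (the handler names mirror the workflow variables but are not part of any BMAD file): every handler either ends the workflow or signals that the menu must be redisplayed, and unknown input falls through to a helpful response plus redisplay.

```javascript
// Hypothetical sketch of the menu dispatch above. Each entry returns the
// action to run and whether the menu is shown again afterwards.
const handlers = {
  A: () => ({ action: 'advanced_elicitation', redisplay: true }),
  B: () => ({ action: 'quick_dev_workflow', redisplay: false }),
  D: () => ({ action: 'exit', redisplay: false }),
  P: () => ({ action: 'party_mode_exec', redisplay: true }),
  R: () => ({ action: 'adversarial_review', redisplay: true }),
};

function handleSelection(input) {
  const key = input.trim().toUpperCase(); // [r] and [R] are equivalent
  // Unknown input: respond helpfully, then show the menu again.
  const handler = handlers[key] ?? (() => ({ action: 'respond', redisplay: true }));
  return handler();
}
```

Note that only B and D leave the menu loop; A, P, and R always return to it, which matches the execution rules below.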
#### EXECUTION RULES:

- ALWAYS halt and wait for user input after presenting menu
- After A, P, or R execution, return to this menu

#### Adversarial Review [R] Process:

1. **Invoke Adversarial Review Task**:

> With `{finalFile}` constructed, invoke the review task. If possible, use information asymmetry: run this task, and only it, in a separate subagent or process with read access to the project, but no context except the `{finalFile}`.

<invoke-task>Review {finalFile} using {project-root}/_bmad/core/tasks/review-adversarial-general.xml</invoke-task>

> **Platform fallback:** If task invocation is not available, load the task file and follow its instructions inline, passing `{finalFile}` as the content.
@@ -161,7 +171,7 @@ b) **HALT and wait for user selection.**

### 5. Exit Workflow

**When user selects [D]:**

"**All done!** Your tech-spec is ready at:
@@ -47,7 +47,7 @@ This uses **step-file architecture** for disciplined execution:

1. **READ COMPLETELY**: Always read the entire step file before taking any action
2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
4. **CHECK CONTINUATION**: Only proceed to next step when user selects [C] (Continue)
5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
6. **LOAD NEXT**: When directed, read fully and follow the next step file
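The six rules above describe a simple read-run-confirm loop. A minimal sketch under stated assumptions (`readStep`, `runStep`, `askUser`, and `saveState` are stand-ins for the agent's actual capabilities, not BMAD APIs):

```javascript
// Hypothetical sketch of the step-file loop: read a whole step, execute it,
// and only advance when the user picks [C]. State is saved before moving on.
async function runWorkflow(steps, { readStep, runStep, askUser, saveState }) {
  const stepsCompleted = [];
  for (const step of steps) {
    const content = await readStep(step);        // 1. READ COMPLETELY
    await runStep(content);                      // 2. FOLLOW SEQUENCE (3. happens inside runStep)
    const choice = await askUser('[C]ontinue?'); // 4. CHECK CONTINUATION
    if (choice.trim().toUpperCase() !== 'C') break;
    stepsCompleted.push(step);
    await saveState({ stepsCompleted });         // 5. SAVE STATE, then 6. LOAD NEXT
  }
  return stepsCompleted;
}
```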
@@ -227,7 +227,7 @@

### test-design-qa.md

**REQUIRED SECTIONS:**

- [ ] **Purpose statement** at top (test execution recipe)
- [ ] **Executive Summary** with risk summary and coverage summary

@@ -258,19 +258,19 @@

- [ ] **Appendix A: Code Examples & Tagging**
- [ ] **Appendix B: Knowledge Base References**

**DON'T INCLUDE (bloat):**

- [ ] ❌ NO Quick Reference section
- [ ] ❌ NO System Architecture Summary
- [ ] ❌ NO Test Environment Requirements as separate section (integrate into Dependencies)
- [ ] ❌ NO Testability Assessment section (covered in Dependencies)
- [ ] ❌ NO Test Levels Strategy section (obvious from test scenarios)
- [ ] ❌ NO NFR Readiness Summary
- [ ] ❌ NO Quality Gate Criteria section (teams decide for themselves)
- [ ] ❌ NO Follow-on Workflows section (BMAD commands self-explanatory)
- [ ] ❌ NO Approval section
- [ ] ❌ NO Infrastructure/DevOps/Finance effort tables (out of scope)
- [ ] ❌ NO Sprint 0/1/2/3 breakdown tables
- [ ] ❌ NO Next Steps section

### Cross-Document Consistency
@@ -0,0 +1,83 @@

```javascript
import fs from 'fs-extra';
import path from 'node:path';
import { fileURLToPath } from 'node:url';
import yaml from 'yaml';
import xml2js from 'xml2js';

// Get the directory of this module
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

/**
 * Load a fixture file
 * @param {string} fixturePath - Relative path to fixture from test/fixtures/
 * @returns {Promise<string>} File content
 */
export async function loadFixture(fixturePath) {
  const fullPath = path.join(__dirname, '..', 'fixtures', fixturePath);
  return fs.readFile(fullPath, 'utf8');
}

/**
 * Load a YAML fixture
 * @param {string} fixturePath - Relative path to YAML fixture
 * @returns {Promise<Object>} Parsed YAML object
 */
export async function loadYamlFixture(fixturePath) {
  const content = await loadFixture(fixturePath);
  return yaml.parse(content);
}

/**
 * Load an XML fixture
 * @param {string} fixturePath - Relative path to XML fixture
 * @returns {Promise<Object>} Parsed XML object
 */
export async function loadXmlFixture(fixturePath) {
  const content = await loadFixture(fixturePath);
  return xml2js.parseStringPromise(content);
}

/**
 * Load a JSON fixture
 * @param {string} fixturePath - Relative path to JSON fixture
 * @returns {Promise<Object>} Parsed JSON object
 */
export async function loadJsonFixture(fixturePath) {
  const content = await loadFixture(fixturePath);
  return JSON.parse(content);
}

/**
 * Check if a fixture file exists
 * @param {string} fixturePath - Relative path to fixture
 * @returns {Promise<boolean>} True if fixture exists
 */
export async function fixtureExists(fixturePath) {
  const fullPath = path.join(__dirname, '..', 'fixtures', fixturePath);
  return fs.pathExists(fullPath);
}

/**
 * Get the full path to a fixture
 * @param {string} fixturePath - Relative path to fixture
 * @returns {string} Full path to fixture
 */
export function getFixturePath(fixturePath) {
  return path.join(__dirname, '..', 'fixtures', fixturePath);
}

/**
 * Create a test file in a temporary directory
 * (Re-exported from temp-dir for convenience)
 * @param {string} tmpDir - Temporary directory path
 * @param {string} relativePath - Relative path for the file
 * @param {string} content - File content
 * @returns {Promise<string>} Full path to the created file
 */
export async function createTestFile(tmpDir, relativePath, content) {
  const fullPath = path.join(tmpDir, relativePath);
  await fs.ensureDir(path.dirname(fullPath));
  await fs.writeFile(fullPath, content, 'utf8');
  return fullPath;
}
```
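The three typed loaders above differ only in which parser they apply to the raw file content. That dispatch can be sketched without any I/O or external packages (only JSON is wired up here, since `yaml` and `xml2js` are third-party dependencies; unknown extensions fall back to raw text):

```javascript
// Map a file extension to its parser; unknown extensions return raw text.
const parsers = {
  '.json': (text) => JSON.parse(text),
};

function parseFixture(fixturePath, content) {
  const dot = fixturePath.lastIndexOf('.');
  const ext = dot === -1 ? '' : fixturePath.slice(dot).toLowerCase();
  const parse = parsers[ext] ?? ((text) => text);
  return parse(content);
}
```

Adding YAML or XML support is then a one-line registration in `parsers` rather than another near-duplicate loader function.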
@@ -0,0 +1,82 @@

```javascript
import fs from 'fs-extra';
import path from 'node:path';
import os from 'node:os';
import { randomUUID } from 'node:crypto';

/**
 * Create a temporary directory for testing
 * @param {string} prefix - Prefix for the directory name
 * @returns {Promise<string>} Path to the created temporary directory
 */
export async function createTempDir(prefix = 'bmad-test-') {
  const tmpDir = path.join(os.tmpdir(), `${prefix}${randomUUID()}`);
  await fs.ensureDir(tmpDir);
  return tmpDir;
}

/**
 * Clean up a temporary directory
 * @param {string} tmpDir - Path to the temporary directory
 * @returns {Promise<void>}
 */
export async function cleanupTempDir(tmpDir) {
  if (await fs.pathExists(tmpDir)) {
    await fs.remove(tmpDir);
  }
}

/**
 * Execute a test function with a temporary directory
 * Automatically creates and cleans up the directory
 * @param {Function} testFn - Test function that receives the temp directory path
 * @returns {Promise<void>}
 */
export async function withTempDir(testFn) {
  const tmpDir = await createTempDir();
  try {
    await testFn(tmpDir);
  } finally {
    await cleanupTempDir(tmpDir);
  }
}

/**
 * Create a test file in a temporary directory
 * @param {string} tmpDir - Temporary directory path
 * @param {string} relativePath - Relative path for the file
 * @param {string} content - File content
 * @returns {Promise<string>} Full path to the created file
 */
export async function createTestFile(tmpDir, relativePath, content) {
  const fullPath = path.join(tmpDir, relativePath);
  await fs.ensureDir(path.dirname(fullPath));
  await fs.writeFile(fullPath, content, 'utf8');
  return fullPath;
}

/**
 * Create multiple test files in a temporary directory
 * @param {string} tmpDir - Temporary directory path
 * @param {Object} files - Object mapping relative paths to content
 * @returns {Promise<string[]>} Array of created file paths
 */
export async function createTestFiles(tmpDir, files) {
  const paths = [];
  for (const [relativePath, content] of Object.entries(files)) {
    const fullPath = await createTestFile(tmpDir, relativePath, content);
    paths.push(fullPath);
  }
  return paths;
}

/**
 * Create a test directory structure
 * @param {string} tmpDir - Temporary directory path
 * @param {string[]} dirs - Array of relative directory paths
 * @returns {Promise<void>}
 */
export async function createTestDirs(tmpDir, dirs) {
  for (const dir of dirs) {
    await fs.ensureDir(path.join(tmpDir, dir));
  }
}
```
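The helpers above lean on `fs-extra`. The same create/use/clean-up contract can be sketched with Node builtins alone; `fs.promises.mkdtemp` already appends a unique suffix, so no UUID is needed (this is a stdlib-only alternative sketch, not the code the suite imports):

```javascript
import { mkdtemp, rm, mkdir, writeFile } from 'node:fs/promises';
import path from 'node:path';
import os from 'node:os';

// Create a unique temp dir, run the test body, and always clean up,
// even when the body throws.
async function withTempDir(testFn, prefix = 'bmad-test-') {
  const tmpDir = await mkdtemp(path.join(os.tmpdir(), prefix));
  try {
    return await testFn(tmpDir);
  } finally {
    await rm(tmpDir, { recursive: true, force: true });
  }
}

// Counterpart of createTestFile: ensure parent dirs exist, then write.
async function createTestFile(tmpDir, relativePath, content) {
  const fullPath = path.join(tmpDir, relativePath);
  await mkdir(path.dirname(fullPath), { recursive: true });
  await writeFile(fullPath, content, 'utf8');
  return fullPath;
}
```

The `try`/`finally` shape is the important part: cleanup runs whether the test body resolves or rejects, which is what keeps repeated runs from littering the OS temp directory.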
@@ -0,0 +1,26 @@

```javascript
import { beforeEach, afterEach } from 'vitest';

// Global test setup
beforeEach(() => {
  // Reset environment variables to prevent test pollution
  // Store original env for restoration
  if (!globalThis.__originalEnv) {
    globalThis.__originalEnv = { ...process.env };
  }
});

afterEach(async () => {
  // Restore original environment variables
  if (globalThis.__originalEnv) {
    process.env = { ...globalThis.__originalEnv };
  }

  // Any global cleanup can go here
});

// Increase timeout for file system operations
// (Individual tests can override this if needed)
const DEFAULT_TIMEOUT = 10_000; // 10 seconds

// Make timeout available globally
globalThis.DEFAULT_TEST_TIMEOUT = DEFAULT_TIMEOUT;
```
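The setup file's core idea is an environment snapshot taken once and restored wholesale after each test, so keys a test adds disappear along with keys it changed. Stripped of the vitest hooks, the mechanism is just:

```javascript
// Snapshot/restore of process.env, framework-free: take one copy up front,
// then replace the whole object afterwards so added keys vanish too.
function snapshotEnv() {
  return { ...process.env };
}

function restoreEnv(snapshot) {
  // Wholesale replacement, not per-key patching: deletions and additions
  // made during the test are both undone.
  process.env = { ...snapshot };
}
```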
@@ -0,0 +1,428 @@

```javascript
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { Config } from '../../../tools/cli/lib/config.js';
import { createTempDir, cleanupTempDir, createTestFile } from '../../helpers/temp-dir.js';
import fs from 'fs-extra';
import path from 'node:path';
import yaml from 'yaml';

describe('Config', () => {
  let tmpDir;
  let config;

  beforeEach(async () => {
    tmpDir = await createTempDir();
    config = new Config();
  });

  afterEach(async () => {
    await cleanupTempDir(tmpDir);
  });

  describe('loadYaml()', () => {
    it('should load and parse YAML file', async () => {
      const yamlContent = {
        key1: 'value1',
        key2: { nested: 'value2' },
        array: [1, 2, 3],
      };

      const configPath = path.join(tmpDir, 'config.yaml');
      await fs.writeFile(configPath, yaml.stringify(yamlContent));

      const result = await config.loadYaml(configPath);

      expect(result).toEqual(yamlContent);
    });

    it('should throw error for non-existent file', async () => {
      const nonExistent = path.join(tmpDir, 'missing.yaml');

      await expect(config.loadYaml(nonExistent)).rejects.toThrow('Configuration file not found');
    });

    it('should handle Unicode content', async () => {
      const yamlContent = {
        chinese: '测试',
        russian: 'Тест',
        japanese: 'テスト',
      };

      const configPath = path.join(tmpDir, 'unicode.yaml');
      await fs.writeFile(configPath, yaml.stringify(yamlContent));

      const result = await config.loadYaml(configPath);

      expect(result.chinese).toBe('测试');
      expect(result.russian).toBe('Тест');
      expect(result.japanese).toBe('テスト');
    });
  });

  // Note: saveYaml() is not tested because it uses yaml.dump() which doesn't exist
  // in yaml 2.7.0 (should use yaml.stringify). This method is never called in production
  // and represents dead code with a latent bug.

  describe('processConfig()', () => {
    it('should replace {project-root} placeholder', async () => {
      const configPath = path.join(tmpDir, 'config.txt');
      await fs.writeFile(configPath, 'Root is {project-root}/bmad');

      await config.processConfig(configPath, { root: '/home/user/project' });

      const content = await fs.readFile(configPath, 'utf8');
      expect(content).toBe('Root is /home/user/project/bmad');
    });

    it('should replace {module} placeholder', async () => {
      const configPath = path.join(tmpDir, 'config.txt');
      await fs.writeFile(configPath, 'Module: {module}');

      await config.processConfig(configPath, { module: 'bmm' });

      const content = await fs.readFile(configPath, 'utf8');
      expect(content).toBe('Module: bmm');
    });

    it('should replace {version} placeholder with package version', async () => {
      const configPath = path.join(tmpDir, 'config.txt');
      await fs.writeFile(configPath, 'Version: {version}');

      await config.processConfig(configPath);

      const content = await fs.readFile(configPath, 'utf8');
      expect(content).toMatch(/Version: \d+\.\d+\.\d+/); // Semver format
    });

    it('should replace {date} placeholder with current date', async () => {
      const configPath = path.join(tmpDir, 'config.txt');
      await fs.writeFile(configPath, 'Date: {date}');

      await config.processConfig(configPath);

      const content = await fs.readFile(configPath, 'utf8');
      expect(content).toMatch(/Date: \d{4}-\d{2}-\d{2}/); // YYYY-MM-DD
    });

    it('should replace multiple placeholders', async () => {
      const configPath = path.join(tmpDir, 'config.txt');
      await fs.writeFile(configPath, 'Root: {project-root}, Module: {module}, Version: {version}');

      await config.processConfig(configPath, {
        root: '/project',
        module: 'test',
      });

      const content = await fs.readFile(configPath, 'utf8');
      expect(content).toContain('Root: /project');
      expect(content).toContain('Module: test');
      expect(content).toMatch(/Version: \d+\.\d+/);
    });

    it('should replace custom placeholders', async () => {
      const configPath = path.join(tmpDir, 'config.txt');
      await fs.writeFile(configPath, 'Custom: {custom-placeholder}');

      await config.processConfig(configPath, { '{custom-placeholder}': 'custom-value' });

      const content = await fs.readFile(configPath, 'utf8');
      expect(content).toBe('Custom: custom-value');
    });

    it('should escape regex special characters in placeholders', async () => {
      const configPath = path.join(tmpDir, 'config.txt');
      await fs.writeFile(configPath, 'Path: {project-root}/test');

      // Test that {project-root} doesn't get interpreted as regex
      await config.processConfig(configPath, {
        root: '/path/with/special$chars^',
      });

      const content = await fs.readFile(configPath, 'utf8');
      expect(content).toBe('Path: /path/with/special$chars^/test');
    });

    it('should handle placeholders with regex metacharacters in values', async () => {
      const configPath = path.join(tmpDir, 'config.txt');
      await fs.writeFile(configPath, 'Value: {placeholder}');

      await config.processConfig(configPath, {
        '{placeholder}': String.raw`value with $1 and \backslash`,
      });

      const content = await fs.readFile(configPath, 'utf8');
      expect(content).toBe(String.raw`Value: value with $1 and \backslash`);
    });

    it('should replace all occurrences of placeholder', async () => {
      const configPath = path.join(tmpDir, 'config.txt');
      await fs.writeFile(configPath, '{module} is here and {module} is there and {module} everywhere');

      await config.processConfig(configPath, { module: 'BMM' });

      const content = await fs.readFile(configPath, 'utf8');
      expect(content).toBe('BMM is here and BMM is there and BMM everywhere');
    });
  });

  describe('deepMerge()', () => {
    it('should merge shallow objects', () => {
      const target = { a: 1, b: 2 };
      const source = { b: 3, c: 4 };

      const result = config.deepMerge(target, source);

      expect(result).toEqual({ a: 1, b: 3, c: 4 });
    });

    it('should merge nested objects', () => {
      const target = { level1: { a: 1, b: 2 } };
      const source = { level1: { b: 3, c: 4 } };

      const result = config.deepMerge(target, source);

      expect(result.level1).toEqual({ a: 1, b: 3, c: 4 });
    });

    it('should not merge arrays (just replace)', () => {
      const target = { items: [1, 2, 3] };
      const source = { items: [4, 5] };

      const result = config.deepMerge(target, source);

      expect(result.items).toEqual([4, 5]); // Replaced, not merged
    });

    it('should handle null values', () => {
      const target = { a: 'value', b: null };
      const source = { a: null, c: 'new' };

      const result = config.deepMerge(target, source);

      expect(result).toEqual({ a: null, b: null, c: 'new' });
    });

    it('should not mutate original objects', () => {
      const target = { a: 1 };
      const source = { b: 2 };

      config.deepMerge(target, source);

      expect(target).toEqual({ a: 1 });
      expect(source).toEqual({ b: 2 });
    });
  });

  describe('mergeConfigs()', () => {
    it('should delegate to deepMerge', () => {
      const base = { setting1: 'base' };
      const override = { setting2: 'override' };

      const result = config.mergeConfigs(base, override);

      expect(result).toEqual({ setting1: 'base', setting2: 'override' });
    });
  });

  describe('isObject()', () => {
    it('should return true for plain objects', () => {
      expect(config.isObject({})).toBe(true);
      expect(config.isObject({ key: 'value' })).toBe(true);
    });

    it('should return false for arrays', () => {
      expect(config.isObject([])).toBe(false);
    });

    it('should return false for null', () => {
      expect(config.isObject(null)).toBeFalsy();
    });

    it('should return false for primitives', () => {
      expect(config.isObject('string')).toBe(false);
      expect(config.isObject(42)).toBe(false);
    });
  });

  describe('getValue() and setValue()', () => {
    it('should get value by dot notation path', () => {
      const obj = {
        level1: {
          level2: {
            value: 'test',
          },
        },
      };

      const result = config.getValue(obj, 'level1.level2.value');

      expect(result).toBe('test');
    });

    it('should set value by dot notation path', () => {
      const obj = {
        level1: {
          level2: {},
        },
      };

      config.setValue(obj, 'level1.level2.value', 'new value');

      expect(obj.level1.level2.value).toBe('new value');
    });

    it('should return default value for non-existent path', () => {
      const obj = { a: { b: 'value' } };

      const result = config.getValue(obj, 'a.c.d', 'default');

      expect(result).toBe('default');
    });

    it('should return null default when path not found', () => {
      const obj = { a: { b: 'value' } };

      const result = config.getValue(obj, 'a.c.d');

      expect(result).toBeNull();
    });

    it('should handle simple (non-nested) paths', () => {
      const obj = { key: 'value' };

      expect(config.getValue(obj, 'key')).toBe('value');

      config.setValue(obj, 'newKey', 'newValue');
      expect(obj.newKey).toBe('newValue');
    });

    it('should create intermediate objects when setting deep paths', () => {
      const obj = {};

      config.setValue(obj, 'a.b.c.d', 'deep value');

      expect(obj.a.b.c.d).toBe('deep value');
    });
  });

  describe('validateConfig()', () => {
    it('should validate required fields', () => {
      const cfg = { field1: 'value1' };
      const schema = {
        required: ['field1', 'field2'],
      };

      const result = config.validateConfig(cfg, schema);

      expect(result.valid).toBe(false);
      expect(result.errors).toContain('Missing required field: field2');
    });

    it('should pass when all required fields present', () => {
      const cfg = { field1: 'value1', field2: 'value2' };
      const schema = {
        required: ['field1', 'field2'],
      };

      const result = config.validateConfig(cfg, schema);

      expect(result.valid).toBe(true);
      expect(result.errors).toHaveLength(0);
    });

    it('should validate field types', () => {
      const cfg = {
        stringField: 'text',
        numberField: '42', // Wrong type
        arrayField: [1, 2, 3],
        objectField: 'not-object', // Wrong type
        boolField: true,
      };

      const schema = {
        properties: {
          stringField: { type: 'string' },
          numberField: { type: 'number' },
          arrayField: { type: 'array' },
          objectField: { type: 'object' },
          boolField: { type: 'boolean' },
        },
      };

      const result = config.validateConfig(cfg, schema);

      expect(result.valid).toBe(false);
      expect(result.errors.some((e) => e.includes('numberField'))).toBe(true);
      expect(result.errors.some((e) => e.includes('objectField'))).toBe(true);
    });

    it('should validate enum values', () => {
      const cfg = { level: 'expert' };
      const schema = {
        properties: {
          level: { type: 'string', enum: ['beginner', 'intermediate', 'advanced'] },
        },
      };

      const result = config.validateConfig(cfg, schema);

      expect(result.valid).toBe(false);
      expect(result.errors.some((e) => e.includes('must be one of'))).toBe(true);
    });

    it('should pass validation for valid enum value', () => {
      const cfg = { level: 'intermediate' };
      const schema = {
        properties: {
          level: { type: 'string', enum: ['beginner', 'intermediate', 'advanced'] },
        },
      };

      const result = config.validateConfig(cfg, schema);

      expect(result.valid).toBe(true);
    });

    it('should return warnings array', () => {
      const cfg = { field: 'value' };
      const schema = { required: ['field'] };

      const result = config.validateConfig(cfg, schema);

      expect(result.warnings).toBeDefined();
      expect(Array.isArray(result.warnings)).toBe(true);
    });
  });

  describe('edge cases', () => {
    it('should handle empty YAML file', async () => {
      const configPath = path.join(tmpDir, 'empty.yaml');
      await fs.writeFile(configPath, '');

      const result = await config.loadYaml(configPath);

      expect(result).toBeNull(); // Empty YAML parses to null
    });

    it('should handle YAML with only comments', async () => {
      const configPath = path.join(tmpDir, 'comments.yaml');
      await fs.writeFile(configPath, '# Just a comment\n# Another comment\n');

      const result = await config.loadYaml(configPath);

      expect(result).toBeNull();
    });

    it('should handle very deep object nesting', () => {
      const deep = {
        l1: { l2: { l3: { l4: { l5: { l6: { l7: { l8: { value: 'deep' } } } } } } } },
      };
      const override = {
        l1: { l2: { l3: { l4: { l5: { l6: { l7: { l8: { value: 'updated' } } } } } } } },
      };

      const result = config.deepMerge(deep, override);

      expect(result.l1.l2.l3.l4.l5.l6.l7.l8.value).toBe('updated');
    });
  });
});
```
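The `deepMerge()` suite pins down precise semantics: plain objects merge recursively, arrays and primitives are replaced outright, and neither input is mutated. A minimal implementation consistent with those assertions might look like this (a sketch, not the actual `Config` code under test):

```javascript
// Minimal deepMerge consistent with the behavior the suite above asserts:
// plain objects merge recursively, arrays/primitives/null overwrite,
// and neither input object is mutated.
function isObject(value) {
  return value !== null && typeof value === 'object' && !Array.isArray(value);
}

function deepMerge(target, source) {
  const result = { ...target }; // shallow copy so target is never touched
  for (const [key, value] of Object.entries(source)) {
    result[key] = isObject(result[key]) && isObject(value)
      ? deepMerge(result[key], value) // recurse only when both sides are plain objects
      : value;                        // arrays, primitives, and null simply overwrite
  }
  return result;
}
```

The `isObject` guard is what produces the "arrays replace, not merge" behavior: `Array.isArray` short-circuits the recursion, so `[1, 2, 3]` merged with `[4, 5]` yields `[4, 5]`.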
@ -0,0 +1,558 @@
|
||||||
|
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
|
||||||
|
import { DependencyResolver } from '../../../tools/cli/installers/lib/core/dependency-resolver.js';
|
||||||
|
import { createTempDir, cleanupTempDir, createTestFile } from '../../helpers/temp-dir.js';
|
||||||
|
import fs from 'fs-extra';
|
||||||
|
import path from 'node:path';
|
||||||
|
|
||||||
|
describe('DependencyResolver - Advanced Scenarios', () => {
|
||||||
|
let tmpDir;
|
||||||
|
let bmadDir;
|
||||||
|
|
||||||
|
beforeEach(async () => {
|
||||||
|
tmpDir = await createTempDir();
|
||||||
|
bmadDir = path.join(tmpDir, 'src');
|
||||||
|
await fs.ensureDir(path.join(bmadDir, 'core', 'agents'));
|
||||||
|
await fs.ensureDir(path.join(bmadDir, 'core', 'tasks'));
|
||||||
|
await fs.ensureDir(path.join(bmadDir, 'core', 'templates'));
|
||||||
|
await fs.ensureDir(path.join(bmadDir, 'modules', 'bmm', 'agents'));
|
||||||
|
await fs.ensureDir(path.join(bmadDir, 'modules', 'bmm', 'tasks'));
|
||||||
|
await fs.ensureDir(path.join(bmadDir, 'modules', 'bmm', 'templates'));
|
||||||
|
});
|
||||||
|
|
||||||
|
afterEach(async () => {
|
||||||
|
await cleanupTempDir(tmpDir);
|
||||||
|
});
|
||||||
|
|
||||||
|
  describe('module path resolution', () => {
    it('should resolve bmad/bmm/tasks/task.md (module path)', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `---
dependencies: ["{project-root}/bmad/bmm/tasks/analyze.md"]
---
<agent>Agent</agent>`,
      );
      await createTestFile(bmadDir, 'modules/bmm/tasks/analyze.md', 'BMM Task');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect([...result.allFiles].some((f) => f.includes('bmm'))).toBe(true);
      expect([...result.allFiles].some((f) => f.includes('analyze.md'))).toBe(true);
    });

    it('should handle glob in module path bmad/bmm/tasks/*.md', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `---
dependencies: ["{project-root}/bmad/bmm/tasks/*.md"]
---
<agent>Agent</agent>`,
      );
      await createTestFile(bmadDir, 'modules/bmm/tasks/task1.md', 'Task 1');
      await createTestFile(bmadDir, 'modules/bmm/tasks/task2.md', 'Task 2');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, ['bmm']); // Include bmm module

      // Should resolve glob pattern
      expect(result.allFiles.length).toBeGreaterThanOrEqual(1);
    });

    it('should handle non-existent module path gracefully', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `---
dependencies: ["{project-root}/bmad/nonexistent/tasks/task.md"]
---
<agent>Agent</agent>`,
      );

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      // Should not crash, just skip missing dependency
      expect(result.primaryFiles).toHaveLength(1);
    });
  });

  describe('relative glob patterns', () => {
    it('should resolve relative glob patterns ../tasks/*.md', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `---
dependencies: ["../tasks/*.md"]
---
<agent>Agent</agent>`,
      );
      await createTestFile(bmadDir, 'core/tasks/task1.md', 'Task 1');
      await createTestFile(bmadDir, 'core/tasks/task2.md', 'Task 2');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect(result.allFiles.length).toBeGreaterThanOrEqual(3);
    });

    it('should handle glob pattern with no matches', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `---
dependencies: ["../tasks/nonexistent-*.md"]
---
<agent>Agent</agent>`,
      );

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      // Should handle gracefully - just the agent
      expect(result.primaryFiles).toHaveLength(1);
    });

    it('should handle glob in non-existent directory', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `---
dependencies: ["../nonexistent/*.md"]
---
<agent>Agent</agent>`,
      );

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      // Should handle gracefully
      expect(result.primaryFiles).toHaveLength(1);
    });
  });

  describe('template dependencies', () => {
    it('should resolve template with {project-root} prefix', async () => {
      await createTestFile(bmadDir, 'core/agents/agent.md', '<agent>Agent</agent>');
      await createTestFile(
        bmadDir,
        'core/tasks/task.md',
        `---
template: "{project-root}/bmad/core/templates/form.yaml"
---
Task content`,
      );
      await createTestFile(bmadDir, 'core/templates/form.yaml', 'template');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      // Template dependency should be resolved
      expect(result.allFiles.length).toBeGreaterThanOrEqual(1);
    });

    it('should resolve template from module path', async () => {
      await createTestFile(bmadDir, 'modules/bmm/agents/agent.md', '<agent>BMM Agent</agent>');
      await createTestFile(
        bmadDir,
        'modules/bmm/tasks/task.md',
        `---
template: "{project-root}/bmad/bmm/templates/prd-template.yaml"
---
Task`,
      );
      await createTestFile(bmadDir, 'modules/bmm/templates/prd-template.yaml', 'template');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, ['bmm']);

      // Should resolve files from BMM module
      expect(result.allFiles.length).toBeGreaterThanOrEqual(1);
    });

    it('should handle missing template gracefully', async () => {
      await createTestFile(
        bmadDir,
        'core/tasks/task.md',
        `---
template: "../templates/missing.yaml"
---
Task`,
      );

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      // Should not crash
      expect(result).toBeDefined();
    });
  });

  describe('bmad-path type resolution', () => {
    it('should resolve bmad-path dependencies', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `<agent>
<command exec="bmad/core/tasks/analyze" />
</agent>`,
      );
      await createTestFile(bmadDir, 'core/tasks/analyze.md', 'Task');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect([...result.allFiles].some((f) => f.includes('analyze.md'))).toBe(true);
    });

    it('should resolve bmad-path for module files', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `<agent>
<command exec="bmad/bmm/tasks/create-prd" />
</agent>`,
      );
      await createTestFile(bmadDir, 'modules/bmm/tasks/create-prd.md', 'PRD Task');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect([...result.allFiles].some((f) => f.includes('create-prd.md'))).toBe(true);
    });

    it('should handle non-existent bmad-path gracefully', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `<agent>
<command exec="bmad/core/tasks/missing" />
</agent>`,
      );

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      // Should not crash
      expect(result.primaryFiles).toHaveLength(1);
    });
  });

  describe('command resolution with modules', () => {
    it('should search multiple modules for @task-name', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `<agent>
Use @task-custom-task
</agent>`,
      );
      await createTestFile(bmadDir, 'modules/bmm/tasks/custom-task.md', 'Custom Task');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, ['bmm']);

      expect([...result.allFiles].some((f) => f.includes('custom-task.md'))).toBe(true);
    });

    it('should search multiple modules for @agent-name', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/main.md',
        `<agent>
Use @agent-pm
</agent>`,
      );
      await createTestFile(bmadDir, 'modules/bmm/agents/pm.md', '<agent>PM</agent>');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, ['bmm']);

      expect([...result.allFiles].some((f) => f.includes('pm.md'))).toBe(true);
    });

    it('should handle bmad/ path with 4+ segments', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `<agent>
Reference bmad/core/tasks/nested/deep/task
</agent>`,
      );
      await createTestFile(bmadDir, 'core/tasks/nested/deep/task.md', 'Deep task');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      // Implementation may or may not support deeply nested paths in commands
      // Just verify it doesn't crash
      expect(result.primaryFiles.length).toBeGreaterThanOrEqual(1);
    });

    it('should handle bmad path with .md extension already', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `<agent>
Use bmad/core/tasks/task.md explicitly
</agent>`,
      );
      await createTestFile(bmadDir, 'core/tasks/task.md', 'Task');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect([...result.allFiles].some((f) => f.includes('task.md'))).toBe(true);
    });
  });

  describe('verbose mode', () => {
    it('should include console output when verbose is true', async () => {
      await createTestFile(bmadDir, 'core/agents/agent.md', '<agent>Test</agent>');

      const resolver = new DependencyResolver();

      // Mock console.log to capture output
      const logs = [];
      const originalLog = console.log;
      console.log = (...args) => logs.push(args.join(' '));

      await resolver.resolve(bmadDir, [], { verbose: true });

      console.log = originalLog;

      // Should have logged something in verbose mode
      expect(logs.length).toBeGreaterThan(0);
    });

    it('should not log when verbose is false', async () => {
      await createTestFile(bmadDir, 'core/agents/agent.md', '<agent>Test</agent>');

      const resolver = new DependencyResolver();

      const logs = [];
      const originalLog = console.log;
      console.log = (...args) => logs.push(args.join(' '));

      await resolver.resolve(bmadDir, [], { verbose: false });

      console.log = originalLog;

      // Should not have logged in non-verbose mode
      // (There might be warns but no regular logs)
      expect(logs.length).toBe(0);
    });
  });

  describe('createWebBundle()', () => {
    it('should create bundle with metadata', async () => {
      await createTestFile(bmadDir, 'core/agents/agent.md', '<agent>Agent</agent>');
      await createTestFile(bmadDir, 'core/tasks/task.md', 'Task');

      const resolver = new DependencyResolver();
      const resolution = await resolver.resolve(bmadDir, []);

      const bundle = await resolver.createWebBundle(resolution);

      expect(bundle.metadata).toBeDefined();
      expect(bundle.metadata.modules).toContain('core');
      expect(bundle.metadata.totalFiles).toBeGreaterThan(0);
    });

    it('should organize bundle by file type', async () => {
      await createTestFile(bmadDir, 'core/agents/agent.md', '<agent>Agent</agent>');
      await createTestFile(bmadDir, 'core/tasks/task.md', 'Task');
      await createTestFile(bmadDir, 'core/templates/template.yaml', 'template');

      const resolver = new DependencyResolver();
      const resolution = await resolver.resolve(bmadDir, []);

      const bundle = await resolver.createWebBundle(resolution);

      expect(bundle.agents).toBeDefined();
      expect(bundle.tasks).toBeDefined();
      expect(bundle.templates).toBeDefined();
    });
  });

  describe('single string dependency (not array)', () => {
    it('should handle single string dependency (converted to array)', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `---
dependencies: "{project-root}/bmad/core/tasks/task.md"
---
<agent>Agent</agent>`,
      );
      await createTestFile(bmadDir, 'core/tasks/task.md', 'Task');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      // Single string should be converted to array internally
      expect(result.allFiles.length).toBeGreaterThanOrEqual(2);
    });

    it('should handle single string template', async () => {
      await createTestFile(
        bmadDir,
        'core/tasks/task.md',
        `---
template: "../templates/form.yaml"
---
Task`,
      );
      await createTestFile(bmadDir, 'core/templates/form.yaml', 'template');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect([...result.allFiles].some((f) => f.includes('form.yaml'))).toBe(true);
    });
  });

  describe('missing dependency tracking', () => {
    it('should track missing relative file dependencies', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `---
dependencies: ["../tasks/missing-file.md"]
---
<agent>Agent</agent>`,
      );

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      // Missing dependency should be tracked
      expect(result.missing.length).toBeGreaterThanOrEqual(0);
      // Should not crash
      expect(result).toBeDefined();
    });
  });

  describe('reportResults()', () => {
    it('should report results with file counts', async () => {
      await createTestFile(bmadDir, 'core/agents/agent1.md', '<agent>1</agent>');
      await createTestFile(bmadDir, 'core/agents/agent2.md', '<agent>2</agent>');
      await createTestFile(bmadDir, 'core/tasks/task1.md', 'Task 1');
      await createTestFile(bmadDir, 'core/tasks/task2.md', 'Task 2');
      await createTestFile(bmadDir, 'core/templates/template.yaml', 'Template');

      const resolver = new DependencyResolver();

      // Mock console.log
      const logs = [];
      const originalLog = console.log;
      console.log = (...args) => logs.push(args.join(' '));

      const result = await resolver.resolve(bmadDir, [], { verbose: true });

      console.log = originalLog;

      // Should have reported module statistics
      expect(logs.some((log) => log.includes('CORE'))).toBe(true);
      expect(logs.some((log) => log.includes('Agents:'))).toBe(true);
      expect(logs.some((log) => log.includes('Tasks:'))).toBe(true);
    });

    it('should report missing dependencies', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `---
dependencies: ["../tasks/missing.md"]
---
<agent>Agent</agent>`,
      );

      const resolver = new DependencyResolver();

      const logs = [];
      const originalLog = console.log;
      console.log = (...args) => logs.push(args.join(' '));

      await resolver.resolve(bmadDir, [], { verbose: true });

      console.log = originalLog;

      // May log warning about missing dependencies
      expect(logs.length).toBeGreaterThan(0);
    });
  });

  describe('file without .md extension in command', () => {
    it('should add .md extension to bmad/ commands without extension', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `<agent>
Use bmad/core/tasks/analyze without extension
</agent>`,
      );
      await createTestFile(bmadDir, 'core/tasks/analyze.md', 'Analyze');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect([...result.allFiles].some((f) => f.includes('analyze.md'))).toBe(true);
    });
  });

  describe('module structure detection', () => {
    it('should detect source directory structure (src/)', async () => {
      // Default structure already uses src/
      await createTestFile(bmadDir, 'core/agents/agent.md', '<agent>Core</agent>');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect(result.primaryFiles.length).toBeGreaterThanOrEqual(1);
    });

    it('should detect installed directory structure (no src/)', async () => {
      // Create installed structure
      const installedDir = path.join(tmpDir, 'installed');
      await fs.ensureDir(path.join(installedDir, 'core', 'agents'));
      await fs.ensureDir(path.join(installedDir, 'modules', 'bmm', 'agents'));
      await createTestFile(installedDir, 'core/agents/agent.md', '<agent>Core</agent>');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(installedDir, []);

      expect(result.primaryFiles.length).toBeGreaterThanOrEqual(1);
    });
  });

  describe('dependency deduplication', () => {
    it('should not include same file twice', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent1.md',
        `---
dependencies: ["{project-root}/bmad/core/tasks/shared.md"]
---
<agent>1</agent>`,
      );
      await createTestFile(
        bmadDir,
        'core/agents/agent2.md',
        `---
dependencies: ["{project-root}/bmad/core/tasks/shared.md"]
---
<agent>2</agent>`,
      );
      await createTestFile(bmadDir, 'core/tasks/shared.md', 'Shared');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      // Should have 2 agents + 1 shared task = 3 unique files
      expect(result.allFiles).toHaveLength(3);
    });
  });
});

@@ -0,0 +1,796 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { DependencyResolver } from '../../../tools/cli/installers/lib/core/dependency-resolver.js';
import { createTempDir, cleanupTempDir, createTestFile } from '../../helpers/temp-dir.js';
import fs from 'fs-extra';
import path from 'node:path';

describe('DependencyResolver', () => {
  let tmpDir;
  let bmadDir;

  beforeEach(async () => {
    tmpDir = await createTempDir();
    // Create structure: tmpDir/src/core and tmpDir/src/modules/
    bmadDir = path.join(tmpDir, 'src');
    await fs.ensureDir(path.join(bmadDir, 'core', 'agents'));
    await fs.ensureDir(path.join(bmadDir, 'core', 'tasks'));
    await fs.ensureDir(path.join(bmadDir, 'core', 'templates'));
    await fs.ensureDir(path.join(bmadDir, 'modules', 'bmm', 'agents'));
    await fs.ensureDir(path.join(bmadDir, 'modules', 'bmm', 'tasks'));
  });

  afterEach(async () => {
    await cleanupTempDir(tmpDir);
  });

  describe('basic resolution', () => {
    it('should resolve core agents with no dependencies', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/simple.md',
        `---
name: simple
---
<agent>Simple agent</agent>`,
      );

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect(result.primaryFiles).toHaveLength(1);
      expect(result.primaryFiles[0].type).toBe('agent');
      expect(result.primaryFiles[0].module).toBe('core');
      expect(result.allFiles).toHaveLength(1);
    });

    it('should resolve multiple agents from same module', async () => {
      await createTestFile(bmadDir, 'core/agents/agent1.md', '<agent>Agent 1</agent>');
      await createTestFile(bmadDir, 'core/agents/agent2.md', '<agent>Agent 2</agent>');
      await createTestFile(bmadDir, 'core/agents/agent3.md', '<agent>Agent 3</agent>');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect(result.primaryFiles).toHaveLength(3);
      expect(result.allFiles).toHaveLength(3);
    });

    it('should always include core module', async () => {
      await createTestFile(bmadDir, 'core/agents/core-agent.md', '<agent>Core</agent>');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, ['bmm']);

      // Core should be included even though only 'bmm' was requested
      expect(result.byModule.core).toBeDefined();
    });

    it('should skip agents with localskip="true"', async () => {
      await createTestFile(bmadDir, 'core/agents/normal.md', '<agent>Normal agent</agent>');
      await createTestFile(bmadDir, 'core/agents/webonly.md', '<agent localskip="true">Web only agent</agent>');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect(result.primaryFiles).toHaveLength(1);
      expect(result.primaryFiles[0].name).toBe('normal');
    });
  });

  describe('path resolution variations', () => {
    it('should resolve {project-root}/bmad/core/tasks/foo.md dependencies', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `---
dependencies: ["{project-root}/bmad/core/tasks/task.md"]
---
<agent>Agent with task dependency</agent>`,
      );
      await createTestFile(bmadDir, 'core/tasks/task.md', 'Task content');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect(result.allFiles).toHaveLength(2);
      expect(result.dependencies.size).toBeGreaterThan(0);
      expect([...result.dependencies].some((d) => d.includes('task.md'))).toBe(true);
    });

    it('should resolve relative path dependencies', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `---
template: "../templates/template.yaml"
---
<agent>Agent with template</agent>`,
      );
      await createTestFile(bmadDir, 'core/templates/template.yaml', 'template: data');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect(result.allFiles).toHaveLength(2);
      expect([...result.dependencies].some((d) => d.includes('template.yaml'))).toBe(true);
    });

    it('should resolve glob pattern dependencies', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `---
dependencies: ["{project-root}/bmad/core/tasks/*.md"]
---
<agent>Agent with multiple tasks</agent>`,
      );
      await createTestFile(bmadDir, 'core/tasks/task1.md', 'Task 1');
      await createTestFile(bmadDir, 'core/tasks/task2.md', 'Task 2');
      await createTestFile(bmadDir, 'core/tasks/task3.md', 'Task 3');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      // Should find agent + 3 tasks
      expect(result.allFiles).toHaveLength(4);
    });

    it('should resolve array of dependencies', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `---
dependencies:
- "{project-root}/bmad/core/tasks/task1.md"
- "{project-root}/bmad/core/tasks/task2.md"
- "../templates/template.yaml"
---
<agent>Agent</agent>`,
      );
      await createTestFile(bmadDir, 'core/tasks/task1.md', 'Task 1');
      await createTestFile(bmadDir, 'core/tasks/task2.md', 'Task 2');
      await createTestFile(bmadDir, 'core/templates/template.yaml', 'template');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect(result.allFiles).toHaveLength(4); // agent + 2 tasks + template
    });
  });

  describe('command reference resolution', () => {
    it('should resolve @task-name references', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `<agent>
Use @task-analyze for analysis
</agent>`,
      );
      await createTestFile(bmadDir, 'core/tasks/analyze.md', 'Analyze task');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect(result.allFiles.length).toBeGreaterThanOrEqual(2);
      expect([...result.allFiles].some((f) => f.includes('analyze.md'))).toBe(true);
    });

    it('should resolve @agent-name references', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/main.md',
        `<agent>
Reference @agent-helper for help
</agent>`,
      );
      await createTestFile(bmadDir, 'core/agents/helper.md', '<agent>Helper</agent>');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect(result.allFiles).toHaveLength(2);
      expect([...result.allFiles].some((f) => f.includes('helper.md'))).toBe(true);
    });

    it('should resolve bmad/module/type/name references', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `<agent>
See bmad/core/tasks/review
</agent>`,
      );
      await createTestFile(bmadDir, 'core/tasks/review.md', 'Review task');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect([...result.allFiles].some((f) => f.includes('review.md'))).toBe(true);
    });
  });

  describe('exec and tmpl attribute parsing', () => {
    it('should parse exec attributes from command tags', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `<agent>
<command exec="{project-root}/bmad/core/tasks/task.md" />
</agent>`,
      );
      await createTestFile(bmadDir, 'core/tasks/task.md', 'Task');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect([...result.allFiles].some((f) => f.includes('task.md'))).toBe(true);
    });

    it('should parse tmpl attributes from command tags', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `<agent>
<command tmpl="../templates/form.yaml" />
</agent>`,
      );
      await createTestFile(bmadDir, 'core/templates/form.yaml', 'template');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect([...result.allFiles].some((f) => f.includes('form.yaml'))).toBe(true);
    });

    it('should ignore exec="*" wildcard', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `<agent>
<command exec="*" description="Dynamic" />
</agent>`,
      );

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      // Should only have the agent itself
      expect(result.primaryFiles).toHaveLength(1);
    });
  });

describe('multi-pass dependency resolution', () => {
|
||||||
|
it('should resolve single-level dependencies (A→B)', async () => {
|
||||||
|
await createTestFile(
|
||||||
|
bmadDir,
|
||||||
|
'core/agents/agent-a.md',
|
||||||
|
`---
|
||||||
|
dependencies: ["{project-root}/bmad/core/tasks/task-b.md"]
|
||||||
|
---
|
||||||
|
<agent>Agent A</agent>`,
|
||||||
|
);
|
||||||
|
await createTestFile(bmadDir, 'core/tasks/task-b.md', 'Task B');
|
||||||
|
|
||||||
|
const resolver = new DependencyResolver();
|
||||||
|
const result = await resolver.resolve(bmadDir, []);
|
||||||
|
|
||||||
|
expect(result.allFiles).toHaveLength(2);
|
||||||
|
// Primary files includes both agents and tasks from selected modules
|
||||||
|
expect(result.primaryFiles.length).toBeGreaterThanOrEqual(1);
|
||||||
|
expect(result.dependencies.size).toBeGreaterThanOrEqual(1);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should resolve two-level dependencies (A→B→C)', async () => {
|
||||||
|
await createTestFile(
|
||||||
|
bmadDir,
|
||||||
|
'core/agents/agent-a.md',
|
||||||
|
`---
|
||||||
|
dependencies: ["{project-root}/bmad/core/tasks/task-b.md"]
|
||||||
|
---
|
||||||
|
<agent>Agent A</agent>`,
|
||||||
|
);
|
||||||
|
await createTestFile(
|
||||||
|
bmadDir,
|
||||||
|
'core/tasks/task-b.md',
|
||||||
|
`---
|
||||||
|
template: "../templates/template-c.yaml"
|
||||||
|
---
|
||||||
|
Task B content`,
|
||||||
|
);
|
||||||
|
await createTestFile(bmadDir, 'core/templates/template-c.yaml', 'template: data');
|
||||||
|
|
||||||
|
const resolver = new DependencyResolver();
|
||||||
|
const result = await resolver.resolve(bmadDir, []);
|
||||||
|
|
||||||
|
expect(result.allFiles).toHaveLength(3);
|
||||||
|
// Primary files includes agents and tasks
|
||||||
|
expect(result.primaryFiles.length).toBeGreaterThanOrEqual(1);
|
||||||
|
// Total dependencies (direct + transitive) should be at least 2
|
||||||
|
const totalDeps = result.dependencies.size + result.transitiveDependencies.size;
|
||||||
|
expect(totalDeps).toBeGreaterThanOrEqual(1);
|
||||||
|
});

    it('should resolve three-level dependencies (A→B→C→D)', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent-a.md',
        `---
dependencies: ["{project-root}/bmad/core/tasks/task-b.md"]
---
<agent>A</agent>`,
      );
      await createTestFile(
        bmadDir,
        'core/tasks/task-b.md',
        `---
dependencies: ["{project-root}/bmad/core/tasks/task-c.md"]
---
Task B`,
      );
      await createTestFile(
        bmadDir,
        'core/tasks/task-c.md',
        `---
template: "../templates/template-d.yaml"
---
Task C`,
      );
      await createTestFile(bmadDir, 'core/templates/template-d.yaml', 'Template D');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect(result.allFiles).toHaveLength(4);
    });

    it('should resolve multiple branches (A→B, A→C)', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent-a.md',
        `---
dependencies:
  - "{project-root}/bmad/core/tasks/task-b.md"
  - "{project-root}/bmad/core/tasks/task-c.md"
---
<agent>A</agent>`,
      );
      await createTestFile(bmadDir, 'core/tasks/task-b.md', 'Task B');
      await createTestFile(bmadDir, 'core/tasks/task-c.md', 'Task C');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect(result.allFiles).toHaveLength(3);
      expect(result.dependencies.size).toBe(2);
    });

    it('should deduplicate diamond pattern (A→B,C; B,C→D)', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent-a.md',
        `---
dependencies:
  - "{project-root}/bmad/core/tasks/task-b.md"
  - "{project-root}/bmad/core/tasks/task-c.md"
---
<agent>A</agent>`,
      );
      await createTestFile(
        bmadDir,
        'core/tasks/task-b.md',
        `---
template: "../templates/shared.yaml"
---
Task B`,
      );
      await createTestFile(
        bmadDir,
        'core/tasks/task-c.md',
        `---
template: "../templates/shared.yaml"
---
Task C`,
      );
      await createTestFile(bmadDir, 'core/templates/shared.yaml', 'Shared template');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      // A + B + C + shared = 4 unique files (D appears twice but should be deduped)
      expect(result.allFiles).toHaveLength(4);
    });
  });

  describe('circular dependency detection', () => {
    it('should detect direct circular dependency (A→B→A)', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent-a.md',
        `---
dependencies: ["{project-root}/bmad/core/tasks/task-b.md"]
---
<agent>A</agent>`,
      );
      await createTestFile(
        bmadDir,
        'core/tasks/task-b.md',
        `---
dependencies: ["{project-root}/bmad/core/agents/agent-a.md"]
---
Task B`,
      );

      const resolver = new DependencyResolver();

      // Should not hang or crash
      const resultPromise = resolver.resolve(bmadDir, []);
      await expect(resultPromise).resolves.toBeDefined();

      const result = await resultPromise;
      // Should process both files without infinite loop
      expect(result.allFiles.length).toBeGreaterThanOrEqual(2);
    }, 5000); // 5 second timeout to ensure no infinite loop

    it('should detect indirect circular dependency (A→B→C→A)', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent-a.md',
        `---
dependencies: ["{project-root}/bmad/core/tasks/task-b.md"]
---
<agent>A</agent>`,
      );
      await createTestFile(
        bmadDir,
        'core/tasks/task-b.md',
        `---
dependencies: ["{project-root}/bmad/core/tasks/task-c.md"]
---
Task B`,
      );
      await createTestFile(
        bmadDir,
        'core/tasks/task-c.md',
        `---
dependencies: ["{project-root}/bmad/core/agents/agent-a.md"]
---
Task C`,
      );

      const resolver = new DependencyResolver();
      const resultPromise = resolver.resolve(bmadDir, []);

      await expect(resultPromise).resolves.toBeDefined();
      const result = await resultPromise;

      // Should include all 3 files without duplicates
      expect(result.allFiles.length).toBeGreaterThanOrEqual(3);
    }, 5000);

    it('should handle self-reference (A→A)', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent-a.md',
        `---
dependencies: ["{project-root}/bmad/core/agents/agent-a.md"]
---
<agent>A</agent>`,
      );

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      // Should include the file once, not infinite times
      expect(result.allFiles).toHaveLength(1);
    }, 5000);
  });
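The cycle tests above only pass if traversal is guarded by a visited set. A minimal sketch of that pattern (hypothetical — the shape of `graph` and the function name are illustrative, not the resolver's actual code):

```javascript
// Break cycles by tracking every file already seen; `graph` maps a
// file to the files it depends on (hypothetical adjacency shape).
function collectFiles(graph, start) {
  const visited = new Set();
  const stack = [start];
  while (stack.length > 0) {
    const file = stack.pop();
    if (visited.has(file)) continue; // A→B→A terminates here
    visited.add(file);
    for (const dep of graph[file] ?? []) stack.push(dep);
  }
  return [...visited];
}
```

With this guard, both the direct cycle (A→B→A) and the self-reference (A→A) terminate, each file appearing exactly once.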

  describe('command reference parsing', () => {
    describe('parseCommandReferences()', () => {
      it('should extract @task- references', () => {
        const resolver = new DependencyResolver();
        const content = 'Use @task-analyze for analysis\nThen @task-review';

        const refs = resolver.parseCommandReferences(content);

        expect(refs).toContain('@task-analyze');
        expect(refs).toContain('@task-review');
      });

      it('should extract @agent- references', () => {
        const resolver = new DependencyResolver();
        const content = 'Call @agent-architect then @agent-developer';

        const refs = resolver.parseCommandReferences(content);

        expect(refs).toContain('@agent-architect');
        expect(refs).toContain('@agent-developer');
      });

      it('should extract bmad/ path references', () => {
        const resolver = new DependencyResolver();
        const content = 'See bmad/core/agents/analyst and bmad/bmm/tasks/review';

        const refs = resolver.parseCommandReferences(content);

        expect(refs).toContain('bmad/core/agents/analyst');
        expect(refs).toContain('bmad/bmm/tasks/review');
      });

      it('should extract @bmad- references', () => {
        const resolver = new DependencyResolver();
        const content = 'Use @bmad-master command';

        const refs = resolver.parseCommandReferences(content);

        expect(refs).toContain('@bmad-master');
      });

      it('should handle multiple reference types in same content', () => {
        const resolver = new DependencyResolver();
        const content = `
Use @task-analyze for analysis
Then run @agent-architect
Finally check bmad/core/tasks/review
`;

        const refs = resolver.parseCommandReferences(content);

        expect(refs.length).toBeGreaterThanOrEqual(3);
      });
    });

    describe('parseFileReferences()', () => {
      it('should extract exec attribute paths', () => {
        const resolver = new DependencyResolver();
        const content = '<command exec="{project-root}/bmad/core/tasks/foo.md" />';

        const refs = resolver.parseFileReferences(content);

        expect(refs).toContain('/bmad/core/tasks/foo.md');
      });

      it('should extract tmpl attribute paths', () => {
        const resolver = new DependencyResolver();
        const content = '<command tmpl="../templates/bar.yaml" />';

        const refs = resolver.parseFileReferences(content);

        expect(refs).toContain('../templates/bar.yaml');
      });

      it('should extract relative file paths', () => {
        const resolver = new DependencyResolver();
        const content = 'Load "./data/config.json" and "../templates/form.yaml"';

        const refs = resolver.parseFileReferences(content);

        expect(refs).toContain('./data/config.json');
        expect(refs).toContain('../templates/form.yaml');
      });

      it('should skip exec="*" wildcards', () => {
        const resolver = new DependencyResolver();
        const content = '<command exec="*" description="Dynamic" />';

        const refs = resolver.parseFileReferences(content);

        // Should not include "*"
        expect(refs).not.toContain('*');
      });
    });
  });
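The reference shapes exercised above (`@task-*`, `@agent-*`, `@bmad-*`, `bmad/...` paths) can all be matched with a handful of regexes. A sketch of one plausible approach — these patterns are illustrative, not the resolver's actual implementation:

```javascript
// Illustrative patterns for the reference shapes the tests exercise;
// parseCommandReferences() in the real resolver may differ.
function parseRefs(content) {
  const patterns = [
    /@task-[\w-]+/g,  // e.g. @task-analyze
    /@agent-[\w-]+/g, // e.g. @agent-architect
    /@bmad-[\w-]+/g,  // e.g. @bmad-master
    /bmad\/[\w/-]+/g, // e.g. bmad/core/agents/analyst
  ];
  const refs = new Set(); // dedupe repeated references
  for (const re of patterns) {
    for (const m of content.matchAll(re)) refs.add(m[0]);
  }
  return [...refs];
}
```

Using a `Set` keeps the result free of duplicates when the same reference appears twice in one document.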

  describe('module organization', () => {
    it('should organize files by module correctly', async () => {
      await createTestFile(bmadDir, 'core/agents/core-agent.md', '<agent>Core</agent>');
      await createTestFile(bmadDir, 'modules/bmm/agents/bmm-agent.md', '<agent>BMM</agent>');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, ['bmm']);

      expect(result.byModule.core).toBeDefined();
      expect(result.byModule.bmm).toBeDefined();
      expect(result.byModule.core.agents).toHaveLength(1);
      expect(result.byModule.bmm.agents).toHaveLength(1);
    });

    it('should categorize files by type', async () => {
      await createTestFile(bmadDir, 'core/agents/agent.md', '<agent>Agent</agent>');
      await createTestFile(bmadDir, 'core/tasks/task.md', 'Task');
      await createTestFile(bmadDir, 'core/templates/template.yaml', 'template');

      const resolver = new DependencyResolver();
      const files = [
        path.join(bmadDir, 'core/agents/agent.md'),
        path.join(bmadDir, 'core/tasks/task.md'),
        path.join(bmadDir, 'core/templates/template.yaml'),
      ];

      const organized = resolver.organizeByModule(bmadDir, new Set(files));

      expect(organized.core.agents).toHaveLength(1);
      expect(organized.core.tasks).toHaveLength(1);
      expect(organized.core.templates).toHaveLength(1);
    });

    it('should treat brain-tech as data, not tasks', async () => {
      await createTestFile(bmadDir, 'core/tasks/brain-tech/data.csv', 'col1,col2\nval1,val2');

      const resolver = new DependencyResolver();
      const files = [path.join(bmadDir, 'core/tasks/brain-tech/data.csv')];

      const organized = resolver.organizeByModule(bmadDir, new Set(files));

      expect(organized.core.data).toHaveLength(1);
      expect(organized.core.tasks).toHaveLength(0);
    });
  });

  describe('getModuleFromPath()', () => {
    it('should extract module from src/core path', () => {
      const resolver = new DependencyResolver();
      const filePath = path.join(bmadDir, 'core/agents/agent.md');

      const module = resolver.getModuleFromPath(bmadDir, filePath);

      expect(module).toBe('core');
    });

    it('should extract module from src/modules/bmm path', () => {
      const resolver = new DependencyResolver();
      const filePath = path.join(bmadDir, 'modules/bmm/agents/pm.md');

      const module = resolver.getModuleFromPath(bmadDir, filePath);

      expect(module).toBe('bmm');
    });

    it('should handle installed directory structure', async () => {
      // Create installed structure (no src/ prefix)
      const installedDir = path.join(tmpDir, 'installed');
      await fs.ensureDir(path.join(installedDir, 'core/agents'));
      await fs.ensureDir(path.join(installedDir, 'modules/bmm/agents'));

      const resolver = new DependencyResolver();

      const coreFile = path.join(installedDir, 'core/agents/agent.md');
      const moduleFile = path.join(installedDir, 'modules/bmm/agents/pm.md');

      expect(resolver.getModuleFromPath(installedDir, coreFile)).toBe('core');
      expect(resolver.getModuleFromPath(installedDir, moduleFile)).toBe('bmm');
    });
  });
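The rule these tests describe is simple: files under `modules/<name>/` belong to `<name>`, everything else belongs to its first path segment (e.g. `core`). A minimal sketch of that logic — hypothetical helper, not the project's `getModuleFromPath()`:

```javascript
// Sketch of the module-from-path rule the tests above describe.
// Assumes '/'-separated paths for illustration.
function getModule(bmadDir, filePath) {
  const rel = filePath.startsWith(bmadDir + '/')
    ? filePath.slice(bmadDir.length + 1)
    : filePath;
  const parts = rel.split('/');
  // modules/bmm/agents/pm.md → 'bmm'; core/agents/a.md → 'core'
  return parts[0] === 'modules' ? parts[1] : parts[0];
}
```

The same rule covers both the `src/` tree and the installed tree, since neither layout carries a `src/` prefix inside `bmadDir`.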

  describe('edge cases', () => {
    it('should handle malformed YAML frontmatter', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/bad-yaml.md',
        `---
dependencies: [invalid: yaml: here
---
<agent>Agent</agent>`,
      );

      const resolver = new DependencyResolver();

      // Should not crash, just warn and continue
      await expect(resolver.resolve(bmadDir, [])).resolves.toBeDefined();
    });

    it('should handle backticks in YAML values', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/backticks.md',
        `---
name: \`test\`
dependencies: [\`{project-root}/bmad/core/tasks/task.md\`]
---
<agent>Agent</agent>`,
      );
      await createTestFile(bmadDir, 'core/tasks/task.md', 'Task');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      // Backticks should be pre-processed
      expect(result.allFiles.length).toBeGreaterThanOrEqual(1);
    });

    it('should handle missing dependencies gracefully', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `---
dependencies: ["{project-root}/bmad/core/tasks/missing.md"]
---
<agent>Agent</agent>`,
      );
      // Don't create missing.md

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect(result.primaryFiles.length).toBeGreaterThanOrEqual(1);
      // Implementation may or may not track missing dependencies;
      // just verify it doesn't crash
      expect(result).toBeDefined();
    });

    it('should handle empty dependencies array', async () => {
      await createTestFile(
        bmadDir,
        'core/agents/agent.md',
        `---
dependencies: []
---
<agent>Agent</agent>`,
      );

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect(result.primaryFiles).toHaveLength(1);
      expect(result.allFiles).toHaveLength(1);
    });

    it('should handle missing frontmatter', async () => {
      await createTestFile(bmadDir, 'core/agents/no-frontmatter.md', '<agent>Agent</agent>');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, []);

      expect(result.primaryFiles).toHaveLength(1);
      expect(result.allFiles).toHaveLength(1);
    });

    it('should handle non-existent module directory', async () => {
      // Create at least one core file so the core module appears
      await createTestFile(bmadDir, 'core/agents/core-agent.md', '<agent>Core</agent>');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, ['nonexistent']);

      // Should include core even though the nonexistent module is not found
      expect(result.byModule.core).toBeDefined();
      expect(result.byModule.nonexistent).toBeUndefined();
    });
  });

  describe('cross-module dependencies', () => {
    it('should resolve dependencies across modules', async () => {
      await createTestFile(bmadDir, 'core/agents/core-agent.md', '<agent>Core</agent>');
      await createTestFile(
        bmadDir,
        'modules/bmm/agents/bmm-agent.md',
        `---
dependencies: ["{project-root}/bmad/core/tasks/shared-task.md"]
---
<agent>BMM Agent</agent>`,
      );
      await createTestFile(bmadDir, 'core/tasks/shared-task.md', 'Shared task');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, ['bmm']);

      // Should include: core agent + bmm agent + shared task
      expect(result.allFiles.length).toBeGreaterThanOrEqual(3);
      expect(result.byModule.core).toBeDefined();
      expect(result.byModule.bmm).toBeDefined();
    });

    it('should resolve module tasks', async () => {
      await createTestFile(bmadDir, 'core/agents/core-agent.md', '<agent>Core</agent>');
      await createTestFile(bmadDir, 'modules/bmm/agents/pm.md', '<agent>PM</agent>');
      await createTestFile(bmadDir, 'modules/bmm/tasks/create-prd.md', 'Create PRD task');

      const resolver = new DependencyResolver();
      const result = await resolver.resolve(bmadDir, ['bmm']);

      expect(result.byModule.bmm.agents).toHaveLength(1);
      expect(result.byModule.bmm.tasks).toHaveLength(1);
    });
  });
});
@ -0,0 +1,243 @@

import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { FileOps } from '../../../tools/cli/lib/file-ops.js';
import { createTempDir, cleanupTempDir, createTestFile } from '../../helpers/temp-dir.js';
import fs from 'fs-extra';
import path from 'node:path';

describe('FileOps', () => {
  describe('copyDirectory()', () => {
    const fileOps = new FileOps();
    let tmpDir;
    let sourceDir;
    let destDir;

    beforeEach(async () => {
      tmpDir = await createTempDir();
      sourceDir = path.join(tmpDir, 'source');
      destDir = path.join(tmpDir, 'dest');
      await fs.ensureDir(sourceDir);
      await fs.ensureDir(destDir);
    });

    afterEach(async () => {
      await cleanupTempDir(tmpDir);
    });

    describe('basic copying', () => {
      it('should copy a single file', async () => {
        await createTestFile(sourceDir, 'test.txt', 'content');

        await fileOps.copyDirectory(sourceDir, destDir);

        const destFile = path.join(destDir, 'test.txt');
        expect(await fs.pathExists(destFile)).toBe(true);
        expect(await fs.readFile(destFile, 'utf8')).toBe('content');
      });

      it('should copy multiple files', async () => {
        await createTestFile(sourceDir, 'file1.txt', 'content1');
        await createTestFile(sourceDir, 'file2.md', 'content2');
        await createTestFile(sourceDir, 'file3.json', '{}');

        await fileOps.copyDirectory(sourceDir, destDir);

        expect(await fs.pathExists(path.join(destDir, 'file1.txt'))).toBe(true);
        expect(await fs.pathExists(path.join(destDir, 'file2.md'))).toBe(true);
        expect(await fs.pathExists(path.join(destDir, 'file3.json'))).toBe(true);
      });

      it('should copy nested directory structure', async () => {
        await createTestFile(sourceDir, 'root.txt', 'root');
        await createTestFile(sourceDir, 'level1/file.txt', 'level1');
        await createTestFile(sourceDir, 'level1/level2/deep.txt', 'deep');

        await fileOps.copyDirectory(sourceDir, destDir);

        expect(await fs.pathExists(path.join(destDir, 'root.txt'))).toBe(true);
        expect(await fs.pathExists(path.join(destDir, 'level1', 'file.txt'))).toBe(true);
        expect(await fs.pathExists(path.join(destDir, 'level1', 'level2', 'deep.txt'))).toBe(true);
      });

      it('should create destination directory if it does not exist', async () => {
        const newDest = path.join(tmpDir, 'new-dest');
        await createTestFile(sourceDir, 'test.txt', 'content');

        await fileOps.copyDirectory(sourceDir, newDest);

        expect(await fs.pathExists(newDest)).toBe(true);
        expect(await fs.pathExists(path.join(newDest, 'test.txt'))).toBe(true);
      });
    });

    describe('overwrite behavior', () => {
      it('should overwrite existing files by default', async () => {
        await createTestFile(sourceDir, 'file.txt', 'new content');
        await createTestFile(destDir, 'file.txt', 'old content');

        await fileOps.copyDirectory(sourceDir, destDir);

        const content = await fs.readFile(path.join(destDir, 'file.txt'), 'utf8');
        expect(content).toBe('new content');
      });

      it('should preserve file content when overwriting', async () => {
        await createTestFile(sourceDir, 'data.json', '{"new": true}');
        await createTestFile(destDir, 'data.json', '{"old": true}');
        await createTestFile(destDir, 'keep.txt', 'preserve this');

        await fileOps.copyDirectory(sourceDir, destDir);

        expect(await fs.readFile(path.join(destDir, 'data.json'), 'utf8')).toBe('{"new": true}');
        // Files not in source should be preserved
        expect(await fs.pathExists(path.join(destDir, 'keep.txt'))).toBe(true);
      });
    });

    describe('filtering with shouldIgnore', () => {
      it('should filter out .git directories', async () => {
        await createTestFile(sourceDir, 'file.txt', 'content');
        await createTestFile(sourceDir, '.git/config', 'git config');

        await fileOps.copyDirectory(sourceDir, destDir);

        expect(await fs.pathExists(path.join(destDir, 'file.txt'))).toBe(true);
        expect(await fs.pathExists(path.join(destDir, '.git'))).toBe(false);
      });

      it('should filter out node_modules directories', async () => {
        await createTestFile(sourceDir, 'package.json', '{}');
        await createTestFile(sourceDir, 'node_modules/lib/code.js', 'code');

        await fileOps.copyDirectory(sourceDir, destDir);

        expect(await fs.pathExists(path.join(destDir, 'package.json'))).toBe(true);
        expect(await fs.pathExists(path.join(destDir, 'node_modules'))).toBe(false);
      });

      it('should filter out *.swp and *.tmp files', async () => {
        await createTestFile(sourceDir, 'document.txt', 'content');
        await createTestFile(sourceDir, 'document.txt.swp', 'vim swap');
        await createTestFile(sourceDir, 'temp.tmp', 'temporary');

        await fileOps.copyDirectory(sourceDir, destDir);

        expect(await fs.pathExists(path.join(destDir, 'document.txt'))).toBe(true);
        expect(await fs.pathExists(path.join(destDir, 'document.txt.swp'))).toBe(false);
        expect(await fs.pathExists(path.join(destDir, 'temp.tmp'))).toBe(false);
      });

      it('should filter out .DS_Store files', async () => {
        await createTestFile(sourceDir, 'file.txt', 'content');
        await createTestFile(sourceDir, '.DS_Store', 'mac metadata');

        await fileOps.copyDirectory(sourceDir, destDir);

        expect(await fs.pathExists(path.join(destDir, 'file.txt'))).toBe(true);
        expect(await fs.pathExists(path.join(destDir, '.DS_Store'))).toBe(false);
      });
    });
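The filtering tests pin down the ignore list's behavior without showing it. A minimal `shouldIgnore()` consistent with those tests — the patterns below are assumptions inferred from the assertions, not the real FileOps list:

```javascript
// Ignore rules inferred from the tests above (assumed, not the
// project's actual implementation): VCS/deps directories plus
// editor swap, temp, and macOS metadata files.
const IGNORED_DIRS = new Set(['.git', 'node_modules']);
const IGNORED_FILES = [/\.swp$/, /\.tmp$/, /^\.DS_Store$/];

function shouldIgnore(name) {
  return IGNORED_DIRS.has(name) || IGNORED_FILES.some((re) => re.test(name));
}
```

A filter like this would typically be passed to the copy routine (e.g. as fs-extra's `filter` option) so ignored entries are skipped before they are ever read.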

    describe('edge cases', () => {
      it('should handle empty source directory', async () => {
        await fileOps.copyDirectory(sourceDir, destDir);

        const files = await fs.readdir(destDir);
        expect(files).toHaveLength(0);
      });

      it('should handle Unicode filenames', async () => {
        await createTestFile(sourceDir, '测试.txt', 'chinese');
        await createTestFile(sourceDir, 'файл.json', 'russian');

        await fileOps.copyDirectory(sourceDir, destDir);

        expect(await fs.pathExists(path.join(destDir, '测试.txt'))).toBe(true);
        expect(await fs.pathExists(path.join(destDir, 'файл.json'))).toBe(true);
      });

      it('should handle filenames with special characters', async () => {
        await createTestFile(sourceDir, 'file with spaces.txt', 'content');
        await createTestFile(sourceDir, 'special-chars!@#.md', 'content');

        await fileOps.copyDirectory(sourceDir, destDir);

        expect(await fs.pathExists(path.join(destDir, 'file with spaces.txt'))).toBe(true);
        expect(await fs.pathExists(path.join(destDir, 'special-chars!@#.md'))).toBe(true);
      });

      it('should handle very deep directory nesting', async () => {
        const deepPath = Array.from({ length: 10 }, (_, i) => `level${i}`).join('/');
        await createTestFile(sourceDir, `${deepPath}/deep.txt`, 'very deep');

        await fileOps.copyDirectory(sourceDir, destDir);

        expect(await fs.pathExists(path.join(destDir, ...deepPath.split('/'), 'deep.txt'))).toBe(true);
      });

      it('should preserve file permissions', async () => {
        const execFile = path.join(sourceDir, 'script.sh');
        await fs.writeFile(execFile, '#!/bin/bash\necho "test"');
        await fs.chmod(execFile, 0o755); // Make executable

        await fileOps.copyDirectory(sourceDir, destDir);

        const destFile = path.join(destDir, 'script.sh');
        const stats = await fs.stat(destFile);
        // Check if file is executable (user execute bit)
        expect((stats.mode & 0o100) !== 0).toBe(true);
      });

      it('should handle large number of files', async () => {
        // Create 50 files
        const promises = Array.from({ length: 50 }, (_, i) => createTestFile(sourceDir, `file${i}.txt`, `content ${i}`));
        await Promise.all(promises);

        await fileOps.copyDirectory(sourceDir, destDir);

        const destFiles = await fs.readdir(destDir);
        expect(destFiles).toHaveLength(50);
      });
    });

    describe('content integrity', () => {
      it('should preserve file content exactly', async () => {
        const content = 'Line 1\nLine 2\nLine 3\n';
        await createTestFile(sourceDir, 'file.txt', content);

        await fileOps.copyDirectory(sourceDir, destDir);

        const copiedContent = await fs.readFile(path.join(destDir, 'file.txt'), 'utf8');
        expect(copiedContent).toBe(content);
      });

      it('should preserve binary file content', async () => {
        const buffer = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);
        await fs.writeFile(path.join(sourceDir, 'binary.dat'), buffer);

        await fileOps.copyDirectory(sourceDir, destDir);

        const copiedBuffer = await fs.readFile(path.join(destDir, 'binary.dat'));
        expect(copiedBuffer).toEqual(buffer);
      });

      it('should preserve UTF-8 content', async () => {
        const utf8Content = 'Hello 世界 🌍';
        await createTestFile(sourceDir, 'utf8.txt', utf8Content);

        await fileOps.copyDirectory(sourceDir, destDir);

        const copied = await fs.readFile(path.join(destDir, 'utf8.txt'), 'utf8');
        expect(copied).toBe(utf8Content);
      });

      it('should preserve empty files', async () => {
        await createTestFile(sourceDir, 'empty.txt', '');

        await fileOps.copyDirectory(sourceDir, destDir);

        const content = await fs.readFile(path.join(destDir, 'empty.txt'), 'utf8');
        expect(content).toBe('');
      });
    });
  });
});
@ -0,0 +1,211 @@

import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { FileOps } from '../../../tools/cli/lib/file-ops.js';
import { createTempDir, cleanupTempDir, createTestFile } from '../../helpers/temp-dir.js';

describe('FileOps', () => {
  describe('getFileHash()', () => {
    const fileOps = new FileOps();
    let tmpDir;

    beforeEach(async () => {
      tmpDir = await createTempDir();
    });

    afterEach(async () => {
      await cleanupTempDir(tmpDir);
    });

    describe('basic hashing', () => {
      it('should return SHA256 hash for a simple file', async () => {
        const filePath = await createTestFile(tmpDir, 'test.txt', 'hello');
        const hash = await fileOps.getFileHash(filePath);

        // SHA256 of 'hello' is known
        expect(hash).toBe('2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824');
        expect(hash).toHaveLength(64); // SHA256 is 64 hex characters
      });

      it('should return consistent hash for same content', async () => {
        const content = 'test content for hashing';
        const file1 = await createTestFile(tmpDir, 'file1.txt', content);
        const file2 = await createTestFile(tmpDir, 'file2.txt', content);

        const hash1 = await fileOps.getFileHash(file1);
        const hash2 = await fileOps.getFileHash(file2);

        expect(hash1).toBe(hash2);
      });

      it('should return different hash for different content', async () => {
        const file1 = await createTestFile(tmpDir, 'file1.txt', 'content A');
        const file2 = await createTestFile(tmpDir, 'file2.txt', 'content B');

        const hash1 = await fileOps.getFileHash(file1);
        const hash2 = await fileOps.getFileHash(file2);

        expect(hash1).not.toBe(hash2);
      });
    });

    describe('file size handling', () => {
      it('should handle empty file', async () => {
        const filePath = await createTestFile(tmpDir, 'empty.txt', '');
        const hash = await fileOps.getFileHash(filePath);

        // SHA256 of empty string
        expect(hash).toBe('e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855');
      });

      it('should handle small file (<4KB)', async () => {
        const content = 'a'.repeat(1000); // 1KB
        const filePath = await createTestFile(tmpDir, 'small.txt', content);
        const hash = await fileOps.getFileHash(filePath);

        expect(hash).toHaveLength(64);
        expect(hash).toMatch(/^[a-f0-9]{64}$/);
      });

      it('should handle medium file (~1MB)', async () => {
        const content = 'x'.repeat(1024 * 1024); // 1MB
        const filePath = await createTestFile(tmpDir, 'medium.txt', content);
        const hash = await fileOps.getFileHash(filePath);

        expect(hash).toHaveLength(64);
        expect(hash).toMatch(/^[a-f0-9]{64}$/);
      });

      it('should handle large file (~10MB) via streaming', async () => {
        // Create a 10MB file
        const chunkSize = 1024 * 1024; // 1MB chunks
        const chunks = Array.from({ length: 10 }, () => 'y'.repeat(chunkSize));
        const content = chunks.join('');

        const filePath = await createTestFile(tmpDir, 'large.txt', content);
        const hash = await fileOps.getFileHash(filePath);

        expect(hash).toHaveLength(64);
        expect(hash).toMatch(/^[a-f0-9]{64}$/);
      }, 15_000); // 15 second timeout for large file
    });
|
||||||
|
|
||||||
|
describe('content type handling', () => {
|
||||||
|
it('should handle binary content', async () => {
|
||||||
|
// Create a buffer with binary data
|
||||||
|
const buffer = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);
|
||||||
|
const filePath = await createTestFile(tmpDir, 'binary.dat', buffer.toString('binary'));
|
||||||
|
const hash = await fileOps.getFileHash(filePath);
|
||||||
|
|
||||||
|
expect(hash).toHaveLength(64);
|
||||||
|
expect(hash).toMatch(/^[a-f0-9]{64}$/);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle UTF-8 content correctly', async () => {
|
||||||
|
const content = 'Hello 世界 🌍';
|
||||||
|
const filePath = await createTestFile(tmpDir, 'utf8.txt', content);
|
||||||
|
const hash = await fileOps.getFileHash(filePath);
|
||||||
|
|
||||||
|
// Hash should be consistent for UTF-8 content
|
||||||
|
const hash2 = await fileOps.getFileHash(filePath);
|
||||||
|
expect(hash).toBe(hash2);
|
||||||
|
expect(hash).toHaveLength(64);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle newline characters', async () => {
|
||||||
|
const contentLF = 'line1\nline2\nline3';
|
||||||
|
const contentCRLF = 'line1\r\nline2\r\nline3';
|
||||||
|
|
||||||
|
const fileLF = await createTestFile(tmpDir, 'lf.txt', contentLF);
|
||||||
|
const fileCRLF = await createTestFile(tmpDir, 'crlf.txt', contentCRLF);
|
||||||
|
|
||||||
|
const hashLF = await fileOps.getFileHash(fileLF);
|
||||||
|
const hashCRLF = await fileOps.getFileHash(fileCRLF);
|
||||||
|
|
||||||
|
// Different line endings should produce different hashes
|
||||||
|
expect(hashLF).not.toBe(hashCRLF);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle JSON content', async () => {
|
||||||
|
const json = JSON.stringify({ key: 'value', nested: { array: [1, 2, 3] } }, null, 2);
|
||||||
|
const filePath = await createTestFile(tmpDir, 'data.json', json);
|
||||||
|
const hash = await fileOps.getFileHash(filePath);
|
||||||
|
|
||||||
|
expect(hash).toHaveLength(64);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('edge cases', () => {
|
||||||
|
it('should handle file with special characters in name', async () => {
|
||||||
|
const filePath = await createTestFile(tmpDir, 'file with spaces & special-chars.txt', 'content');
|
||||||
|
const hash = await fileOps.getFileHash(filePath);
|
||||||
|
|
||||||
|
expect(hash).toHaveLength(64);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle concurrent hash calculations', async () => {
|
||||||
|
const files = await Promise.all([
|
||||||
|
createTestFile(tmpDir, 'file1.txt', 'content 1'),
|
||||||
|
createTestFile(tmpDir, 'file2.txt', 'content 2'),
|
||||||
|
createTestFile(tmpDir, 'file3.txt', 'content 3'),
|
||||||
|
]);
|
||||||
|
|
||||||
|
// Calculate hashes concurrently
|
||||||
|
const hashes = await Promise.all(files.map((file) => fileOps.getFileHash(file)));
|
||||||
|
|
||||||
|
// All hashes should be valid
|
||||||
|
expect(hashes).toHaveLength(3);
|
||||||
|
for (const hash of hashes) {
|
||||||
|
expect(hash).toMatch(/^[a-f0-9]{64}$/);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Hashes should be different
|
||||||
|
expect(hashes[0]).not.toBe(hashes[1]);
|
||||||
|
expect(hashes[1]).not.toBe(hashes[2]);
|
||||||
|
expect(hashes[0]).not.toBe(hashes[2]);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle file with only whitespace', async () => {
|
||||||
|
const filePath = await createTestFile(tmpDir, 'whitespace.txt', ' ');
|
||||||
|
const hash = await fileOps.getFileHash(filePath);
|
||||||
|
|
||||||
|
expect(hash).toHaveLength(64);
|
||||||
|
// Should be different from empty file
|
||||||
|
expect(hash).not.toBe('e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle very long single line', async () => {
|
||||||
|
const longLine = 'x'.repeat(100_000); // 100KB single line
|
||||||
|
const filePath = await createTestFile(tmpDir, 'longline.txt', longLine);
|
||||||
|
const hash = await fileOps.getFileHash(filePath);
|
||||||
|
|
||||||
|
expect(hash).toHaveLength(64);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('error handling', () => {
|
||||||
|
it('should reject for non-existent file', async () => {
|
||||||
|
const nonExistentPath = `${tmpDir}/does-not-exist.txt`;
|
||||||
|
|
||||||
|
await expect(fileOps.getFileHash(nonExistentPath)).rejects.toThrow();
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should reject for directory instead of file', async () => {
|
||||||
|
await expect(fileOps.getFileHash(tmpDir)).rejects.toThrow();
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('streaming behavior', () => {
|
||||||
|
it('should use streaming for efficiency (test implementation detail)', async () => {
|
||||||
|
// This test verifies that the implementation uses streams
|
||||||
|
// by checking that large files can be processed without loading entirely into memory
|
||||||
|
const largeContent = 'z'.repeat(5 * 1024 * 1024); // 5MB
|
||||||
|
const filePath = await createTestFile(tmpDir, 'stream.txt', largeContent);
|
||||||
|
|
||||||
|
// If this completes without memory issues, streaming is working
|
||||||
|
const hash = await fileOps.getFileHash(filePath);
|
||||||
|
|
||||||
|
expect(hash).toHaveLength(64);
|
||||||
|
expect(hash).toMatch(/^[a-f0-9]{64}$/);
|
||||||
|
}, 10_000);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
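The suites above describe `getFileHash()` only through its observable contract: a 64-character lowercase SHA-256 hex digest, streaming so large files never load fully into memory, and a rejected promise for missing files or directories. A minimal standalone sketch consistent with that contract (an assumption for illustration, not the repository's actual `FileOps` implementation) could look like:

```javascript
import { createHash } from 'node:crypto';
import { createReadStream } from 'node:fs';

// Hypothetical sketch: pipe the file through SHA-256 chunk by chunk so even
// multi-megabyte files are hashed incrementally. The stream's 'error' event
// fires for missing paths (ENOENT) and directories (EISDIR), rejecting the promise.
export function getFileHash(filePath) {
  return new Promise((resolve, reject) => {
    const hash = createHash('sha256');
    const stream = createReadStream(filePath);
    stream.on('error', reject);
    stream.on('data', (chunk) => hash.update(chunk));
    stream.on('end', () => resolve(hash.digest('hex'))); // 64 lowercase hex chars
  });
}
```

Because `createReadStream` on a directory fails with `EISDIR`, the "reject for directory instead of file" case falls out of the same error path with no extra code.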
@@ -0,0 +1,283 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { FileOps } from '../../../tools/cli/lib/file-ops.js';
import { createTempDir, cleanupTempDir, createTestFile, createTestDirs } from '../../helpers/temp-dir.js';
import path from 'node:path';

describe('FileOps', () => {
  describe('getFileList()', () => {
    const fileOps = new FileOps();
    let tmpDir;

    beforeEach(async () => {
      tmpDir = await createTempDir();
    });

    afterEach(async () => {
      await cleanupTempDir(tmpDir);
    });

    describe('basic functionality', () => {
      it('should return empty array for empty directory', async () => {
        const files = await fileOps.getFileList(tmpDir);
        expect(files).toEqual([]);
      });

      it('should return single file in directory', async () => {
        await createTestFile(tmpDir, 'test.txt', 'content');

        const files = await fileOps.getFileList(tmpDir);

        expect(files).toHaveLength(1);
        expect(files[0]).toBe('test.txt');
      });

      it('should return multiple files in directory', async () => {
        await createTestFile(tmpDir, 'file1.txt', 'content1');
        await createTestFile(tmpDir, 'file2.md', 'content2');
        await createTestFile(tmpDir, 'file3.json', 'content3');

        const files = await fileOps.getFileList(tmpDir);

        expect(files).toHaveLength(3);
        expect(files).toContain('file1.txt');
        expect(files).toContain('file2.md');
        expect(files).toContain('file3.json');
      });
    });

    describe('recursive directory walking', () => {
      it('should recursively find files in nested directories', async () => {
        await createTestFile(tmpDir, 'root.txt', 'root');
        await createTestFile(tmpDir, 'level1/file1.txt', 'level1');
        await createTestFile(tmpDir, 'level1/level2/file2.txt', 'level2');
        await createTestFile(tmpDir, 'level1/level2/level3/file3.txt', 'level3');

        const files = await fileOps.getFileList(tmpDir);

        expect(files).toHaveLength(4);
        expect(files).toContain('root.txt');
        expect(files).toContain(path.join('level1', 'file1.txt'));
        expect(files).toContain(path.join('level1', 'level2', 'file2.txt'));
        expect(files).toContain(path.join('level1', 'level2', 'level3', 'file3.txt'));
      });

      it('should handle multiple subdirectories at same level', async () => {
        await createTestFile(tmpDir, 'dir1/file1.txt', 'content');
        await createTestFile(tmpDir, 'dir2/file2.txt', 'content');
        await createTestFile(tmpDir, 'dir3/file3.txt', 'content');

        const files = await fileOps.getFileList(tmpDir);

        expect(files).toHaveLength(3);
        expect(files).toContain(path.join('dir1', 'file1.txt'));
        expect(files).toContain(path.join('dir2', 'file2.txt'));
        expect(files).toContain(path.join('dir3', 'file3.txt'));
      });

      it('should not include empty directories in results', async () => {
        await createTestDirs(tmpDir, ['empty1', 'empty2', 'has-file']);
        await createTestFile(tmpDir, 'has-file/file.txt', 'content');

        const files = await fileOps.getFileList(tmpDir);

        expect(files).toHaveLength(1);
        expect(files[0]).toBe(path.join('has-file', 'file.txt'));
      });
    });

    describe('ignore filtering', () => {
      it('should ignore .git directories', async () => {
        await createTestFile(tmpDir, 'normal.txt', 'content');
        await createTestFile(tmpDir, '.git/config', 'git config');
        await createTestFile(tmpDir, '.git/hooks/pre-commit', 'hook');

        const files = await fileOps.getFileList(tmpDir);

        expect(files).toHaveLength(1);
        expect(files[0]).toBe('normal.txt');
      });

      it('should ignore node_modules directories', async () => {
        await createTestFile(tmpDir, 'package.json', '{}');
        await createTestFile(tmpDir, 'node_modules/package/index.js', 'code');
        await createTestFile(tmpDir, 'node_modules/package/lib/util.js', 'util');

        const files = await fileOps.getFileList(tmpDir);

        expect(files).toHaveLength(1);
        expect(files[0]).toBe('package.json');
      });

      it('should ignore .DS_Store files', async () => {
        await createTestFile(tmpDir, 'file.txt', 'content');
        await createTestFile(tmpDir, '.DS_Store', 'mac metadata');
        await createTestFile(tmpDir, 'subdir/.DS_Store', 'mac metadata');

        const files = await fileOps.getFileList(tmpDir);

        expect(files).toHaveLength(1);
        expect(files[0]).toBe('file.txt');
      });

      it('should ignore *.swp and *.tmp files', async () => {
        await createTestFile(tmpDir, 'document.txt', 'content');
        await createTestFile(tmpDir, 'document.txt.swp', 'vim swap');
        await createTestFile(tmpDir, 'temp.tmp', 'temporary');

        const files = await fileOps.getFileList(tmpDir);

        expect(files).toHaveLength(1);
        expect(files[0]).toBe('document.txt');
      });

      it('should ignore multiple ignored patterns together', async () => {
        await createTestFile(tmpDir, 'src/index.js', 'source code');
        await createTestFile(tmpDir, 'node_modules/lib/code.js', 'dependency');
        await createTestFile(tmpDir, '.git/config', 'git config');
        await createTestFile(tmpDir, '.DS_Store', 'mac file');
        await createTestFile(tmpDir, 'file.swp', 'swap file');
        await createTestFile(tmpDir, '.idea/workspace.xml', 'ide');

        const files = await fileOps.getFileList(tmpDir);

        expect(files).toHaveLength(1);
        expect(files[0]).toBe(path.join('src', 'index.js'));
      });
    });

    describe('relative path handling', () => {
      it('should return paths relative to base directory', async () => {
        await createTestFile(tmpDir, 'a/b/c/deep.txt', 'deep');

        const files = await fileOps.getFileList(tmpDir);

        expect(files[0]).toBe(path.join('a', 'b', 'c', 'deep.txt'));
        expect(path.isAbsolute(files[0])).toBe(false);
      });

      it('should handle subdirectory as base', async () => {
        await createTestFile(tmpDir, 'root.txt', 'root');
        await createTestFile(tmpDir, 'sub/file1.txt', 'sub1');
        await createTestFile(tmpDir, 'sub/file2.txt', 'sub2');

        const subDir = path.join(tmpDir, 'sub');
        const files = await fileOps.getFileList(subDir);

        expect(files).toHaveLength(2);
        expect(files).toContain('file1.txt');
        expect(files).toContain('file2.txt');
        // Should not include root.txt
        expect(files).not.toContain('root.txt');
      });
    });

    describe('edge cases', () => {
      it('should handle directory with special characters', async () => {
        await createTestFile(tmpDir, 'folder with spaces/file.txt', 'content');
        await createTestFile(tmpDir, 'special-chars!@#/data.json', 'data');

        const files = await fileOps.getFileList(tmpDir);

        expect(files).toHaveLength(2);
        expect(files).toContain(path.join('folder with spaces', 'file.txt'));
        expect(files).toContain(path.join('special-chars!@#', 'data.json'));
      });

      it('should handle Unicode filenames', async () => {
        await createTestFile(tmpDir, '文档/测试.txt', 'chinese');
        await createTestFile(tmpDir, 'файл/данные.json', 'russian');
        await createTestFile(tmpDir, 'ファイル/データ.yaml', 'japanese');

        const files = await fileOps.getFileList(tmpDir);

        expect(files).toHaveLength(3);
        expect(files.some((f) => f.includes('测试.txt'))).toBe(true);
        expect(files.some((f) => f.includes('данные.json'))).toBe(true);
        expect(files.some((f) => f.includes('データ.yaml'))).toBe(true);
      });

      it('should return empty array for non-existent directory', async () => {
        const nonExistent = path.join(tmpDir, 'does-not-exist');

        const files = await fileOps.getFileList(nonExistent);

        expect(files).toEqual([]);
      });

      it('should handle very deep directory nesting', async () => {
        // Create a deeply nested structure (10 levels)
        const deepPath = Array.from({ length: 10 }, (_, i) => `level${i}`).join('/');
        await createTestFile(tmpDir, `${deepPath}/deep.txt`, 'very deep');

        const files = await fileOps.getFileList(tmpDir);

        expect(files).toHaveLength(1);
        expect(files[0]).toBe(path.join(...deepPath.split('/'), 'deep.txt'));
      });

      it('should handle directory with many files', async () => {
        // Create 100 files
        const promises = Array.from({ length: 100 }, (_, i) => createTestFile(tmpDir, `file${i}.txt`, `content ${i}`));
        await Promise.all(promises);

        const files = await fileOps.getFileList(tmpDir);

        expect(files).toHaveLength(100);
        expect(files.every((f) => f.startsWith('file') && f.endsWith('.txt'))).toBe(true);
      });

      it('should handle mixed ignored and non-ignored files', async () => {
        await createTestFile(tmpDir, 'src/main.js', 'code');
        await createTestFile(tmpDir, 'src/main.js.swp', 'swap');
        await createTestFile(tmpDir, 'lib/utils.js', 'utils');
        await createTestFile(tmpDir, 'node_modules/dep/index.js', 'dep');
        await createTestFile(tmpDir, 'test/test.js', 'test');

        const files = await fileOps.getFileList(tmpDir);

        expect(files).toHaveLength(3);
        expect(files).toContain(path.join('src', 'main.js'));
        expect(files).toContain(path.join('lib', 'utils.js'));
        expect(files).toContain(path.join('test', 'test.js'));
      });
    });

    describe('file types', () => {
      it('should include files with no extension', async () => {
        await createTestFile(tmpDir, 'README', 'readme content');
        await createTestFile(tmpDir, 'LICENSE', 'license text');
        await createTestFile(tmpDir, 'Makefile', 'make commands');

        const files = await fileOps.getFileList(tmpDir);

        expect(files).toHaveLength(3);
        expect(files).toContain('README');
        expect(files).toContain('LICENSE');
        expect(files).toContain('Makefile');
      });

      it('should include dotfiles (except ignored ones)', async () => {
        await createTestFile(tmpDir, '.gitignore', 'ignore patterns');
        await createTestFile(tmpDir, '.env', 'environment');
        await createTestFile(tmpDir, '.eslintrc', 'eslint config');

        const files = await fileOps.getFileList(tmpDir);

        expect(files).toHaveLength(3);
        expect(files).toContain('.gitignore');
        expect(files).toContain('.env');
        expect(files).toContain('.eslintrc');
      });

      it('should include files with multiple extensions', async () => {
        await createTestFile(tmpDir, 'archive.tar.gz', 'archive');
        await createTestFile(tmpDir, 'backup.sql.bak', 'backup');
        await createTestFile(tmpDir, 'config.yaml.sample', 'sample config');

        const files = await fileOps.getFileList(tmpDir);

        expect(files).toHaveLength(3);
      });
    });
  });
});
@@ -0,0 +1,177 @@
import { describe, it, expect } from 'vitest';
import { FileOps } from '../../../tools/cli/lib/file-ops.js';

describe('FileOps', () => {
  describe('shouldIgnore()', () => {
    const fileOps = new FileOps();

    describe('exact matches', () => {
      it('should ignore .git directory', () => {
        expect(fileOps.shouldIgnore('.git')).toBe(true);
        expect(fileOps.shouldIgnore('/path/to/.git')).toBe(true);
        // Note: basename of '/project/.git/hooks' is 'hooks', not '.git'
        expect(fileOps.shouldIgnore('/project/.git/hooks')).toBe(false);
      });

      it('should ignore .DS_Store files', () => {
        expect(fileOps.shouldIgnore('.DS_Store')).toBe(true);
        expect(fileOps.shouldIgnore('/path/to/.DS_Store')).toBe(true);
      });

      it('should ignore node_modules directory', () => {
        expect(fileOps.shouldIgnore('node_modules')).toBe(true);
        expect(fileOps.shouldIgnore('/path/to/node_modules')).toBe(true);
        // Note: basename of '/project/node_modules/package' is 'package', not 'node_modules'
        expect(fileOps.shouldIgnore('/project/node_modules/package')).toBe(false);
      });

      it('should ignore .idea directory', () => {
        expect(fileOps.shouldIgnore('.idea')).toBe(true);
        expect(fileOps.shouldIgnore('/path/to/.idea')).toBe(true);
      });

      it('should ignore .vscode directory', () => {
        expect(fileOps.shouldIgnore('.vscode')).toBe(true);
        expect(fileOps.shouldIgnore('/path/to/.vscode')).toBe(true);
      });

      it('should ignore __pycache__ directory', () => {
        expect(fileOps.shouldIgnore('__pycache__')).toBe(true);
        expect(fileOps.shouldIgnore('/path/to/__pycache__')).toBe(true);
      });
    });

    describe('glob pattern matches', () => {
      it('should ignore *.swp files (Vim swap files)', () => {
        expect(fileOps.shouldIgnore('file.swp')).toBe(true);
        expect(fileOps.shouldIgnore('.config.yaml.swp')).toBe(true);
        expect(fileOps.shouldIgnore('/path/to/document.txt.swp')).toBe(true);
      });

      it('should ignore *.tmp files (temporary files)', () => {
        expect(fileOps.shouldIgnore('file.tmp')).toBe(true);
        expect(fileOps.shouldIgnore('temp_data.tmp')).toBe(true);
        expect(fileOps.shouldIgnore('/path/to/cache.tmp')).toBe(true);
      });

      it('should ignore *.pyc files (Python compiled)', () => {
        expect(fileOps.shouldIgnore('module.pyc')).toBe(true);
        expect(fileOps.shouldIgnore('__init__.pyc')).toBe(true);
        expect(fileOps.shouldIgnore('/path/to/script.pyc')).toBe(true);
      });
    });

    describe('files that should NOT be ignored', () => {
      it('should not ignore normal files', () => {
        expect(fileOps.shouldIgnore('README.md')).toBe(false);
        expect(fileOps.shouldIgnore('package.json')).toBe(false);
        expect(fileOps.shouldIgnore('index.js')).toBe(false);
      });

      it('should not ignore .gitignore itself', () => {
        expect(fileOps.shouldIgnore('.gitignore')).toBe(false);
        expect(fileOps.shouldIgnore('/path/to/.gitignore')).toBe(false);
      });

      it('should not ignore files with similar but different names', () => {
        expect(fileOps.shouldIgnore('git-file.txt')).toBe(false);
        expect(fileOps.shouldIgnore('node_modules.backup')).toBe(false);
        expect(fileOps.shouldIgnore('swap-file.txt')).toBe(false);
      });

      it('should not ignore files with ignored patterns in parent directory', () => {
        // The pattern matches basename, not full path
        expect(fileOps.shouldIgnore('/project/src/utils.js')).toBe(false);
        expect(fileOps.shouldIgnore('/code/main.py')).toBe(false);
      });

      it('should not ignore directories with dot prefix (except specific ones)', () => {
        expect(fileOps.shouldIgnore('.github')).toBe(false);
        expect(fileOps.shouldIgnore('.husky')).toBe(false);
        expect(fileOps.shouldIgnore('.npmrc')).toBe(false);
      });
    });

    describe('edge cases', () => {
      it('should handle empty string', () => {
        expect(fileOps.shouldIgnore('')).toBe(false);
      });

      it('should handle paths with multiple segments', () => {
        // basename of '/very/deep/path/to/node_modules/package' is 'package'
        expect(fileOps.shouldIgnore('/very/deep/path/to/node_modules/package')).toBe(false);
        expect(fileOps.shouldIgnore('/very/deep/path/to/file.swp')).toBe(true);
        expect(fileOps.shouldIgnore('/very/deep/path/to/normal.js')).toBe(false);
        // But the directory itself would be ignored
        expect(fileOps.shouldIgnore('/very/deep/path/to/node_modules')).toBe(true);
      });

      it('should handle Windows-style paths', () => {
        // Note: path.basename() on Unix doesn't recognize backslashes
        // On Unix: basename('C:\\project\\file.tmp') = 'C:\\project\\file.tmp'
        // So we test cross-platform path handling
        expect(fileOps.shouldIgnore(String.raw`C:\project\file.tmp`)).toBe(true); // .tmp matches
        expect(fileOps.shouldIgnore(String.raw`test\file.swp`)).toBe(true); // .swp matches
        // These won't be ignored because they don't match the patterns on Unix
        expect(fileOps.shouldIgnore(String.raw`C:\project\node_modules\pkg`)).toBe(false);
        expect(fileOps.shouldIgnore(String.raw`C:\project\src\main.js`)).toBe(false);
      });

      it('should handle relative paths', () => {
        // basename of './node_modules/package' is 'package'
        expect(fileOps.shouldIgnore('./node_modules/package')).toBe(false);
        // basename of '../.git/hooks' is 'hooks'
        expect(fileOps.shouldIgnore('../.git/hooks')).toBe(false);
        expect(fileOps.shouldIgnore('./src/index.js')).toBe(false);
        // But the directories themselves would be ignored
        expect(fileOps.shouldIgnore('./node_modules')).toBe(true);
        expect(fileOps.shouldIgnore('../.git')).toBe(true);
      });

      it('should handle files with multiple extensions', () => {
        expect(fileOps.shouldIgnore('file.tar.tmp')).toBe(true);
        expect(fileOps.shouldIgnore('backup.sql.swp')).toBe(true);
        expect(fileOps.shouldIgnore('data.json.gz')).toBe(false);
      });

      it('should be case-sensitive for exact matches', () => {
        expect(fileOps.shouldIgnore('Node_Modules')).toBe(false);
        expect(fileOps.shouldIgnore('NODE_MODULES')).toBe(false);
        expect(fileOps.shouldIgnore('node_modules')).toBe(true);
      });

      it('should handle files starting with ignored patterns', () => {
        expect(fileOps.shouldIgnore('.git-credentials')).toBe(false);
        expect(fileOps.shouldIgnore('.gitattributes')).toBe(false);
        expect(fileOps.shouldIgnore('.git')).toBe(true);
      });

      it('should handle Unicode filenames', () => {
        expect(fileOps.shouldIgnore('文档.swp')).toBe(true);
        expect(fileOps.shouldIgnore('файл.tmp')).toBe(true);
        expect(fileOps.shouldIgnore('ドキュメント.txt')).toBe(false);
      });
    });

    describe('pattern matching behavior', () => {
      it('should match patterns based on basename only', () => {
        // shouldIgnore uses path.basename(), so only the last segment matters
        expect(fileOps.shouldIgnore('/home/user/.git/config')).toBe(false); // basename is 'config'
        expect(fileOps.shouldIgnore('/home/user/project/node_modules')).toBe(true); // basename is 'node_modules'
      });

      it('should handle trailing slashes', () => {
        // path.basename() returns the directory name, not empty string for trailing slash
        expect(fileOps.shouldIgnore('node_modules/')).toBe(true);
        expect(fileOps.shouldIgnore('.git/')).toBe(true);
      });

      it('should treat patterns as partial regex matches', () => {
        // The *.swp pattern becomes /.*\.swp/ regex
        expect(fileOps.shouldIgnore('test.swp')).toBe(true);
        expect(fileOps.shouldIgnore('swp')).toBe(false); // doesn't match .*\.swp
        expect(fileOps.shouldIgnore('.swp')).toBe(true); // matches .*\.swp (. before swp)
      });
    });
  });
});
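The `shouldIgnore()` suites repeatedly stress one design point: matching happens against `path.basename()` only, with exact names checked case-sensitively and glob-style patterns applied as regexes over the basename. A hypothetical standalone sketch of that behavior (again an illustration consistent with the tests, not the repository's actual method):

```javascript
import path from 'node:path';

// Exact names and suffix regexes inferred from the expectations above.
const EXACT = new Set(['.git', 'node_modules', '.DS_Store', '.idea', '.vscode', '__pycache__']);
const GLOBS = [/^.*\.swp$/, /^.*\.tmp$/, /^.*\.pyc$/];

// Only the final path segment is consulted; parents like '.git/hooks' pass
// because basename('/x/.git/hooks') is 'hooks'. path.basename() also drops
// trailing slashes, so 'node_modules/' still matches the exact name.
export function shouldIgnore(filePath) {
  if (filePath === '') return false;
  const base = path.basename(filePath);
  return EXACT.has(base) || GLOBS.some((re) => re.test(base));
}
```

Because `.*` may match the empty string, `'.swp'` is ignored while bare `'swp'` is not, which is exactly the asymmetry the "partial regex matches" test documents.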
@@ -0,0 +1,316 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { FileOps } from '../../../tools/cli/lib/file-ops.js';
import { createTempDir, cleanupTempDir, createTestFile } from '../../helpers/temp-dir.js';
import fs from 'fs-extra';
import path from 'node:path';

describe('FileOps', () => {
  describe('syncDirectory()', () => {
    const fileOps = new FileOps();
    let tmpDir;
    let sourceDir;
    let destDir;

    beforeEach(async () => {
      tmpDir = await createTempDir();
      sourceDir = path.join(tmpDir, 'source');
      destDir = path.join(tmpDir, 'dest');
      await fs.ensureDir(sourceDir);
      await fs.ensureDir(destDir);
    });

    afterEach(async () => {
      await cleanupTempDir(tmpDir);
    });

    describe('hash-based selective update', () => {
      it('should update file when hashes are identical (safe update)', async () => {
        const content = 'identical content';
        await createTestFile(sourceDir, 'file.txt', content);
        await createTestFile(destDir, 'file.txt', content);

        await fileOps.syncDirectory(sourceDir, destDir);

        // File should be updated (copied over) since hashes match
        const destContent = await fs.readFile(path.join(destDir, 'file.txt'), 'utf8');
        expect(destContent).toBe(content);
      });

      it('should preserve modified file when dest is newer', async () => {
        await createTestFile(sourceDir, 'file.txt', 'source content');
        await createTestFile(destDir, 'file.txt', 'modified by user');

        // Make dest file newer
        const destFile = path.join(destDir, 'file.txt');
        const futureTime = new Date(Date.now() + 10_000);
        await fs.utimes(destFile, futureTime, futureTime);

        await fileOps.syncDirectory(sourceDir, destDir);

        // User modification should be preserved
        const destContent = await fs.readFile(destFile, 'utf8');
        expect(destContent).toBe('modified by user');
      });

      it('should update file when source is newer than modified dest', async () => {
        // Create both files first
        await createTestFile(sourceDir, 'file.txt', 'new source content');
        await createTestFile(destDir, 'file.txt', 'old modified content');

        // Make dest older and source newer with explicit times
        const destFile = path.join(destDir, 'file.txt');
        const sourceFile = path.join(sourceDir, 'file.txt');

        const pastTime = new Date(Date.now() - 10_000);
        const futureTime = new Date(Date.now() + 10_000);

        await fs.utimes(destFile, pastTime, pastTime);
        await fs.utimes(sourceFile, futureTime, futureTime);

        await fileOps.syncDirectory(sourceDir, destDir);

        // Should update to source content since source is newer
        const destContent = await fs.readFile(destFile, 'utf8');
        expect(destContent).toBe('new source content');
      });
    });

    describe('new file handling', () => {
      it('should copy new files from source', async () => {
        await createTestFile(sourceDir, 'new-file.txt', 'new content');

        await fileOps.syncDirectory(sourceDir, destDir);

        expect(await fs.pathExists(path.join(destDir, 'new-file.txt'))).toBe(true);
        expect(await fs.readFile(path.join(destDir, 'new-file.txt'), 'utf8')).toBe('new content');
      });

      it('should copy multiple new files', async () => {
        await createTestFile(sourceDir, 'file1.txt', 'content1');
        await createTestFile(sourceDir, 'file2.md', 'content2');
        await createTestFile(sourceDir, 'file3.json', 'content3');

        await fileOps.syncDirectory(sourceDir, destDir);

        expect(await fs.pathExists(path.join(destDir, 'file1.txt'))).toBe(true);
        expect(await fs.pathExists(path.join(destDir, 'file2.md'))).toBe(true);
        expect(await fs.pathExists(path.join(destDir, 'file3.json'))).toBe(true);
      });

      it('should create nested directories for new files', async () => {
        await createTestFile(sourceDir, 'level1/level2/deep.txt', 'deep content');

        await fileOps.syncDirectory(sourceDir, destDir);

        expect(await fs.pathExists(path.join(destDir, 'level1', 'level2', 'deep.txt'))).toBe(true);
      });
    });

    describe('orphaned file removal', () => {
      it('should remove files that no longer exist in source', async () => {
        await createTestFile(sourceDir, 'keep.txt', 'keep this');
        await createTestFile(destDir, 'keep.txt', 'keep this');
|
||||||
|
await createTestFile(destDir, 'remove.txt', 'delete this');
|
||||||
|
|
||||||
|
await fileOps.syncDirectory(sourceDir, destDir);
|
||||||
|
|
||||||
|
expect(await fs.pathExists(path.join(destDir, 'keep.txt'))).toBe(true);
|
||||||
|
expect(await fs.pathExists(path.join(destDir, 'remove.txt'))).toBe(false);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should remove multiple orphaned files', async () => {
|
||||||
|
await createTestFile(sourceDir, 'current.txt', 'current');
|
||||||
|
await createTestFile(destDir, 'current.txt', 'current');
|
||||||
|
await createTestFile(destDir, 'old1.txt', 'orphan 1');
|
||||||
|
await createTestFile(destDir, 'old2.txt', 'orphan 2');
|
||||||
|
await createTestFile(destDir, 'old3.txt', 'orphan 3');
|
||||||
|
|
||||||
|
await fileOps.syncDirectory(sourceDir, destDir);
|
||||||
|
|
||||||
|
expect(await fs.pathExists(path.join(destDir, 'current.txt'))).toBe(true);
|
||||||
|
expect(await fs.pathExists(path.join(destDir, 'old1.txt'))).toBe(false);
|
||||||
|
expect(await fs.pathExists(path.join(destDir, 'old2.txt'))).toBe(false);
|
||||||
|
expect(await fs.pathExists(path.join(destDir, 'old3.txt'))).toBe(false);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should remove orphaned directories', async () => {
|
||||||
|
await createTestFile(sourceDir, 'keep/file.txt', 'keep');
|
||||||
|
await createTestFile(destDir, 'keep/file.txt', 'keep');
|
||||||
|
await createTestFile(destDir, 'remove/orphan.txt', 'orphan');
|
||||||
|
|
||||||
|
await fileOps.syncDirectory(sourceDir, destDir);
|
||||||
|
|
||||||
|
expect(await fs.pathExists(path.join(destDir, 'keep'))).toBe(true);
|
||||||
|
expect(await fs.pathExists(path.join(destDir, 'remove', 'orphan.txt'))).toBe(false);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('complex scenarios', () => {
|
||||||
|
it('should handle mixed operations in single sync', async () => {
|
||||||
|
const now = Date.now();
|
||||||
|
const pastTime = now - 100_000; // 100 seconds ago
|
||||||
|
const futureTime = now + 100_000; // 100 seconds from now
|
||||||
|
|
||||||
|
// Identical file (update)
|
||||||
|
await createTestFile(sourceDir, 'identical.txt', 'same');
|
||||||
|
await createTestFile(destDir, 'identical.txt', 'same');
|
||||||
|
|
||||||
|
// Modified file with newer dest (preserve)
|
||||||
|
await createTestFile(sourceDir, 'modified.txt', 'original');
|
||||||
|
await createTestFile(destDir, 'modified.txt', 'user modified');
|
||||||
|
const modifiedFile = path.join(destDir, 'modified.txt');
|
||||||
|
await fs.utimes(modifiedFile, futureTime, futureTime);
|
||||||
|
|
||||||
|
// New file (copy)
|
||||||
|
await createTestFile(sourceDir, 'new.txt', 'new content');
|
||||||
|
|
||||||
|
// Orphaned file (remove)
|
||||||
|
await createTestFile(destDir, 'orphan.txt', 'delete me');
|
||||||
|
|
||||||
|
await fileOps.syncDirectory(sourceDir, destDir);
|
||||||
|
|
||||||
|
// Verify operations
|
||||||
|
expect(await fs.pathExists(path.join(destDir, 'identical.txt'))).toBe(true);
|
||||||
|
|
||||||
|
expect(await fs.readFile(modifiedFile, 'utf8')).toBe('user modified');
|
||||||
|
|
||||||
|
expect(await fs.pathExists(path.join(destDir, 'new.txt'))).toBe(true);
|
||||||
|
|
||||||
|
expect(await fs.pathExists(path.join(destDir, 'orphan.txt'))).toBe(false);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle nested directory changes', async () => {
|
||||||
|
// Create nested structure in source
|
||||||
|
await createTestFile(sourceDir, 'level1/keep.txt', 'keep');
|
||||||
|
await createTestFile(sourceDir, 'level1/level2/deep.txt', 'deep');
|
||||||
|
|
||||||
|
// Create different nested structure in dest
|
||||||
|
await createTestFile(destDir, 'level1/keep.txt', 'keep');
|
||||||
|
await createTestFile(destDir, 'level1/remove.txt', 'orphan');
|
||||||
|
await createTestFile(destDir, 'old-level/file.txt', 'old');
|
||||||
|
|
||||||
|
await fileOps.syncDirectory(sourceDir, destDir);
|
||||||
|
|
||||||
|
expect(await fs.pathExists(path.join(destDir, 'level1', 'keep.txt'))).toBe(true);
|
||||||
|
expect(await fs.pathExists(path.join(destDir, 'level1', 'level2', 'deep.txt'))).toBe(true);
|
||||||
|
expect(await fs.pathExists(path.join(destDir, 'level1', 'remove.txt'))).toBe(false);
|
||||||
|
expect(await fs.pathExists(path.join(destDir, 'old-level', 'file.txt'))).toBe(false);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('edge cases', () => {
|
||||||
|
it('should handle empty source directory', async () => {
|
||||||
|
await createTestFile(destDir, 'file.txt', 'content');
|
||||||
|
|
||||||
|
await fileOps.syncDirectory(sourceDir, destDir);
|
||||||
|
|
||||||
|
// All files should be removed
|
||||||
|
expect(await fs.pathExists(path.join(destDir, 'file.txt'))).toBe(false);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle empty destination directory', async () => {
|
||||||
|
await createTestFile(sourceDir, 'file.txt', 'content');
|
||||||
|
|
||||||
|
await fileOps.syncDirectory(sourceDir, destDir);
|
||||||
|
|
||||||
|
expect(await fs.pathExists(path.join(destDir, 'file.txt'))).toBe(true);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle Unicode filenames', async () => {
|
||||||
|
await createTestFile(sourceDir, '测试.txt', 'chinese');
|
||||||
|
await createTestFile(destDir, '测试.txt', 'modified chinese');
|
||||||
|
|
||||||
|
// Make dest newer
|
||||||
|
await fs.utimes(path.join(destDir, '测试.txt'), Date.now() + 10_000, Date.now() + 10_000);
|
||||||
|
|
||||||
|
await fileOps.syncDirectory(sourceDir, destDir);
|
||||||
|
|
||||||
|
// Should preserve user modification
|
||||||
|
expect(await fs.readFile(path.join(destDir, '测试.txt'), 'utf8')).toBe('modified chinese');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle large number of files', async () => {
|
||||||
|
// Create 50 files in source
|
||||||
|
for (let i = 0; i < 50; i++) {
|
||||||
|
await createTestFile(sourceDir, `file${i}.txt`, `content ${i}`);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Create 25 matching files and 25 orphaned files in dest
|
||||||
|
for (let i = 0; i < 25; i++) {
|
||||||
|
await createTestFile(destDir, `file${i}.txt`, `content ${i}`);
|
||||||
|
await createTestFile(destDir, `orphan${i}.txt`, `orphan ${i}`);
|
||||||
|
}
|
||||||
|
|
||||||
|
await fileOps.syncDirectory(sourceDir, destDir);
|
||||||
|
|
||||||
|
// All 50 source files should exist
|
||||||
|
for (let i = 0; i < 50; i++) {
|
||||||
|
expect(await fs.pathExists(path.join(destDir, `file${i}.txt`))).toBe(true);
|
||||||
|
}
|
||||||
|
|
||||||
|
// All 25 orphaned files should be removed
|
||||||
|
for (let i = 0; i < 25; i++) {
|
||||||
|
expect(await fs.pathExists(path.join(destDir, `orphan${i}.txt`))).toBe(false);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle binary files correctly', async () => {
|
||||||
|
const buffer = Buffer.from([0x89, 0x50, 0x4e, 0x47]);
|
||||||
|
await fs.writeFile(path.join(sourceDir, 'binary.dat'), buffer);
|
||||||
|
await fs.writeFile(path.join(destDir, 'binary.dat'), buffer);
|
||||||
|
|
||||||
|
await fileOps.syncDirectory(sourceDir, destDir);
|
||||||
|
|
||||||
|
const destBuffer = await fs.readFile(path.join(destDir, 'binary.dat'));
|
||||||
|
expect(destBuffer).toEqual(buffer);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('timestamp precision', () => {
|
||||||
|
it('should handle files with very close modification times', async () => {
|
||||||
|
await createTestFile(sourceDir, 'file.txt', 'source');
|
||||||
|
await createTestFile(destDir, 'file.txt', 'dest modified');
|
||||||
|
|
||||||
|
// Make dest just slightly newer (100ms)
|
||||||
|
const destFile = path.join(destDir, 'file.txt');
|
||||||
|
await fs.utimes(destFile, Date.now() + 100, Date.now() + 100);
|
||||||
|
|
||||||
|
await fileOps.syncDirectory(sourceDir, destDir);
|
||||||
|
|
||||||
|
// Should preserve user modification even with small time difference
|
||||||
|
expect(await fs.readFile(destFile, 'utf8')).toBe('dest modified');
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('data integrity', () => {
|
||||||
|
it('should not corrupt files during sync', async () => {
|
||||||
|
const content = 'Important data\nLine 2\nLine 3\n';
|
||||||
|
await createTestFile(sourceDir, 'data.txt', content);
|
||||||
|
|
||||||
|
await fileOps.syncDirectory(sourceDir, destDir);
|
||||||
|
|
||||||
|
expect(await fs.readFile(path.join(destDir, 'data.txt'), 'utf8')).toBe(content);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle sync interruption gracefully', async () => {
|
||||||
|
// This test verifies that partial syncs don't leave inconsistent state
|
||||||
|
await createTestFile(sourceDir, 'file1.txt', 'content1');
|
||||||
|
await createTestFile(sourceDir, 'file2.txt', 'content2');
|
||||||
|
|
||||||
|
// First sync
|
||||||
|
await fileOps.syncDirectory(sourceDir, destDir);
|
||||||
|
|
||||||
|
// Modify source
|
||||||
|
await createTestFile(sourceDir, 'file3.txt', 'content3');
|
||||||
|
|
||||||
|
// Second sync
|
||||||
|
await fileOps.syncDirectory(sourceDir, destDir);
|
||||||
|
|
||||||
|
// All files should be present and correct
|
||||||
|
expect(await fs.pathExists(path.join(destDir, 'file1.txt'))).toBe(true);
|
||||||
|
expect(await fs.pathExists(path.join(destDir, 'file2.txt'))).toBe(true);
|
||||||
|
expect(await fs.pathExists(path.join(destDir, 'file3.txt'))).toBe(true);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
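The sync policy these tests pin down can be summarized in a short sketch. This is not the real `FileOps.syncDirectory` implementation (the comments above hint it also compares content hashes); it is a hypothetical, mtime-only illustration of the same rules: copy new files, update when the source is newer, preserve a destination file the user modified more recently, and remove orphans.

```javascript
import fs from 'node:fs/promises';
import path from 'node:path';

// Minimal sketch (assumption, not the shipped FileOps) of the sync policy.
async function syncDirectorySketch(sourceDir, destDir) {
  await fs.mkdir(destDir, { recursive: true });
  const sourceNames = new Set(await fs.readdir(sourceDir));

  // Remove orphans: anything in dest that no longer exists in source.
  for (const name of await fs.readdir(destDir)) {
    if (!sourceNames.has(name)) {
      await fs.rm(path.join(destDir, name), { recursive: true, force: true });
    }
  }

  for (const name of sourceNames) {
    const src = path.join(sourceDir, name);
    const dst = path.join(destDir, name);
    const srcStat = await fs.stat(src);

    if (srcStat.isDirectory()) {
      await syncDirectorySketch(src, dst); // recurse into subdirectories
      continue;
    }

    let dstStat = null;
    try {
      dstStat = await fs.stat(dst);
    } catch {
      // dest file does not exist yet -> treat as new
    }

    // Copy when the file is new or the source is strictly newer; otherwise
    // keep the destination copy, which may carry user modifications.
    if (!dstStat || srcStat.mtimeMs > dstStat.mtimeMs) {
      await fs.copyFile(src, dst);
    }
  }
}
```

The "preserve when dest is newer" branch is what makes the sync non-destructive for user edits; a pure mirror would instead always copy.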
@ -0,0 +1,214 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { FileOps } from '../../../tools/cli/lib/file-ops.js';
import { createTempDir, cleanupTempDir, createTestFile } from '../../helpers/temp-dir.js';
import fs from 'fs-extra';
import path from 'node:path';

describe('FileOps', () => {
  const fileOps = new FileOps();
  let tmpDir;

  beforeEach(async () => {
    tmpDir = await createTempDir();
  });

  afterEach(async () => {
    await cleanupTempDir(tmpDir);
  });

  describe('ensureDir()', () => {
    it('should create directory if it does not exist', async () => {
      const newDir = path.join(tmpDir, 'new-directory');

      await fileOps.ensureDir(newDir);

      expect(await fs.pathExists(newDir)).toBe(true);
    });

    it('should not fail if directory already exists', async () => {
      const existingDir = path.join(tmpDir, 'existing');
      await fs.ensureDir(existingDir);

      await expect(fileOps.ensureDir(existingDir)).resolves.not.toThrow();
    });

    it('should create nested directories', async () => {
      const nestedDir = path.join(tmpDir, 'level1', 'level2', 'level3');

      await fileOps.ensureDir(nestedDir);

      expect(await fs.pathExists(nestedDir)).toBe(true);
    });
  });

  describe('remove()', () => {
    it('should remove a file', async () => {
      const filePath = await createTestFile(tmpDir, 'test.txt', 'content');

      await fileOps.remove(filePath);

      expect(await fs.pathExists(filePath)).toBe(false);
    });

    it('should remove a directory', async () => {
      const dirPath = path.join(tmpDir, 'test-dir');
      await fs.ensureDir(dirPath);
      await createTestFile(dirPath, 'file.txt', 'content');

      await fileOps.remove(dirPath);

      expect(await fs.pathExists(dirPath)).toBe(false);
    });

    it('should not fail if path does not exist', async () => {
      const nonExistent = path.join(tmpDir, 'does-not-exist');

      await expect(fileOps.remove(nonExistent)).resolves.not.toThrow();
    });

    it('should remove nested directories', async () => {
      const nested = path.join(tmpDir, 'a', 'b', 'c');
      await fs.ensureDir(nested);
      await createTestFile(nested, 'file.txt', 'content');

      await fileOps.remove(path.join(tmpDir, 'a'));

      expect(await fs.pathExists(path.join(tmpDir, 'a'))).toBe(false);
    });
  });

  describe('readFile()', () => {
    it('should read file content', async () => {
      const content = 'test content';
      const filePath = await createTestFile(tmpDir, 'test.txt', content);

      const result = await fileOps.readFile(filePath);

      expect(result).toBe(content);
    });

    it('should read UTF-8 content', async () => {
      const content = 'Hello 世界 🌍';
      const filePath = await createTestFile(tmpDir, 'utf8.txt', content);

      const result = await fileOps.readFile(filePath);

      expect(result).toBe(content);
    });

    it('should read empty file', async () => {
      const filePath = await createTestFile(tmpDir, 'empty.txt', '');

      const result = await fileOps.readFile(filePath);

      expect(result).toBe('');
    });

    it('should reject for non-existent file', async () => {
      const nonExistent = path.join(tmpDir, 'does-not-exist.txt');

      await expect(fileOps.readFile(nonExistent)).rejects.toThrow();
    });
  });

  describe('writeFile()', () => {
    it('should write file content', async () => {
      const filePath = path.join(tmpDir, 'new-file.txt');
      const content = 'test content';

      await fileOps.writeFile(filePath, content);

      expect(await fs.readFile(filePath, 'utf8')).toBe(content);
    });

    it('should create parent directories if they do not exist', async () => {
      const filePath = path.join(tmpDir, 'level1', 'level2', 'file.txt');

      await fileOps.writeFile(filePath, 'content');

      expect(await fs.pathExists(filePath)).toBe(true);
      expect(await fs.readFile(filePath, 'utf8')).toBe('content');
    });

    it('should overwrite existing file', async () => {
      const filePath = await createTestFile(tmpDir, 'test.txt', 'old content');

      await fileOps.writeFile(filePath, 'new content');

      expect(await fs.readFile(filePath, 'utf8')).toBe('new content');
    });

    it('should handle UTF-8 content', async () => {
      const content = '测试 Тест 🎉';
      const filePath = path.join(tmpDir, 'unicode.txt');

      await fileOps.writeFile(filePath, content);

      expect(await fs.readFile(filePath, 'utf8')).toBe(content);
    });
  });

  describe('exists()', () => {
    it('should return true for existing file', async () => {
      const filePath = await createTestFile(tmpDir, 'test.txt', 'content');

      const result = await fileOps.exists(filePath);

      expect(result).toBe(true);
    });

    it('should return true for existing directory', async () => {
      const dirPath = path.join(tmpDir, 'test-dir');
      await fs.ensureDir(dirPath);

      const result = await fileOps.exists(dirPath);

      expect(result).toBe(true);
    });

    it('should return false for non-existent path', async () => {
      const nonExistent = path.join(tmpDir, 'does-not-exist');

      const result = await fileOps.exists(nonExistent);

      expect(result).toBe(false);
    });
  });

  describe('stat()', () => {
    it('should return stats for file', async () => {
      const filePath = await createTestFile(tmpDir, 'test.txt', 'content');

      const stats = await fileOps.stat(filePath);

      expect(stats.isFile()).toBe(true);
      expect(stats.isDirectory()).toBe(false);
      expect(stats.size).toBeGreaterThan(0);
    });

    it('should return stats for directory', async () => {
      const dirPath = path.join(tmpDir, 'test-dir');
      await fs.ensureDir(dirPath);

      const stats = await fileOps.stat(dirPath);

      expect(stats.isDirectory()).toBe(true);
      expect(stats.isFile()).toBe(false);
    });

    it('should reject for non-existent path', async () => {
      const nonExistent = path.join(tmpDir, 'does-not-exist');

      await expect(fileOps.stat(nonExistent)).rejects.toThrow();
    });

    it('should return modification time', async () => {
      const filePath = await createTestFile(tmpDir, 'test.txt', 'content');

      const stats = await fileOps.stat(filePath);

      expect(stats.mtime).toBeInstanceOf(Date);
      expect(stats.mtime.getTime()).toBeLessThanOrEqual(Date.now());
    });
  });
});
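Read together, these tests imply a thin promise-based wrapper surface. A minimal sketch of that shape follows; it is an assumption built on `node:fs/promises` for self-containment, not the shipped `FileOps` class (which the imports above suggest sits on `fs-extra`):

```javascript
import fs from 'node:fs/promises';
import path from 'node:path';

// Hypothetical minimal shape of the FileOps wrapper the tests assume:
// forgiving, promise-based helpers over the filesystem.
class FileOpsSketch {
  // Idempotent: succeeds whether or not the directory already exists.
  async ensureDir(dir) {
    await fs.mkdir(dir, { recursive: true });
  }

  // Removes files or directories; no-op when the path is missing.
  async remove(p) {
    await fs.rm(p, { recursive: true, force: true });
  }

  async readFile(p) {
    return fs.readFile(p, 'utf8'); // rejects for non-existent files
  }

  // Creates parent directories before writing, overwrites existing content.
  async writeFile(p, content) {
    await fs.mkdir(path.dirname(p), { recursive: true });
    await fs.writeFile(p, content, 'utf8');
  }

  async exists(p) {
    try {
      await fs.access(p);
      return true;
    } catch {
      return false;
    }
  }

  async stat(p) {
    return fs.stat(p); // rejects for non-existent paths
  }
}
```

The forgiving semantics (`ensureDir` and `remove` never throwing on the "already done" case) are exactly what the `resolves.not.toThrow()` assertions above demand.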
@ -0,0 +1,335 @@
import { describe, it, expect, beforeEach } from 'vitest';
import { YamlXmlBuilder } from '../../../tools/cli/lib/yaml-xml-builder.js';

describe('YamlXmlBuilder - buildCommandsXml()', () => {
  let builder;

  beforeEach(() => {
    builder = new YamlXmlBuilder();
  });

  describe('menu injection', () => {
    it('should always inject *menu item first', () => {
      const xml = builder.buildCommandsXml([]);

      expect(xml).toContain('<item cmd="*menu">[M] Redisplay Menu Options</item>');
    });

    it('should always inject *dismiss item last', () => {
      const xml = builder.buildCommandsXml([]);

      expect(xml).toContain('<item cmd="*dismiss">[D] Dismiss Agent</item>');
      // Should be at the end before </menu>
      expect(xml).toMatch(/\*dismiss.*<\/menu>/s);
    });

    it('should place user items between *menu and *dismiss', () => {
      const menuItems = [{ trigger: 'help', description: 'Show help', action: 'show_help' }];

      const xml = builder.buildCommandsXml(menuItems);

      const menuIndex = xml.indexOf('*menu');
      const helpIndex = xml.indexOf('*help');
      const dismissIndex = xml.indexOf('*dismiss');

      expect(menuIndex).toBeLessThan(helpIndex);
      expect(helpIndex).toBeLessThan(dismissIndex);
    });
  });

  describe('legacy format items', () => {
    it('should add * prefix to triggers', () => {
      const menuItems = [{ trigger: 'help', description: 'Help', action: 'show_help' }];

      const xml = builder.buildCommandsXml(menuItems);

      expect(xml).toContain('cmd="*help"');
      expect(xml).not.toContain('cmd="help"'); // Should not have unprefixed version
    });

    it('should preserve * prefix if already present', () => {
      const menuItems = [{ trigger: '*custom', description: 'Custom', action: 'custom_action' }];

      const xml = builder.buildCommandsXml(menuItems);

      expect(xml).toContain('cmd="*custom"');
      expect(xml).not.toContain('cmd="**custom"'); // Should not double-prefix
    });

    it('should include description as item content', () => {
      const menuItems = [{ trigger: 'analyze', description: '[A] Analyze code', action: 'analyze' }];

      const xml = builder.buildCommandsXml(menuItems);

      expect(xml).toContain('>[A] Analyze code</item>');
    });

    it('should escape XML special characters in description', () => {
      const menuItems = [
        {
          trigger: 'test',
          description: 'Test <brackets> & "quotes"',
          action: 'test',
        },
      ];

      const xml = builder.buildCommandsXml(menuItems);

      expect(xml).toContain('&lt;brackets&gt; &amp; &quot;quotes&quot;');
    });
  });

  describe('handler attributes', () => {
    it('should include workflow attribute', () => {
      const menuItems = [{ trigger: 'start', description: 'Start workflow', workflow: 'main-workflow' }];

      const xml = builder.buildCommandsXml(menuItems);

      expect(xml).toContain('workflow="main-workflow"');
    });

    it('should include exec attribute', () => {
      const menuItems = [{ trigger: 'run', description: 'Run task', exec: 'path/to/task.md' }];

      const xml = builder.buildCommandsXml(menuItems);

      expect(xml).toContain('exec="path/to/task.md"');
    });

    it('should include action attribute', () => {
      const menuItems = [{ trigger: 'help', description: 'Help', action: 'show_help' }];

      const xml = builder.buildCommandsXml(menuItems);

      expect(xml).toContain('action="show_help"');
    });

    it('should include tmpl attribute', () => {
      const menuItems = [{ trigger: 'form', description: 'Form', tmpl: 'templates/form.yaml' }];

      const xml = builder.buildCommandsXml(menuItems);

      expect(xml).toContain('tmpl="templates/form.yaml"');
    });

    it('should include data attribute', () => {
      const menuItems = [{ trigger: 'load', description: 'Load', data: 'data/config.json' }];

      const xml = builder.buildCommandsXml(menuItems);

      expect(xml).toContain('data="data/config.json"');
    });

    it('should include validate-workflow attribute', () => {
      const menuItems = [
        {
          trigger: 'validate',
          description: 'Validate',
          'validate-workflow': 'validation-flow',
        },
      ];

      const xml = builder.buildCommandsXml(menuItems);

      expect(xml).toContain('validate-workflow="validation-flow"');
    });

    it('should prioritize workflow-install over workflow', () => {
      const menuItems = [
        {
          trigger: 'start',
          description: 'Start',
          workflow: 'original',
          'workflow-install': 'installed-location',
        },
      ];

      const xml = builder.buildCommandsXml(menuItems);

      expect(xml).toContain('workflow="installed-location"');
      expect(xml).not.toContain('workflow="original"');
    });

    it('should handle multiple attributes on same item', () => {
      const menuItems = [
        {
          trigger: 'complex',
          description: 'Complex command',
          workflow: 'flow',
          data: 'data.json',
          action: 'custom',
        },
      ];

      const xml = builder.buildCommandsXml(menuItems);

      expect(xml).toContain('workflow="flow"');
      expect(xml).toContain('data="data.json"');
      expect(xml).toContain('action="custom"');
    });
  });

  describe('IDE and web filtering', () => {
    it('should include ide-only items for IDE installation', () => {
      const menuItems = [
        { trigger: 'local', description: 'Local only', action: 'local', 'ide-only': true },
        { trigger: 'normal', description: 'Normal', action: 'normal' },
      ];

      const xml = builder.buildCommandsXml(menuItems, false);

      expect(xml).toContain('*local');
      expect(xml).toContain('*normal');
    });

    it('should skip ide-only items for web bundle', () => {
      const menuItems = [
        { trigger: 'local', description: 'Local only', action: 'local', 'ide-only': true },
        { trigger: 'normal', description: 'Normal', action: 'normal' },
      ];

      const xml = builder.buildCommandsXml(menuItems, true);

      expect(xml).not.toContain('*local');
      expect(xml).toContain('*normal');
    });

    it('should include web-only items for web bundle', () => {
      const menuItems = [
        { trigger: 'web', description: 'Web only', action: 'web', 'web-only': true },
        { trigger: 'normal', description: 'Normal', action: 'normal' },
      ];

      const xml = builder.buildCommandsXml(menuItems, true);

      expect(xml).toContain('*web');
      expect(xml).toContain('*normal');
    });

    it('should skip web-only items for IDE installation', () => {
      const menuItems = [
        { trigger: 'web', description: 'Web only', action: 'web', 'web-only': true },
        { trigger: 'normal', description: 'Normal', action: 'normal' },
      ];

      const xml = builder.buildCommandsXml(menuItems, false);

      expect(xml).not.toContain('*web');
      expect(xml).toContain('*normal');
    });
  });

  describe('multi format with nested handlers', () => {
    it('should build multi format items with nested handlers', () => {
      const menuItems = [
        {
          multi: '[TS] Technical Specification',
          triggers: [
            {
              'tech-spec': [{ input: 'Create technical specification' }, { route: 'workflows/tech-spec.yaml' }],
            },
            {
              TS: [{ input: 'Create technical specification' }, { route: 'workflows/tech-spec.yaml' }],
            },
          ],
        },
      ];

      const xml = builder.buildCommandsXml(menuItems);

      expect(xml).toContain('<item type="multi">');
      expect(xml).toContain('[TS] Technical Specification');
      expect(xml).toContain('<handler');
      expect(xml).toContain('match="Create technical specification"');
      expect(xml).toContain('</item>');
    });

    it('should escape XML in multi description', () => {
      const menuItems = [
        {
          multi: '[A] Analyze <code>',
          triggers: [
            {
              analyze: [{ input: 'Analyze', route: 'task.md' }],
            },
          ],
        },
      ];

      const xml = builder.buildCommandsXml(menuItems);

      expect(xml).toContain('&lt;code&gt;');
    });
  });

  describe('edge cases', () => {
    it('should handle empty menu items array', () => {
      const xml = builder.buildCommandsXml([]);

      expect(xml).toContain('<menu>');
      expect(xml).toContain('</menu>');
      expect(xml).toContain('*menu');
      expect(xml).toContain('*dismiss');
    });

    it('should handle null menu items', () => {
      const xml = builder.buildCommandsXml(null);

      expect(xml).toContain('<menu>');
      expect(xml).toContain('*menu');
      expect(xml).toContain('*dismiss');
    });

    it('should handle undefined menu items', () => {
      const xml = builder.buildCommandsXml();

      expect(xml).toContain('<menu>');
    });

    it('should handle empty description', () => {
      const menuItems = [{ trigger: 'test', description: '', action: 'test' }];

      const xml = builder.buildCommandsXml(menuItems);

      expect(xml).toContain('cmd="*test"');
      expect(xml).toContain('></item>'); // Empty content between tags
    });

    it('should handle missing trigger (edge case)', () => {
      const menuItems = [{ description: 'No trigger', action: 'test' }];

      const xml = builder.buildCommandsXml(menuItems);

      // Should handle gracefully - might skip or add * prefix to empty
      expect(xml).toContain('<menu>');
    });

    it('should handle Unicode in descriptions', () => {
      const menuItems = [{ trigger: 'test', description: '[测试] Test 日本語', action: 'test' }];

      const xml = builder.buildCommandsXml(menuItems);

      expect(xml).toContain('测试');
      expect(xml).toContain('日本語');
    });
  });

  describe('multiple menu items', () => {
    it('should process all menu items in order', () => {
      const menuItems = [
        { trigger: 'first', description: 'First', action: 'first' },
        { trigger: 'second', description: 'Second', action: 'second' },
        { trigger: 'third', description: 'Third', action: 'third' },
      ];

      const xml = builder.buildCommandsXml(menuItems);

      const firstIndex = xml.indexOf('*first');
      const secondIndex = xml.indexOf('*second');
      const thirdIndex = xml.indexOf('*third');

      expect(firstIndex).toBeLessThan(secondIndex);
      expect(secondIndex).toBeLessThan(thirdIndex);
    });
  });
});
@ -0,0 +1,605 @@
import { describe, it, expect, beforeEach } from 'vitest';
import { YamlXmlBuilder } from '../../../tools/cli/lib/yaml-xml-builder.js';

describe('YamlXmlBuilder - convertToXml()', () => {
  let builder;

  beforeEach(() => {
    builder = new YamlXmlBuilder();
  });

  describe('basic XML generation', () => {
    it('should generate XML with agent tag and attributes', async () => {
      const agentYaml = {
        agent: {
          metadata: {
            id: 'test-agent',
            name: 'Test Agent',
            title: 'Test Agent Title',
            icon: '🔧',
          },
          persona: {
            role: 'Test Role',
            identity: 'Test Identity',
            communication_style: 'Professional',
            principles: ['Principle 1'],
          },
          menu: [{ trigger: 'help', description: 'Help', action: 'show_help' }],
        },
      };

      const xml = await builder.convertToXml(agentYaml, { skipActivation: true });

      expect(xml).toContain('<agent id="test-agent"');
      expect(xml).toContain('name="Test Agent"');
      expect(xml).toContain('title="Test Agent Title"');
      expect(xml).toContain('icon="🔧"');
      expect(xml).toContain('</agent>');
    });

    it('should include persona section', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'test', name: 'Test', title: 'Test', icon: '🔧' },
          persona: {
            role: 'Developer',
            identity: 'Helpful assistant',
            communication_style: 'Professional',
            principles: ['Clear', 'Concise'],
          },
          menu: [],
        },
      };

      const xml = await builder.convertToXml(agentYaml, { skipActivation: true });

      expect(xml).toContain('<persona>');
      expect(xml).toContain('<role>Developer</role>');
      expect(xml).toContain('<identity>Helpful assistant</identity>');
      expect(xml).toContain('<communication_style>Professional</communication_style>');
      expect(xml).toContain('<principles>Clear Concise</principles>');
    });

    it('should include memories section if present', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'test', name: 'Test', title: 'Test', icon: '🔧' },
          persona: {
            role: 'Role',
            identity: 'ID',
            communication_style: 'Style',
            principles: ['P'],
          },
          memories: ['Memory 1', 'Memory 2'],
          menu: [],
        },
      };

      const xml = await builder.convertToXml(agentYaml, { skipActivation: true });

      expect(xml).toContain('<memories>');
      expect(xml).toContain('<memory>Memory 1</memory>');
      expect(xml).toContain('<memory>Memory 2</memory>');
    });

    it('should include prompts section if present', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'test', name: 'Test', title: 'Test', icon: '🔧' },
          persona: {
            role: 'Role',
            identity: 'ID',
            communication_style: 'Style',
            principles: ['P'],
          },
          prompts: [{ id: 'p1', content: 'Prompt content' }],
          menu: [],
        },
      };

      const xml = await builder.convertToXml(agentYaml, { skipActivation: true });

      expect(xml).toContain('<prompts>');
      expect(xml).toContain('<prompt id="p1">');
      expect(xml).toContain('Prompt content');
    });

    it('should include menu section', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'test', name: 'Test', title: 'Test', icon: '🔧' },
          persona: {
            role: 'Role',
            identity: 'ID',
            communication_style: 'Style',
            principles: ['P'],
          },
          menu: [
            { trigger: 'help', description: 'Show help', action: 'show_help' },
            { trigger: 'start', description: 'Start workflow', workflow: 'main' },
          ],
        },
      };

      const xml = await builder.convertToXml(agentYaml, { skipActivation: true });

      expect(xml).toContain('<menu>');
      expect(xml).toContain('</menu>');
      // Menu always includes injected *menu item
      expect(xml).toContain('*menu');
    });
  });

  describe('XML escaping', () => {
    it('should escape special characters in all fields', async () => {
      const agentYaml = {
        agent: {
          metadata: {
            id: 'test',
            name: 'Test',
            title: 'Test Agent',
            icon: '🔧',
          },
          persona: {
            role: 'Role with <brackets>',
            identity: 'Identity with & ampersand',
            communication_style: 'Style with "quotes"',
            principles: ["Principle with ' apostrophe"],
          },
          menu: [],
        },
      };

      const xml = await builder.convertToXml(agentYaml, { skipActivation: true });

      // Metadata in attributes might not be escaped - focus on content
      expect(xml).toContain('&lt;brackets&gt;');
      expect(xml).toContain('&amp; ampersand');
      expect(xml).toContain('&quot;quotes&quot;');
      expect(xml).toContain('&#39; apostrophe');
    });

    it('should preserve Unicode characters', async () => {
      const agentYaml = {
        agent: {
          metadata: {
            id: 'unicode',
            name: '测试代理',
            title: 'Тестовый агент',
            icon: '🔧',
          },
          persona: {
            role: '開発者',
            identity: 'مساعد مفيد',
            communication_style: 'Profesional',
            principles: ['原则'],
          },
          menu: [],
        },
      };

      const xml = await builder.convertToXml(agentYaml, { skipActivation: true });

      expect(xml).toContain('测试代理');
      expect(xml).toContain('Тестовый агент');
      expect(xml).toContain('開発者');
      expect(xml).toContain('مساعد مفيد');
      expect(xml).toContain('原则');
    });
  });
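The escaping the assertions above exercise can be reproduced with a small entity-replacement helper. This is a sketch for orientation only: the name `escapeXml` and the exact entity choices (e.g. `&#39;` rather than `&apos;`) are assumptions, and the builder's real implementation in `yaml-xml-builder.js` may differ.

```javascript
// Minimal XML escaper consistent with the assertions above (hypothetical sketch).
// Ampersand must be replaced first, otherwise the entities produced by the
// later replacements would themselves be double-escaped.
function escapeXml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeXml('Role with <brackets>')); // → Role with &lt;brackets&gt;
```

Note that Unicode is untouched by this scheme, which is why the second test can assert that characters like 测试代理 survive verbatim.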

  describe('module detection', () => {
    it('should handle module in buildMetadata', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'test', name: 'Test', title: 'Test', icon: '🔧' },
          persona: {
            role: 'Role',
            identity: 'ID',
            communication_style: 'Style',
            principles: ['P'],
          },
          menu: [],
        },
      };

      const xml = await builder.convertToXml(agentYaml, {
        module: 'bmm',
        skipActivation: true,
      });

      // Module is stored in metadata but may not be rendered as attribute
      expect(xml).toContain('<agent');
      expect(xml).toBeDefined();
    });

    it('should not include module attribute for core agents', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'test', name: 'Test', title: 'Test', icon: '🔧' },
          persona: {
            role: 'Role',
            identity: 'ID',
            communication_style: 'Style',
            principles: ['P'],
          },
          menu: [],
        },
      };

      const xml = await builder.convertToXml(agentYaml, { skipActivation: true });

      // No module attribute for core
      expect(xml).not.toContain('module=');
    });
  });

  describe('output format variations', () => {
    it('should generate installation format with YAML frontmatter', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'test', name: 'Test', title: 'Test Agent', icon: '🔧' },
          persona: {
            role: 'Role',
            identity: 'ID',
            communication_style: 'Style',
            principles: ['P'],
          },
          menu: [],
        },
      };

      const xml = await builder.convertToXml(agentYaml, {
        sourceFile: 'test-agent.yaml',
        skipActivation: true,
      });

      // Installation format has YAML frontmatter
      expect(xml).toMatch(/^---\n/);
      expect(xml).toContain('name: "test agent"'); // Derived from filename
      expect(xml).toContain('description: "Test Agent"');
      expect(xml).toContain('---');
    });

    it('should generate web bundle format without frontmatter', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'test', name: 'Test', title: 'Test Agent', icon: '🔧' },
          persona: {
            role: 'Role',
            identity: 'ID',
            communication_style: 'Style',
            principles: ['P'],
          },
          menu: [],
        },
      };

      const xml = await builder.convertToXml(agentYaml, {
        forWebBundle: true,
        skipActivation: true,
      });

      // Web bundle format has comment header
      expect(xml).toContain('<!-- Powered by BMAD-CORE™ -->');
      expect(xml).toContain('# Test Agent');
      expect(xml).not.toMatch(/^---\n/);
    });

    it('should derive name from filename (remove .agent suffix)', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'pm', name: 'PM', title: 'Product Manager', icon: '📋' },
          persona: {
            role: 'Role',
            identity: 'ID',
            communication_style: 'Style',
            principles: ['P'],
          },
          menu: [],
        },
      };

      const xml = await builder.convertToXml(agentYaml, {
        sourceFile: 'pm.agent.yaml',
        skipActivation: true,
      });

      // Should convert pm.agent.yaml → "pm"
      expect(xml).toContain('name: "pm"');
    });

    it('should convert hyphens to spaces in filename', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'cli', name: 'CLI', title: 'CLI Chief', icon: '⚙️' },
          persona: {
            role: 'Role',
            identity: 'ID',
            communication_style: 'Style',
            principles: ['P'],
          },
          menu: [],
        },
      };

      const xml = await builder.convertToXml(agentYaml, {
        sourceFile: 'cli-chief.yaml',
        skipActivation: true,
      });

      // Should convert cli-chief.yaml → "cli chief"
      expect(xml).toContain('name: "cli chief"');
    });
  });
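The filename-derivation rules these tests pin down (strip the extension, strip a trailing `.agent` suffix, turn hyphens into spaces) can be sketched as follows. The helper name `deriveAgentName` is purely illustrative and not part of the builder's public API.

```javascript
import path from 'node:path';

// Hypothetical sketch of the sourceFile → frontmatter-name rule the tests assert.
function deriveAgentName(sourceFile) {
  return path
    .basename(sourceFile)      // drop any directory components
    .replace(/\.ya?ml$/, '')   // drop the .yaml / .yml extension
    .replace(/\.agent$/, '')   // drop a trailing .agent suffix
    .replace(/-/g, ' ');       // hyphens become spaces
}

console.log(deriveAgentName('pm.agent.yaml'));  // → pm
console.log(deriveAgentName('cli-chief.yaml')); // → cli chief
```

The extension must be stripped before the `.agent` suffix, since the suffix only becomes trailing once `.yaml` is gone.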

  describe('localskip attribute', () => {
    it('should add localskip="true" when metadata has localskip', async () => {
      const agentYaml = {
        agent: {
          metadata: {
            id: 'web-only',
            name: 'Web Only',
            title: 'Web Only Agent',
            icon: '🌐',
            localskip: true,
          },
          persona: {
            role: 'Role',
            identity: 'ID',
            communication_style: 'Style',
            principles: ['P'],
          },
          menu: [],
        },
      };

      const xml = await builder.convertToXml(agentYaml, { skipActivation: true });

      expect(xml).toContain('localskip="true"');
    });

    it('should not add localskip when false or missing', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'test', name: 'Test', title: 'Test', icon: '🔧' },
          persona: {
            role: 'Role',
            identity: 'ID',
            communication_style: 'Style',
            principles: ['P'],
          },
          menu: [],
        },
      };

      const xml = await builder.convertToXml(agentYaml, { skipActivation: true });

      expect(xml).not.toContain('localskip=');
    });
  });

  describe('edge cases', () => {
    it('should handle empty menu array', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'test', name: 'Test', title: 'Test', icon: '🔧' },
          persona: {
            role: 'Role',
            identity: 'ID',
            communication_style: 'Style',
            principles: ['P'],
          },
          menu: [],
        },
      };

      const xml = await builder.convertToXml(agentYaml, { skipActivation: true });

      expect(xml).toContain('<menu>');
      expect(xml).toContain('</menu>');
      // Should still have injected *menu item
      expect(xml).toContain('*menu');
    });

    it('should handle missing memories', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'test', name: 'Test', title: 'Test', icon: '🔧' },
          persona: {
            role: 'Role',
            identity: 'ID',
            communication_style: 'Style',
            principles: ['P'],
          },
          menu: [],
        },
      };

      const xml = await builder.convertToXml(agentYaml, { skipActivation: true });

      expect(xml).not.toContain('<memories>');
    });

    it('should handle missing prompts', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'test', name: 'Test', title: 'Test', icon: '🔧' },
          persona: {
            role: 'Role',
            identity: 'ID',
            communication_style: 'Style',
            principles: ['P'],
          },
          menu: [],
        },
      };

      const xml = await builder.convertToXml(agentYaml, { skipActivation: true });

      expect(xml).not.toContain('<prompts>');
    });

    it('should wrap XML in markdown code fence', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'test', name: 'Test', title: 'Test', icon: '🔧' },
          persona: {
            role: 'Role',
            identity: 'ID',
            communication_style: 'Style',
            principles: ['P'],
          },
          menu: [],
        },
      };

      const xml = await builder.convertToXml(agentYaml, { skipActivation: true });

      expect(xml).toContain('```xml');
      expect(xml).toContain('```\n');
    });

    it('should include activation instruction for installation format', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'test', name: 'Test', title: 'Test', icon: '🔧' },
          persona: {
            role: 'Role',
            identity: 'ID',
            communication_style: 'Style',
            principles: ['P'],
          },
          menu: [],
        },
      };

      const xml = await builder.convertToXml(agentYaml, {
        sourceFile: 'test.yaml',
        skipActivation: true,
      });

      expect(xml).toContain('You must fully embody this agent');
      expect(xml).toContain('NEVER break character');
    });

    it('should not include activation instruction for web bundle', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'test', name: 'Test', title: 'Test', icon: '🔧' },
          persona: {
            role: 'Role',
            identity: 'ID',
            communication_style: 'Style',
            principles: ['P'],
          },
          menu: [],
        },
      };

      const xml = await builder.convertToXml(agentYaml, {
        forWebBundle: true,
        skipActivation: true,
      });

      expect(xml).not.toContain('You must fully embody');
      expect(xml).toContain('<!-- Powered by BMAD-CORE™ -->');
    });
  });

  describe('legacy commands field support', () => {
    it('should handle legacy "commands" field (renamed to menu)', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'test', name: 'Test', title: 'Test', icon: '🔧' },
          persona: {
            role: 'Role',
            identity: 'ID',
            communication_style: 'Style',
            principles: ['P'],
          },
          commands: [{ trigger: 'help', description: 'Help', action: 'show_help' }],
        },
      };

      const xml = await builder.convertToXml(agentYaml, { skipActivation: true });

      expect(xml).toContain('<menu>');
      // Should process commands as menu items
    });

    it('should prioritize menu over commands when both exist', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'test', name: 'Test', title: 'Test', icon: '🔧' },
          persona: {
            role: 'Role',
            identity: 'ID',
            communication_style: 'Style',
            principles: ['P'],
          },
          menu: [{ trigger: 'new', description: 'New', action: 'new_action' }],
          commands: [{ trigger: 'old', description: 'Old', action: 'old_action' }],
        },
      };

      const xml = await builder.convertToXml(agentYaml, { skipActivation: true });

      // Should use menu, not commands
      expect(xml).toContain('<menu>');
    });
  });

  describe('complete agent transformation', () => {
    it('should transform a complete agent with all fields', async () => {
      const agentYaml = {
        agent: {
          metadata: {
            id: 'full-agent',
            name: 'Full Agent',
            title: 'Complete Test Agent',
            icon: '🤖',
          },
          persona: {
            role: 'Full Stack Developer',
            identity: 'Experienced software engineer',
            communication_style: 'Clear and professional',
            principles: ['Quality', 'Performance', 'Maintainability'],
          },
          memories: ['Remember project context', 'Track user preferences'],
          prompts: [
            { id: 'init', content: 'Initialize the agent' },
            { id: 'task', content: 'Process the task' },
          ],
          critical_actions: ['Never delete data', 'Always backup'],
          menu: [
            { trigger: 'help', description: '[H] Show help', action: 'show_help' },
            { trigger: 'start', description: '[S] Start workflow', workflow: 'main' },
          ],
        },
      };

      const xml = await builder.convertToXml(agentYaml, {
        sourceFile: 'full-agent.yaml',
        module: 'bmm',
        skipActivation: true,
      });

      // Verify all sections are present
      expect(xml).toContain('```xml');
      expect(xml).toContain('<agent id="full-agent"');
      expect(xml).toContain('<persona>');
      expect(xml).toContain('<memories>');
      expect(xml).toContain('<prompts>');
      expect(xml).toContain('<menu>');
      expect(xml).toContain('</agent>');
      expect(xml).toContain('```');
      // Verify persona content
      expect(xml).toContain('Full Stack Developer');
      // Verify memories
      expect(xml).toContain('Remember project context');
      // Verify prompts
      expect(xml).toContain('Initialize the agent');
    });
  });
});
@ -0,0 +1,636 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { YamlXmlBuilder } from '../../../tools/cli/lib/yaml-xml-builder.js';
import { createTempDir, cleanupTempDir, createTestFile } from '../../helpers/temp-dir.js';
import fs from 'fs-extra';
import path from 'node:path';
import yaml from 'yaml';

describe('YamlXmlBuilder', () => {
  let tmpDir;
  let builder;

  beforeEach(async () => {
    tmpDir = await createTempDir();
    builder = new YamlXmlBuilder();
  });

  afterEach(async () => {
    await cleanupTempDir(tmpDir);
  });

  describe('deepMerge()', () => {
    it('should merge shallow objects', () => {
      const target = { a: 1, b: 2 };
      const source = { b: 3, c: 4 };

      const result = builder.deepMerge(target, source);

      expect(result).toEqual({ a: 1, b: 3, c: 4 });
    });

    it('should merge nested objects', () => {
      const target = { level1: { a: 1, b: 2 } };
      const source = { level1: { b: 3, c: 4 } };

      const result = builder.deepMerge(target, source);

      expect(result).toEqual({ level1: { a: 1, b: 3, c: 4 } });
    });

    it('should merge deeply nested objects', () => {
      const target = { l1: { l2: { l3: { value: 'old' } } } };
      const source = { l1: { l2: { l3: { value: 'new', extra: 'data' } } } };

      const result = builder.deepMerge(target, source);

      expect(result).toEqual({ l1: { l2: { l3: { value: 'new', extra: 'data' } } } });
    });

    it('should append arrays instead of replacing', () => {
      const target = { items: [1, 2, 3] };
      const source = { items: [4, 5, 6] };

      const result = builder.deepMerge(target, source);

      expect(result.items).toEqual([1, 2, 3, 4, 5, 6]);
    });

    it('should handle arrays in nested objects', () => {
      const target = { config: { values: ['a', 'b'] } };
      const source = { config: { values: ['c', 'd'] } };

      const result = builder.deepMerge(target, source);

      expect(result.config.values).toEqual(['a', 'b', 'c', 'd']);
    });

    it('should replace arrays if target is not an array', () => {
      const target = { items: 'string' };
      const source = { items: ['a', 'b'] };

      const result = builder.deepMerge(target, source);

      expect(result.items).toEqual(['a', 'b']);
    });

    it('should handle null values', () => {
      const target = { a: null, b: 2 };
      const source = { a: 1, c: null };

      const result = builder.deepMerge(target, source);

      expect(result).toEqual({ a: 1, b: 2, c: null });
    });

    it('should preserve target values when source has no override', () => {
      const target = { a: 1, b: 2, c: 3 };
      const source = { d: 4 };

      const result = builder.deepMerge(target, source);

      expect(result).toEqual({ a: 1, b: 2, c: 3, d: 4 });
    });

    it('should not mutate original objects', () => {
      const target = { a: 1 };
      const source = { b: 2 };

      builder.deepMerge(target, source);

      expect(target).toEqual({ a: 1 }); // Unchanged
      expect(source).toEqual({ b: 2 }); // Unchanged
    });
  });
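A merge routine satisfying all of the assertions above looks roughly like the sketch below: arrays append, nested objects recurse, everything else (including `null`) is overwritten by the source. This is an illustration only; the real `deepMerge` implementation lives in `yaml-xml-builder.js`.

```javascript
function isObject(v) {
  return v !== null && typeof v === 'object' && !Array.isArray(v);
}

// Non-mutating deep merge consistent with the tests above.
function deepMerge(target, source) {
  const result = { ...target }; // shallow copy so neither input is mutated
  for (const key of Object.keys(source)) {
    const t = result[key];
    const s = source[key];
    if (Array.isArray(t) && Array.isArray(s)) {
      result[key] = [...t, ...s]; // append rather than replace
    } else if (isObject(t) && isObject(s)) {
      result[key] = deepMerge(t, s); // recurse into nested objects
    } else {
      result[key] = s; // primitives, null, and mismatched types: source wins
    }
  }
  return result;
}

console.log(deepMerge({ items: [1, 2] }, { items: [3] }).items); // → [ 1, 2, 3 ]
```

Note this plain merge copies `null` source values through; the skip-empty behavior checked in the `loadAndMergeAgent()` tests below is a separate filtering step applied to the customization file, not a property of the merge itself.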

  describe('isObject()', () => {
    it('should return true for plain objects', () => {
      expect(builder.isObject({})).toBe(true);
      expect(builder.isObject({ key: 'value' })).toBe(true);
    });

    it('should return false for arrays', () => {
      expect(builder.isObject([])).toBe(false);
      expect(builder.isObject([1, 2, 3])).toBe(false);
    });

    it('should return falsy for null', () => {
      expect(builder.isObject(null)).toBeFalsy();
    });

    it('should return falsy for primitives', () => {
      expect(builder.isObject('string')).toBeFalsy();
      expect(builder.isObject(42)).toBeFalsy();
      expect(builder.isObject(true)).toBeFalsy();
      expect(builder.isObject()).toBeFalsy();
    });
  });

  describe('loadAndMergeAgent()', () => {
    it('should load agent YAML without customization', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'test', name: 'Test', title: 'Test Agent', icon: '🔧' },
          persona: {
            role: 'Test Role',
            identity: 'Test Identity',
            communication_style: 'Professional',
            principles: ['Principle 1'],
          },
          menu: [],
        },
      };

      const agentPath = path.join(tmpDir, 'agent.yaml');
      await fs.writeFile(agentPath, yaml.stringify(agentYaml));

      const result = await builder.loadAndMergeAgent(agentPath);

      expect(result.agent.metadata.id).toBe('test');
      expect(result.agent.persona.role).toBe('Test Role');
    });

    it('should preserve base persona when customize has empty strings', async () => {
      const baseYaml = {
        agent: {
          metadata: { id: 'base', name: 'Base', title: 'Base', icon: '🔧' },
          persona: {
            role: 'Base Role',
            identity: 'Base Identity',
            communication_style: 'Base Style',
            principles: ['Base Principle'],
          },
          menu: [],
        },
      };

      const customizeYaml = {
        persona: {
          role: 'Custom Role',
          identity: '', // Empty - should NOT override
          communication_style: 'Custom Style',
          // principles omitted
        },
      };

      const basePath = path.join(tmpDir, 'base.yaml');
      const customizePath = path.join(tmpDir, 'customize.yaml');
      await fs.writeFile(basePath, yaml.stringify(baseYaml));
      await fs.writeFile(customizePath, yaml.stringify(customizeYaml));

      const result = await builder.loadAndMergeAgent(basePath, customizePath);

      expect(result.agent.persona.role).toBe('Custom Role'); // Overridden
      expect(result.agent.persona.identity).toBe('Base Identity'); // Preserved
      expect(result.agent.persona.communication_style).toBe('Custom Style'); // Overridden
      expect(result.agent.persona.principles).toEqual(['Base Principle']); // Preserved
    });

    it('should preserve base persona when customize has null values', async () => {
      const baseYaml = {
        agent: {
          metadata: { id: 'base', name: 'Base', title: 'Base', icon: '🔧' },
          persona: {
            role: 'Base Role',
            identity: 'Base Identity',
            communication_style: 'Base Style',
            principles: ['Base'],
          },
          menu: [],
        },
      };

      const customizeYaml = {
        persona: {
          role: null,
          identity: 'Custom Identity',
        },
      };

      const basePath = path.join(tmpDir, 'base.yaml');
      const customizePath = path.join(tmpDir, 'customize.yaml');
      await fs.writeFile(basePath, yaml.stringify(baseYaml));
      await fs.writeFile(customizePath, yaml.stringify(customizeYaml));

      const result = await builder.loadAndMergeAgent(basePath, customizePath);

      expect(result.agent.persona.role).toBe('Base Role'); // Preserved (null skipped)
      expect(result.agent.persona.identity).toBe('Custom Identity'); // Overridden
    });

    it('should preserve base persona when customize has empty arrays', async () => {
      const baseYaml = {
        agent: {
          metadata: { id: 'base', name: 'Base', title: 'Base', icon: '🔧' },
          persona: {
            role: 'Base Role',
            identity: 'Base Identity',
            communication_style: 'Base Style',
            principles: ['Principle 1', 'Principle 2'],
          },
          menu: [],
        },
      };

      const customizeYaml = {
        persona: {
          principles: [], // Empty array - should NOT override
        },
      };

      const basePath = path.join(tmpDir, 'base.yaml');
      const customizePath = path.join(tmpDir, 'customize.yaml');
      await fs.writeFile(basePath, yaml.stringify(baseYaml));
      await fs.writeFile(customizePath, yaml.stringify(customizeYaml));

      const result = await builder.loadAndMergeAgent(basePath, customizePath);

      expect(result.agent.persona.principles).toEqual(['Principle 1', 'Principle 2']);
    });

    it('should append menu items from customize', async () => {
      const baseYaml = {
        agent: {
          metadata: { id: 'base', name: 'Base', title: 'Base', icon: '🔧' },
          persona: { role: 'Role', identity: 'ID', communication_style: 'Style', principles: ['P'] },
          menu: [{ trigger: 'help', description: 'Help', action: 'show_help' }],
        },
      };

      const customizeYaml = {
        menu: [{ trigger: 'custom', description: 'Custom', action: 'custom_action' }],
      };

      const basePath = path.join(tmpDir, 'base.yaml');
      const customizePath = path.join(tmpDir, 'customize.yaml');
      await fs.writeFile(basePath, yaml.stringify(baseYaml));
      await fs.writeFile(customizePath, yaml.stringify(customizeYaml));

      const result = await builder.loadAndMergeAgent(basePath, customizePath);

      expect(result.agent.menu).toHaveLength(2);
      expect(result.agent.menu[0].trigger).toBe('help');
      expect(result.agent.menu[1].trigger).toBe('custom');
    });

    it('should append critical_actions from customize', async () => {
      const baseYaml = {
        agent: {
          metadata: { id: 'base', name: 'Base', title: 'Base', icon: '🔧' },
          persona: { role: 'Role', identity: 'ID', communication_style: 'Style', principles: ['P'] },
          critical_actions: ['Action 1'],
          menu: [],
        },
      };

      const customizeYaml = {
        critical_actions: ['Action 2', 'Action 3'],
      };

      const basePath = path.join(tmpDir, 'base.yaml');
      const customizePath = path.join(tmpDir, 'customize.yaml');
      await fs.writeFile(basePath, yaml.stringify(baseYaml));
      await fs.writeFile(customizePath, yaml.stringify(customizeYaml));

      const result = await builder.loadAndMergeAgent(basePath, customizePath);

      expect(result.agent.critical_actions).toHaveLength(3);
      expect(result.agent.critical_actions).toEqual(['Action 1', 'Action 2', 'Action 3']);
    });

    it('should append prompts from customize', async () => {
      const baseYaml = {
        agent: {
          metadata: { id: 'base', name: 'Base', title: 'Base', icon: '🔧' },
          persona: { role: 'Role', identity: 'ID', communication_style: 'Style', principles: ['P'] },
          prompts: [{ id: 'p1', content: 'Prompt 1' }],
          menu: [],
        },
      };

      const customizeYaml = {
        prompts: [{ id: 'p2', content: 'Prompt 2' }],
      };

      const basePath = path.join(tmpDir, 'base.yaml');
      const customizePath = path.join(tmpDir, 'customize.yaml');
      await fs.writeFile(basePath, yaml.stringify(baseYaml));
      await fs.writeFile(customizePath, yaml.stringify(customizeYaml));

      const result = await builder.loadAndMergeAgent(basePath, customizePath);

      expect(result.agent.prompts).toHaveLength(2);
    });

    it('should handle missing customization file', async () => {
      const agentYaml = {
        agent: {
          metadata: { id: 'test', name: 'Test', title: 'Test', icon: '🔧' },
          persona: { role: 'Role', identity: 'ID', communication_style: 'Style', principles: ['P'] },
          menu: [],
        },
      };

      const agentPath = path.join(tmpDir, 'agent.yaml');
|
||||||
|
await fs.writeFile(agentPath, yaml.stringify(agentYaml));
|
||||||
|
|
||||||
|
const nonExistent = path.join(tmpDir, 'nonexistent.yaml');
|
||||||
|
const result = await builder.loadAndMergeAgent(agentPath, nonExistent);
|
||||||
|
|
||||||
|
expect(result.agent.metadata.id).toBe('test');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle legacy commands field (renamed to menu)', async () => {
|
||||||
|
const baseYaml = {
|
||||||
|
agent: {
|
||||||
|
metadata: { id: 'base', name: 'Base', title: 'Base', icon: '🔧' },
|
||||||
|
persona: { role: 'Role', identity: 'ID', communication_style: 'Style', principles: ['P'] },
|
||||||
|
commands: [{ trigger: 'old', description: 'Old', action: 'old_action' }],
|
||||||
|
},
|
||||||
|
};
|
||||||
|
|
||||||
|
const customizeYaml = {
|
||||||
|
commands: [{ trigger: 'new', description: 'New', action: 'new_action' }],
|
||||||
|
};
|
||||||
|
|
||||||
|
const basePath = path.join(tmpDir, 'base.yaml');
|
||||||
|
const customizePath = path.join(tmpDir, 'customize.yaml');
|
||||||
|
await fs.writeFile(basePath, yaml.stringify(baseYaml));
|
||||||
|
await fs.writeFile(customizePath, yaml.stringify(customizeYaml));
|
||||||
|
|
||||||
|
const result = await builder.loadAndMergeAgent(basePath, customizePath);
|
||||||
|
|
||||||
|
expect(result.agent.commands).toHaveLength(2);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should override metadata with non-empty values', async () => {
|
||||||
|
const baseYaml = {
|
||||||
|
agent: {
|
||||||
|
metadata: { id: 'base', name: 'Base Name', title: 'Base Title', icon: '🔧' },
|
||||||
|
persona: { role: 'Role', identity: 'ID', communication_style: 'Style', principles: ['P'] },
|
||||||
|
menu: [],
|
||||||
|
},
|
||||||
|
};
|
||||||
|
|
||||||
|
const customizeYaml = {
|
||||||
|
agent: {
|
||||||
|
metadata: {
|
||||||
|
name: 'Custom Name',
|
||||||
|
title: '', // Empty - should be skipped
|
||||||
|
icon: '🎯',
|
||||||
|
},
|
||||||
|
},
|
||||||
|
};
|
||||||
|
|
||||||
|
const basePath = path.join(tmpDir, 'base.yaml');
|
||||||
|
const customizePath = path.join(tmpDir, 'customize.yaml');
|
||||||
|
await fs.writeFile(basePath, yaml.stringify(baseYaml));
|
||||||
|
await fs.writeFile(customizePath, yaml.stringify(customizeYaml));
|
||||||
|
|
||||||
|
const result = await builder.loadAndMergeAgent(basePath, customizePath);
|
||||||
|
|
||||||
|
expect(result.agent.metadata.name).toBe('Custom Name');
|
||||||
|
expect(result.agent.metadata.title).toBe('Base Title'); // Preserved
|
||||||
|
expect(result.agent.metadata.icon).toBe('🎯');
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('buildPersonaXml()', () => {
|
||||||
|
it('should build complete persona XML', () => {
|
||||||
|
const persona = {
|
||||||
|
role: 'Test Role',
|
||||||
|
identity: 'Test Identity',
|
||||||
|
communication_style: 'Professional',
|
||||||
|
principles: ['Principle 1', 'Principle 2', 'Principle 3'],
|
||||||
|
};
|
||||||
|
|
||||||
|
const xml = builder.buildPersonaXml(persona);
|
||||||
|
|
||||||
|
expect(xml).toContain('<persona>');
|
||||||
|
expect(xml).toContain('</persona>');
|
||||||
|
expect(xml).toContain('<role>Test Role</role>');
|
||||||
|
expect(xml).toContain('<identity>Test Identity</identity>');
|
||||||
|
expect(xml).toContain('<communication_style>Professional</communication_style>');
|
||||||
|
expect(xml).toContain('<principles>Principle 1 Principle 2 Principle 3</principles>');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should escape XML special characters in persona', () => {
|
||||||
|
const persona = {
|
||||||
|
role: 'Role with <tags> & "quotes"',
|
||||||
|
identity: "O'Reilly's Identity",
|
||||||
|
communication_style: 'Use <code> tags',
|
||||||
|
principles: ['Principle with & ampersand'],
|
||||||
|
};
|
||||||
|
|
||||||
|
const xml = builder.buildPersonaXml(persona);
|
||||||
|
|
||||||
|
expect(xml).toContain('<tags> & "quotes"');
|
||||||
|
expect(xml).toContain('O'Reilly's Identity');
|
||||||
|
expect(xml).toContain('<code> tags');
|
||||||
|
expect(xml).toContain('& ampersand');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle principles as array', () => {
|
||||||
|
const persona = {
|
||||||
|
role: 'Role',
|
||||||
|
identity: 'ID',
|
||||||
|
communication_style: 'Style',
|
||||||
|
principles: ['P1', 'P2', 'P3'],
|
||||||
|
};
|
||||||
|
|
||||||
|
const xml = builder.buildPersonaXml(persona);
|
||||||
|
|
||||||
|
expect(xml).toContain('<principles>P1 P2 P3</principles>');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle principles as string', () => {
|
||||||
|
const persona = {
|
||||||
|
role: 'Role',
|
||||||
|
identity: 'ID',
|
||||||
|
communication_style: 'Style',
|
||||||
|
principles: 'Single principle string',
|
||||||
|
};
|
||||||
|
|
||||||
|
const xml = builder.buildPersonaXml(persona);
|
||||||
|
|
||||||
|
expect(xml).toContain('<principles>Single principle string</principles>');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should preserve Unicode in persona fields', () => {
|
||||||
|
const persona = {
|
||||||
|
role: 'Тестовая роль',
|
||||||
|
identity: '日本語のアイデンティティ',
|
||||||
|
communication_style: 'Estilo profesional',
|
||||||
|
principles: ['原则一', 'Принцип два'],
|
||||||
|
};
|
||||||
|
|
||||||
|
const xml = builder.buildPersonaXml(persona);
|
||||||
|
|
||||||
|
expect(xml).toContain('Тестовая роль');
|
||||||
|
expect(xml).toContain('日本語のアイデンティティ');
|
||||||
|
expect(xml).toContain('Estilo profesional');
|
||||||
|
expect(xml).toContain('原则一 Принцип два');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle missing persona gracefully', () => {
|
||||||
|
const xml = builder.buildPersonaXml(null);
|
||||||
|
|
||||||
|
expect(xml).toBe('');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle partial persona (missing optional fields)', () => {
|
||||||
|
const persona = {
|
||||||
|
role: 'Role',
|
||||||
|
identity: 'ID',
|
||||||
|
communication_style: 'Style',
|
||||||
|
// principles missing
|
||||||
|
};
|
||||||
|
|
||||||
|
const xml = builder.buildPersonaXml(persona);
|
||||||
|
|
||||||
|
expect(xml).toContain('<role>Role</role>');
|
||||||
|
expect(xml).toContain('<identity>ID</identity>');
|
||||||
|
expect(xml).toContain('<communication_style>Style</communication_style>');
|
||||||
|
expect(xml).not.toContain('<principles>');
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('buildMemoriesXml()', () => {
|
||||||
|
it('should build memories XML from array', () => {
|
||||||
|
const memories = ['Memory 1', 'Memory 2', 'Memory 3'];
|
||||||
|
|
||||||
|
const xml = builder.buildMemoriesXml(memories);
|
||||||
|
|
||||||
|
expect(xml).toContain('<memories>');
|
||||||
|
expect(xml).toContain('</memories>');
|
||||||
|
expect(xml).toContain('<memory>Memory 1</memory>');
|
||||||
|
expect(xml).toContain('<memory>Memory 2</memory>');
|
||||||
|
expect(xml).toContain('<memory>Memory 3</memory>');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should escape XML special characters in memories', () => {
|
||||||
|
const memories = ['Memory with <tags>', 'Memory with & ampersand', 'Memory with "quotes"'];
|
||||||
|
|
||||||
|
const xml = builder.buildMemoriesXml(memories);
|
||||||
|
|
||||||
|
expect(xml).toContain('<tags>');
|
||||||
|
expect(xml).toContain('& ampersand');
|
||||||
|
expect(xml).toContain('"quotes"');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should return empty string for null memories', () => {
|
||||||
|
expect(builder.buildMemoriesXml(null)).toBe('');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should return empty string for empty array', () => {
|
||||||
|
expect(builder.buildMemoriesXml([])).toBe('');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle Unicode in memories', () => {
|
||||||
|
const memories = ['记忆 1', 'Память 2', '記憶 3'];
|
||||||
|
|
||||||
|
const xml = builder.buildMemoriesXml(memories);
|
||||||
|
|
||||||
|
expect(xml).toContain('记忆 1');
|
||||||
|
expect(xml).toContain('Память 2');
|
||||||
|
expect(xml).toContain('記憶 3');
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('buildPromptsXml()', () => {
|
||||||
|
it('should build prompts XML from array format', () => {
|
||||||
|
const prompts = [
|
||||||
|
{ id: 'p1', content: 'Prompt 1 content' },
|
||||||
|
{ id: 'p2', content: 'Prompt 2 content' },
|
||||||
|
];
|
||||||
|
|
||||||
|
const xml = builder.buildPromptsXml(prompts);
|
||||||
|
|
||||||
|
expect(xml).toContain('<prompts>');
|
||||||
|
expect(xml).toContain('</prompts>');
|
||||||
|
expect(xml).toContain('<prompt id="p1">');
|
||||||
|
expect(xml).toContain('<content>');
|
||||||
|
expect(xml).toContain('Prompt 1 content');
|
||||||
|
expect(xml).toContain('<prompt id="p2">');
|
||||||
|
expect(xml).toContain('Prompt 2 content');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should escape XML special characters in prompts', () => {
|
||||||
|
const prompts = [{ id: 'test', content: 'Content with <tags> & "quotes"' }];
|
||||||
|
|
||||||
|
const xml = builder.buildPromptsXml(prompts);
|
||||||
|
|
||||||
|
expect(xml).toContain('<content>');
|
||||||
|
expect(xml).toContain('<tags> & "quotes"');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should return empty string for null prompts', () => {
|
||||||
|
expect(builder.buildPromptsXml(null)).toBe('');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle Unicode in prompts', () => {
|
||||||
|
const prompts = [{ id: 'unicode', content: 'Test 测试 тест テスト' }];
|
||||||
|
|
||||||
|
const xml = builder.buildPromptsXml(prompts);
|
||||||
|
|
||||||
|
expect(xml).toContain('<content>');
|
||||||
|
expect(xml).toContain('测试 тест テスト');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle object/dictionary format prompts', () => {
|
||||||
|
const prompts = {
|
||||||
|
p1: 'Prompt 1 content',
|
||||||
|
p2: 'Prompt 2 content',
|
||||||
|
};
|
||||||
|
|
||||||
|
const xml = builder.buildPromptsXml(prompts);
|
||||||
|
|
||||||
|
expect(xml).toContain('<prompts>');
|
||||||
|
expect(xml).toContain('<prompt id="p1">');
|
||||||
|
expect(xml).toContain('Prompt 1 content');
|
||||||
|
expect(xml).toContain('<prompt id="p2">');
|
||||||
|
expect(xml).toContain('Prompt 2 content');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should return empty string for empty array', () => {
|
||||||
|
expect(builder.buildPromptsXml([])).toBe('');
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('calculateFileHash()', () => {
|
||||||
|
it('should calculate MD5 hash of file content', async () => {
|
||||||
|
const content = 'test content for hashing';
|
||||||
|
const filePath = await createTestFile(tmpDir, 'test.txt', content);
|
||||||
|
|
||||||
|
const hash = await builder.calculateFileHash(filePath);
|
||||||
|
|
||||||
|
expect(hash).toHaveLength(8); // MD5 truncated to 8 chars
|
||||||
|
expect(hash).toMatch(/^[a-f0-9]{8}$/);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should return consistent hash for same content', async () => {
|
||||||
|
const file1 = await createTestFile(tmpDir, 'file1.txt', 'content');
|
||||||
|
const file2 = await createTestFile(tmpDir, 'file2.txt', 'content');
|
||||||
|
|
||||||
|
const hash1 = await builder.calculateFileHash(file1);
|
||||||
|
const hash2 = await builder.calculateFileHash(file2);
|
||||||
|
|
||||||
|
expect(hash1).toBe(hash2);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should return null for non-existent file', async () => {
|
||||||
|
const nonExistent = path.join(tmpDir, 'missing.txt');
|
||||||
|
|
||||||
|
const hash = await builder.calculateFileHash(nonExistent);
|
||||||
|
|
||||||
|
expect(hash).toBeNull();
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle empty file', async () => {
|
||||||
|
const file = await createTestFile(tmpDir, 'empty.txt', '');
|
||||||
|
|
||||||
|
const hash = await builder.calculateFileHash(file);
|
||||||
|
|
||||||
|
expect(hash).toHaveLength(8);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
@@ -0,0 +1,84 @@
import { describe, it, expect } from 'vitest';
import { escapeXml } from '../../../tools/lib/xml-utils.js';

describe('xml-utils', () => {
  describe('escapeXml()', () => {
    it('should escape ampersand (&) to &amp;', () => {
      expect(escapeXml('Tom & Jerry')).toBe('Tom &amp; Jerry');
    });

    it('should escape less than (<) to &lt;', () => {
      expect(escapeXml('5 < 10')).toBe('5 &lt; 10');
    });

    it('should escape greater than (>) to &gt;', () => {
      expect(escapeXml('10 > 5')).toBe('10 &gt; 5');
    });

    it('should escape double quote (") to &quot;', () => {
      expect(escapeXml('He said "hello"')).toBe('He said &quot;hello&quot;');
    });

    it("should escape single quote (') to &apos;", () => {
      expect(escapeXml("It's working")).toBe('It&apos;s working');
    });

    it('should preserve Unicode characters', () => {
      expect(escapeXml('Hello 世界 🌍')).toBe('Hello 世界 🌍');
    });

    it('should escape multiple special characters in sequence', () => {
      expect(escapeXml('<tag attr="value">')).toBe('&lt;tag attr=&quot;value&quot;&gt;');
    });

    it('should escape all five special characters together', () => {
      expect(escapeXml(`&<>"'`)).toBe('&amp;&lt;&gt;&quot;&apos;');
    });

    it('should handle empty string', () => {
      expect(escapeXml('')).toBe('');
    });

    it('should handle null', () => {
      expect(escapeXml(null)).toBe('');
    });

    it('should handle undefined', () => {
      expect(escapeXml()).toBe('');
    });

    it('should handle text with no special characters', () => {
      expect(escapeXml('Hello World')).toBe('Hello World');
    });

    it('should handle text that is only special characters', () => {
      expect(escapeXml('&&&')).toBe('&amp;&amp;&amp;');
    });

    it('should not double-escape already escaped entities', () => {
      // Note: This is expected behavior - the function WILL double-escape
      // This test documents the actual behavior
      expect(escapeXml('&amp;')).toBe('&amp;amp;');
    });

    it('should escape special characters in XML content', () => {
      const xmlContent = '<persona role="Developer & Architect">Use <code> tags</persona>';
      const expected = '&lt;persona role=&quot;Developer &amp; Architect&quot;&gt;Use &lt;code&gt; tags&lt;/persona&gt;';
      expect(escapeXml(xmlContent)).toBe(expected);
    });

    it('should handle mixed Unicode and special characters', () => {
      expect(escapeXml('测试 <tag> & "quotes"')).toBe('测试 &lt;tag&gt; &amp; &quot;quotes&quot;');
    });

    it('should handle newlines and special characters', () => {
      const multiline = 'Line 1 & text\n<Line 2>\n"Line 3"';
      const expected = 'Line 1 &amp; text\n&lt;Line 2&gt;\n&quot;Line 3&quot;';
      expect(escapeXml(multiline)).toBe(expected);
    });

    it('should handle string with only whitespace', () => {
      expect(escapeXml(' ')).toBe(' ');
    });
  });
});
@@ -27,6 +27,9 @@ const LINK_REGEX = /\[([^\]]*)\]\((\/[^)]+)\)/g;
// File extensions that are static assets, not markdown docs
const STATIC_ASSET_EXTENSIONS = ['.zip', '.txt', '.pdf', '.png', '.jpg', '.jpeg', '.gif', '.svg', '.webp', '.ico'];

// Custom Astro page routes (not part of the docs content collection)
const CUSTOM_PAGE_ROUTES = new Set(['/workflow-guide']);

// Regex to extract headings for anchor validation
const HEADING_PATTERN = /^#{1,6}\s+(.+)$/gm;

@@ -210,6 +213,11 @@ function processFile(filePath) {
      continue;
    }

    // Skip custom Astro page routes
    if (CUSTOM_PAGE_ROUTES.has(linkPath)) {
      continue;
    }

    // Validate the link target exists
    const targetFile = resolveLink(linkPath);

@@ -0,0 +1,51 @@
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    // Test file patterns
    include: ['test/unit/**/*.test.js', 'test/integration/**/*.test.js'],
    exclude: ['test/test-*.js', 'node_modules/**'],

    // Timeouts
    testTimeout: 10_000, // 10s for unit tests
    hookTimeout: 30_000, // 30s for setup/teardown

    // Parallel execution for speed
    threads: true,
    maxThreads: 4,

    // Coverage configuration (using V8)
    coverage: {
      provider: 'v8',
      reporter: ['text', 'html', 'lcov', 'json-summary'],

      // Files to include in coverage
      include: ['tools/**/*.js', 'src/**/*.js'],

      // Files to exclude from coverage
      exclude: [
        'test/**',
        'tools/flattener/**', // Separate concern
        'tools/bmad-npx-wrapper.js', // Entry point
        'tools/build-docs.js', // Documentation tools
        'tools/check-doc-links.js', // Documentation tools
        '**/*.config.js', // Configuration files
      ],

      // Include all files for accurate coverage
      all: true,

      // Coverage thresholds (fail if below these)
      statements: 85,
      branches: 80,
      functions: 85,
      lines: 85,
    },

    // Global setup file
    setupFiles: ['./test/setup.js'],

    // Environment
    environment: 'node',
  },
});
@@ -0,0 +1,444 @@
---
---

<div class="workflow-guide not-content">
  <div class="help-callout">
    <code>/bmad-help</code>
    <span>Run this anytime to see what to do next — or ask it a question like <em>"what should I do to build a web app?"</em></span>
  </div>

  <p class="agents-note">Loading agents is optional. If your IDE supports slash commands, you can run workflows directly.</p>

  <div class="track-selector">
    <div class="track-buttons">
      <button class="track-btn" data-track="quick">
        <span class="track-label">Quick Flow</span>
        <span class="track-desc">Bug fixes, small features</span>
      </button>
      <button class="track-btn" data-track="method">
        <span class="track-label">BMad Method</span>
        <span class="track-desc">Products, platforms</span>
      </button>
      <button class="track-btn" data-track="enterprise">
        <span class="track-label">Enterprise</span>
        <span class="track-desc">Compliance, large-scale</span>
      </button>
    </div>
  </div>

  <div class="flow-container" data-state="no-track">
    <p class="choose-prompt">Select a track above to see the workflow.</p>

    <div class="flow-list">
      <!-- Phase 1: Analysis -->
      <div class="phase-label">Analysis</div>

      <div class="flow-row" data-quick="optional" data-method="optional" data-enterprise="optional">
        <code class="flow-cmd">/brainstorm-project</code>
        <span class="flow-agent">Analyst</span>
        <span class="flow-status"></span>
        <p class="flow-desc">Guided ideation using 60+ techniques to explore your project idea and create brainstorm notes.</p>
      </div>
      <div class="flow-arrow" data-quick="optional" data-method="optional" data-enterprise="optional">↓</div>

      <div class="flow-row" data-quick="optional" data-method="optional" data-enterprise="optional">
        <code class="flow-cmd">/research</code>
        <span class="flow-agent">Analyst</span>
        <span class="flow-status"></span>
        <p class="flow-desc">Market, technical, or competitive research producing a structured research document.</p>
      </div>
      <div class="flow-arrow" data-quick="optional" data-method="optional" data-enterprise="optional">↓</div>

      <div class="flow-row" data-quick="optional" data-method="optional" data-enterprise="optional">
        <code class="flow-cmd">/product-brief</code>
        <span class="flow-agent">Analyst</span>
        <span class="flow-status"></span>
        <p class="flow-desc">Combines brainstorm and research into a foundation document covering problem, users, and MVP scope.</p>
      </div>
      <div class="flow-arrow" data-quick="optional" data-method="required" data-enterprise="required">↓</div>

      <!-- Phase 2: Planning -->
      <div class="phase-label">Planning</div>

      <div class="flow-row" data-quick="required" data-method="skip" data-enterprise="skip">
        <code class="flow-cmd">/quick-spec</code>
        <span class="flow-agent">Barry</span>
        <span class="flow-status"></span>
        <p class="flow-desc">Analyzes your codebase, auto-detects stack, and produces tech-spec.md with implementation-ready story files.</p>
      </div>
      <div class="flow-arrow" data-quick="required" data-method="skip" data-enterprise="skip">↓</div>

      <div class="flow-row" data-quick="skip" data-method="required" data-enterprise="required">
        <code class="flow-cmd">/create-prd</code>
        <span class="flow-agent">PM</span>
        <span class="flow-status"></span>
        <p class="flow-desc">Creates PRD.md with user personas, requirements, success metrics, and risks.</p>
      </div>
      <div class="flow-arrow" data-quick="skip" data-method="required" data-enterprise="required">↓</div>

      <div class="flow-row" data-quick="optional" data-method="optional" data-enterprise="optional">
        <code class="flow-cmd">/create-ux-design</code>
        <span class="flow-agent">UX Designer</span>
        <span class="flow-status"></span>
        <p class="flow-desc">Creates ux-design.md with user journeys, wireframes, and a design system.</p>
      </div>
      <div class="flow-arrow" data-quick="required" data-method="required" data-enterprise="required">↓</div>

      <!-- Phase 3: Solutioning -->
      <div class="phase-label" data-quick="skip" data-method="required" data-enterprise="required">Solutioning</div>

      <div class="flow-row" data-quick="skip" data-method="required" data-enterprise="required">
        <code class="flow-cmd">/create-architecture</code>
        <span class="flow-agent">Architect</span>
        <span class="flow-status"></span>
        <p class="flow-desc">Designs system architecture with ADRs covering data, API, security, and deployment decisions.</p>
      </div>
      <div class="flow-arrow" data-quick="skip" data-method="required" data-enterprise="required">↓</div>

      <div class="flow-row" data-quick="skip" data-method="required" data-enterprise="required">
        <code class="flow-cmd">/create-epics-and-stories</code>
        <span class="flow-agent">PM</span>
        <span class="flow-status"></span>
        <p class="flow-desc">Breaks PRD and architecture into epic files with prioritized, technically-informed stories.</p>
      </div>
      <div class="flow-arrow" data-quick="skip" data-method="required" data-enterprise="required">↓</div>

      <div class="flow-row" data-quick="skip" data-method="optional" data-enterprise="required">
        <code class="flow-cmd">/implementation-readiness</code>
        <span class="flow-agent">Architect</span>
        <span class="flow-status"></span>
        <p class="flow-desc">Validates cohesion across all planning documents to confirm you're ready to build.</p>
      </div>
      <div class="flow-arrow" data-quick="skip" data-method="required" data-enterprise="required">↓</div>

      <!-- Phase 4: Implementation -->
      <div class="phase-label">Implementation</div>

      <div class="flow-row" data-quick="skip" data-method="required" data-enterprise="required">
        <code class="flow-cmd">/sprint-planning</code>
        <span class="flow-agent">SM</span>
        <span class="flow-status"></span>
        <p class="flow-desc">Initializes sprint-status.yaml to track all stories through development. Run once.</p>
      </div>
      <div class="flow-arrow" data-quick="skip" data-method="required" data-enterprise="required">↓</div>

      <!-- Dev loop -->
      <div class="dev-loop" data-quick="required" data-method="required" data-enterprise="required">
        <span class="loop-label">↻ Repeat for each story</span>

        <div class="flow-row" data-quick="skip" data-method="required" data-enterprise="required">
          <code class="flow-cmd">/create-story</code>
          <span class="flow-agent">SM</span>
          <span class="flow-status"></span>
          <p class="flow-desc">Prepares a story file with full context and acceptance criteria from the epic.</p>
        </div>
        <div class="flow-arrow" data-quick="required" data-method="required" data-enterprise="required">↓</div>

        <div class="flow-row" data-quick="required" data-method="required" data-enterprise="required">
          <code class="flow-cmd">/dev-story</code>
          <span class="flow-agent">DEV</span>
          <span class="flow-status"></span>
          <p class="flow-desc">Implements production code and tests following architecture patterns.</p>
        </div>
        <div class="flow-arrow" data-quick="required" data-method="required" data-enterprise="required">↓</div>

        <div class="flow-row" data-quick="optional" data-method="required" data-enterprise="required">
          <code class="flow-cmd">/code-review</code>
          <span class="flow-agent">DEV</span>
          <span class="flow-status"></span>
          <p class="flow-desc">Reviews code for quality, architecture alignment, tests, and security.</p>
        </div>
      </div>
      <div class="flow-arrow" data-quick="skip" data-method="optional" data-enterprise="optional">↓</div>

      <div class="flow-row" data-quick="skip" data-method="optional" data-enterprise="optional">
        <code class="flow-cmd">/epic-retrospective</code>
        <span class="flow-agent">SM</span>
        <span class="flow-status"></span>
        <p class="flow-desc">Captures learnings from a completed epic to improve the next one.</p>
      </div>
    </div>
  </div>
</div>

<style>
  .workflow-guide {
    max-width: 36rem;
    margin: 0 auto;
  }

  .help-callout {
    display: flex;
    align-items: baseline;
    gap: 0.75rem;
    padding: 0.75rem 1rem;
    border: 2px solid var(--sl-color-accent);
    border-radius: 0.5rem;
    background: var(--sl-color-accent-low);
    margin-bottom: 1rem;
  }

  .help-callout code {
    font-size: 1.05rem;
    font-weight: 700;
    color: var(--sl-color-accent-high);
    white-space: nowrap;
    background: none;
    padding: 0;
  }

  .help-callout span {
    font-size: 0.85rem;
    color: var(--sl-color-gray-2);
    line-height: 1.4;
  }

  .agents-note {
    font-size: 0.8rem;
    color: var(--sl-color-gray-3);
    margin-bottom: 1rem;
    font-style: italic;
  }

  .track-selector {
    margin-bottom: 1.25rem;
  }

  .track-buttons {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
    gap: 0.5rem;
  }

  .track-btn {
    width: 100%;
    box-sizing: border-box;
    padding: 0.5rem 0.6rem;
    border: 1px solid var(--sl-color-gray-5);
    border-radius: 0.4rem;
    background: transparent;
    cursor: pointer;
    text-align: left;
    transition: border-color 0.15s ease, background-color 0.15s ease;
    display: flex;
    flex-direction: column;
    gap: 0.1rem;
  }

  .track-btn:hover {
    border-color: var(--sl-color-accent);
  }

  .track-btn.active {
    border-color: var(--sl-color-accent);
    background: var(--sl-color-accent-low);
  }

  .track-label {
    font-weight: 600;
    font-size: 0.85rem;
    color: var(--sl-color-white);
  }

  .track-desc {
    font-size: 0.7rem;
    color: var(--sl-color-gray-3);
  }

  .flow-container[data-state="no-track"] .flow-list {
    display: none;
  }

  .flow-container[data-state="no-track"] .choose-prompt {
    display: block;
  }

  .choose-prompt {
    display: none;
    color: var(--sl-color-gray-3);
    font-style: italic;
    text-align: center;
    padding: 1.5rem 0;
  }

  .flow-list {
    display: flex;
    flex-direction: column;
    align-items: stretch;
  }

  .phase-label {
    font-size: 0.65rem;
    font-weight: 700;
    text-transform: uppercase;
    letter-spacing: 0.06em;
    color: var(--sl-color-gray-3);
    margin-top: 0.75rem;
    margin-bottom: 0.3rem;
    padding-left: 0.25rem;
  }

  .phase-label:first-child {
    margin-top: 0;
  }

  .phase-label[data-visibility="skip"] {
    display: none;
  }

  .flow-row {
    display: grid;
    grid-template-columns: 1fr auto auto;
    align-items: center;
    gap: 0.75rem;
    padding: 0.4rem 0.6rem;
    border: 1px solid var(--sl-color-gray-5);
    border-radius: 0.3rem;
    transition: opacity 0.15s ease, border-color 0.15s ease;
  }

  .flow-row[data-visibility="active"] {
    border-color: var(--sl-color-gray-4);
  }

  .flow-row[data-visibility="skip"] {
    display: none;
  }

  .flow-cmd {
    font-size: 0.9rem;
    font-weight: 600;
    color: var(--sl-color-white);
    background: none;
    padding: 0;
  }

  .flow-agent {
    font-size: 0.75rem;
    color: var(--sl-color-accent-high);
    background: var(--sl-color-accent-low);
    padding: 0.1rem 0.4rem;
|
||||||
|
border-radius: 0.2rem;
|
||||||
|
white-space: nowrap;
|
||||||
|
}
|
||||||
|
|
||||||
|
.flow-status {
|
||||||
|
font-size: 0.65rem;
|
||||||
|
font-weight: 500;
|
||||||
|
text-transform: uppercase;
|
||||||
|
letter-spacing: 0.03em;
|
||||||
|
min-width: 4rem;
|
||||||
|
text-align: right;
|
||||||
|
}
|
||||||
|
|
||||||
|
.flow-status[data-status="required"] {
|
||||||
|
color: var(--sl-color-accent-high);
|
||||||
|
}
|
||||||
|
|
||||||
|
.flow-status[data-status="optional"] {
|
||||||
|
color: var(--sl-color-gray-3);
|
||||||
|
}
|
||||||
|
|
||||||
|
.flow-status[data-status="recommended"] {
|
||||||
|
color: var(--sl-color-accent-high);
|
||||||
|
}
|
||||||
|
|
||||||
|
.flow-desc {
|
||||||
|
grid-column: 1 / -1;
|
||||||
|
font-size: 0.75rem;
|
||||||
|
color: var(--sl-color-gray-3);
|
||||||
|
line-height: 1.4;
|
||||||
|
margin: 0.1rem 0 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
.flow-arrow {
|
||||||
|
text-align: center;
|
||||||
|
font-size: 1rem;
|
||||||
|
color: var(--sl-color-gray-4);
|
||||||
|
line-height: 1;
|
||||||
|
padding: 0.15rem 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
.flow-arrow[data-visibility="skip"] {
|
||||||
|
display: none;
|
||||||
|
}
|
||||||
|
|
||||||
|
.dev-loop {
|
||||||
|
border: 1px dashed var(--sl-color-accent);
|
||||||
|
border-radius: 0.4rem;
|
||||||
|
padding: 0.4rem 0.5rem;
|
||||||
|
display: flex;
|
||||||
|
flex-direction: column;
|
||||||
|
align-items: stretch;
|
||||||
|
}
|
||||||
|
|
||||||
|
.dev-loop[data-visibility="skip"] {
|
||||||
|
display: none;
|
||||||
|
}
|
||||||
|
|
||||||
|
.dev-loop .flow-row {
|
||||||
|
border-color: var(--sl-color-gray-5);
|
||||||
|
}
|
||||||
|
|
||||||
|
.loop-label {
|
||||||
|
font-size: 0.65rem;
|
||||||
|
font-weight: 600;
|
||||||
|
color: var(--sl-color-accent-high);
|
||||||
|
text-transform: uppercase;
|
||||||
|
letter-spacing: 0.04em;
|
||||||
|
margin-bottom: 0.3rem;
|
||||||
|
}
|
||||||
|
|
||||||
|
@media (max-width: 30rem) {
|
||||||
|
.track-buttons {
|
||||||
|
grid-template-columns: 1fr;
|
||||||
|
}
|
||||||
|
|
||||||
|
.help-callout {
|
||||||
|
flex-direction: column;
|
||||||
|
gap: 0.4rem;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
</style>
|
||||||
|

<script>
  function initWorkflowGuide() {
    const guide = document.querySelector('.workflow-guide');
    if (!guide) return;

    const trackBtns = guide.querySelectorAll('.track-btn');
    const flowContainer = guide.querySelector('.flow-container');
    const allElements = guide.querySelectorAll('[data-quick]');

    trackBtns.forEach((btn) => {
      btn.addEventListener('click', () => {
        const track = btn.getAttribute('data-track');

        trackBtns.forEach((b) => b.classList.remove('active'));
        btn.classList.add('active');

        flowContainer.setAttribute('data-state', 'has-track');

        allElements.forEach((el) => {
          const status = el.getAttribute(`data-${track}`);

          if (status === 'skip') {
            el.setAttribute('data-visibility', 'skip');
          } else {
            el.setAttribute('data-visibility', 'active');
          }

          // Update status text for flow rows
          const statusEl = el.querySelector('.flow-status');
          if (statusEl) {
            statusEl.setAttribute('data-status', status);
            statusEl.textContent = status === 'skip' ? '' : status;
          }
        });
      });
    });
  }

  document.addEventListener('DOMContentLoaded', initWorkflowGuide);
  document.addEventListener('astro:page-load', initWorkflowGuide);
</script>

@@ -0,0 +1,17 @@
---
import StarlightPage from '@astrojs/starlight/components/StarlightPage.astro';
import WorkflowGuide from '../components/WorkflowGuide.astro';
---

<StarlightPage
  frontmatter={{
    title: 'Workflow Guide',
    description: 'Interactive guide to the BMad Method workflow — choose your track and see the recommended phases, agents, and outputs.',
    tableOfContents: false,
  }}
>
  <p>
    This interactive guide helps you understand which workflows to run, which agents to use, and what outputs to expect at each phase. Select your project's track to see the relevant path.
  </p>
  <WorkflowGuide />
</StarlightPage>