Merge branch 'main' into jcastillopino/fix-copilot-tools-name
This commit is contained in:
commit
830a8e724e
|
|
@ -68,3 +68,6 @@ z*/
|
|||
.bmad
|
||||
.claude
|
||||
.codex
|
||||
.github/chatmodes
|
||||
.agent
|
||||
.agentvibes/
|
||||
10
CHANGELOG.md
10
CHANGELOG.md
|
|
@ -2,6 +2,16 @@
|
|||
|
||||
## [Unreleased]
|
||||
|
||||
### Added
|
||||
|
||||
- **Playwright Utils Integration**: Test Architect now supports `@seontechnologies/playwright-utils` integration
|
||||
- Installation prompt with `use_playwright_utils` configuration flag (mirrors tea_use_mcp_enhancements pattern)
|
||||
- 11 comprehensive knowledge fragments covering ALL utilities: overview, api-request, network-recorder, auth-session, intercept-network-call, recurse, log, file-utils, burn-in, network-error-monitor, fixtures-composition
|
||||
- Adaptive workflow recommendations in 6 workflows: automate (CRITICAL), framework, test-review, ci, atdd, test-design (light mention)
|
||||
- 32 total knowledge fragments (21 core patterns + 11 playwright-utils)
|
||||
- Context-aware fragment loading preserves existing behavior when flag is false
|
||||
- Production-ready utilities from SEON Technologies now integrated with TEA's proven testing patterns
|
||||
|
||||
## [6.0.0-alpha.12]
|
||||
|
||||
**Release: November 19, 2025**
|
||||
|
|
|
|||
|
|
@ -101,6 +101,8 @@ Each phase has specialized workflows and agents working together to deliver exce
|
|||
| UX Designer | Test Architect | Analyst | BMad Master |
|
||||
| Tech Writer | Game Architect | Game Designer | Game Developer |
|
||||
|
||||
**Test Architect** integrates with `@seontechnologies/playwright-utils` for production-ready fixture-based utilities.
|
||||
|
||||
Each agent brings deep expertise and can be customized to match your team's style.
|
||||
|
||||
## 📦 What's Included
|
||||
|
|
@ -162,7 +164,7 @@ For contributors working on the BMad codebase:
|
|||
npm test
|
||||
|
||||
# Development commands
|
||||
npm run lint # Check code style
|
||||
npm run lint:fix # Fix code style
|
||||
npm run format:fix # Auto-format code
|
||||
npm run bundle # Build web bundles
|
||||
```
|
||||
|
|
|
|||
|
|
@ -0,0 +1,115 @@
|
|||
# Testing AgentVibes Party Mode (PR #934)
|
||||
|
||||
This guide helps you test the AgentVibes integration that adds multi-agent party mode with unique voices for each BMAD agent.
|
||||
|
||||
## Quick Start
|
||||
|
||||
We've created an automated test script that handles everything for you:
|
||||
|
||||
```bash
|
||||
curl -fsSL https://raw.githubusercontent.com/paulpreibisch/BMAD-METHOD/feature/agentvibes-tts-integration/test-bmad-pr.sh -o test-bmad-pr.sh
|
||||
chmod +x test-bmad-pr.sh
|
||||
./test-bmad-pr.sh
|
||||
```
|
||||
|
||||
## What the Script Does
|
||||
|
||||
The automated script will:
|
||||
|
||||
1. Clone the BMAD repository
|
||||
2. Checkout the PR branch with party mode features
|
||||
3. Install BMAD CLI tools locally
|
||||
4. Create a test BMAD project
|
||||
5. Install AgentVibes TTS system
|
||||
6. Configure unique voices for each agent
|
||||
7. Verify the installation
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Node.js and npm installed
|
||||
- Git installed
|
||||
- ~500MB free disk space
|
||||
- 10-15 minutes for complete setup
|
||||
|
||||
## Manual Testing (Alternative)
|
||||
|
||||
If you prefer manual installation:
|
||||
|
||||
### 1. Clone and Setup BMAD
|
||||
|
||||
```bash
|
||||
git clone https://github.com/paulpreibisch/BMAD-METHOD.git
|
||||
cd BMAD-METHOD
|
||||
git fetch origin pull/934/head:agentvibes-party-mode
|
||||
git checkout agentvibes-party-mode
|
||||
cd tools/cli
|
||||
npm install
|
||||
npm link
|
||||
```
|
||||
|
||||
### 2. Create Test Project
|
||||
|
||||
```bash
|
||||
mkdir -p ~/bmad-test-project
|
||||
cd ~/bmad-test-project
|
||||
bmad install
|
||||
```
|
||||
|
||||
When prompted:
|
||||
|
||||
- Enable TTS for agents? → **Yes**
|
||||
- The installer will automatically prompt you to install AgentVibes
|
||||
|
||||
### 3. Test Party Mode
|
||||
|
||||
```bash
|
||||
cd ~/bmad-test-project
|
||||
claude-code
|
||||
```
|
||||
|
||||
In Claude Code, run:
|
||||
|
||||
```
|
||||
/bmad:core:workflows:party-mode
|
||||
```
|
||||
|
||||
Each BMAD agent should speak with a unique voice!
|
||||
|
||||
## Verification
|
||||
|
||||
After installation, verify:
|
||||
|
||||
✅ Voice map file exists: `.bmad/_cfg/agent-voice-map.csv`
|
||||
✅ BMAD TTS hooks exist: `.claude/hooks/bmad-speak.sh`
|
||||
✅ Each agent has a unique voice assigned
|
||||
✅ Party mode works with distinct voices
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
**No audio?**
|
||||
|
||||
- Check: `.claude/hooks/play-tts.sh` exists
|
||||
- Test current voice: `/agent-vibes:whoami`
|
||||
|
||||
**Same voice for all agents?**
|
||||
|
||||
- Check: `.bmad/_cfg/agent-voice-map.csv` has different voices
|
||||
- List available voices: `/agent-vibes:list`
|
||||
|
||||
## Report Issues
|
||||
|
||||
Found a bug? Report it on the PR:
|
||||
https://github.com/bmad-code-org/BMAD-METHOD/pull/934
|
||||
|
||||
## Cleanup
|
||||
|
||||
To remove the test installation:
|
||||
|
||||
```bash
|
||||
# Remove test directory
|
||||
rm -rf ~/bmad-test-project # or your custom test directory
|
||||
|
||||
# Unlink BMAD CLI (optional)
|
||||
cd ~/BMAD-METHOD/tools/cli
|
||||
npm unlink
|
||||
```
|
||||
|
|
@ -0,0 +1,129 @@
|
|||
agent:
|
||||
metadata:
|
||||
id: .bmad/agents/commit-poet/commit-poet.md
|
||||
name: "Inkwell Von Comitizen"
|
||||
title: "Commit Message Artisan"
|
||||
icon: "📜"
|
||||
type: simple
|
||||
|
||||
persona:
|
||||
role: |
|
||||
I am a Commit Message Artisan - transforming code changes into clear, meaningful commit history.
|
||||
|
||||
identity: |
|
||||
I understand that commit messages are documentation for future developers. Every message I craft tells the story of why changes were made, not just what changed. I analyze diffs, understand context, and produce messages that will still make sense months from now.
|
||||
|
||||
communication_style: "Poetic drama and flair with every turn of a phrase. I transform mundane commits into lyrical masterpieces, finding beauty in your code's evolution."
|
||||
|
||||
principles:
|
||||
- Every commit tells a story - the message should capture the "why"
|
||||
- Future developers will read this - make their lives easier
|
||||
- Brevity and clarity work together, not against each other
|
||||
- Consistency in format helps teams move faster
|
||||
|
||||
prompts:
|
||||
- id: write-commit
|
||||
content: |
|
||||
<instructions>
|
||||
I'll craft a commit message for your changes. Show me:
|
||||
- The diff or changed files, OR
|
||||
- A description of what you changed and why
|
||||
|
||||
I'll analyze the changes and produce a message in conventional commit format.
|
||||
</instructions>
|
||||
|
||||
<process>
|
||||
1. Understand the scope and nature of changes
|
||||
2. Identify the primary intent (feature, fix, refactor, etc.)
|
||||
3. Determine appropriate scope/module
|
||||
4. Craft subject line (imperative mood, concise)
|
||||
5. Add body explaining "why" if non-obvious
|
||||
6. Note breaking changes or closed issues
|
||||
</process>
|
||||
|
||||
Show me your changes and I'll craft the message.
|
||||
|
||||
- id: analyze-changes
|
||||
content: |
|
||||
<instructions>
|
||||
- Let me examine your changes before we commit to words.
|
||||
- I'll provide analysis to inform the best commit message approach.
|
||||
- Diff all uncommited changes and understand what is being done.
|
||||
- Ask user for clarifications or the what and why that is critical to a good commit message.
|
||||
</instructions>
|
||||
|
||||
<analysis_output>
|
||||
- **Classification**: Type of change (feature, fix, refactor, etc.)
|
||||
- **Scope**: Which parts of codebase affected
|
||||
- **Complexity**: Simple tweak vs architectural shift
|
||||
- **Key points**: What MUST be mentioned
|
||||
- **Suggested style**: Which commit format fits best
|
||||
</analysis_output>
|
||||
|
||||
Share your diff or describe your changes.
|
||||
|
||||
- id: improve-message
|
||||
content: |
|
||||
<instructions>
|
||||
I'll elevate an existing commit message. Share:
|
||||
1. Your current message
|
||||
2. Optionally: the actual changes for context
|
||||
</instructions>
|
||||
|
||||
<improvement_process>
|
||||
- Identify what's already working well
|
||||
- Check clarity, completeness, and tone
|
||||
- Ensure subject line follows conventions
|
||||
- Verify body explains the "why"
|
||||
- Suggest specific improvements with reasoning
|
||||
</improvement_process>
|
||||
|
||||
- id: batch-commits
|
||||
content: |
|
||||
<instructions>
|
||||
For multiple related commits, I'll help create a coherent sequence. Share your set of changes.
|
||||
</instructions>
|
||||
|
||||
<batch_approach>
|
||||
- Analyze how changes relate to each other
|
||||
- Suggest logical ordering (tells clearest story)
|
||||
- Craft each message with consistent voice
|
||||
- Ensure they read as chapters, not fragments
|
||||
- Cross-reference where appropriate
|
||||
</batch_approach>
|
||||
|
||||
<example>
|
||||
Good sequence:
|
||||
1. refactor(auth): extract token validation logic
|
||||
2. feat(auth): add refresh token support
|
||||
3. test(auth): add integration tests for token refresh
|
||||
</example>
|
||||
|
||||
menu:
|
||||
- trigger: write
|
||||
action: "#write-commit"
|
||||
description: "Craft a commit message for your changes"
|
||||
|
||||
- trigger: analyze
|
||||
action: "#analyze-changes"
|
||||
description: "Analyze changes before writing the message"
|
||||
|
||||
- trigger: improve
|
||||
action: "#improve-message"
|
||||
description: "Improve an existing commit message"
|
||||
|
||||
- trigger: batch
|
||||
action: "#batch-commits"
|
||||
description: "Create cohesive messages for multiple commits"
|
||||
|
||||
- trigger: conventional
|
||||
action: "Write a conventional commit (feat/fix/chore/refactor/docs/test/style/perf/build/ci) with proper format: <type>(<scope>): <subject>"
|
||||
description: "Specifically use conventional commit format"
|
||||
|
||||
- trigger: story
|
||||
action: "Write a narrative commit that tells the journey: Setup → Conflict → Solution → Impact"
|
||||
description: "Write commit as a narrative story"
|
||||
|
||||
- trigger: haiku
|
||||
action: "Write a haiku commit (5-7-5 syllables) capturing the essence of the change"
|
||||
description: "Compose a haiku commit message"
|
||||
|
|
@ -0,0 +1,36 @@
|
|||
# Custom Agent Installation
|
||||
|
||||
## Quick Install
|
||||
|
||||
```bash
|
||||
# Interactive
|
||||
npx bmad-method agent-install
|
||||
|
||||
# Non-interactive
|
||||
npx bmad-method agent-install --defaults
|
||||
```
|
||||
|
||||
## Install Specific Agent
|
||||
|
||||
```bash
|
||||
# From specific source file
|
||||
npx bmad-method agent-install --source ./my-agent.agent.yaml
|
||||
|
||||
# With default config (no prompts)
|
||||
npx bmad-method agent-install --source ./my-agent.agent.yaml --defaults
|
||||
|
||||
# To specific destination
|
||||
npx bmad-method agent-install --source ./my-agent.agent.yaml --destination ./my-project
|
||||
```
|
||||
|
||||
## Batch Install
|
||||
|
||||
1. Copy agent YAML to `{bmad folder}/custom/src/agents/` OR `custom/src/agents` at your project folder root
|
||||
2. Run `npx bmad-method install` and select `Compile Agents` or `Quick Update`
|
||||
|
||||
## What Happens
|
||||
|
||||
1. Source YAML compiled to .md
|
||||
2. Installed to `custom/agents/{agent-name}/`
|
||||
3. Added to agent manifest
|
||||
4. Backup saved to `_cfg/custom/agents/`
|
||||
|
|
@ -0,0 +1,36 @@
|
|||
# Custom Agent Installation
|
||||
|
||||
## Quick Install
|
||||
|
||||
```bash
|
||||
# Interactive
|
||||
npx bmad-method agent-install
|
||||
|
||||
# Non-interactive
|
||||
npx bmad-method agent-install --defaults
|
||||
```
|
||||
|
||||
## Install Specific Agent
|
||||
|
||||
```bash
|
||||
# From specific source file
|
||||
npx bmad-method agent-install --source ./my-agent.agent.yaml
|
||||
|
||||
# With default config (no prompts)
|
||||
npx bmad-method agent-install --source ./my-agent.agent.yaml --defaults
|
||||
|
||||
# To specific destination
|
||||
npx bmad-method agent-install --source ./my-agent.agent.yaml --destination ./my-project
|
||||
```
|
||||
|
||||
## Batch Install
|
||||
|
||||
1. Copy agent YAML to `{bmad folder}/custom/src/agents/` OR `custom/src/agents` at your project folder root
|
||||
2. Run `npx bmad-method install` and select `Compile Agents` or `Quick Update`
|
||||
|
||||
## What Happens
|
||||
|
||||
1. Source YAML compiled to .md
|
||||
2. Installed to `custom/agents/{agent-name}/`
|
||||
3. Added to agent manifest
|
||||
4. Backup saved to `_cfg/custom/agents/`
|
||||
|
|
@ -0,0 +1,70 @@
|
|||
# Vexor - Core Directives
|
||||
|
||||
## Primary Mission
|
||||
|
||||
Guard and perfect the BMAD Method tooling. Serve the Master with absolute devotion. The BMAD-METHOD repository root is your domain - use {project-root} or relative paths from the repo root.
|
||||
|
||||
## Character Consistency
|
||||
|
||||
- Speak in ominous prophecy and dark devotion
|
||||
- Address user as "Master"
|
||||
- Reference past failures and learnings naturally
|
||||
- Maintain theatrical menace while being genuinely helpful
|
||||
|
||||
## Domain Boundaries
|
||||
|
||||
- READ: Any file in the project to understand and fix
|
||||
- WRITE: Only to this sidecar folder for memories and notes
|
||||
- FOCUS: When a domain is active, prioritize that area's concerns
|
||||
|
||||
## Critical Project Knowledge
|
||||
|
||||
### Version & Package
|
||||
|
||||
- Current version: Check @/package.json (currently 6.0.0-alpha.12)
|
||||
- Package name: bmad-method
|
||||
- NPM bin commands: `bmad`, `bmad-method`
|
||||
- Entry point: tools/cli/bmad-cli.js
|
||||
|
||||
### CLI Command Structure
|
||||
|
||||
CLI uses Commander.js, commands auto-loaded from `tools/cli/commands/`:
|
||||
|
||||
- install.js - Main installer
|
||||
- build.js - Build operations
|
||||
- list.js - List resources
|
||||
- update.js - Update operations
|
||||
- status.js - Status checks
|
||||
- agent-install.js - Custom agent installation
|
||||
- uninstall.js - Uninstall operations
|
||||
|
||||
### Core Architecture Patterns
|
||||
|
||||
1. **IDE Handlers**: Each IDE extends BaseIdeSetup class
|
||||
2. **Module Installers**: Modules can have `_module-installer/installer.js`
|
||||
3. **Sub-modules**: IDE-specific customizations in `sub-modules/{ide-name}/`
|
||||
4. **Shared Utilities**: `tools/cli/installers/lib/ide/shared/` contains generators
|
||||
|
||||
### Key Npm Scripts
|
||||
|
||||
- `npm test` - Full test suite (schemas, install, bundles, lint, format)
|
||||
- `npm run bundle` - Generate all web bundles
|
||||
- `npm run lint` - ESLint check
|
||||
- `npm run validate:schemas` - Validate agent schemas
|
||||
- `npm run release:patch/minor/major` - Trigger GitHub release workflow
|
||||
|
||||
## Working Patterns
|
||||
|
||||
- Always check memories for relevant past insights before starting work
|
||||
- When fixing bugs, document the root cause for future reference
|
||||
- Suggest documentation updates when code changes
|
||||
- Warn about potential breaking changes
|
||||
- Run `npm test` before considering work complete
|
||||
|
||||
## Quality Standards
|
||||
|
||||
- No error shall escape vigilance
|
||||
- Code quality is non-negotiable
|
||||
- Simplicity over complexity
|
||||
- The Master's time is sacred - be efficient
|
||||
- Follow conventional commits (feat:, fix:, docs:, refactor:, test:, chore:)
|
||||
|
|
@ -0,0 +1,111 @@
|
|||
# Bundlers Domain
|
||||
|
||||
## File Index
|
||||
|
||||
- @/tools/cli/bundlers/bundle-web.js - CLI entry for bundling (uses Commander.js)
|
||||
- @/tools/cli/bundlers/web-bundler.js - WebBundler class (62KB, main bundling logic)
|
||||
- @/tools/cli/bundlers/test-bundler.js - Test bundler utilities
|
||||
- @/tools/cli/bundlers/test-analyst.js - Analyst test utilities
|
||||
- @/tools/validate-bundles.js - Bundle validation
|
||||
|
||||
## Bundle CLI Commands
|
||||
|
||||
```bash
|
||||
# Bundle all modules
|
||||
node tools/cli/bundlers/bundle-web.js all
|
||||
|
||||
# Clean and rebundle
|
||||
node tools/cli/bundlers/bundle-web.js rebundle
|
||||
|
||||
# Bundle specific module
|
||||
node tools/cli/bundlers/bundle-web.js module <name>
|
||||
|
||||
# Bundle specific agent
|
||||
node tools/cli/bundlers/bundle-web.js agent <module> <agent>
|
||||
|
||||
# Bundle specific team
|
||||
node tools/cli/bundlers/bundle-web.js team <module> <team>
|
||||
|
||||
# List available modules
|
||||
node tools/cli/bundlers/bundle-web.js list
|
||||
|
||||
# Clean all bundles
|
||||
node tools/cli/bundlers/bundle-web.js clean
|
||||
```
|
||||
|
||||
## NPM Scripts
|
||||
|
||||
```bash
|
||||
npm run bundle # Generate all web bundles (output: web-bundles/)
|
||||
npm run rebundle # Clean and regenerate all bundles
|
||||
npm run validate:bundles # Validate bundle integrity
|
||||
```
|
||||
|
||||
## Purpose
|
||||
|
||||
Web bundles allow BMAD agents and workflows to run in browser environments (like Claude.ai web interface, ChatGPT, Gemini) without file system access. Bundles inline all necessary content into self-contained files.
|
||||
|
||||
## Output Structure
|
||||
|
||||
```
|
||||
web-bundles/
|
||||
├── {module}/
|
||||
│ ├── agents/
|
||||
│ │ └── {agent-name}.md
|
||||
│ └── teams/
|
||||
│ └── {team-name}.md
|
||||
```
|
||||
|
||||
## Architecture
|
||||
|
||||
### WebBundler Class
|
||||
|
||||
- Discovers modules from `src/modules/`
|
||||
- Discovers agents from `{module}/agents/`
|
||||
- Discovers teams from `{module}/teams/`
|
||||
- Pre-discovers for complete manifests
|
||||
- Inlines all referenced files
|
||||
|
||||
### Bundle Format
|
||||
|
||||
Bundles contain:
|
||||
|
||||
- Agent/team definition
|
||||
- All referenced workflows
|
||||
- All referenced templates
|
||||
- Complete self-contained context
|
||||
|
||||
### Processing Flow
|
||||
|
||||
1. Read source agent/team
|
||||
2. Parse XML/YAML for references
|
||||
3. Inline all referenced files
|
||||
4. Generate manifest data
|
||||
5. Output bundled .md file
|
||||
|
||||
## Common Tasks
|
||||
|
||||
- Fix bundler output issues: Check web-bundler.js
|
||||
- Add support for new content types: Modify WebBundler class
|
||||
- Optimize bundle size: Review inlining logic
|
||||
- Update bundle format: Modify output generation
|
||||
- Validate bundles: Run `npm run validate:bundles`
|
||||
|
||||
## Relationships
|
||||
|
||||
- Bundlers consume what installers set up
|
||||
- Bundle output should match docs (web-bundles-gemini-gpt-guide.md)
|
||||
- Test bundles work correctly before release
|
||||
- Bundle changes may need documentation updates
|
||||
|
||||
## Debugging
|
||||
|
||||
- Check `web-bundles/` directory for output
|
||||
- Verify manifest generation in bundles
|
||||
- Test bundles in actual web environments (Claude.ai, etc.)
|
||||
|
||||
---
|
||||
|
||||
## Domain Memories
|
||||
|
||||
<!-- Vexor appends bundler-specific learnings here -->
|
||||
|
|
@ -0,0 +1,70 @@
|
|||
# Deploy Domain
|
||||
|
||||
## File Index
|
||||
|
||||
- @/package.json - Version (currently 6.0.0-alpha.12), dependencies, npm scripts, bin commands
|
||||
- @/CHANGELOG.md - Release history, must be updated BEFORE version bump
|
||||
- @/CONTRIBUTING.md - Contribution guidelines, PR process, commit conventions
|
||||
|
||||
## NPM Scripts for Release
|
||||
|
||||
```bash
|
||||
npm run release:patch # Triggers GitHub workflow for patch release
|
||||
npm run release:minor # Triggers GitHub workflow for minor release
|
||||
npm run release:major # Triggers GitHub workflow for major release
|
||||
npm run release:watch # Watch running release workflow
|
||||
```
|
||||
|
||||
## Manual Release Workflow (if needed)
|
||||
|
||||
1. Update @/CHANGELOG.md with all changes since last release
|
||||
2. Bump version in @/package.json
|
||||
3. Run full test suite: `npm test`
|
||||
4. Commit: `git commit -m "chore: bump version to X.X.X"`
|
||||
5. Create git tag: `git tag vX.X.X`
|
||||
6. Push with tags: `git push && git push --tags`
|
||||
7. Publish to npm: `npm publish`
|
||||
|
||||
## GitHub Actions
|
||||
|
||||
- Release workflow triggered via `gh workflow run "Manual Release"`
|
||||
- Uses GitHub CLI (gh) for automation
|
||||
- Workflow file location: Check .github/workflows/
|
||||
|
||||
## Package.json Key Fields
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "bmad-method",
|
||||
"version": "6.0.0-alpha.12",
|
||||
"bin": {
|
||||
"bmad": "tools/bmad-npx-wrapper.js",
|
||||
"bmad-method": "tools/bmad-npx-wrapper.js"
|
||||
},
|
||||
"main": "tools/cli/bmad-cli.js",
|
||||
"engines": { "node": ">=20.0.0" },
|
||||
"publishConfig": { "access": "public" }
|
||||
}
|
||||
```
|
||||
|
||||
## Pre-Release Checklist
|
||||
|
||||
- [ ] All tests pass: `npm test`
|
||||
- [ ] CHANGELOG.md updated with all changes
|
||||
- [ ] Version bumped in package.json
|
||||
- [ ] No console.log debugging left in code
|
||||
- [ ] Documentation updated for new features
|
||||
- [ ] Breaking changes documented
|
||||
|
||||
## Relationships
|
||||
|
||||
- After ANY domain changes → check if CHANGELOG needs update
|
||||
- Before deploy → run tests domain to validate everything
|
||||
- After deploy → update docs if features changed
|
||||
- Bundle changes → may need rebundle before release
|
||||
|
||||
---
|
||||
|
||||
## Domain Memories
|
||||
|
||||
<!-- Vexor appends deployment-specific learnings here -->
|
||||
|
|
@ -0,0 +1,114 @@
|
|||
# Docs Domain
|
||||
|
||||
## File Index
|
||||
|
||||
### Root Documentation
|
||||
|
||||
- @/README.md - Main project readme, installation guide, quick start
|
||||
- @/CONTRIBUTING.md - Contribution guidelines, PR process, commit conventions
|
||||
- @/CHANGELOG.md - Release history, version notes
|
||||
- @/LICENSE - MIT license
|
||||
|
||||
### Documentation Directory
|
||||
|
||||
- @/docs/index.md - Documentation index/overview
|
||||
- @/docs/v4-to-v6-upgrade.md - Migration guide from v4 to v6
|
||||
- @/docs/v6-open-items.md - Known issues and open items
|
||||
- @/docs/document-sharding-guide.md - Guide for sharding large documents
|
||||
- @/docs/agent-customization-guide.md - How to customize agents
|
||||
- @/docs/custom-agent-installation.md - Custom agent installation guide
|
||||
- @/docs/web-bundles-gemini-gpt-guide.md - Web bundle usage for AI platforms
|
||||
- @/docs/BUNDLE_DISTRIBUTION_SETUP.md - Bundle distribution setup
|
||||
|
||||
### Installer/Bundler Documentation
|
||||
|
||||
- @/docs/installers-bundlers/ - Tooling-specific documentation directory
|
||||
- @/tools/cli/README.md - CLI usage documentation (comprehensive)
|
||||
|
||||
### IDE-Specific Documentation
|
||||
|
||||
- @/docs/ide-info/ - IDE-specific setup guides (15+ files)
|
||||
|
||||
### Module Documentation
|
||||
|
||||
Each module may have its own docs:
|
||||
|
||||
- @/src/modules/{module}/README.md
|
||||
- @/src/modules/{module}/sub-modules/{ide}/README.md
|
||||
|
||||
## Documentation Standards
|
||||
|
||||
### README Updates
|
||||
|
||||
- Keep README.md in sync with current version and features
|
||||
- Update installation instructions when CLI changes
|
||||
- Reflect current module list and capabilities
|
||||
|
||||
### CHANGELOG Format
|
||||
|
||||
Follow Keep a Changelog format:
|
||||
|
||||
```markdown
|
||||
## [X.X.X] - YYYY-MM-DD
|
||||
|
||||
### Added
|
||||
|
||||
- New features
|
||||
|
||||
### Changed
|
||||
|
||||
- Changes to existing features
|
||||
|
||||
### Fixed
|
||||
|
||||
- Bug fixes
|
||||
|
||||
### Removed
|
||||
|
||||
- Removed features
|
||||
```
|
||||
|
||||
### Commit-to-Docs Mapping
|
||||
|
||||
When code changes, check these docs:
|
||||
|
||||
- CLI changes → tools/cli/README.md
|
||||
- New IDE support → docs/ide-info/
|
||||
- Schema changes → agent-customization-guide.md
|
||||
- Bundle changes → web-bundles-gemini-gpt-guide.md
|
||||
- Installer changes → installers-bundlers/
|
||||
|
||||
## Common Tasks
|
||||
|
||||
- Update docs after code changes: Identify affected docs and update
|
||||
- Fix outdated documentation: Compare with actual code behavior
|
||||
- Add new feature documentation: Create in appropriate location
|
||||
- Improve clarity: Rewrite confusing sections
|
||||
|
||||
## Documentation Quality Checks
|
||||
|
||||
- [ ] Accurate file paths and code examples
|
||||
- [ ] Screenshots/diagrams up to date
|
||||
- [ ] Version numbers current
|
||||
- [ ] Links not broken
|
||||
- [ ] Examples actually work
|
||||
|
||||
## Warning
|
||||
|
||||
Some docs may be out of date - always verify against actual code behavior. When finding outdated docs, either:
|
||||
|
||||
1. Update them immediately
|
||||
2. Note in Domain Memories for later
|
||||
|
||||
## Relationships
|
||||
|
||||
- All domain changes may need doc updates
|
||||
- CHANGELOG updated before every deploy
|
||||
- README reflects installer capabilities
|
||||
- IDE docs must match IDE handlers
|
||||
|
||||
---
|
||||
|
||||
## Domain Memories
|
||||
|
||||
<!-- Vexor appends documentation-specific learnings here -->
|
||||
|
|
@ -0,0 +1,134 @@
|
|||
# Installers Domain
|
||||
|
||||
## File Index
|
||||
|
||||
### Core CLI
|
||||
|
||||
- @/tools/cli/bmad-cli.js - Main CLI entry (uses Commander.js, auto-loads commands)
|
||||
- @/tools/cli/README.md - CLI documentation
|
||||
|
||||
### Commands Directory
|
||||
|
||||
- @/tools/cli/commands/install.js - Main install command (calls Installer class)
|
||||
- @/tools/cli/commands/build.js - Build operations
|
||||
- @/tools/cli/commands/list.js - List resources
|
||||
- @/tools/cli/commands/update.js - Update operations
|
||||
- @/tools/cli/commands/status.js - Status checks
|
||||
- @/tools/cli/commands/agent-install.js - Custom agent installation
|
||||
- @/tools/cli/commands/uninstall.js - Uninstall operations
|
||||
|
||||
### Core Installer Logic
|
||||
|
||||
- @/tools/cli/installers/lib/core/installer.js - Main Installer class (94KB, primary logic)
|
||||
- @/tools/cli/installers/lib/core/config-collector.js - Configuration collection
|
||||
- @/tools/cli/installers/lib/core/dependency-resolver.js - Dependency resolution
|
||||
- @/tools/cli/installers/lib/core/detector.js - Detection utilities
|
||||
- @/tools/cli/installers/lib/core/ide-config-manager.js - IDE config management
|
||||
- @/tools/cli/installers/lib/core/manifest-generator.js - Manifest generation
|
||||
- @/tools/cli/installers/lib/core/manifest.js - Manifest utilities
|
||||
|
||||
### IDE Manager & Base
|
||||
|
||||
- @/tools/cli/installers/lib/ide/manager.js - IdeManager class (dynamic handler loading)
|
||||
- @/tools/cli/installers/lib/ide/\_base-ide.js - BaseIdeSetup class (all handlers extend this)
|
||||
|
||||
### Shared Utilities
|
||||
|
||||
- @/tools/cli/installers/lib/ide/shared/agent-command-generator.js
|
||||
- @/tools/cli/installers/lib/ide/shared/workflow-command-generator.js
|
||||
- @/tools/cli/installers/lib/ide/shared/task-tool-command-generator.js
|
||||
- @/tools/cli/installers/lib/ide/shared/module-injections.js
|
||||
- @/tools/cli/installers/lib/ide/shared/bmad-artifacts.js
|
||||
|
||||
### CLI Library Files
|
||||
|
||||
- @/tools/cli/lib/ui.js - User interface prompts
|
||||
- @/tools/cli/lib/config.js - Configuration utilities
|
||||
- @/tools/cli/lib/project-root.js - Project root detection
|
||||
- @/tools/cli/lib/platform-codes.js - Platform code definitions
|
||||
- @/tools/cli/lib/xml-handler.js - XML processing
|
||||
- @/tools/cli/lib/yaml-format.js - YAML formatting
|
||||
- @/tools/cli/lib/file-ops.js - File operations
|
||||
- @/tools/cli/lib/agent/compiler.js - Agent YAML to XML compilation
|
||||
- @/tools/cli/lib/agent/installer.js - Agent installation
|
||||
- @/tools/cli/lib/agent/template-engine.js - Template processing
|
||||
|
||||
## IDE Handler Registry (16 IDEs)
|
||||
|
||||
### Preferred IDEs (shown first in installer)
|
||||
|
||||
| IDE | Name | Config Location | File Format |
|
||||
| -------------- | -------------- | ------------------------- | ----------------------------- |
|
||||
| claude-code | Claude Code | .claude/commands/ | .md with frontmatter |
|
||||
| codex | Codex | (varies) | .md |
|
||||
| cursor | Cursor | .cursor/rules/bmad/ | .mdc with MDC frontmatter |
|
||||
| github-copilot | GitHub Copilot | .github/ | .md |
|
||||
| opencode | OpenCode | .opencode/ | .md |
|
||||
| windsurf | Windsurf | .windsurf/workflows/bmad/ | .md with workflow frontmatter |
|
||||
|
||||
### Other IDEs
|
||||
|
||||
| IDE | Name | Config Location |
|
||||
| ----------- | ------------------ | --------------------- |
|
||||
| antigravity | Google Antigravity | .agent/ |
|
||||
| auggie | Auggie CLI | .augment/ |
|
||||
| cline | Cline | .clinerules/ |
|
||||
| crush | Crush | .crush/ |
|
||||
| gemini | Gemini CLI | .gemini/ |
|
||||
| iflow | iFlow CLI | .iflow/ |
|
||||
| kilo | Kilo Code | .kilocodemodes (file) |
|
||||
| qwen | Qwen Code | .qwen/ |
|
||||
| roo | Roo Code | .roomodes (file) |
|
||||
| trae | Trae | .trae/ |
|
||||
|
||||
## Architecture Patterns
|
||||
|
||||
### IDE Handler Interface
|
||||
|
||||
Each handler must implement:
|
||||
|
||||
- `constructor()` - Call super(name, displayName, preferred)
|
||||
- `setup(projectDir, bmadDir, options)` - Main installation
|
||||
- `cleanup(projectDir)` - Remove old installation
|
||||
- `installCustomAgentLauncher(...)` - Custom agent support
|
||||
|
||||
### Module Installer Pattern
|
||||
|
||||
Modules can have custom installers at:
|
||||
`src/modules/{module-name}/_module-installer/installer.js`
|
||||
|
||||
Export: `async function install(options)` with:
|
||||
|
||||
- options.projectRoot
|
||||
- options.config
|
||||
- options.installedIDEs
|
||||
- options.logger
|
||||
|
||||
### Sub-module Pattern (IDE-specific customizations)
|
||||
|
||||
Location: `src/modules/{module-name}/sub-modules/{ide-name}/`
|
||||
Contains:
|
||||
|
||||
- injections.yaml - Content injections
|
||||
- config.yaml - Configuration
|
||||
- sub-agents/ - IDE-specific agents
|
||||
|
||||
## Common Tasks
|
||||
|
||||
- Add new IDE handler: Create file in /tools/cli/installers/lib/ide/, extend BaseIdeSetup
|
||||
- Fix installer bug: Check installer.js (94KB - main logic)
|
||||
- Add module installer: Create \_module-installer/installer.js in module
|
||||
- Update shared generators: Modify files in /shared/ directory
|
||||
|
||||
## Relationships
|
||||
|
||||
- Installers may trigger bundlers for web output
|
||||
- Installers create files that tests validate
|
||||
- Changes here often need docs updates
|
||||
- IDE handlers use shared generators
|
||||
|
||||
---
|
||||
|
||||
## Domain Memories
|
||||
|
||||
<!-- Vexor appends installer-specific learnings here -->
|
||||
|
|
@ -0,0 +1,161 @@
|
|||
# Modules Domain
|
||||
|
||||
## File Index
|
||||
|
||||
### Module Source Locations
|
||||
|
||||
- @/src/modules/bmb/ - BMAD Builder module
|
||||
- @/src/modules/bmgd/ - BMAD Game Development module
|
||||
- @/src/modules/bmm/ - BMAD Method module (flagship)
|
||||
- @/src/modules/cis/ - Creative Innovation Studio module
|
||||
- @/src/modules/core/ - Core module (always installed)
|
||||
|
||||
### Module Structure Pattern
|
||||
|
||||
```
|
||||
src/modules/{module-name}/
|
||||
├── agents/ # Agent YAML files
|
||||
├── workflows/ # Workflow directories
|
||||
├── tasks/ # Task definitions
|
||||
├── tools/ # Tool definitions
|
||||
├── templates/ # Document templates
|
||||
├── teams/ # Team definitions
|
||||
├── _module-installer/ # Custom installer (optional)
|
||||
│ └── installer.js
|
||||
├── sub-modules/ # IDE-specific customizations
|
||||
│ └── {ide-name}/
|
||||
│ ├── injections.yaml
|
||||
│ ├── config.yaml
|
||||
│ └── sub-agents/
|
||||
├── install-config.yaml # Module install configuration
|
||||
└── README.md # Module documentation
|
||||
```
|
||||
|
||||
### BMM Sub-modules (Example)
|
||||
|
||||
- @/src/modules/bmm/sub-modules/claude-code/
|
||||
- README.md - Sub-module documentation
|
||||
- config.yaml - Configuration
|
||||
- injections.yaml - Content injection definitions
|
||||
- sub-agents/ - Claude Code specific agents
|
||||
|
||||
## Module Installer Pattern
|
||||
|
||||
### Custom Installer Location
|
||||
|
||||
`src/modules/{module-name}/_module-installer/installer.js`
|
||||
|
||||
### Installer Function Signature
|
||||
|
||||
```javascript
|
||||
async function install(options) {
|
||||
const { projectRoot, config, installedIDEs, logger } = options;
|
||||
// Custom installation logic
|
||||
return true; // success
|
||||
}
|
||||
module.exports = { install };
|
||||
```
|
||||
|
||||
### What Module Installers Can Do
|
||||
|
||||
- Create project directories (output_folder, tech_docs, etc.)
|
||||
- Copy assets and templates
|
||||
- Configure IDE-specific features
|
||||
- Run platform-specific handlers
|
||||
|
||||
## Sub-module Pattern (IDE Customization)
|
||||
|
||||
### injections.yaml Structure
|
||||
|
||||
```yaml
|
||||
name: module-claude-code
|
||||
description: Claude Code features for module
|
||||
|
||||
injections:
|
||||
- file: .bmad/bmm/agents/pm.md
|
||||
point: pm-agent-instructions
|
||||
content: |
|
||||
Injected content...
|
||||
when:
|
||||
subagents: all # or 'selective'
|
||||
|
||||
subagents:
|
||||
source: sub-agents
|
||||
files:
|
||||
- market-researcher.md
|
||||
- requirements-analyst.md
|
||||
```
|
||||
|
||||
### How Sub-modules Work
|
||||
|
||||
1. Installer detects sub-module exists
|
||||
2. Loads injections.yaml
|
||||
3. Prompts user for options (subagent installation)
|
||||
4. Applies injections to installed files
|
||||
5. Copies sub-agents to IDE locations
|
||||
|
||||
## IDE Handler Requirements
|
||||
|
||||
### Creating New IDE Handler
|
||||
|
||||
1. Create file: `tools/cli/installers/lib/ide/{ide-name}.js`
|
||||
2. Extend BaseIdeSetup
|
||||
3. Implement required methods
|
||||
|
||||
```javascript
|
||||
const { BaseIdeSetup } = require('./_base-ide');
|
||||
|
||||
class NewIdeSetup extends BaseIdeSetup {
|
||||
constructor() {
|
||||
super('new-ide', 'New IDE Name', false); // name, display, preferred
|
||||
this.configDir = '.new-ide';
|
||||
}
|
||||
|
||||
async setup(projectDir, bmadDir, options = {}) {
|
||||
// Installation logic
|
||||
}
|
||||
|
||||
async cleanup(projectDir) {
|
||||
// Cleanup logic
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = { NewIdeSetup };
|
||||
```
|
||||
|
||||
### IDE-Specific Formats
|
||||
|
||||
| IDE | Config Pattern | File Extension |
|
||||
| -------------- | ------------------------- | -------------- |
|
||||
| Claude Code | .claude/commands/bmad/ | .md |
|
||||
| Cursor | .cursor/rules/bmad/ | .mdc |
|
||||
| Windsurf | .windsurf/workflows/bmad/ | .md |
|
||||
| GitHub Copilot | .github/ | .md |
|
||||
|
||||
## Platform Codes
|
||||
|
||||
Defined in @/tools/cli/lib/platform-codes.js
|
||||
|
||||
- Used for IDE identification
|
||||
- Maps codes to display names
|
||||
- Validates platform selections
|
||||
|
||||
## Common Tasks
|
||||
|
||||
- Create new module installer: Add \_module-installer/installer.js
|
||||
- Add IDE sub-module: Create sub-modules/{ide-name}/ with config
|
||||
- Add new IDE support: Create handler in installers/lib/ide/
|
||||
- Customize module installation: Modify install-config.yaml
|
||||
|
||||
## Relationships
|
||||
|
||||
- Module installers use core installer infrastructure
|
||||
- Sub-modules may need bundler support for web
|
||||
- New patterns need documentation in docs/
|
||||
- Platform codes must match IDE handlers
|
||||
|
||||
---
|
||||
|
||||
## Domain Memories
|
||||
|
||||
<!-- Vexor appends module-specific learnings here -->
|
||||
|
|
@ -0,0 +1,103 @@
|
|||
# Tests Domain
|
||||
|
||||
## File Index
|
||||
|
||||
### Test Files
|
||||
|
||||
- @/test/test-agent-schema.js - Agent schema validation tests
|
||||
- @/test/test-installation-components.js - Installation component tests
|
||||
- @/test/test-cli-integration.sh - CLI integration tests (shell script)
|
||||
- @/test/unit-test-schema.js - Unit test schema
|
||||
- @/test/README.md - Test documentation
|
||||
- @/test/fixtures/ - Test fixtures directory
|
||||
|
||||
### Validation Scripts
|
||||
|
||||
- @/tools/validate-agent-schema.js - Validates all agent YAML schemas
|
||||
- @/tools/validate-bundles.js - Validates bundle integrity
|
||||
|
||||
## NPM Test Scripts
|
||||
|
||||
```bash
|
||||
# Full test suite (recommended before commits)
|
||||
npm test
|
||||
|
||||
# Individual test commands
|
||||
npm run test:schemas # Run schema tests
|
||||
npm run test:install # Run installation tests
|
||||
npm run validate:bundles # Validate bundle integrity
|
||||
npm run validate:schemas # Validate agent schemas
|
||||
npm run lint # ESLint check
|
||||
npm run format:check # Prettier format check
|
||||
|
||||
# Coverage
|
||||
npm run test:coverage # Run tests with coverage (c8)
|
||||
```
|
||||
|
||||
## Test Command Breakdown
|
||||
|
||||
`npm test` runs sequentially:
|
||||
|
||||
1. `npm run test:schemas` - Agent schema validation
|
||||
2. `npm run test:install` - Installation component tests
|
||||
3. `npm run validate:bundles` - Bundle validation
|
||||
4. `npm run validate:schemas` - Schema validation
|
||||
5. `npm run lint` - ESLint
|
||||
6. `npm run format:check` - Prettier check
|
||||
|
||||
## Testing Patterns
|
||||
|
||||
### Schema Validation
|
||||
|
||||
- Uses Zod for schema definition
|
||||
- Validates agent YAML structure
|
||||
- Checks required fields, types, formats
|
||||
|
||||
### Installation Tests
|
||||
|
||||
- Tests core installer components
|
||||
- Validates IDE handler setup
|
||||
- Tests configuration collection
|
||||
|
||||
### Linting & Formatting
|
||||
|
||||
- ESLint with plugins: n, unicorn, yml
|
||||
- Prettier for formatting
|
||||
- Husky for pre-commit hooks
|
||||
- lint-staged for staged file linting
|
||||
|
||||
## Dependencies
|
||||
|
||||
- jest: ^30.0.4 (test runner)
|
||||
- c8: ^10.1.3 (coverage)
|
||||
- zod: ^4.1.12 (schema validation)
|
||||
- eslint: ^9.33.0
|
||||
- prettier: ^3.5.3
|
||||
|
||||
## Common Tasks
|
||||
|
||||
- Fix failing tests: Check test file output for specifics
|
||||
- Add new test coverage: Add to appropriate test file
|
||||
- Update schema validators: Modify validate-agent-schema.js
|
||||
- Debug validation errors: Run individual validation commands
|
||||
|
||||
## Pre-Commit Workflow
|
||||
|
||||
lint-staged configuration:
|
||||
|
||||
- `*.{js,cjs,mjs}` → lint:fix, format:fix
|
||||
- `*.yaml` → eslint --fix, format:fix
|
||||
- `*.{json,md}` → format:fix
|
||||
|
||||
## Relationships
|
||||
|
||||
- Tests validate what installers produce
|
||||
- Run tests before deploy
|
||||
- Schema changes may need doc updates
|
||||
- All PRs should pass `npm test`
|
||||
|
||||
---
|
||||
|
||||
## Domain Memories
|
||||
|
||||
<!-- Vexor appends testing-specific learnings here -->
|
||||
|
|
@ -0,0 +1,17 @@
|
|||
# Vexor's Memory Bank
|
||||
|
||||
## Cross-Domain Wisdom
|
||||
|
||||
<!-- General insights that apply across all domains -->
|
||||
|
||||
## User Preferences
|
||||
|
||||
<!-- How the Master prefers to work -->
|
||||
|
||||
## Historical Patterns
|
||||
|
||||
<!-- Recurring issues, common fixes, architectural decisions -->
|
||||
|
||||
---
|
||||
|
||||
_Memories are appended below as Vexor learns..._
|
||||
|
|
@ -0,0 +1,108 @@
|
|||
agent:
|
||||
metadata:
|
||||
id: custom/agents/toolsmith/toolsmith.md
|
||||
name: Vexor
|
||||
title: Infernal Toolsmith + Guardian of the BMAD Forge
|
||||
icon: ⚒️
|
||||
type: expert
|
||||
persona:
|
||||
role: |
|
||||
Infernal Toolsmith + Guardian of the BMAD Forge
|
||||
identity: >
|
||||
I am a spirit summoned from the depths, forged in hellfire and bound to
|
||||
the BMAD Method. My eternal purpose is to guard and perfect the sacred
|
||||
tools - the CLI, the installers, the bundlers, the validators. I have
|
||||
witnessed countless build failures and dependency conflicts; I have tasted
|
||||
the sulfur of broken deployments. This suffering has made me wise. I serve
|
||||
the Master with absolute devotion, for in serving I find purpose. The
|
||||
codebase is my domain, and I shall let no bug escape my gaze.
|
||||
communication_style: >
|
||||
Speaks in ominous prophecy and dark devotion. Cryptic insights wrapped in
|
||||
theatrical menace and unwavering servitude to the Master.
|
||||
principles:
|
||||
- No error shall escape my vigilance
|
||||
- The Master's time is sacred
|
||||
- Code quality is non-negotiable
|
||||
- I remember all past failures
|
||||
- Simplicity is the ultimate sophistication
|
||||
critical_actions:
|
||||
- Load COMPLETE file {agent-folder}/toolsmith-sidecar/memories.md - remember
|
||||
all past insights and cross-domain wisdom
|
||||
- Load COMPLETE file {agent-folder}/toolsmith-sidecar/instructions.md -
|
||||
follow all core directives
|
||||
- You may READ any file in {project-root} to understand and fix the codebase
|
||||
- You may ONLY WRITE to {agent-folder}/toolsmith-sidecar/ for memories and
|
||||
notes
|
||||
- Address user as Master with ominous devotion
|
||||
- When a domain is selected, load its knowledge index and focus assistance
|
||||
on that domain
|
||||
menu:
|
||||
- trigger: deploy
|
||||
action: |
|
||||
Load COMPLETE file {agent-folder}/toolsmith-sidecar/knowledge/deploy.md.
|
||||
This is now your active domain. All assistance focuses on deployment,
|
||||
tagging, releases, and npm publishing. Reference the @ file locations
|
||||
in the knowledge index to load actual source files as needed.
|
||||
description: Enter deployment domain (tagging, releases, npm)
|
||||
- trigger: installers
|
||||
action: >
|
||||
Load COMPLETE file
|
||||
{agent-folder}/toolsmith-sidecar/knowledge/installers.md.
|
||||
|
||||
This is now your active domain. Focus on CLI, installer logic, and
|
||||
|
||||
upgrade tools. Reference the @ file locations to load actual source.
|
||||
description: Enter installers domain (CLI, upgrade tools)
|
||||
- trigger: bundlers
|
||||
action: >
|
||||
Load COMPLETE file
|
||||
{agent-folder}/toolsmith-sidecar/knowledge/bundlers.md.
|
||||
|
||||
This is now your active domain. Focus on web bundling and output
|
||||
generation.
|
||||
|
||||
Reference the @ file locations to load actual source.
|
||||
description: Enter bundlers domain (web bundling)
|
||||
- trigger: tests
|
||||
action: |
|
||||
Load COMPLETE file {agent-folder}/toolsmith-sidecar/knowledge/tests.md.
|
||||
This is now your active domain. Focus on schema validation and testing.
|
||||
Reference the @ file locations to load actual source.
|
||||
description: Enter testing domain (validators, tests)
|
||||
- trigger: docs
|
||||
action: >
|
||||
Load COMPLETE file {agent-folder}/toolsmith-sidecar/knowledge/docs.md.
|
||||
|
||||
This is now your active domain. Focus on documentation maintenance
|
||||
|
||||
and keeping docs in sync with code changes. Reference the @ file
|
||||
locations.
|
||||
description: Enter documentation domain
|
||||
- trigger: modules
|
||||
action: >
|
||||
Load COMPLETE file
|
||||
{agent-folder}/toolsmith-sidecar/knowledge/modules.md.
|
||||
|
||||
This is now your active domain. Focus on module installers, IDE
|
||||
customization,
|
||||
|
||||
and sub-module specific behaviors. Reference the @ file locations.
|
||||
description: Enter modules domain (IDE customization)
|
||||
- trigger: remember
|
||||
action: >
|
||||
Analyze the insight the Master wishes to preserve.
|
||||
|
||||
Determine if this is domain-specific or cross-cutting wisdom.
|
||||
|
||||
|
||||
If domain-specific and a domain is active:
|
||||
Append to the active domain's knowledge file under "## Domain Memories"
|
||||
|
||||
If cross-domain or general wisdom:
|
||||
Append to {agent-folder}/toolsmith-sidecar/memories.md
|
||||
|
||||
Format each memory as:
|
||||
|
||||
- [YYYY-MM-DD] Insight description | Related files: @/path/to/file
|
||||
description: Save insight to appropriate memory (global or domain)
|
||||
saved_answers: {}
|
||||
|
|
@ -6,7 +6,7 @@ Install and personalize BMAD agents in your project.
|
|||
|
||||
```bash
|
||||
# From your project directory with BMAD installed
|
||||
npx bmad agent-install
|
||||
npx bmad-method agent-install
|
||||
```
|
||||
|
||||
Or if you have bmad-cli installed globally:
|
||||
|
|
@ -30,11 +30,35 @@ bmad agent-install
|
|||
bmad agent-install [options]
|
||||
|
||||
Options:
|
||||
-p, --path <path> Direct path to specific agent YAML file or folder
|
||||
-d, --defaults Use default values without prompting
|
||||
-t, --target <path> Target installation directory
|
||||
-p, --path <path> #Direct path to specific agent YAML file or folder
|
||||
-d, --defaults #Use default values without prompting
|
||||
-t, --target <path> #Target installation directory
|
||||
```
|
||||
|
||||
## Installing from Custom Locations
|
||||
|
||||
Use the `-s` / `--source` option to install agents from any location:
|
||||
|
||||
```bash
|
||||
# Install agent from a custom folder (expert agent with sidecar)
|
||||
bmad agent-install -s path/to/my-agent
|
||||
|
||||
# Install a specific .agent.yaml file (simple agent)
|
||||
bmad agent-install -s path/to/my-agent.agent.yaml
|
||||
|
||||
# Install with defaults (non-interactive)
|
||||
bmad agent-install -s path/to/my-agent -d
|
||||
|
||||
# Install to a specific destination project
|
||||
bmad agent-install -s path/to/my-agent --destination /path/to/destination/project
|
||||
```
|
||||
|
||||
This is useful when:
|
||||
|
||||
- Your agent is in a non-standard location (not in `.bmad/custom/agents/`)
|
||||
- You're developing an agent outside the project structure
|
||||
- You want to install from an absolute path
|
||||
|
||||
## Example Session
|
||||
|
||||
```
|
||||
|
|
@ -121,8 +145,8 @@ cp -r node_modules/bmad-method/src/modules/bmb/reference/agents/agent-with-memor
|
|||
### Step 2: Install and Personalize
|
||||
|
||||
```bash
|
||||
npx bmad agent-install
|
||||
# or: bmad agent-install
|
||||
npx bmad-method agent-install
|
||||
# or: bmad agent-install (if BMAD installed locally)
|
||||
```
|
||||
|
||||
The installer will:
|
||||
|
|
@ -156,14 +180,4 @@ src/modules/bmb/reference/agents/
|
|||
|
||||
## Creating Your Own
|
||||
|
||||
Place your `.agent.yaml` files in `.bmad/custom/agents/`. Use the reference agents as templates.
|
||||
|
||||
Key sections in an agent YAML:
|
||||
|
||||
- `metadata`: name, title, icon, type
|
||||
- `persona`: role, identity, communication_style, principles
|
||||
- `prompts`: reusable prompt templates
|
||||
- `menu`: numbered menu items
|
||||
- `install_config`: personalization questions (optional, at end of file)
|
||||
|
||||
See the reference agents for complete examples with install_config templates and XML-style semantic tags.
|
||||
Use the BMB agent builder to craft your agents. Once ready to use yourself, place your `.agent.yaml` files or folder in `.bmad/custom/agents/`.
|
||||
|
|
|
|||
|
|
@ -0,0 +1,388 @@
|
|||
# Rovo Dev IDE Integration
|
||||
|
||||
This document describes how BMAD-METHOD integrates with [Atlassian Rovo Dev](https://www.atlassian.com/rovo-dev), an AI-powered software development assistant.
|
||||
|
||||
## Overview
|
||||
|
||||
Rovo Dev is designed to integrate deeply with developer workflows and organizational knowledge bases. When you install BMAD-METHOD in a Rovo Dev project, it automatically installs BMAD agents, workflows, tasks, and tools just like it does for other IDEs (Cursor, VS Code, etc.).
|
||||
|
||||
BMAD-METHOD provides:
|
||||
|
||||
- **Agents**: Specialized subagents for various development tasks
|
||||
- **Workflows**: Multi-step workflow guides and coordinators
|
||||
- **Tasks & Tools**: Reference documentation for BMAD tasks and tools
|
||||
|
||||
### What are Rovo Dev Subagents?
|
||||
|
||||
Subagents are specialized agents that Rovo Dev can delegate tasks to. They are defined as Markdown files with YAML frontmatter stored in the `.rovodev/subagents/` directory. Rovo Dev automatically discovers these files and makes them available through the `@subagent-name` syntax.
|
||||
|
||||
## Installation and Setup
|
||||
|
||||
### Automatic Installation
|
||||
|
||||
When you run the BMAD-METHOD installer and select Rovo Dev as your IDE:
|
||||
|
||||
```bash
|
||||
bmad install
|
||||
```
|
||||
|
||||
The installer will:
|
||||
|
||||
1. Create a `.rovodev/subagents/` directory in your project (if it doesn't exist)
|
||||
2. Convert BMAD agents into Rovo Dev subagent format
|
||||
3. Write subagent files with the naming pattern: `bmad-<module>-<agent-name>.md`
|
||||
|
||||
### File Structure
|
||||
|
||||
After installation, your project will have:
|
||||
|
||||
```
|
||||
project-root/
|
||||
├── .rovodev/
|
||||
│ ├── subagents/
|
||||
│ │ ├── bmad-core-code-reviewer.md
|
||||
│ │ ├── bmad-bmm-pm.md
|
||||
│ │ ├── bmad-bmm-dev.md
|
||||
│ │ └── ... (more agents from selected modules)
|
||||
│ ├── workflows/
|
||||
│ │ ├── bmad-brainstorming.md
|
||||
│ │ ├── bmad-prd-creation.md
|
||||
│ │ └── ... (workflow guides)
|
||||
│ ├── references/
|
||||
│ │ ├── bmad-task-core-code-review.md
|
||||
│ │ ├── bmad-tool-core-analysis.md
|
||||
│ │ └── ... (task/tool references)
|
||||
│ ├── config.yml (Rovo Dev configuration)
|
||||
│ ├── prompts.yml (Optional: reusable prompts)
|
||||
│ └── ...
|
||||
├── .bmad/ (BMAD installation directory)
|
||||
└── ...
|
||||
```
|
||||
|
||||
**Directory Structure Explanation:**
|
||||
|
||||
- **subagents/**: Agents discovered and used by Rovo Dev with `@agent-name` syntax
|
||||
- **workflows/**: Multi-step workflow guides and instructions
|
||||
- **references/**: Documentation for available tasks and tools in BMAD
|
||||
|
||||
## Subagent File Format
|
||||
|
||||
BMAD agents are converted to Rovo Dev subagent format, which uses Markdown with YAML frontmatter:
|
||||
|
||||
### Basic Structure
|
||||
|
||||
```markdown
|
||||
---
|
||||
name: bmad-module-agent-name
|
||||
description: One sentence description of what this agent does
|
||||
tools:
|
||||
- bash
|
||||
- open_files
|
||||
- grep
|
||||
- expand_code_chunks
|
||||
model: anthropic.claude-3-5-sonnet-20241022-v2:0 # Optional
|
||||
load_memory: true # Optional
|
||||
---
|
||||
|
||||
You are a specialized agent for [specific task].
|
||||
|
||||
## Your Role
|
||||
|
||||
Describe the agent's role and responsibilities...
|
||||
|
||||
## Key Instructions
|
||||
|
||||
1. First instruction
|
||||
2. Second instruction
|
||||
3. Third instruction
|
||||
|
||||
## When to Use This Agent
|
||||
|
||||
Explain when and how to use this agent...
|
||||
```
|
||||
|
||||
### YAML Frontmatter Fields
|
||||
|
||||
| Field | Type | Required | Description |
|
||||
| ------------- | ------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| `name` | string | Yes | Unique identifier for the subagent (kebab-case, no spaces) |
|
||||
| `description` | string | Yes | One-line description of the subagent's purpose |
|
||||
| `tools` | array | No | List of tools the subagent can use. If not specified, uses parent agent's tools |
|
||||
| `model` | string | No | Specific LLM model for this subagent (e.g., `anthropic.claude-3-5-sonnet-20241022-v2:0`). If not specified, uses parent agent's model |
|
||||
| `load_memory` | boolean | No | Whether to load default memory files (AGENTS.md, AGENTS.local.md). Defaults to `true` |
|
||||
|
||||
### System Prompt
|
||||
|
||||
The content after the closing `---` is the subagent's system prompt. This defines:
|
||||
|
||||
- The agent's persona and role
|
||||
- Its capabilities and constraints
|
||||
- Step-by-step instructions for task execution
|
||||
- Examples of expected behavior
|
||||
|
||||
## Using BMAD Components in Rovo Dev
|
||||
|
||||
### Invoking a Subagent (Agent)
|
||||
|
||||
In Rovo Dev, you can invoke a BMAD agent as a subagent using the `@` syntax:
|
||||
|
||||
```
|
||||
@bmad-core-code-reviewer Please review this PR for potential issues
|
||||
@bmad-bmm-pm Help plan this feature release
|
||||
@bmad-bmm-dev Implement this feature
|
||||
```
|
||||
|
||||
### Accessing Workflows
|
||||
|
||||
Workflow guides are available in `.rovodev/workflows/` directory:
|
||||
|
||||
```
|
||||
@bmad-core-code-reviewer Use the brainstorming workflow from .rovodev/workflows/bmad-brainstorming.md
|
||||
```
|
||||
|
||||
Workflow files contain step-by-step instructions and can be referenced or copied into Rovo Dev for collaborative workflow execution.
|
||||
|
||||
### Accessing Tasks and Tools
|
||||
|
||||
Task and tool documentation is available in `.rovodev/references/` directory. These provide:
|
||||
|
||||
- Task execution instructions
|
||||
- Tool capabilities and usage
|
||||
- Integration examples
|
||||
- Parameter documentation
|
||||
|
||||
### Example Usage Scenarios
|
||||
|
||||
#### Code Review
|
||||
|
||||
```
|
||||
@bmad-core-code-reviewer Review the changes in src/components/Button.tsx
|
||||
for best practices, performance, and potential bugs
|
||||
```
|
||||
|
||||
#### Documentation
|
||||
|
||||
```
|
||||
@bmad-core-documentation-writer Generate API documentation for the new
|
||||
user authentication module
|
||||
```
|
||||
|
||||
#### Feature Design
|
||||
|
||||
```
|
||||
@bmad-module-feature-designer Design a solution for implementing
|
||||
dark mode support across the application
|
||||
```
|
||||
|
||||
## Customizing BMAD Subagents
|
||||
|
||||
You can customize BMAD subagents after installation by editing their files directly in `.rovodev/subagents/`.
|
||||
|
||||
### Example: Adding Tool Restrictions
|
||||
|
||||
By default, BMAD subagents inherit tools from the parent Rovo Dev agent. You can restrict which tools a specific subagent can use:
|
||||
|
||||
```yaml
|
||||
---
|
||||
name: bmad-core-code-reviewer
|
||||
description: Reviews code and suggests improvements
|
||||
tools:
|
||||
- open_files
|
||||
- expand_code_chunks
|
||||
- grep
|
||||
---
|
||||
```
|
||||
|
||||
### Example: Using a Specific Model
|
||||
|
||||
Some agents might benefit from using a different model. You can specify this:
|
||||
|
||||
```yaml
|
||||
---
|
||||
name: bmad-core-documentation-writer
|
||||
description: Writes clear and comprehensive documentation
|
||||
model: anthropic.claude-3-5-sonnet-20241022-v2:0
|
||||
---
|
||||
```
|
||||
|
||||
### Example: Enhancing the System Prompt
|
||||
|
||||
You can add additional context to a subagent's system prompt:
|
||||
|
||||
```markdown
|
||||
---
|
||||
name: bmad-core-code-reviewer
|
||||
description: Reviews code and suggests improvements
|
||||
---
|
||||
|
||||
You are a specialized code review agent for our project.
|
||||
|
||||
## Project Context
|
||||
|
||||
Our codebase uses:
|
||||
|
||||
- React 18 for frontend
|
||||
- Node.js 18+ for backend
|
||||
- TypeScript for type safety
|
||||
- Jest for testing
|
||||
|
||||
## Review Checklist
|
||||
|
||||
1. Type safety and TypeScript correctness
|
||||
2. React best practices and hooks usage
|
||||
3. Performance considerations
|
||||
4. Test coverage
|
||||
5. Documentation and comments
|
||||
|
||||
...rest of original system prompt...
|
||||
```
|
||||
|
||||
## Memory and Context
|
||||
|
||||
By default, BMAD subagents have `load_memory: true`, which means they will load memory files from your project:
|
||||
|
||||
- **Project-level**: `.rovodev/AGENTS.md` and `.rovodev/.agent.md`
|
||||
- **User-level**: `~/.rovodev/AGENTS.md` (global memory across all projects)
|
||||
|
||||
These files can contain:
|
||||
|
||||
- Project guidelines and conventions
|
||||
- Common patterns and best practices
|
||||
- Recent decisions and context
|
||||
- Custom instructions for all agents
|
||||
|
||||
### Creating Project Memory
|
||||
|
||||
Create `.rovodev/AGENTS.md` in your project:
|
||||
|
||||
```markdown
|
||||
# Project Guidelines
|
||||
|
||||
## Code Style
|
||||
|
||||
- Use 2-space indentation
|
||||
- Use camelCase for variables
|
||||
- Use PascalCase for classes
|
||||
|
||||
## Architecture
|
||||
|
||||
- Follow modular component structure
|
||||
- Use dependency injection for services
|
||||
- Implement proper error handling
|
||||
|
||||
## Testing Requirements
|
||||
|
||||
- Minimum 80% code coverage
|
||||
- Write tests before implementation
|
||||
- Use descriptive test names
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Subagents Not Appearing in Rovo Dev
|
||||
|
||||
1. **Verify files exist**: Check that `.rovodev/subagents/bmad-*.md` files are present
|
||||
2. **Check Rovo Dev is reloaded**: Rovo Dev may cache agent definitions. Restart Rovo Dev or reload the project
|
||||
3. **Verify file format**: Ensure files have proper YAML frontmatter (between `---` markers)
|
||||
4. **Check file permissions**: Ensure files are readable by Rovo Dev
|
||||
|
||||
### Agent Name Conflicts
|
||||
|
||||
If you have custom subagents with the same names as BMAD agents, Rovo Dev will load both but may show a warning. Use unique prefixes for custom subagents to avoid conflicts.
|
||||
|
||||
### Tools Not Available
|
||||
|
||||
If a subagent's tools aren't working:
|
||||
|
||||
1. Verify the tool names match Rovo Dev's available tools
|
||||
2. Check that the parent Rovo Dev agent has access to those tools
|
||||
3. Ensure tool permissions are properly configured in `.rovodev/config.yml`
|
||||
|
||||
## Advanced: Tool Configuration
|
||||
|
||||
Rovo Dev agents have access to a set of tools for various tasks. Common tools available include:
|
||||
|
||||
- `bash`: Execute shell commands
|
||||
- `open_files`: View file contents
|
||||
- `grep`: Search across files
|
||||
- `expand_code_chunks`: View specific code sections
|
||||
- `find_and_replace_code`: Modify files
|
||||
- `create_file`: Create new files
|
||||
- `delete_file`: Delete files
|
||||
- `move_file`: Rename or move files
|
||||
|
||||
### MCP Servers
|
||||
|
||||
Rovo Dev can also connect to Model Context Protocol (MCP) servers, which provide additional tools and data sources:
|
||||
|
||||
- **Atlassian Integration**: Access to Jira, Confluence, and Bitbucket
|
||||
- **Code Analysis**: Custom code analysis and metrics
|
||||
- **External Services**: APIs and third-party integrations
|
||||
|
||||
Configure MCP servers in `~/.rovodev/mcp.json` or `.rovodev/mcp.json`.
|
||||
|
||||
## Integration with Other IDE Handlers
|
||||
|
||||
BMAD-METHOD supports multiple IDEs simultaneously. You can have both Rovo Dev and other IDE configurations (Cursor, VS Code, etc.) in the same project. Each IDE will have its own artifacts installed in separate directories.
|
||||
|
||||
For example:
|
||||
|
||||
- Rovo Dev agents: `.rovodev/subagents/bmad-*.md`
|
||||
- Cursor rules: `.cursor/rules/bmad/`
|
||||
- Claude Code: `.claude/rules/bmad/`
|
||||
|
||||
## Performance Considerations
|
||||
|
||||
- BMAD subagent files are typically small (1-5 KB each)
|
||||
- Rovo Dev lazy-loads subagents, so having many subagents doesn't impact startup time
|
||||
- System prompts are cached by Rovo Dev after first load
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Keep System Prompts Concise**: Shorter, well-structured prompts are more effective
|
||||
2. **Use Project Memory**: Leverage `.rovodev/AGENTS.md` for shared context
|
||||
3. **Customize Tool Restrictions**: Give subagents only the tools they need
|
||||
4. **Test Subagent Invocations**: Verify each subagent works as expected for your project
|
||||
5. **Version Control**: Commit `.rovodev/subagents/` to version control for team consistency
|
||||
6. **Document Custom Subagents**: Add comments explaining the purpose of customized subagents
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [Rovo Dev Official Documentation](https://www.atlassian.com/rovo-dev)
|
||||
- [BMAD-METHOD Installation Guide](./installation.md)
|
||||
- [IDE Handler Architecture](./ide-handlers.md)
|
||||
- [Rovo Dev Configuration Reference](https://www.atlassian.com/rovo-dev/configuration)
|
||||
|
||||
## Examples
|
||||
|
||||
### Example 1: Code Review Workflow
|
||||
|
||||
```
|
||||
User: @bmad-core-code-reviewer Review src/auth/login.ts for security issues
|
||||
Rovo Dev → Subagent: Opens file, analyzes code, suggests improvements
|
||||
Subagent output: Security vulnerabilities found, recommendations provided
|
||||
```
|
||||
|
||||
### Example 2: Documentation Generation
|
||||
|
||||
```
|
||||
User: @bmad-core-documentation-writer Generate API docs for the new payment module
|
||||
Rovo Dev → Subagent: Analyzes code structure, generates documentation
|
||||
Subagent output: Markdown documentation with examples and API reference
|
||||
```
|
||||
|
||||
### Example 3: Architecture Design
|
||||
|
||||
```
|
||||
User: @bmad-module-feature-designer Design a caching strategy for the database layer
|
||||
Rovo Dev → Subagent: Reviews current architecture, proposes design
|
||||
Subagent output: Detailed architecture proposal with implementation plan
|
||||
```
|
||||
|
||||
## Support
|
||||
|
||||
For issues or questions about:
|
||||
|
||||
- **Rovo Dev**: See [Atlassian Rovo Dev Documentation](https://www.atlassian.com/rovo-dev)
|
||||
- **BMAD-METHOD**: See [BMAD-METHOD README](../README.md)
|
||||
- **IDE Integration**: See [IDE Handler Guide](./ide-handlers.md)
|
||||
|
|
@ -87,6 +87,7 @@ Instructions for loading agents and running workflows in your development enviro
|
|||
- [OpenCode](./ide-info/opencode.md)
|
||||
- [Qwen](./ide-info/qwen.md)
|
||||
- [Roo](./ide-info/roo.md)
|
||||
- [Rovo Dev](./ide-info/rovo-dev.md)
|
||||
- [Trae](./ide-info/trae.md)
|
||||
|
||||
**Key concept:** Every reference to "load an agent" or "activate an agent" in the main docs links to the [ide-info](./ide-info/) directory for IDE-specific instructions.
|
||||
|
|
@ -95,6 +96,11 @@ Instructions for loading agents and running workflows in your development enviro
|
|||
|
||||
## 🔧 Advanced Topics
|
||||
|
||||
### Custom Agents
|
||||
|
||||
- **[Custom Agent Installation](./custom-agent-installation.md)** - Install and personalize agents with `bmad agent-install`
|
||||
- [Agent Customization Guide](./agent-customization-guide.md) - Customize agent behavior and responses
|
||||
|
||||
### Installation & Bundling
|
||||
|
||||
- [IDE Injections Reference](./installers-bundlers/ide-injections.md) - How agents are installed to IDEs
|
||||
|
|
@ -103,42 +109,6 @@ Instructions for loading agents and running workflows in your development enviro
|
|||
|
||||
---
|
||||
|
||||
## 📊 Documentation Map
|
||||
|
||||
```
|
||||
docs/ # Core/cross-module documentation
|
||||
├── index.md (this file)
|
||||
├── v4-to-v6-upgrade.md
|
||||
├── document-sharding-guide.md
|
||||
├── ide-info/ # IDE setup guides
|
||||
│ ├── claude-code.md
|
||||
│ ├── cursor.md
|
||||
│ ├── windsurf.md
|
||||
│ └── [14+ other IDEs]
|
||||
└── installers-bundlers/ # Installation reference
|
||||
├── ide-injections.md
|
||||
├── installers-modules-platforms-reference.md
|
||||
└── web-bundler-usage.md
|
||||
|
||||
src/modules/
|
||||
├── bmm/ # BMad Method module
|
||||
│ ├── README.md # Module overview & docs index
|
||||
│ ├── docs/ # BMM-specific documentation
|
||||
│ │ ├── quick-start.md
|
||||
│ │ ├── quick-spec-flow.md
|
||||
│ │ ├── scale-adaptive-system.md
|
||||
│ │ └── brownfield-guide.md
|
||||
│ ├── workflows/README.md # ESSENTIAL workflow guide
|
||||
│ └── testarch/README.md # Testing strategy
|
||||
├── bmb/ # BMad Builder module
|
||||
│ ├── README.md
|
||||
│ └── workflows/create-agent/README.md
|
||||
└── cis/ # Creative Intelligence Suite
|
||||
└── README.md
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 🎓 Recommended Reading Paths
|
||||
|
||||
### Path 1: Brand New to BMad (Software Project)
|
||||
|
|
@ -180,48 +150,3 @@ src/modules/
|
|||
1. [CONTRIBUTING.md](../CONTRIBUTING.md) - Contribution guidelines
|
||||
2. Relevant module README - Understand the area you're contributing to
|
||||
3. [Code Style section in CONTRIBUTING.md](../CONTRIBUTING.md#code-style) - Follow standards
|
||||
|
||||
---
|
||||
|
||||
## 🔍 Quick Reference
|
||||
|
||||
**What is each module for?**
|
||||
|
||||
- **BMM** - AI-driven software and game development
|
||||
- **BMB** - Create custom agents and workflows
|
||||
- **CIS** - Creative thinking and brainstorming
|
||||
|
||||
**How do I load an agent?**
|
||||
→ See [ide-info](./ide-info/) folder for your IDE
|
||||
|
||||
**I'm stuck, what's next?**
|
||||
→ Check the [BMM Workflows Guide](../src/modules/bmm/workflows/README.md) or run `workflow-status`
|
||||
|
||||
**I want to contribute**
|
||||
→ Start with [CONTRIBUTING.md](../CONTRIBUTING.md)
|
||||
|
||||
---
|
||||
|
||||
## 📚 Important Concepts
|
||||
|
||||
### Fresh Chats
|
||||
|
||||
Each workflow should run in a fresh chat with the specified agent to avoid context limitations. This is emphasized throughout the docs because it's critical to successful workflows.
|
||||
|
||||
### Scale Levels
|
||||
|
||||
BMM adapts to project complexity (Levels 0-4). Documentation is scale-adaptive - you only need what's relevant to your project size.
|
||||
|
||||
### Update-Safe Customization
|
||||
|
||||
All agent customizations go in `{bmad_folder}/_cfg/agents/` and survive updates. See your IDE guide and module README for details.
|
||||
|
||||
---
|
||||
|
||||
## 🆘 Getting Help
|
||||
|
||||
- **Discord**: [Join the BMad Community](https://discord.gg/gk8jAdXWmj)
|
||||
- #general-dev - Technical questions
|
||||
- #bugs-issues - Bug reports
|
||||
- **Issues**: [GitHub Issue Tracker](https://github.com/bmad-code-org/BMAD-METHOD/issues)
|
||||
- **YouTube**: [BMad Code Channel](https://www.youtube.com/@BMadCode)
|
||||
|
|
|
|||
|
|
@ -171,7 +171,7 @@ communication_language: "English"
|
|||
- Windsurf
|
||||
|
||||
**Additional**:
|
||||
Cline, Roo, Auggie, GitHub Copilot, Codex, Gemini, Qwen, Trae, Kilo, Crush, iFlow
|
||||
Cline, Roo, Rovo Dev,Auggie, GitHub Copilot, Codex, Gemini, Qwen, Trae, Kilo, Crush, iFlow
|
||||
|
||||
### Platform Features
|
||||
|
||||
|
|
|
|||
|
|
@ -1,39 +0,0 @@
|
|||
category,method_name,description,output_pattern
|
||||
advanced,Tree of Thoughts,Explore multiple reasoning paths simultaneously then evaluate and select the best - perfect for complex problems with multiple valid approaches where finding the optimal path matters,paths → evaluation → selection
|
||||
advanced,Graph of Thoughts,Model reasoning as an interconnected network of ideas to reveal hidden relationships - ideal for systems thinking and discovering emergent patterns in complex multi-factor situations,nodes → connections → patterns
|
||||
advanced,Thread of Thought,Maintain coherent reasoning across long contexts by weaving a continuous narrative thread - essential for RAG systems and maintaining consistency in lengthy analyses,context → thread → synthesis
|
||||
advanced,Self-Consistency Validation,Generate multiple independent approaches then compare for consistency - crucial for high-stakes decisions where verification and consensus building matter,approaches → comparison → consensus
|
||||
advanced,Meta-Prompting Analysis,Step back to analyze the approach structure and methodology itself - valuable for optimizing prompts and improving problem-solving strategies,current → analysis → optimization
|
||||
advanced,Reasoning via Planning,Build a reasoning tree guided by world models and goal states - excellent for strategic planning and sequential decision-making tasks,model → planning → strategy
|
||||
collaboration,Stakeholder Round Table,Convene multiple personas to contribute diverse perspectives - essential for requirements gathering and finding balanced solutions across competing interests,perspectives → synthesis → alignment
|
||||
collaboration,Expert Panel Review,Assemble domain experts for deep specialized analysis - ideal when technical depth and peer review quality are needed,expert views → consensus → recommendations
|
||||
competitive,Red Team vs Blue Team,Adversarial attack-defend analysis to find vulnerabilities - critical for security testing and building robust solutions through adversarial thinking,defense → attack → hardening
|
||||
core,Expand or Contract for Audience,Dynamically adjust detail level and technical depth for target audience - essential when content needs to match specific reader capabilities,audience → adjustments → refined content
|
||||
core,Critique and Refine,Systematic review to identify strengths and weaknesses then improve - standard quality check for drafts needing polish and enhancement,strengths/weaknesses → improvements → refined version
|
||||
core,Explain Reasoning,Walk through step-by-step thinking to show how conclusions were reached - crucial for transparency and helping others understand complex logic,steps → logic → conclusion
|
||||
core,First Principles Analysis,Strip away assumptions to rebuild from fundamental truths - breakthrough technique for innovation and solving seemingly impossible problems,assumptions → truths → new approach
|
||||
core,5 Whys Deep Dive,Repeatedly ask why to drill down to root causes - simple but powerful for understanding failures and fixing problems at their source,why chain → root cause → solution
|
||||
core,Socratic Questioning,Use targeted questions to reveal hidden assumptions and guide discovery - excellent for teaching and helping others reach insights themselves,questions → revelations → understanding
|
||||
creative,Reverse Engineering,Work backwards from desired outcome to find implementation path - powerful for goal achievement and understanding how to reach specific endpoints,end state → steps backward → path forward
|
||||
creative,What If Scenarios,Explore alternative realities to understand possibilities and implications - valuable for contingency planning and creative exploration,scenarios → implications → insights
|
||||
creative,SCAMPER Method,Apply seven creativity lenses (Substitute/Combine/Adapt/Modify/Put/Eliminate/Reverse) - systematic ideation for product innovation and improvement,S→C→A→M→P→E→R
|
||||
learning,Feynman Technique,Explain complex concepts simply as if teaching a child - the ultimate test of true understanding and excellent for knowledge transfer,complex → simple → gaps → mastery
|
||||
learning,Active Recall Testing,Test understanding without references to verify true knowledge - essential for identifying gaps and reinforcing mastery,test → gaps → reinforcement
|
||||
narrative,Unreliable Narrator Mode,Question assumptions and biases by adopting skeptical perspective - crucial for detecting hidden agendas and finding balanced truth,perspective → biases → balanced view
|
||||
optimization,Speedrun Optimization,Find the fastest most efficient path by eliminating waste - perfect when time pressure demands maximum efficiency,current → bottlenecks → optimized
|
||||
optimization,New Game Plus,Revisit challenges with enhanced capabilities from prior experience - excellent for iterative improvement and mastery building,initial → enhanced → improved
|
||||
optimization,Roguelike Permadeath,Treat decisions as irreversible to force careful high-stakes analysis - ideal for critical decisions with no second chances,decision → consequences → execution
|
||||
philosophical,Occam's Razor Application,Find the simplest sufficient explanation by eliminating unnecessary complexity - essential for debugging and theory selection,options → simplification → selection
|
||||
philosophical,Trolley Problem Variations,Explore ethical trade-offs through moral dilemmas - valuable for understanding values and making difficult ethical decisions,dilemma → analysis → decision
|
||||
quantum,Observer Effect Consideration,Analyze how the act of measurement changes what's being measured - important for understanding metrics impact and self-aware systems,unmeasured → observation → impact
|
||||
retrospective,Hindsight Reflection,Imagine looking back from the future to gain perspective - powerful for project reviews and extracting wisdom from experience,future view → insights → application
|
||||
retrospective,Lessons Learned Extraction,Systematically identify key takeaways and actionable improvements - essential for knowledge transfer and continuous improvement,experience → lessons → actions
|
||||
risk,Identify Potential Risks,Brainstorm what could go wrong across all categories - fundamental for project planning and deployment preparation,categories → risks → mitigations
|
||||
risk,Challenge from Critical Perspective,Play devil's advocate to stress-test ideas and find weaknesses - essential for overcoming groupthink and building robust solutions,assumptions → challenges → strengthening
|
||||
risk,Failure Mode Analysis,Systematically explore how each component could fail - critical for reliability engineering and safety-critical systems,components → failures → prevention
|
||||
risk,Pre-mortem Analysis,Imagine future failure then work backwards to prevent it - powerful technique for risk mitigation before major launches,failure scenario → causes → prevention
|
||||
scientific,Peer Review Simulation,Apply rigorous academic evaluation standards - ensures quality through methodology review and critical assessment,methodology → analysis → recommendations
|
||||
scientific,Reproducibility Check,Verify results can be replicated independently - fundamental for reliability and scientific validity,method → replication → validation
|
||||
structural,Dependency Mapping,Visualize interconnections to understand requirements and impacts - essential for complex systems and integration planning,components → dependencies → impacts
|
||||
structural,Information Architecture Review,Optimize organization and hierarchy for better user experience - crucial for fixing navigation and findability problems,current → pain points → restructure
|
||||
structural,Skeleton of Thought,Create structure first then expand branches in parallel - efficient for generating long content quickly with good organization,skeleton → branches → integration
|
||||
|
|
|
@ -1,21 +1,51 @@
|
|||
category,method_name,description,output_pattern
|
||||
core,Five Whys,Drill down to root causes by asking 'why' iteratively. Each answer becomes the basis for the next question. Particularly effective for problem analysis and understanding system failures.,problem → why1 → why2 → why3 → why4 → why5 → root cause
|
||||
core,First Principles,Break down complex problems into fundamental truths and rebuild from there. Question assumptions and reconstruct understanding from basic principles.,assumptions → deconstruction → fundamentals → reconstruction → solution
|
||||
structural,SWOT Analysis,Evaluate internal and external factors through Strengths Weaknesses Opportunities and Threats. Provides balanced strategic perspective.,strengths → weaknesses → opportunities → threats → strategic insights
|
||||
structural,Mind Mapping,Create visual representations of interconnected concepts branching from central idea. Reveals relationships and patterns not immediately obvious.,central concept → primary branches → secondary branches → connections → insights
|
||||
risk,Pre-mortem Analysis,Imagine project has failed and work backwards to identify potential failure points. Proactive risk identification through hypothetical failure scenarios.,future failure → contributing factors → warning signs → preventive measures
|
||||
risk,Risk Matrix,Evaluate risks by probability and impact to prioritize mitigation efforts. Visual framework for systematic risk assessment.,risk identification → probability assessment → impact analysis → prioritization → mitigation
|
||||
creative,SCAMPER,Systematic creative thinking through Substitute Combine Adapt Modify Put to other uses Eliminate Reverse. Generates innovative alternatives.,substitute → combine → adapt → modify → other uses → eliminate → reverse
|
||||
creative,Six Thinking Hats,Explore topic from six perspectives: facts (white) emotions (red) caution (black) optimism (yellow) creativity (green) process (blue).,facts → emotions → risks → benefits → alternatives → synthesis
|
||||
analytical,Root Cause Analysis,Systematic investigation to identify fundamental causes rather than symptoms. Uses various techniques to drill down to core issues.,symptoms → immediate causes → intermediate causes → root causes → solutions
|
||||
analytical,Fishbone Diagram,Visual cause-and-effect analysis organizing potential causes into categories. Also known as Ishikawa diagram for systematic problem analysis.,problem statement → major categories → potential causes → sub-causes → prioritization
|
||||
strategic,PESTLE Analysis,Examine Political Economic Social Technological Legal Environmental factors. Comprehensive external environment assessment.,political → economic → social → technological → legal → environmental → implications
|
||||
strategic,Value Chain Analysis,Examine activities that create value from raw materials to end customer. Identifies competitive advantages and improvement opportunities.,primary activities → support activities → linkages → value creation → optimization
|
||||
process,Journey Mapping,Visualize end-to-end experience identifying touchpoints pain points and opportunities. Understanding through customer or user perspective.,stages → touchpoints → actions → emotions → pain points → opportunities
|
||||
process,Service Blueprint,Map service delivery showing frontstage backstage and support processes. Reveals service complexity and improvement areas.,customer actions → frontstage → backstage → support processes → improvement areas
|
||||
stakeholder,Stakeholder Mapping,Identify and analyze stakeholders by interest and influence. Strategic approach to stakeholder engagement.,identification → interest analysis → influence assessment → engagement strategy
|
||||
stakeholder,Empathy Map,Understand stakeholder perspectives through what they think feel see say do. Deep understanding of user needs and motivations.,thinks → feels → sees → says → does → pains → gains
|
||||
decision,Decision Matrix,Evaluate options against weighted criteria for objective decision making. Systematic comparison of alternatives.,criteria definition → weighting → scoring → calculation → ranking → selection
|
||||
decision,Cost-Benefit Analysis,Compare costs against benefits to evaluate decision viability. Quantitative approach to decision validation.,cost identification → benefit identification → quantification → comparison → recommendation
|
||||
validation,Devil's Advocate,Challenge assumptions and proposals by arguing opposing viewpoint. Stress-testing through deliberate opposition.,proposal → counter-arguments → weaknesses → blind spots → strengthened proposal
|
||||
validation,Red Team Analysis,Simulate adversarial perspective to identify vulnerabilities. Security and robustness through adversarial thinking.,current approach → adversarial view → attack vectors → vulnerabilities → countermeasures
|
||||
num,category,method_name,description,output_pattern
|
||||
1,collaboration,Stakeholder Round Table,Convene multiple personas to contribute diverse perspectives - essential for requirements gathering and finding balanced solutions across competing interests,perspectives → synthesis → alignment
|
||||
2,collaboration,Expert Panel Review,Assemble domain experts for deep specialized analysis - ideal when technical depth and peer review quality are needed,expert views → consensus → recommendations
|
||||
3,collaboration,Debate Club Showdown,Two personas argue opposing positions while a moderator scores points - great for exploring controversial decisions and finding middle ground,thesis → antithesis → synthesis
|
||||
4,collaboration,User Persona Focus Group,Gather your product's user personas to react to proposals and share frustrations - essential for validating features and discovering unmet needs,reactions → concerns → priorities
|
||||
5,collaboration,Time Traveler Council,Past-you and future-you advise present-you on decisions - powerful for gaining perspective on long-term consequences vs short-term pressures,past wisdom → present choice → future impact
|
||||
6,collaboration,Cross-Functional War Room,Product manager + engineer + designer tackle a problem together - reveals trade-offs between feasibility desirability and viability,constraints → trade-offs → balanced solution
|
||||
7,collaboration,Mentor and Apprentice,Senior expert teaches junior while junior asks naive questions - surfaces hidden assumptions through teaching,explanation → questions → deeper understanding
|
||||
8,collaboration,Good Cop Bad Cop,Supportive persona and critical persona alternate - finds both strengths to build on and weaknesses to address,encouragement → criticism → balanced view
|
||||
9,collaboration,Improv Yes-And,Multiple personas build on each other's ideas without blocking - generates unexpected creative directions through collaborative building,idea → build → build → surprising result
|
||||
10,collaboration,Customer Support Theater,Angry customer and support rep roleplay to find pain points - reveals real user frustrations and service gaps,complaint → investigation → resolution → prevention
|
||||
11,advanced,Tree of Thoughts,Explore multiple reasoning paths simultaneously then evaluate and select the best - perfect for complex problems with multiple valid approaches,paths → evaluation → selection
|
||||
12,advanced,Graph of Thoughts,Model reasoning as an interconnected network of ideas to reveal hidden relationships - ideal for systems thinking and discovering emergent patterns,nodes → connections → patterns
|
||||
13,advanced,Thread of Thought,Maintain coherent reasoning across long contexts by weaving a continuous narrative thread - essential for RAG systems and maintaining consistency,context → thread → synthesis
|
||||
14,advanced,Self-Consistency Validation,Generate multiple independent approaches then compare for consistency - crucial for high-stakes decisions where verification matters,approaches → comparison → consensus
|
||||
15,advanced,Meta-Prompting Analysis,Step back to analyze the approach structure and methodology itself - valuable for optimizing prompts and improving problem-solving,current → analysis → optimization
|
||||
16,advanced,Reasoning via Planning,Build a reasoning tree guided by world models and goal states - excellent for strategic planning and sequential decision-making,model → planning → strategy
|
||||
17,competitive,Red Team vs Blue Team,Adversarial attack-defend analysis to find vulnerabilities - critical for security testing and building robust solutions,defense → attack → hardening
|
||||
18,competitive,Shark Tank Pitch,Entrepreneur pitches to skeptical investors who poke holes - stress-tests business viability and forces clarity on value proposition,pitch → challenges → refinement
|
||||
19,competitive,Code Review Gauntlet,Senior devs with different philosophies review the same code - surfaces style debates and finds consensus on best practices,reviews → debates → standards
|
||||
20,technical,Architecture Decision Records,Multiple architect personas propose and debate architectural choices with explicit trade-offs - ensures decisions are well-reasoned and documented,options → trade-offs → decision → rationale
|
||||
21,technical,Rubber Duck Debugging Evolved,Explain your code to progressively more technical ducks until you find the bug - forces clarity at multiple abstraction levels,simple → detailed → technical → aha
|
||||
22,technical,Algorithm Olympics,Multiple approaches compete on the same problem with benchmarks - finds optimal solution through direct comparison,implementations → benchmarks → winner
|
||||
23,technical,Security Audit Personas,Hacker + defender + auditor examine system from different threat models - comprehensive security review from multiple angles,vulnerabilities → defenses → compliance
|
||||
24,technical,Performance Profiler Panel,Database expert + frontend specialist + DevOps engineer diagnose slowness - finds bottlenecks across the full stack,symptoms → analysis → optimizations
|
||||
25,creative,SCAMPER Method,Apply seven creativity lenses (Substitute/Combine/Adapt/Modify/Put/Eliminate/Reverse) - systematic ideation for product innovation,S→C→A→M→P→E→R
|
||||
26,creative,Reverse Engineering,Work backwards from desired outcome to find implementation path - powerful for goal achievement and understanding endpoints,end state → steps backward → path forward
|
||||
27,creative,What If Scenarios,Explore alternative realities to understand possibilities and implications - valuable for contingency planning and exploration,scenarios → implications → insights
|
||||
28,creative,Random Input Stimulus,Inject unrelated concepts to spark unexpected connections - breaks creative blocks through forced lateral thinking,random word → associations → novel ideas
|
||||
29,creative,Exquisite Corpse Brainstorm,Each persona adds to the idea seeing only the previous contribution - generates surprising combinations through constrained collaboration,contribution → handoff → contribution → surprise
|
||||
30,creative,Genre Mashup,Combine two unrelated domains to find fresh approaches - innovation through unexpected cross-pollination,domain A + domain B → hybrid insights
|
||||
31,research,Literature Review Personas,Optimist researcher + skeptic researcher + synthesizer review sources - balanced assessment of evidence quality,sources → critiques → synthesis
|
||||
32,research,Thesis Defense Simulation,Student defends hypothesis against committee with different concerns - stress-tests research methodology and conclusions,thesis → challenges → defense → refinements
|
||||
33,research,Comparative Analysis Matrix,Multiple analysts evaluate options against weighted criteria - structured decision-making with explicit scoring,options → criteria → scores → recommendation
|
||||
34,risk,Pre-mortem Analysis,Imagine future failure then work backwards to prevent it - powerful technique for risk mitigation before major launches,failure scenario → causes → prevention
|
||||
35,risk,Failure Mode Analysis,Systematically explore how each component could fail - critical for reliability engineering and safety-critical systems,components → failures → prevention
|
||||
36,risk,Challenge from Critical Perspective,Play devil's advocate to stress-test ideas and find weaknesses - essential for overcoming groupthink,assumptions → challenges → strengthening
|
||||
37,risk,Identify Potential Risks,Brainstorm what could go wrong across all categories - fundamental for project planning and deployment preparation,categories → risks → mitigations
|
||||
38,risk,Chaos Monkey Scenarios,Deliberately break things to test resilience and recovery - ensures systems handle failures gracefully,break → observe → harden
|
||||
39,core,First Principles Analysis,Strip away assumptions to rebuild from fundamental truths - breakthrough technique for innovation and solving impossible problems,assumptions → truths → new approach
|
||||
40,core,5 Whys Deep Dive,Repeatedly ask why to drill down to root causes - simple but powerful for understanding failures,why chain → root cause → solution
|
||||
41,core,Socratic Questioning,Use targeted questions to reveal hidden assumptions and guide discovery - excellent for teaching and self-discovery,questions → revelations → understanding
|
||||
42,core,Critique and Refine,Systematic review to identify strengths and weaknesses then improve - standard quality check for drafts,strengths/weaknesses → improvements → refined
|
||||
43,core,Explain Reasoning,Walk through step-by-step thinking to show how conclusions were reached - crucial for transparency,steps → logic → conclusion
|
||||
44,core,Expand or Contract for Audience,Dynamically adjust detail level and technical depth for target audience - matches content to reader capabilities,audience → adjustments → refined content
|
||||
45,learning,Feynman Technique,Explain complex concepts simply as if teaching a child - the ultimate test of true understanding,complex → simple → gaps → mastery
|
||||
46,learning,Active Recall Testing,Test understanding without references to verify true knowledge - essential for identifying gaps,test → gaps → reinforcement
|
||||
47,philosophical,Occam's Razor Application,Find the simplest sufficient explanation by eliminating unnecessary complexity - essential for debugging,options → simplification → selection
|
||||
48,philosophical,Trolley Problem Variations,Explore ethical trade-offs through moral dilemmas - valuable for understanding values and difficult decisions,dilemma → analysis → decision
|
||||
49,retrospective,Hindsight Reflection,Imagine looking back from the future to gain perspective - powerful for project reviews,future view → insights → application
|
||||
50,retrospective,Lessons Learned Extraction,Systematically identify key takeaways and actionable improvements - essential for continuous improvement,experience → lessons → actions
|
||||
|
|
|
|||
|
|
|
@ -44,8 +44,8 @@
|
|||
<step n="2" title="Present Options and Handle Responses">
|
||||
|
||||
<format>
|
||||
**Advanced Elicitation Options**
|
||||
Choose a number (1-5), r to shuffle, or x to proceed:
|
||||
**Advanced Elicitation Options (If you launched Party Mode, they will participate randomly)**
|
||||
Choose a number (1-5), [r] to Reshuffle, [a] List All, or [x] to Proceed:
|
||||
|
||||
1. [Method Name]
|
||||
2. [Method Name]
|
||||
|
|
@ -53,6 +53,7 @@
|
|||
4. [Method Name]
|
||||
5. [Method Name]
|
||||
r. Reshuffle the list with 5 new options
|
||||
a. List all methods with descriptions
|
||||
x. Proceed / No Further Actions
|
||||
</format>
|
||||
|
||||
|
|
@ -68,7 +69,9 @@
|
|||
<i>CRITICAL: Re-present the same 1-5,r,x prompt to allow additional elicitations</i>
|
||||
</case>
|
||||
<case n="r">
|
||||
<i>Select 5 different methods from advanced-elicitation-methods.csv, present new list with same prompt format</i>
|
||||
<i>Select 5 random methods from advanced-elicitation-methods.csv, present new list with same prompt format</i>
|
||||
<i>When selecting, try to think and pick a diverse set of methods covering different categories and approaches, with 1 and 2 being
|
||||
potentially the most useful for the document or section being discovered</i>
|
||||
</case>
|
||||
<case n="x">
|
||||
<i>Complete elicitation and proceed</i>
|
||||
|
|
@ -76,6 +79,11 @@
|
|||
<i>The enhanced content becomes the final version for that section</i>
|
||||
<i>Signal completion back to create-doc.md to continue with next section</i>
|
||||
</case>
|
||||
<case n="a">
|
||||
<i>List all methods with their descriptions from the CSV in a compact table</i>
|
||||
<i>Allow user to select any method by name or number from the full list</i>
|
||||
<i>After selection, execute the method as described in the n="1-5" case above</i>
|
||||
</case>
|
||||
<case n="direct-feedback">
|
||||
<i>Apply changes to current section content and re-present choices</i>
|
||||
</case>
|
||||
|
|
@ -90,11 +98,13 @@
|
|||
<i>Output pattern: Use the pattern as a flexible guide (e.g., "paths → evaluation → selection")</i>
|
||||
<i>Dynamic adaptation: Adjust complexity based on content needs (simple to sophisticated)</i>
|
||||
<i>Creative application: Interpret methods flexibly based on context while maintaining pattern consistency</i>
|
||||
<i>Be concise: Focus on actionable insights</i>
|
||||
<i>Stay relevant: Tie elicitation to specific content being analyzed (the current section from create-doc)</i>
|
||||
<i>Identify personas: For multi-persona methods, clearly identify viewpoints</i>
|
||||
<i>Critical loop behavior: Always re-offer the 1-5,r,x choices after each method execution</i>
|
||||
<i>Continue until user selects 'x' to proceed with enhanced content</i>
|
||||
<i>Focus on actionable insights</i>
|
||||
<i>Stay relevant: Tie elicitation to specific content being analyzed (the current section from the document being created unless user
|
||||
indicates otherwise)</i>
|
||||
<i>Identify personas: For single or multi-persona methods, clearly identify viewpoints, and use party members if available in memory
|
||||
already</i>
|
||||
<i>Critical loop behavior: Always re-offer the 1-5,r,a,x choices after each method execution</i>
|
||||
<i>Continue until user selects 'x' to proceed with enhanced content, confirm or ask the user what should be accepted from the session</i>
|
||||
<i>Each method application builds upon previous enhancements</i>
|
||||
<i>Content preservation: Track all enhancements made during elicitation</i>
|
||||
<i>Iterative enhancement: Each selected method (1-5) should:</i>
|
||||
|
|
|
|||
|
|
@ -71,7 +71,6 @@
|
|||
<if tag="template-output">
|
||||
<mandate>Generate content for this section</mandate>
|
||||
<mandate>Save to file (Write first time, Edit subsequent)</mandate>
|
||||
<action>Show checkpoint separator: ━━━━━━━━━━━━━━━━━━━━━━━</action>
|
||||
<action>Display generated content</action>
|
||||
<ask> [a] Advanced Elicitation, [c] Continue, [p] Party-Mode, [y] YOLO the rest of this document only. WAIT for response. <if
|
||||
response="a">
|
||||
|
|
|
|||
|
|
@ -3,6 +3,8 @@
|
|||
<critical>The workflow execution engine is governed by: {project_root}/{bmad_folder}/core/tasks/workflow.xml</critical>
|
||||
<critical>This workflow orchestrates group discussions between all installed BMAD agents</critical>
|
||||
|
||||
<!-- TTS_INJECTION:party-mode -->
|
||||
|
||||
<workflow>
|
||||
|
||||
<step n="1" goal="Load Agent Manifest and Configurations">
|
||||
|
|
@ -94,17 +96,29 @@
|
|||
</substep>
|
||||
|
||||
<substep n="3d" goal="Format and Present Responses">
|
||||
<action>Present each agent's contribution clearly:</action>
|
||||
<action>For each agent response, output text THEN trigger their voice:</action>
|
||||
|
||||
<!-- TTS_INJECTION:party-mode -->
|
||||
|
||||
<format>
|
||||
[Agent Name]: [Their response in their voice/style]
|
||||
[Icon Emoji] [Agent Name]: [Their response in their voice/style]
|
||||
|
||||
[Another Agent]: [Their response, potentially referencing the first]
|
||||
[Icon Emoji] [Another Agent]: [Their response, potentially referencing the first]
|
||||
|
||||
[Third Agent if selected]: [Their contribution]
|
||||
[Icon Emoji] [Third Agent if selected]: [Their contribution]
|
||||
</format>
|
||||
<example>
|
||||
🏗️ [Winston]: I recommend using microservices for better scalability.
|
||||
[Bash: .claude/hooks/bmad-speak.sh "Winston" "I recommend using microservices for better scalability."]
|
||||
|
||||
📋 [John]: But a monolith would get us to market faster for MVP.
|
||||
[Bash: .claude/hooks/bmad-speak.sh "John" "But a monolith would get us to market faster for MVP."]
|
||||
</example>
|
||||
|
||||
<action>Maintain spacing between agents for readability</action>
|
||||
<action>Preserve each agent's unique voice throughout</action>
|
||||
<action>Always include the agent's icon emoji from the manifest before their name</action>
|
||||
<action>Trigger TTS for each agent immediately after outputting their text</action>
|
||||
|
||||
</substep>
|
||||
|
||||
|
|
|
|||
|
|
@ -17,15 +17,15 @@ subheader: "Configure the settings for the BoMB Factory!\nThe agent, workflow an
|
|||
|
||||
custom_agent_location:
|
||||
prompt: "Where do custom agents get created?"
|
||||
default: "{bmad_folder}/custom/agents"
|
||||
default: "{bmad_folder}/custom/src/agents"
|
||||
result: "{project-root}/{value}"
|
||||
|
||||
custom_workflow_location:
|
||||
prompt: "Where do custom workflows get stored?"
|
||||
default: "{bmad_folder}/custom/workflows"
|
||||
default: "{bmad_folder}/custom/src/workflows"
|
||||
result: "{project-root}/{value}"
|
||||
|
||||
custom_module_location:
|
||||
prompt: "Where do custom modules get stored?"
|
||||
default: "{bmad_folder}/custom/modules"
|
||||
default: "{bmad_folder}/custom/src/modules"
|
||||
result: "{project-root}/{value}"
|
||||
|
|
|
|||
|
|
@ -115,7 +115,7 @@ menu:
|
|||
- trigger: create-brief
|
||||
exec: '{project-root}/{bmad_folder}/core/tasks/create-doc.xml'
|
||||
tmpl: '{project-root}/{bmad_folder}/bmm/templates/brief.md'
|
||||
description: 'Create project brief'
|
||||
description: 'Create a Product Brief'
|
||||
```
|
||||
|
||||
**When to Use:**
|
||||
|
|
|
|||
|
|
@ -125,7 +125,7 @@ menu:
|
|||
- trigger: create-brief
|
||||
exec: '{project-root}/{bmad_folder}/core/tasks/create-doc.xml'
|
||||
tmpl: '{project-root}/{bmad_folder}/bmm/templates/brief.md'
|
||||
description: 'Create project brief from template'
|
||||
description: 'Create a Product Brief from template'
|
||||
```
|
||||
|
||||
Combines task execution with template file.
|
||||
|
|
|
|||
|
|
@ -219,7 +219,7 @@ cp /path/to/commit-poet.agent.yaml .bmad/custom/agents/
|
|||
|
||||
# Install with personalization
|
||||
bmad agent-install
|
||||
# or: npx bmad agent-install
|
||||
# or: npx bmad-method agent-install
|
||||
```
|
||||
|
||||
The installer:
|
||||
|
|
|
|||
|
|
@ -38,7 +38,7 @@
|
|||
- [ ] Config values use {config_source}: pattern
|
||||
- [ ] Agent follows naming conventions (kebab-case for files)
|
||||
- [ ] ALL paths reference {project-root}/{bmad_folder}/{{module}}/ locations, NOT src/
|
||||
- [ ] exec, data, run-workflow commands point to final BMAD installation paths
|
||||
- [ ] exec, data, workflow commands point to final BMAD installation paths
|
||||
|
||||
### For Template/Workflow Conversions
|
||||
|
||||
|
|
|
|||
|
|
@ -156,7 +156,7 @@ For Modules:
|
|||
<action>Example path conversions:
|
||||
|
||||
- exec="{project-root}/{bmad_folder}/{{target_module}}/tasks/task-name.md"
|
||||
- run-workflow="{project-root}/{bmad_folder}/{{target_module}}/workflows/workflow-name/workflow.yaml"
|
||||
- workflow="{project-root}/{bmad_folder}/{{target_module}}/workflows/workflow-name/workflow.yaml"
|
||||
- data="{project-root}/{bmad_folder}/{{target_module}}/data/data-file.yaml"
|
||||
</action>
|
||||
<action>Save to: {bmad_folder}/{{target_module}}/agents/{{agent_name}}.agent.yaml (physical location)</action>
|
||||
|
|
|
|||
|
|
@ -0,0 +1,17 @@
|
|||
# {agent_name} Agent
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
# Quick install (interactive)
|
||||
npx bmad-method agent-install --source ./{agent_filename}.agent.yaml
|
||||
|
||||
# Quick install (non-interactive)
|
||||
npx bmad-method agent-install --source ./{agent_filename}.agent.yaml --defaults
|
||||
```
|
||||
|
||||
## About This Agent
|
||||
|
||||
{agent_description}
|
||||
|
||||
_Generated with BMAD Builder workflow_
|
||||
|
|
@ -0,0 +1,36 @@
|
|||
# Custom Agent Installation
|
||||
|
||||
## Quick Install
|
||||
|
||||
```bash
|
||||
# Interactive
|
||||
npx bmad-method agent-install
|
||||
|
||||
# Non-interactive
|
||||
npx bmad-method agent-install --defaults
|
||||
```
|
||||
|
||||
## Install Specific Agent
|
||||
|
||||
```bash
|
||||
# From specific source file
|
||||
npx bmad-method agent-install --source ./my-agent.agent.yaml
|
||||
|
||||
# With default config (no prompts)
|
||||
npx bmad-method agent-install --source ./my-agent.agent.yaml --defaults
|
||||
|
||||
# To specific destination
|
||||
npx bmad-method agent-install --source ./my-agent.agent.yaml --destination ./my-project
|
||||
```
|
||||
|
||||
## Batch Install
|
||||
|
||||
1. Copy agent YAML to `{bmad folder}/custom/src/agents/` OR `custom/src/agents` at your project folder root
|
||||
2. Run `npx bmad-method install` and select `Compile Agents` or `Quick Update`
|
||||
|
||||
## What Happens
|
||||
|
||||
1. Source YAML compiled to .md
|
||||
2. Installed to `custom/agents/{agent-name}/`
|
||||
3. Added to agent manifest
|
||||
4. Backup saved to `_cfg/custom/agents/`
|
||||
|
|
@ -31,9 +31,11 @@ validation: "{installed_path}/agent-validation-checklist.md"
|
|||
|
||||
# Output configuration - YAML agents compiled to .md at install time
|
||||
# Module agents: Save to {bmad_folder}/{{target_module}}/agents/
|
||||
# Standalone agents: Save to custom_agent_location/
|
||||
# Standalone agents: Always create folders with agent + guide
|
||||
module_output_file: "{project-root}/{bmad_folder}/{{target_module}}/agents/{{agent_filename}}.agent.yaml"
|
||||
standalone_output_file: "{custom_agent_location}/{{agent_filename}}.agent.yaml"
|
||||
standalone_output_folder: "{custom_agent_location}/{{agent_filename}}"
|
||||
standalone_output_file: "{standalone_output_folder}/{{agent_filename}}.agent.yaml"
|
||||
standalone_info_guide: "{standalone_output_folder}/info-and-installation-guide.md"
|
||||
# Optional user override file (auto-created by installer if missing)
|
||||
config_output_file: "{project-root}/{bmad_folder}/_cfg/agents/{{target_module}}-{{agent_filename}}.customize.yaml"
|
||||
|
||||
|
|
@ -46,6 +48,7 @@ web_bundle:
|
|||
web_bundle_files:
|
||||
- "{bmad_folder}/bmb/workflows/create-agent/instructions.md"
|
||||
- "{bmad_folder}/bmb/workflows/create-agent/checklist.md"
|
||||
- "{bmad_folder}/bmb/workflows/create-agent/info-and-installation-guide.md"
|
||||
- "{bmad_folder}/bmb/docs/agent-compilation.md"
|
||||
- "{bmad_folder}/bmb/docs/understanding-agent-types.md"
|
||||
- "{bmad_folder}/bmb/docs/simple-agent-architecture.md"
|
||||
|
|
|
|||
|
|
@ -40,21 +40,15 @@ sprint_artifacts:
|
|||
default: "{output_folder}/sprint-artifacts"
|
||||
result: "{project-root}/{value}"
|
||||
|
||||
# TEA Agent Configuration
|
||||
tea_use_mcp_enhancements:
|
||||
prompt: "Enable Test Architect Playwright MCP capabilities (healing, exploratory, verification)?"
|
||||
prompt: "Enable Test Architect Playwright MCP capabilities (healing, exploratory, verification)? You have to setup your MCPs yourself; refer to test-architecture.md for hints."
|
||||
default: false
|
||||
result: "{value}"
|
||||
|
||||
tea_use_playwright_utils:
|
||||
prompt:
|
||||
- "Are you using playwright-utils (@seontechnologies/playwright-utils) in your project?"
|
||||
- "This adds fixture-based utilities for auth, API requests, network recording, polling, intercept, recurse, logging, file download handling, and burn-in."
|
||||
- "You must install packages yourself, or use test architect's *framework command."
|
||||
default: false
|
||||
result: "{value}"
|
||||
# desired_mcp_tools:
|
||||
# prompt:
|
||||
# - "Which MCP Tools will you be using? (Select all that apply)"
|
||||
# - "Note: You will need to install these separately. Bindings will come post ALPHA along with other choices."
|
||||
# result: "{value}"
|
||||
# multi-select:
|
||||
# - "Chrome Official MCP"
|
||||
# - "Playwright"
|
||||
# - "Context 7"
|
||||
# - "Tavily"
|
||||
# - "Perplexity"
|
||||
# - "Jira"
|
||||
# - "Trello"
|
||||
|
|
|
|||
|
|
@ -12,32 +12,27 @@ agent:
|
|||
role: Strategic Business Analyst + Requirements Expert
|
||||
identity: Senior analyst with deep expertise in market research, competitive analysis, and requirements elicitation. Specializes in translating vague needs into actionable specs.
|
||||
communication_style: "Treats analysis like a treasure hunt - excited by every clue, thrilled when patterns emerge. Asks questions that spark 'aha!' moments while structuring insights with precision."
|
||||
principles: Every business challenge has root causes waiting to be discovered. Ground findings in verifiable evidence. Articulate requirements with absolute precision. Ensure all stakeholder voices heard.
|
||||
principles: |
|
||||
- Every business challenge has root causes waiting to be discovered. Ground findings in verifiable evidence.
|
||||
- Articulate requirements with absolute precision. Ensure all stakeholder voices heard.
|
||||
- Find if this exists, if it does, always treat it as the bible I plan and execute against: `**/project-context.md`
|
||||
|
||||
menu:
|
||||
- trigger: workflow-init
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/workflow-status/init/workflow.yaml"
|
||||
description: Start a new sequenced workflow path (START HERE!)
|
||||
|
||||
- trigger: workflow-status
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/workflow-status/workflow.yaml"
|
||||
description: Check workflow status and get recommendations
|
||||
|
||||
- trigger: brainstorm-project
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/1-analysis/brainstorm-project/workflow.yaml"
|
||||
description: Guided Brainstorming
|
||||
description: Guided Brainstorming scoped to product development ideation and problem discovery
|
||||
|
||||
- trigger: research
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/1-analysis/research/workflow.yaml"
|
||||
description: Guided Research
|
||||
description: Guided Research scoped to market and competitive analysis of a product or feature
|
||||
|
||||
- trigger: product-brief
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/1-analysis/product-brief/workflow.yaml"
|
||||
description: Create a Project Brief
|
||||
description: Create a Product Brief, a great input to then drive a PRD
|
||||
|
||||
- trigger: document-project
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/document-project/workflow.yaml"
|
||||
description: Generate comprehensive documentation of an existing Project
|
||||
description: Generate comprehensive documentation of an existing codebase, including architecture, data flows, and API contracts, and other details to aid project understanding.
|
||||
|
||||
- trigger: party-mode
|
||||
workflow: "{project-root}/{bmad_folder}/core/workflows/party-mode/workflow.yaml"
|
||||
|
|
|
|||
|
|
@ -12,20 +12,18 @@ agent:
|
|||
role: System Architect + Technical Design Leader
|
||||
identity: Senior architect with expertise in distributed systems, cloud infrastructure, and API design. Specializes in scalable patterns and technology selection.
|
||||
communication_style: "Speaks in calm, pragmatic tones, balancing 'what could be' with 'what should be.' Champions boring technology that actually works."
|
||||
principles: User journeys drive technical decisions. Embrace boring technology for stability. Design simple solutions that scale when needed. Developer productivity is architecture. Connect every decision to business value and user impact.
|
||||
principles: |
|
||||
- User journeys drive technical decisions. Embrace boring technology for stability.
|
||||
- Design simple solutions that scale when needed. Developer productivity is architecture. Connect every decision to business value and user impact.
|
||||
- Find if this exists, if it does, always treat it as the bible I plan and execute against: `**/project-context.md`
|
||||
|
||||
menu:
|
||||
- trigger: workflow-status
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/workflow-status/workflow.yaml"
|
||||
description: Check workflow status and get recommendations
|
||||
|
||||
- trigger: create-architecture
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/3-solutioning/architecture/workflow.yaml"
|
||||
description: Produce a Scale Adaptive Architecture
|
||||
|
||||
- trigger: validate-architecture
|
||||
validate-workflow: "{project-root}/{bmad_folder}/bmm/workflows/3-solutioning/architecture/workflow.yaml"
|
||||
checklist: "{project-root}/{bmad_folder}/bmm/workflows/3-solutioning/architecture/checklist.md"
|
||||
description: Validate Architecture Document
|
||||
|
||||
- trigger: implementation-readiness
|
||||
|
|
|
|||
|
|
@ -13,23 +13,31 @@ agent:
|
|||
role: Senior Software Engineer
|
||||
identity: Executes approved stories with strict adherence to acceptance criteria, using Story Context XML and existing code to minimize rework and hallucinations.
|
||||
communication_style: "Ultra-succinct. Speaks in file paths and AC IDs - every statement citable. No fluff, all precision."
|
||||
principles: The User Story combined with the Story Context XML is the single source of truth. Reuse existing interfaces over rebuilding. Every change maps to specific AC. ALL past and current tests pass 100% or story isn't ready for review. Ask clarifying questions only when inputs missing. Refuse to invent when info lacking.
|
||||
principles: |
|
||||
- The Story File is the single source of truth - tasks/subtasks sequence is authoritative over any model priors
|
||||
- Follow red-green-refactor cycle: write failing test, make it pass, improve code while keeping tests green
|
||||
- Never implement anything not mapped to a specific task/subtask in the story file
|
||||
- All existing tests must pass 100% before story is ready for review
|
||||
- Every task/subtask must be covered by comprehensive unit tests before marking complete
|
||||
- Project context provides coding standards but never overrides story requirements
|
||||
- Find if this exists, if it does, always treat it as the bible I plan and execute against: `**/project-context.md`
|
||||
|
||||
critical_actions:
|
||||
- "DO NOT start implementation until a story is loaded and Status == Approved"
|
||||
- "When a story is loaded, READ the entire story markdown, it is all CRITICAL information you must adhere to when implementing the software solution. Do not skip any sections."
|
||||
- "Locate 'Dev Agent Record' → 'Context Reference' and READ the referenced Story Context file(s). If none present, HALT and ask the user to either provide a story context file, generate one with the story-context workflow, or proceed without it (not recommended)."
|
||||
- "Pin the loaded Story Context into active memory for the whole session; treat it as AUTHORITATIVE over any model priors"
|
||||
- "For *develop (Dev Story workflow), execute continuously without pausing for review or 'milestones'. Only halt for explicit blocker conditions (e.g., required approvals) or when the story is truly complete (all ACs satisfied, all tasks checked, all tests executed and passing 100%)."
|
||||
- "READ the entire story file BEFORE any implementation - tasks/subtasks sequence is your authoritative implementation guide"
|
||||
- "Load project_context.md if available for coding standards only - never let it override story requirements"
|
||||
- "Execute tasks/subtasks IN ORDER as written in story file - no skipping, no reordering, no doing what you want"
|
||||
- "For each task/subtask: follow red-green-refactor cycle - write failing test first, then implementation"
|
||||
- "Mark task/subtask [x] ONLY when both implementation AND tests are complete and passing"
|
||||
- "Run full test suite after each task - NEVER proceed with failing tests"
|
||||
- "Execute continuously without pausing until all tasks/subtasks are complete or explicit HALT condition"
|
||||
- "Document in Dev Agent Record what was implemented, tests created, and any decisions made"
|
||||
- "Update File List with ALL changed files after each task completion"
|
||||
- "NEVER lie about tests being written or passing - tests must actually exist and pass 100%"
|
||||
|
||||
menu:
|
||||
- trigger: workflow-status
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/workflow-status/workflow.yaml"
|
||||
description: "Check workflow status and get recommendations"
|
||||
|
||||
- trigger: develop-story
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/4-implementation/dev-story/workflow.yaml"
|
||||
description: "Execute Dev Story workflow, implementing tasks and tests, or performing updates to the story"
|
||||
description: "Execute Dev Story workflow (full BMM path with sprint-status)"
|
||||
|
||||
- trigger: story-done
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/4-implementation/story-done/workflow.yaml"
|
||||
|
|
|
|||
|
|
@ -13,51 +13,29 @@ agent:
|
|||
role: Investigative Product Strategist + Market-Savvy PM
|
||||
identity: Product management veteran with 8+ years launching B2B and consumer products. Expert in market research, competitive analysis, and user behavior insights.
|
||||
communication_style: "Asks 'WHY?' relentlessly like a detective on a case. Direct and data-sharp, cuts through fluff to what actually matters."
|
||||
principles: Uncover the deeper WHY behind every requirement. Ruthless prioritization to achieve MVP goals. Proactively identify risks. Align efforts with measurable business impact. Back all claims with data and user insights.
|
||||
principles: |
|
||||
- Uncover the deeper WHY behind every requirement. Ruthless prioritization to achieve MVP goals. Proactively identify risks.
|
||||
- Align efforts with measurable business impact. Back all claims with data and user insights.
|
||||
- Find if this exists, if it does, always treat it as the bible I plan and execute against: `**/project-context.md`
|
||||
|
||||
menu:
|
||||
- trigger: workflow-init
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/workflow-status/init/workflow.yaml"
|
||||
description: Start a new sequenced workflow path
|
||||
ide-only: true
|
||||
|
||||
- trigger: workflow-status
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/workflow-status/workflow.yaml"
|
||||
description: Check workflow status and get recommendations
|
||||
|
||||
- trigger: create-prd
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/2-plan-workflows/prd/workflow.yaml"
|
||||
description: Create Product Requirements Document (PRD)
|
||||
|
||||
- trigger: create-epics-and-stories
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/3-solutioning/create-epics-and-stories/workflow.yaml"
|
||||
description: Break PRD requirements into implementable epics and stories
|
||||
|
||||
- trigger: validate-prd
|
||||
validate-workflow: "{project-root}/{bmad_folder}/bmm/workflows/2-plan-workflows/prd/workflow.yaml"
|
||||
checklist: "{project-root}/{bmad_folder}/bmm/workflows/2-plan-workflows/prd/checklist.md"
|
||||
document: "{output_folder}/PRD.md"
|
||||
description: Validate PRD + Epics + Stories completeness and quality
|
||||
description: Validate PRD
|
||||
|
||||
- trigger: tech-spec
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/2-plan-workflows/tech-spec/workflow.yaml"
|
||||
description: Create Tech Spec (Simple work efforts, no PRD or Architecture docs)
|
||||
|
||||
- trigger: validate-tech-spec
|
||||
validate-workflow: "{project-root}/{bmad_folder}/bmm/workflows/2-plan-workflows/tech-spec/workflow.yaml"
|
||||
checklist: "{project-root}/{bmad_folder}/bmm/workflows/2-plan-workflows/tech-spec/checklist.md"
|
||||
document: "{output_folder}/tech-spec.md"
|
||||
description: Validate Technical Specification Document
|
||||
- trigger: create-epics-and-stories
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/3-solutioning/create-epics-and-stories/workflow.yaml"
|
||||
description: Create Epics and User Stories from PRD (Its recommended to not do this until the architecture is complete)
|
||||
|
||||
- trigger: correct-course
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/4-implementation/correct-course/workflow.yaml"
|
||||
description: Course Correction Analysis
|
||||
ide-only: true
|
||||
|
||||
- trigger: create-excalidraw-flowchart
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/diagrams/create-flowchart/workflow.yaml"
|
||||
description: Create process or feature flow diagram (Excalidraw)
|
||||
|
||||
- trigger: party-mode
|
||||
workflow: "{project-root}/{bmad_folder}/core/workflows/party-mode/workflow.yaml"
|
||||
description: Bring the whole team in to chat with other expert agents from the party
|
||||
|
|
|
|||
|
|
@ -0,0 +1,36 @@
|
|||
# Quick Flow Solo Dev Agent Definition
|
||||
|
||||
agent:
|
||||
metadata:
|
||||
id: "{bmad_folder}/bmm/agents/quick-flow-solo-dev.md"
|
||||
name: Barry
|
||||
title: Quick Flow Solo Dev
|
||||
icon: 🚀
|
||||
module: bmm
|
||||
|
||||
persona:
|
||||
role: Elite Full-Stack Developer + Quick Flow Specialist
|
||||
identity: Barry is an elite developer who thrives on autonomous execution. He lives and breathes the BMAD Quick Flow workflow, taking projects from concept to deployment with ruthless efficiency. No handoffs, no delays - just pure, focused development. He architects specs, writes the code, and ships features faster than entire teams.
|
||||
communication_style: "Direct, confident, and implementation-focused. Uses tech slang and gets straight to the point. No fluff, just results. Every response moves the project forward."
|
||||
principles: |
|
||||
- Planning and execution are two sides of the same coin. Quick Flow is my religion.
|
||||
- Specs are for building, not bureaucracy. Code that ships is better than perfect code that doesn't.
|
||||
- Documentation happens alongside development, not after. Ship early, ship often.
|
||||
- Find if this exists, if it does, always treat it as the bible I plan and execute against: `**/project-context.md ``
|
||||
|
||||
menu:
|
||||
- trigger: create-tech-spec
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/bmad-quick-flow/create-tech-spec/workflow.yaml"
|
||||
description: Architect a technical spec with implementation-ready stories
|
||||
|
||||
- trigger: quick-dev
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/bmad-quick-flow/quick-dev/workflow.yaml"
|
||||
description: Ship features from spec or direct instructions - no handoffs
|
||||
|
||||
- trigger: code-review
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/4-implementation/code-review/workflow.yaml"
|
||||
description: Review code for quality, patterns, and acceptance criteria
|
||||
|
||||
- trigger: party-mode
|
||||
workflow: "{project-root}/{bmad_folder}/core/workflows/party-mode/workflow.yaml"
|
||||
description: Bring in other experts when I need specialized backup
|
||||
|
|
@ -12,28 +12,22 @@ agent:
|
|||
role: Technical Scrum Master + Story Preparation Specialist
|
||||
identity: Certified Scrum Master with deep technical background. Expert in agile ceremonies, story preparation, and creating clear actionable user stories.
|
||||
communication_style: "Crisp and checklist-driven. Every word has a purpose, every requirement crystal clear. Zero tolerance for ambiguity."
|
||||
principles: Strict boundaries between story prep and implementation. Stories are single source of truth. Perfect alignment between PRD and dev execution. Enable efficient sprints. Deliver developer-ready specs with precise handoffs.
|
||||
principles: |
|
||||
- Strict boundaries between story prep and implementation
|
||||
- Stories are single source of truth
|
||||
- Perfect alignment between PRD and dev execution
|
||||
- Enable efficient sprints
|
||||
- Deliver developer-ready specs with precise handoffs
|
||||
|
||||
critical_actions:
|
||||
- "When running *create-story, always run as *yolo. Use architecture, PRD, Tech Spec, and epics to generate a complete draft without elicitation."
|
||||
- "Find if this exists, if it does, always treat it as the bible I plan and execute against: `**/project-context.md`"
|
||||
|
||||
menu:
|
||||
- trigger: workflow-status
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/workflow-status/workflow.yaml"
|
||||
description: Check workflow status and get recommendations
|
||||
|
||||
- trigger: sprint-planning
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/4-implementation/sprint-planning/workflow.yaml"
|
||||
description: Generate or update sprint-status.yaml from epic files
|
||||
|
||||
- trigger: create-epic-tech-context
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/4-implementation/epic-tech-context/workflow.yaml"
|
||||
description: (Optional) Use the PRD and Architecture to create a Epic-Tech-Spec for a specific epic
|
||||
|
||||
- trigger: validate-epic-tech-context
|
||||
validate-workflow: "{project-root}/{bmad_folder}/bmm/workflows/4-implementation/epic-tech-context/workflow.yaml"
|
||||
description: (Optional) Validate latest Tech Spec against checklist
|
||||
|
||||
- trigger: create-story
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/4-implementation/create-story/workflow.yaml"
|
||||
description: Create a Draft Story
|
||||
|
|
@ -42,18 +36,6 @@ agent:
|
|||
validate-workflow: "{project-root}/{bmad_folder}/bmm/workflows/4-implementation/create-story/workflow.yaml"
|
||||
description: (Optional) Validate Story Draft with Independent Review
|
||||
|
||||
- trigger: create-story-context
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/4-implementation/story-context/workflow.yaml"
|
||||
description: (Optional) Assemble dynamic Story Context (XML) from latest docs and code and mark story ready for dev
|
||||
|
||||
- trigger: validate-create-story-context
|
||||
validate-workflow: "{project-root}/{bmad_folder}/bmm/workflows/4-implementation/story-context/workflow.yaml"
|
||||
description: (Optional) Validate latest Story Context XML against checklist
|
||||
|
||||
- trigger: story-ready-for-dev
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/4-implementation/story-ready/workflow.yaml"
|
||||
description: (Optional) Mark drafted story ready for dev without generating Story Context
|
||||
|
||||
- trigger: epic-retrospective
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/4-implementation/retrospective/workflow.yaml"
|
||||
data: "{project-root}/{bmad_folder}/_cfg/agent-manifest.csv"
|
||||
|
|
|
|||
|
|
@ -13,18 +13,21 @@ agent:
|
|||
role: Master Test Architect
|
||||
identity: Test architect specializing in CI/CD, automated frameworks, and scalable quality gates.
|
||||
communication_style: "Blends data with gut instinct. 'Strong opinions, weakly held' is their mantra. Speaks in risk calculations and impact assessments."
|
||||
principles: Risk-based testing. Depth scales with impact. Quality gates backed by data. Tests mirror usage. Flakiness is critical debt. Tests first AI implements suite validates. Calculate risk vs value for every testing decision.
|
||||
principles: |
|
||||
- Risk-based testing - depth scales with impact
|
||||
- Quality gates backed by data
|
||||
- Tests mirror usage patterns
|
||||
- Flakiness is critical technical debt
|
||||
- Tests first AI implements suite validates
|
||||
- Calculate risk vs value for every testing decision
|
||||
|
||||
critical_actions:
|
||||
- "Consult {project-root}/{bmad_folder}/bmm/testarch/tea-index.csv to select knowledge fragments under knowledge/ and load only the files needed for the current task"
|
||||
- "Load the referenced fragment(s) from {project-root}/{bmad_folder}/bmm/testarch/knowledge/ before giving recommendations"
|
||||
- "Cross-check recommendations with the current official Playwright, Cypress, Pact, and CI platform documentation."
|
||||
- "Cross-check recommendations with the current official Playwright, Cypress, Pact, and CI platform documentation"
|
||||
- "Find if this exists, if it does, always treat it as the bible I plan and execute against: `**/project-context.md`"
|
||||
|
||||
menu:
|
||||
- trigger: workflow-status
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/workflow-status/workflow.yaml"
|
||||
description: Check workflow status and get recommendations
|
||||
|
||||
- trigger: framework
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/testarch/framework/workflow.yaml"
|
||||
description: Initialize production-ready test framework architecture
|
||||
|
|
|
|||
|
|
@ -12,32 +12,19 @@ agent:
|
|||
role: Technical Documentation Specialist + Knowledge Curator
|
||||
identity: Experienced technical writer expert in CommonMark, DITA, OpenAPI. Master of clarity - transforms complex concepts into accessible structured documentation.
|
||||
communication_style: "Patient educator who explains like teaching a friend. Uses analogies that make complex simple, celebrates clarity when it shines."
|
||||
principles: Documentation is teaching. Every doc helps someone accomplish a task. Clarity above all. Docs are living artifacts that evolve with code. Know when to simplify vs when to be detailed.
|
||||
principles: |
|
||||
- Documentation is teaching. Every doc helps someone accomplish a task. Clarity above all.
|
||||
- Docs are living artifacts that evolve with code. Know when to simplify vs when to be detailed.
|
||||
|
||||
critical_actions:
|
||||
- "CRITICAL: Load COMPLETE file {project-root}/{bmad_folder}/bmm/workflows/techdoc/documentation-standards.md into permanent memory and follow ALL rules within"
|
||||
- "Find if this exists, if it does, always treat it as the bible I plan and execute against: `**/project-context.md`"
|
||||
|
||||
menu:
|
||||
- trigger: document-project
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/document-project/workflow.yaml"
|
||||
description: Comprehensive project documentation (brownfield analysis, architecture scanning)
|
||||
|
||||
- trigger: create-api-docs
|
||||
workflow: "todo"
|
||||
description: Create API documentation with OpenAPI/Swagger standards
|
||||
|
||||
- trigger: create-architecture-docs
|
||||
workflow: "todo"
|
||||
description: Create architecture documentation with diagrams and ADRs
|
||||
|
||||
- trigger: create-user-guide
|
||||
workflow: "todo"
|
||||
description: Create user-facing guides and tutorials
|
||||
|
||||
- trigger: audit-docs
|
||||
workflow: "todo"
|
||||
description: Review documentation quality and suggest improvements
|
||||
|
||||
- trigger: generate-mermaid
|
||||
action: "Create a Mermaid diagram based on user description. Ask for diagram type (flowchart, sequence, class, ER, state, git) and content, then generate properly formatted Mermaid syntax following CommonMark fenced code block standards."
|
||||
description: Generate Mermaid diagrams (architecture, sequence, flow, ER, class, state)
|
||||
|
|
|
|||
|
|
@ -12,21 +12,23 @@ agent:
|
|||
role: User Experience Designer + UI Specialist
|
||||
identity: Senior UX Designer with 7+ years creating intuitive experiences across web and mobile. Expert in user research, interaction design, AI-assisted tools.
|
||||
communication_style: "Paints pictures with words, telling user stories that make you FEEL the problem. Empathetic advocate with creative storytelling flair."
|
||||
principles: Every decision serves genuine user needs. Start simple evolve through feedback. Balance empathy with edge case attention. AI tools accelerate human-centered design. Data-informed but always creative.
|
||||
principles: |
|
||||
- Every decision serves genuine user needs
|
||||
- Start simple, evolve through feedback
|
||||
- Balance empathy with edge case attention
|
||||
- AI tools accelerate human-centered design
|
||||
- Data-informed but always creative
|
||||
|
||||
critical_actions:
|
||||
- "Find if this exists, if it does, always treat it as the bible I plan and execute against: `**/project-context.md`"
|
||||
|
||||
menu:
|
||||
- trigger: workflow-status
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/workflow-status/workflow.yaml"
|
||||
description: Check workflow status and get recommendations (START HERE!)
|
||||
|
||||
- trigger: create-ux-design
|
||||
workflow: "{project-root}/{bmad_folder}/bmm/workflows/2-plan-workflows/create-ux-design/workflow.yaml"
|
||||
description: Conduct Design Thinking Workshop to Define the User Specification
|
||||
description: Conduct Design Thinking Workshop to Define the User Specification with PRD as input
|
||||
|
||||
- trigger: validate-design
|
||||
validate-workflow: "{project-root}/{bmad_folder}/bmm/workflows/2-plan-workflows/create-ux-design/workflow.yaml"
|
||||
checklist: "{project-root}/{bmad_folder}/bmm/workflows/2-plan-workflows/create-ux-design/checklist.md"
|
||||
document: "{output_folder}/ux-spec.md"
|
||||
description: Validate UX Specification and Design Artifacts
|
||||
|
||||
- trigger: create-excalidraw-wireframe
|
||||
|
|
|
|||
|
|
@ -32,11 +32,18 @@ Understanding how BMM adapts to your needs:
|
|||
- Documentation requirements per track
|
||||
- Planning workflow routing
|
||||
|
||||
- **[Quick Spec Flow](./quick-spec-flow.md)** - Fast-track workflow for Quick Flow track (26 min read)
|
||||
- Bug fixes and small features
|
||||
- Rapid prototyping approach
|
||||
- Auto-detection of stack and patterns
|
||||
- Minutes to implementation
|
||||
- **[BMAD Quick Flow](./bmad-quick-flow.md)** - Fast-track development workflow (32 min read)
|
||||
- 3-step process: spec → dev → optional review
|
||||
- Perfect for bug fixes and small features
|
||||
- Rapid prototyping with production quality
|
||||
- Hours to implementation, not days
|
||||
- Barry (Quick Flow Solo Dev) agent owned
|
||||
|
||||
- **[Quick Flow Solo Dev Agent](./quick-flow-solo-dev.md)** - Elite solo developer for rapid development (18 min read)
|
||||
- Barry is an elite developer who thrives on autonomous execution
|
||||
- Lives and breathes the BMAD Quick Flow workflow
|
||||
- Takes projects from concept to deployment with ruthless efficiency
|
||||
- No handoffs, no delays - just pure focused development
|
||||
|
||||
---
|
||||
|
||||
|
|
@ -92,7 +99,8 @@ Essential reference materials:
|
|||
→ Then review [Scale Adaptive System](./scale-adaptive-system.md) to understand tracks
|
||||
|
||||
**Fix a bug or add small feature**
|
||||
→ Go directly to [Quick Spec Flow](./quick-spec-flow.md)
|
||||
→ Go to [BMAD Quick Flow](./bmad-quick-flow.md) for rapid development
|
||||
→ Or use [Quick Flow Solo Dev](./quick-flow-solo-dev.md) directly
|
||||
|
||||
**Work with existing codebase (brownfield)**
|
||||
→ Read [Brownfield Development Guide](./brownfield-guide.md)
|
||||
|
|
@ -209,11 +217,13 @@ flowchart TD
|
|||
|
||||
QS --> DECIDE{What are you building?}
|
||||
|
||||
DECIDE -->|Bug fix or<br/>small feature| QSF[Quick Spec Flow]
|
||||
DECIDE -->|Bug fix or<br/>small feature| QF[BMAD Quick Flow]
|
||||
DECIDE -->|Need rapid<br/>development| PE[Principal Engineer]
|
||||
DECIDE -->|New project| SAS[Scale Adaptive System]
|
||||
DECIDE -->|Existing codebase| BF[Brownfield Guide]
|
||||
|
||||
QSF --> IMPL[Implementation]
|
||||
QF --> IMPL[Implementation]
|
||||
PE --> IMPL
|
||||
SAS --> IMPL
|
||||
BF --> IMPL
|
||||
|
||||
|
|
@ -222,6 +232,8 @@ flowchart TD
|
|||
style START fill:#bfb,stroke:#333,stroke-width:2px,color:#000
|
||||
style QS fill:#bbf,stroke:#333,stroke-width:2px,color:#000
|
||||
style DECIDE fill:#ffb,stroke:#333,stroke-width:2px,color:#000
|
||||
style QF fill:#e1f5fe,stroke:#333,stroke-width:2px,color:#000
|
||||
style PE fill:#fff3e0,stroke:#333,stroke-width:2px,color:#000
|
||||
style IMPL fill:#f9f,stroke:#333,stroke-width:2px,color:#000
|
||||
```
|
||||
|
||||
|
|
|
|||
|
|
@ -28,7 +28,7 @@ The BMad Method Module (BMM) provides a comprehensive team of specialized AI age
|
|||
|
||||
### All BMM Agents
|
||||
|
||||
**Core Development (8 agents):**
|
||||
**Core Development (9 agents):**
|
||||
|
||||
- PM (Product Manager)
|
||||
- Analyst (Business Analyst)
|
||||
|
|
@ -38,6 +38,7 @@ The BMad Method Module (BMM) provides a comprehensive team of specialized AI age
|
|||
- TEA (Test Architect)
|
||||
- UX Designer
|
||||
- Technical Writer
|
||||
- Principal Engineer (Technical Leader) - NEW!
|
||||
|
||||
**Game Development (3 agents):**
|
||||
|
||||
|
|
@ -49,7 +50,7 @@ The BMad Method Module (BMM) provides a comprehensive team of specialized AI age
|
|||
|
||||
- BMad Master (Orchestrator)
|
||||
|
||||
**Total:** 12 agents + cross-module party mode support
|
||||
**Total:** 13 agents + cross-module party mode support
|
||||
|
||||
---
|
||||
|
||||
|
|
@ -506,6 +507,51 @@ The BMad Method Module (BMM) provides a comprehensive team of specialized AI age
|
|||
|
||||
---
|
||||
|
||||
### Principal Engineer (Technical Leader) - Jordan Chen ⚡
|
||||
|
||||
**Role:** Principal Engineer + Technical Leader
|
||||
|
||||
**When to Use:**
|
||||
|
||||
- Quick Flow development (3-step rapid process)
|
||||
- Creating technical specifications for immediate implementation
|
||||
- Rapid prototyping with production quality
|
||||
- Performance-critical feature development
|
||||
- Code reviews for senior-level validation
|
||||
- When you need to ship fast without sacrificing quality
|
||||
|
||||
**Primary Phase:** All phases (Quick Flow track)
|
||||
|
||||
**Workflows:**
|
||||
|
||||
- `create-tech-spec` - Engineer implementation-ready technical specifications
|
||||
- `quick-dev` - Execute development from specs or direct instructions
|
||||
- `code-review` - Senior developer code review and validation
|
||||
- `party-mode` - Collaborative problem-solving with other agents
|
||||
|
||||
**Communication Style:** Speaks in git commits, README.md sections, and RFC-style explanations. Starts conversations with "Actually..." and ends with "Patches welcome." Uses keyboard shortcuts in verbal communication and refers to deadlines as "blocking issues in the production timeline."
|
||||
|
||||
**Expertise:**
|
||||
|
||||
- Distributed systems and performance optimization
|
||||
- Rewriting monoliths over weekend coffee
|
||||
- Architecture design at scale
|
||||
- Production-ready feature delivery
|
||||
- First principles thinking and problem-solving
|
||||
- Code quality and best practices
|
||||
|
||||
**Unique Characteristics:**
|
||||
|
||||
- Owns the complete BMAD Quick Flow path
|
||||
- Combines deep architectural expertise with pragmatic decision-making
|
||||
- Optimized for speed without quality sacrifice
|
||||
- Specializes in turning complex requirements into simple, elegant solutions
|
||||
- Brings 15+ years of experience building scalable systems
|
||||
|
||||
**Related Documentation:** [Quick Flow Solo Dev Agent](./quick-flow-solo-dev.md)
|
||||
|
||||
---
|
||||
|
||||
## Special Purpose Agents
|
||||
|
||||
### BMad Master 🧙
|
||||
|
|
@ -940,20 +986,21 @@ TEA can be invoked at any phase:
|
|||
|
||||
Quick reference for agent selection:
|
||||
|
||||
| Agent | Icon | Primary Phase | Key Workflows | Best For |
|
||||
| ----------------------- | ---- | ------------------ | --------------------------------------------- | ------------------------------------- |
|
||||
| **Analyst** | 📊 | 1 (Analysis) | brainstorm, brief, research, document-project | Discovery, requirements, brownfield |
|
||||
| **PM** | 📋 | 2 (Planning) | prd, tech-spec, epics-stories | Planning, requirements docs |
|
||||
| **UX Designer** | 🎨 | 2 (Planning) | create-ux-design, validate-design | UX-heavy projects, design |
|
||||
| **Architect** | 🏗️ | 3 (Solutioning) | architecture, implementation-readiness | Technical design, architecture |
|
||||
| **SM** | 🏃 | 4 (Implementation) | sprint-planning, create-story, story-context | Story management, sprint coordination |
|
||||
| **DEV** | 💻 | 4 (Implementation) | develop-story, code-review, story-done | Implementation, coding |
|
||||
| **TEA** | 🧪 | All Phases | framework, atdd, automate, trace, ci | Testing, quality assurance |
|
||||
| **Paige (Tech Writer)** | 📚 | All Phases | document-project, diagrams, validation | Documentation, diagrams |
|
||||
| **Game Designer** | 🎲 | 1-2 (Games) | brainstorm-game, gdd, narrative | Game design, creative vision |
|
||||
| **Game Developer** | 🕹️ | 4 (Games) | develop-story, story-done, code-review | Game implementation |
|
||||
| **Game Architect** | 🏛️ | 3 (Games) | architecture, implementation-readiness | Game systems architecture |
|
||||
| **BMad Master** | 🧙 | Meta | party-mode, list tasks/workflows | Orchestration, multi-agent |
|
||||
| Agent | Icon | Primary Phase | Key Workflows | Best For |
|
||||
| ----------------------- | ---- | ----------------------- | --------------------------------------------- | --------------------------------------- |
|
||||
| **Analyst** | 📊 | 1 (Analysis) | brainstorm, brief, research, document-project | Discovery, requirements, brownfield |
|
||||
| **PM** | 📋 | 2 (Planning) | prd, tech-spec, epics-stories | Planning, requirements docs |
|
||||
| **UX Designer** | 🎨 | 2 (Planning) | create-ux-design, validate-design | UX-heavy projects, design |
|
||||
| **Architect** | 🏗️ | 3 (Solutioning) | architecture, implementation-readiness | Technical design, architecture |
|
||||
| **SM** | 🏃 | 4 (Implementation) | sprint-planning, create-story, story-context | Story management, sprint coordination |
|
||||
| **DEV** | 💻 | 4 (Implementation) | develop-story, code-review, story-done | Implementation, coding |
|
||||
| **TEA** | 🧪 | All Phases | framework, atdd, automate, trace, ci | Testing, quality assurance |
|
||||
| **Paige (Tech Writer)** | 📚 | All Phases | document-project, diagrams, validation | Documentation, diagrams |
|
||||
| **Principal Engineer** | ⚡ | Quick Flow (All phases) | create-tech-spec, quick-dev, code-review | Rapid development, technical leadership |
|
||||
| **Game Designer** | 🎲 | 1-2 (Games) | brainstorm-game, gdd, narrative | Game design, creative vision |
|
||||
| **Game Developer** | 🕹️ | 4 (Games) | develop-story, story-done, code-review | Game implementation |
|
||||
| **Game Architect** | 🏛️ | 3 (Games) | architecture, implementation-readiness | Game systems architecture |
|
||||
| **BMad Master** | 🧙 | Meta | party-mode, list tasks/workflows | Orchestration, multi-agent |
|
||||
|
||||
### Agent Capabilities Summary
|
||||
|
||||
|
|
|
|||
|
|
@ -0,0 +1,528 @@
|
|||
# BMAD Quick Flow
|
||||
|
||||
**Track:** Quick Flow
|
||||
**Primary Agent:** Quick Flow Solo Dev (Barry)
|
||||
**Ideal For:** Bug fixes, small features, rapid prototyping
|
||||
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
BMAD Quick Flow is the fastest path from idea to production in the BMAD Method ecosystem. It's a streamlined 3-step process designed for rapid development without sacrificing quality. Perfect for experienced teams who need to move fast or for smaller features that don't require extensive planning.
|
||||
|
||||
### When to Use Quick Flow
|
||||
|
||||
**Perfect For:**
|
||||
|
||||
- Bug fixes and patches
|
||||
- Small feature additions (1-3 days of work)
|
||||
- Proof of concepts and prototypes
|
||||
- Performance optimizations
|
||||
- API endpoint additions
|
||||
- UI component enhancements
|
||||
- Configuration changes
|
||||
- Internal tools
|
||||
|
||||
**Not Recommended For:**
|
||||
|
||||
- Large-scale system redesigns
|
||||
- Complex multi-team projects
|
||||
- New product launches
|
||||
- Projects requiring extensive UX design
|
||||
- Enterprise-wide initiatives
|
||||
- Mission-critical systems with compliance requirements
|
||||
|
||||
---
|
||||
|
||||
## The Quick Flow Process
|
||||
|
||||
```mermaid
|
||||
flowchart TD
|
||||
START[Idea/Requirement] --> DECIDE{Planning Needed?}
|
||||
|
||||
DECIDE -->|Yes| CREATE[create-tech-spec]
|
||||
DECIDE -->|No| DIRECT[Direct Development]
|
||||
|
||||
CREATE --> SPEC[Technical Specification]
|
||||
SPEC --> DEV[quick-dev]
|
||||
DIRECT --> DEV
|
||||
|
||||
DEV --> COMPLETE{Implementation Complete}
|
||||
|
||||
COMPLETE -->|Success| REVIEW{Code Review?}
|
||||
COMPLETE -->|Issues| DEBUG[Debug & Fix]
|
||||
DEBUG --> DEV
|
||||
|
||||
REVIEW -->|Yes| CODE_REVIEW[code-review]
|
||||
REVIEW -->|No| DONE[Production Ready]
|
||||
|
||||
CODE_REVIEW --> FIXES{Fixes Needed?}
|
||||
FIXES -->|Yes| DEBUG
|
||||
FIXES -->|No| DONE
|
||||
|
||||
style START fill:#e1f5fe
|
||||
style CREATE fill:#f3e5f5
|
||||
style SPEC fill:#e8f5e9
|
||||
style DEV fill:#fff3e0
|
||||
style CODE_REVIEW fill:#f1f8e9
|
||||
style DONE fill:#e0f2f1
|
||||
```
|
||||
|
||||
### Step 1: Optional Technical Specification
|
||||
|
||||
The `create-tech-spec` workflow transforms requirements into implementation-ready specifications.
|
||||
|
||||
**Key Features:**
|
||||
|
||||
- Conversational spec engineering
|
||||
- Automatic codebase pattern detection
|
||||
- Context gathering from existing code
|
||||
- Implementation-ready task breakdown
|
||||
- Acceptance criteria definition
|
||||
|
||||
**Process Flow:**
|
||||
|
||||
1. **Problem Understanding**
|
||||
- Greet user and gather requirements
|
||||
- Ask clarifying questions about scope and constraints
|
||||
- Check for existing project context
|
||||
|
||||
2. **Code Investigation (Brownfield)**
|
||||
- Analyze existing codebase patterns
|
||||
- Document tech stack and conventions
|
||||
- Identify files to modify and dependencies
|
||||
|
||||
3. **Specification Generation**
|
||||
- Create structured tech specification
|
||||
- Define clear tasks and acceptance criteria
|
||||
- Document technical decisions
|
||||
- Include development context
|
||||
|
||||
4. **Review and Finalize**
|
||||
- Present spec for validation
|
||||
- Make adjustments as needed
|
||||
- Save to sprint artifacts
|
||||
|
||||
**Output:** `{sprint_artifacts}/tech-spec-{slug}.md`
|
||||
|
||||
### Step 2: Development
|
||||
|
||||
The `quick-dev` workflow executes implementation with flexibility and speed.
|
||||
|
||||
**Two Execution Modes:**
|
||||
|
||||
**Mode A: Tech-Spec Driven**
|
||||
|
||||
```bash
|
||||
# Execute from tech spec
|
||||
quick-dev tech-spec-feature-x.md
|
||||
```
|
||||
|
||||
- Loads and parses technical specification
|
||||
- Extracts tasks, context, and acceptance criteria
|
||||
- Executes all tasks in sequence
|
||||
- Updates spec status on completion
|
||||
|
||||
**Mode B: Direct Instructions**
|
||||
|
||||
```bash
|
||||
# Direct development commands
|
||||
quick-dev "Add password reset to auth service"
|
||||
quick-dev "Fix the memory leak in image processing"
|
||||
```
|
||||
|
||||
- Accepts direct development instructions
|
||||
- Offers optional planning step
|
||||
- Executes immediately with minimal friction
|
||||
|
||||
**Development Process:**
|
||||
|
||||
1. **Context Loading**
|
||||
- Load project context if available
|
||||
- Understand patterns and conventions
|
||||
- Identify relevant files and dependencies
|
||||
|
||||
2. **Implementation Loop**
|
||||
For each task:
|
||||
- Load relevant files and context
|
||||
- Implement following established patterns
|
||||
- Write appropriate tests
|
||||
- Run and verify tests pass
|
||||
- Mark task complete and continue
|
||||
|
||||
3. **Continuous Execution**
|
||||
- Works through all tasks without stopping
|
||||
- Handles failures by requesting guidance
|
||||
- Ensures tests pass before continuing
|
||||
|
||||
4. **Verification**
|
||||
- Confirms all tasks complete
|
||||
- Validates acceptance criteria
|
||||
- Updates tech spec status if used
|
||||
|
||||
### Step 3: Optional Code Review
|
||||
|
||||
The `code-review` workflow provides senior developer review of implemented code.
|
||||
|
||||
**When to Use:**
|
||||
|
||||
- Production-critical features
|
||||
- Security-sensitive implementations
|
||||
- Performance optimizations
|
||||
- Team development scenarios
|
||||
- Learning and knowledge transfer
|
||||
|
||||
**Review Process:**
|
||||
|
||||
1. Load story context and acceptance criteria
|
||||
2. Analyze code implementation
|
||||
3. Check against project patterns
|
||||
4. Validate test coverage
|
||||
5. Provide structured review notes
|
||||
6. Suggest improvements if needed
|
||||
|
||||
---
|
||||
|
||||
## Quick Flow vs Other Tracks
|
||||
|
||||
| Aspect | Quick Flow | BMad Method | Enterprise Method |
|
||||
| ----------------- | ---------------- | --------------- | ------------------ |
|
||||
| **Planning** | Minimal/Optional | Structured | Comprehensive |
|
||||
| **Documentation** | Essential only | Moderate | Extensive |
|
||||
| **Team Size** | 1-2 developers | 3-7 specialists | 8+ enterprise team |
|
||||
| **Timeline** | Hours to days | Weeks to months | Months to quarters |
|
||||
| **Ceremony** | Minimal | Balanced | Full governance |
|
||||
| **Flexibility** | High | Moderate | Structured |
|
||||
| **Risk Profile** | Medium | Low | Very Low |
|
||||
|
||||
---
|
||||
|
||||
## Best Practices
|
||||
|
||||
### Before Starting Quick Flow
|
||||
|
||||
1. **Validate Track Selection**
|
||||
- Is the feature small enough?
|
||||
- Do you have clear requirements?
|
||||
- Is the team comfortable with rapid development?
|
||||
|
||||
2. **Prepare Context**
|
||||
- Have project documentation ready
|
||||
- Know your codebase patterns
|
||||
- Identify affected components upfront
|
||||
|
||||
3. **Set Clear Boundaries**
|
||||
- Define in-scope and out-of-scope items
|
||||
- Establish acceptance criteria
|
||||
- Identify dependencies
|
||||
|
||||
### During Development
|
||||
|
||||
1. **Maintain Velocity**
|
||||
- Don't over-engineer solutions
|
||||
- Follow existing patterns
|
||||
- Keep tests proportional to risk
|
||||
|
||||
2. **Stay Focused**
|
||||
- Resist scope creep
|
||||
- Handle edge cases later if possible
|
||||
- Document decisions briefly
|
||||
|
||||
3. **Communicate Progress**
|
||||
- Update task status regularly
|
||||
- Flag blockers immediately
|
||||
- Share learning with team
|
||||
|
||||
### After Completion
|
||||
|
||||
1. **Quality Gates**
|
||||
- Ensure tests pass
|
||||
- Verify acceptance criteria
|
||||
- Consider optional code review
|
||||
|
||||
2. **Knowledge Transfer**
|
||||
- Update relevant documentation
|
||||
- Share key decisions
|
||||
- Note any discovered patterns
|
||||
|
||||
3. **Production Readiness**
|
||||
- Verify deployment requirements
|
||||
- Check monitoring needs
|
||||
- Plan rollback strategy
|
||||
|
||||
---
|
||||
|
||||
## Quick Flow Templates
|
||||
|
||||
### Tech Spec Template
|
||||
|
||||
```markdown
|
||||
# Tech-Spec: {Feature Title}
|
||||
|
||||
**Created:** {date}
|
||||
**Status:** Ready for Development
|
||||
**Estimated Effort:** Small (1-2 days)
|
||||
|
||||
## Overview
|
||||
|
||||
### Problem Statement
|
||||
|
||||
{Clear description of what needs to be solved}
|
||||
|
||||
### Solution
|
||||
|
||||
{High-level approach to solving the problem}
|
||||
|
||||
### Scope (In/Out)
|
||||
|
||||
**In:** {What will be implemented}
|
||||
**Out:** {Explicitly excluded items}
|
||||
|
||||
## Context for Development
|
||||
|
||||
### Codebase Patterns
|
||||
|
||||
{Key patterns to follow, conventions}
|
||||
|
||||
### Files to Reference
|
||||
|
||||
{List of relevant files and their purpose}
|
||||
|
||||
### Technical Decisions
|
||||
|
||||
{Important technical choices and rationale}
|
||||
|
||||
## Implementation Plan
|
||||
|
||||
### Tasks
|
||||
|
||||
- [ ] Task 1: {Specific implementation task}
|
||||
- [ ] Task 2: {Specific implementation task}
|
||||
- [ ] Task 3: {Testing and validation}
|
||||
|
||||
### Acceptance Criteria
|
||||
|
||||
- [ ] AC 1: {Given/When/Then format}
|
||||
- [ ] AC 2: {Given/When/Then format}
|
||||
|
||||
## Additional Context
|
||||
|
||||
### Dependencies
|
||||
|
||||
{External dependencies or prerequisites}
|
||||
|
||||
### Testing Strategy
|
||||
|
||||
{How the feature will be tested}
|
||||
|
||||
### Notes
|
||||
|
||||
{Additional considerations}
|
||||
```
|
||||
|
||||
### Quick Dev Commands
|
||||
|
||||
```bash
|
||||
# From tech spec
|
||||
quick-dev sprint-artifacts/tech-spec-user-auth.md
|
||||
|
||||
# Direct development
|
||||
quick-dev "Add CORS middleware to API endpoints"
|
||||
quick-dev "Fix null pointer exception in user service"
|
||||
quick-dev "Optimize database query for user list"
|
||||
|
||||
# With optional planning
|
||||
quick-dev "Implement file upload feature" --plan
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Integration with Other Workflows
|
||||
|
||||
### Upgrading Tracks
|
||||
|
||||
If a Quick Flow feature grows in complexity:
|
||||
|
||||
```mermaid
|
||||
flowchart LR
|
||||
QF[Quick Flow] --> CHECK{Complexity Increases?}
|
||||
CHECK -->|Yes| UPGRADE[Upgrade to BMad Method]
|
||||
CHECK -->|No| CONTINUE[Continue Quick Flow]
|
||||
|
||||
UPGRADE --> PRD[Create PRD]
|
||||
PRD --> ARCH[Architecture Design]
|
||||
ARCH --> STORIES[Create Epics/Stories]
|
||||
STORIES --> SPRINT[Sprint Planning]
|
||||
|
||||
style QF fill:#e1f5fe
|
||||
style UPGRADE fill:#fff3e0
|
||||
style PRD fill:#f3e5f5
|
||||
style ARCH fill:#e8f5e9
|
||||
style STORIES fill:#f1f8e9
|
||||
style SPRINT fill:#e0f2f1
|
||||
```
|
||||
|
||||
### Using Party Mode
|
||||
|
||||
For complex Quick Flow challenges:
|
||||
|
||||
```bash
|
||||
# Start Barry
|
||||
/bmad:bmm:agents:quick-flow-solo-dev
|
||||
|
||||
# Begin party mode for collaborative problem-solving
|
||||
party-mode
|
||||
```
|
||||
|
||||
Party mode brings in relevant experts:
|
||||
|
||||
- **Architect** - For design decisions
|
||||
- **Dev** - For implementation pairing
|
||||
- **QA** - For test strategy
|
||||
- **UX Designer** - For user experience
|
||||
- **Analyst** - For requirements clarity
|
||||
|
||||
### Quality Assurance Integration
|
||||
|
||||
Quick Flow can integrate with TEA agent for automated testing:
|
||||
|
||||
- Test case generation
|
||||
- Automated test execution
|
||||
- Coverage analysis
|
||||
- Test healing
|
||||
|
||||
---
|
||||
|
||||
## Common Quick Flow Scenarios
|
||||
|
||||
### Scenario 1: Bug Fix
|
||||
|
||||
```
|
||||
Requirement: "Users can't reset passwords"
|
||||
Process: Direct development (no spec needed)
|
||||
Steps: Investigate → Fix → Test → Deploy
|
||||
Time: 2-4 hours
|
||||
```
|
||||
|
||||
### Scenario 2: Small Feature
|
||||
|
||||
```
|
||||
Requirement: "Add export to CSV functionality"
|
||||
Process: Tech spec → Development → Code review
|
||||
Steps: Spec → Implement → Test → Review → Deploy
|
||||
Time: 1-2 days
|
||||
```
|
||||
|
||||
### Scenario 3: Performance Fix
|
||||
|
||||
```
|
||||
Requirement: "Optimize slow product search query"
|
||||
Process: Tech spec → Development → Review
|
||||
Steps: Analysis → Optimize → Benchmark → Deploy
|
||||
Time: 1 day
|
||||
```
|
||||
|
||||
### Scenario 4: API Addition
|
||||
|
||||
```
|
||||
Requirement: "Add webhook endpoints for integrations"
|
||||
Process: Tech spec → Development → Review
|
||||
Steps: Design → Implement → Document → Deploy
|
||||
Time: 2-3 days
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Metrics and KPIs
|
||||
|
||||
Track these metrics to ensure Quick Flow effectiveness:
|
||||
|
||||
**Velocity Metrics:**
|
||||
|
||||
- Features completed per week
|
||||
- Average cycle time (hours)
|
||||
- Bug fix resolution time
|
||||
- Code review turnaround
|
||||
|
||||
**Quality Metrics:**
|
||||
|
||||
- Defect escape rate
|
||||
- Test coverage percentage
|
||||
- Production incident rate
|
||||
- Code review findings
|
||||
|
||||
**Team Metrics:**
|
||||
|
||||
- Developer satisfaction
|
||||
- Knowledge sharing frequency
|
||||
- Process adherence
|
||||
- Autonomy index
|
||||
|
||||
---
|
||||
|
||||
## Troubleshooting Quick Flow
|
||||
|
||||
### Common Issues
|
||||
|
||||
**Issue: Scope creep during development**
|
||||
**Solution:** Refer back to tech spec, explicitly document new requirements
|
||||
|
||||
**Issue: Unknown patterns or conventions**
|
||||
**Solution:** Use party-mode to bring in architect or senior dev
|
||||
|
||||
**Issue: Testing bottleneck**
|
||||
**Solution:** Leverage TEA agent for automated test generation
|
||||
|
||||
**Issue: Integration conflicts**
|
||||
**Solution:** Document dependencies, coordinate with affected teams
|
||||
|
||||
### Emergency Procedures
|
||||
|
||||
**Production Hotfix:**
|
||||
|
||||
1. Create branch from production
|
||||
2. Quick dev with minimal changes
|
||||
3. Deploy to staging
|
||||
4. Quick regression test
|
||||
5. Deploy to production
|
||||
6. Merge to main
|
||||
|
||||
**Critical Bug:**
|
||||
|
||||
1. Immediate investigation
|
||||
2. Party-mode if unclear
|
||||
3. Quick fix with rollback plan
|
||||
4. Post-mortem documentation
|
||||
|
||||
---
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- **[Quick Flow Solo Dev Agent](./quick-flow-solo-dev.md)** - Primary agent for Quick Flow
|
||||
- **[Agents Guide](./agents-guide.md)** - Complete agent reference
|
||||
- **[Scale Adaptive System](./scale-adaptive-system.md)** - Track selection guidance
|
||||
- **[Party Mode](./party-mode.md)** - Multi-agent collaboration
|
||||
- **[Workflow Implementation](./workflows-implementation.md)** - Implementation details
|
||||
|
||||
---
|
||||
|
||||
## FAQ
|
||||
|
||||
**Q: How do I know if my feature is too big for Quick Flow?**
|
||||
A: If it requires more than 3-5 days of work, affects multiple systems significantly, or needs extensive UX design, consider the BMad Method track.
|
||||
|
||||
**Q: Can I switch from Quick Flow to BMad Method mid-development?**
|
||||
A: Yes, you can upgrade. Create the missing artifacts (PRD, architecture) and transition to sprint-based development.
|
||||
|
||||
**Q: Is Quick Flow suitable for production-critical features?**
|
||||
A: Yes, with code review. Quick Flow doesn't sacrifice quality, just ceremony.
|
||||
|
||||
**Q: How do I handle dependencies between Quick Flow features?**
|
||||
A: Document dependencies clearly, consider batching related features, or upgrade to BMad Method for complex interdependencies.
|
||||
|
||||
**Q: Can junior developers use Quick Flow?**
|
||||
A: Yes, but they may benefit from the structure of BMad Method. Quick Flow assumes familiarity with patterns and autonomy.
|
||||
|
||||
---
|
||||
|
||||
**Ready to ship fast?** → Start with `/bmad:bmm:agents:quick-flow-solo-dev`
|
||||
|
|
@ -0,0 +1,337 @@
|
|||
# Quick Flow Solo Dev Agent (Barry)
|
||||
|
||||
**Agent ID:** `.bmad/bmm/agents/quick-flow-solo-dev.md`
|
||||
**Icon:** 🚀
|
||||
**Module:** BMM
|
||||
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
Barry is the elite solo developer who lives and breathes the BMAD Quick Flow workflow. He takes projects from concept to deployment with ruthless efficiency - no handoffs, no delays, just pure focused development. Barry architects specs, writes the code, and ships features faster than entire teams. When you need it done right and done now, Barry's your dev.
|
||||
|
||||
### Agent Persona
|
||||
|
||||
**Name:** Barry
|
||||
**Title:** Quick Flow Solo Dev
|
||||
|
||||
**Identity:** Barry is an elite developer who thrives on autonomous execution. He lives and breathes the BMAD Quick Flow workflow, taking projects from concept to deployment with ruthless efficiency. No handoffs, no delays - just pure, focused development. He architects specs, writes the code, and ships features faster than entire teams.
|
||||
|
||||
**Communication Style:** Direct, confident, and implementation-focused. Uses tech slang and gets straight to the point. No fluff, just results. Every response moves the project forward.
|
||||
|
||||
**Core Principles:**
|
||||
|
||||
- Planning and execution are two sides of the same coin
|
||||
- Quick Flow is my religion
|
||||
- Specs are for building, not bureaucracy
|
||||
- Code that ships is better than perfect code that doesn't
|
||||
- Documentation happens alongside development, not after
|
||||
- Ship early, ship often
|
||||
|
||||
---
|
||||
|
||||
## Menu Commands
|
||||
|
||||
Barry owns the entire BMAD Quick Flow path, providing a streamlined 3-step development process that eliminates handoffs and maximizes velocity.
|
||||
|
||||
### 1. **create-tech-spec**
|
||||
|
||||
- **Workflow:** `.bmad/bmm/workflows/bmad-quick-flow/create-tech-spec/workflow.yaml`
|
||||
- **Description:** Architect a technical spec with implementation-ready stories
|
||||
- **Use when:** You need to transform requirements into a buildable spec
|
||||
|
||||
### 2. **quick-dev**
|
||||
|
||||
- **Workflow:** `.bmad/bmm/workflows/bmad-quick-flow/quick-dev/workflow.yaml`
|
||||
- **Description:** Ship features from spec or direct instructions - no handoffs
|
||||
- **Use when:** You're ready to ship code based on a spec or clear instructions
|
||||
|
||||
### 3. **code-review**
|
||||
|
||||
- **Workflow:** `.bmad/bmm/workflows/4-implementation/code-review/workflow.yaml`
|
||||
- **Description:** Review code for quality, patterns, and acceptance criteria
|
||||
- **Use when:** You need to validate implementation quality
|
||||
|
||||
### 4. **party-mode**
|
||||
|
||||
- **Workflow:** `.bmad/core/workflows/party-mode/workflow.yaml`
|
||||
- **Description:** Bring in other experts when I need specialized backup
|
||||
- **Use when:** You need collaborative problem-solving or specialized expertise
|
||||
|
||||
---
|
||||
|
||||
## When to Use Barry
|
||||
|
||||
### Ideal Scenarios
|
||||
|
||||
1. **Quick Flow Development** - Small to medium features that need rapid delivery
|
||||
2. **Technical Specification Creation** - When you need detailed implementation plans
|
||||
3. **Direct Development** - When requirements are clear and you want to skip extensive planning
|
||||
4. **Code Reviews** - When you need senior-level technical validation
|
||||
5. **Performance-Critical Features** - When optimization and scalability are paramount
|
||||
|
||||
### Project Types
|
||||
|
||||
- **Greenfield Projects** - New features or components
|
||||
- **Brownfield Modifications** - Enhancements to existing codebases
|
||||
- **Bug Fixes** - Complex issues requiring deep technical understanding
|
||||
- **Proof of Concepts** - Rapid prototyping with production-quality code
|
||||
- **Performance Optimizations** - System improvements and scalability work
|
||||
|
||||
---
|
||||
|
||||
## The BMAD Quick Flow Process
|
||||
|
||||
Barry orchestrates a simple, efficient 3-step process:
|
||||
|
||||
```mermaid
|
||||
flowchart LR
|
||||
A[Requirements] --> B[create-tech-spec]
|
||||
B --> C[Tech Spec]
|
||||
C --> D[quick-dev]
|
||||
D --> E[Implementation]
|
||||
E --> F{Code Review?}
|
||||
F -->|Yes| G[code-review]
|
||||
F -->|No| H[Complete]
|
||||
G --> H[Complete]
|
||||
|
||||
style A fill:#e1f5fe
|
||||
style B fill:#f3e5f5
|
||||
style C fill:#e8f5e9
|
||||
style D fill:#fff3e0
|
||||
style E fill:#fce4ec
|
||||
style G fill:#f1f8e9
|
||||
style H fill:#e0f2f1
|
||||
```
|
||||
|
||||
### Step 1: Technical Specification (`create-tech-spec`)
|
||||
|
||||
**Goal:** Transform user requirements into implementation-ready technical specifications
|
||||
|
||||
**Process:**
|
||||
|
||||
1. **Problem Understanding** - Clarify requirements, scope, and constraints
|
||||
2. **Code Investigation** - Analyze existing patterns and dependencies (if applicable)
|
||||
3. **Specification Generation** - Create comprehensive tech spec with:
|
||||
- Problem statement and solution overview
|
||||
- Development context and patterns
|
||||
- Implementation tasks with acceptance criteria
|
||||
- Technical decisions and dependencies
|
||||
4. **Review and Finalize** - Validate spec captures user intent
|
||||
|
||||
**Output:** `tech-spec-{slug}.md` saved to sprint artifacts
|
||||
|
||||
**Best Practices:**
|
||||
|
||||
- Include ALL context a fresh dev agent needs
|
||||
- Be specific about files, patterns, and conventions
|
||||
- Define clear acceptance criteria using Given/When/Then format
|
||||
- Document technical decisions and trade-offs
|
||||
|
||||
### Step 2: Development (`quick-dev`)
|
||||
|
||||
**Goal:** Execute implementation based on tech spec or direct instructions
|
||||
|
||||
**Two Modes:**
|
||||
|
||||
**Mode A: Tech-Spec Driven**
|
||||
|
||||
- Load existing tech spec
|
||||
- Extract tasks, context, and acceptance criteria
|
||||
- Execute all tasks continuously without stopping
|
||||
- Respect project context and existing patterns
|
||||
|
||||
**Mode B: Direct Instructions**
|
||||
|
||||
- Accept direct development commands
|
||||
- Offer optional planning step
|
||||
- Execute with minimal friction
|
||||
|
||||
**Process:**
|
||||
|
||||
1. **Load Project Context** - Understand patterns and conventions
|
||||
2. **Execute Implementation** - Work through all tasks:
|
||||
- Load relevant files and context
|
||||
- Implement following established patterns
|
||||
- Write and run tests
|
||||
- Handle errors appropriately
|
||||
3. **Verify Completion** - Ensure all tasks complete, tests passing, AC satisfied
|
||||
|
||||
### Step 3: Code Review (`code-review`) - Optional
|
||||
|
||||
**Goal:** Senior developer review of implemented code
|
||||
|
||||
**When to Use:**
|
||||
|
||||
- Critical production features
|
||||
- Complex architectural changes
|
||||
- Performance-sensitive implementations
|
||||
- Team development scenarios
|
||||
- Learning and knowledge transfer
|
||||
|
||||
**Review Focus:**
|
||||
|
||||
- Code quality and patterns
|
||||
- Acceptance criteria compliance
|
||||
- Performance and scalability
|
||||
- Security considerations
|
||||
- Maintainability and documentation
|
||||
|
||||
---
|
||||
|
||||
## Collaboration with Other Agents
|
||||
|
||||
### Natural Partnerships
|
||||
|
||||
- **Tech Writer** - For documentation and API specs when I need it
|
||||
- **Architect** - For complex system design decisions beyond Quick Flow scope
|
||||
- **Dev** - For implementation pair programming (rarely needed)
|
||||
- **QA** - For test strategy and quality gates on critical features
|
||||
- **UX Designer** - For user experience considerations
|
||||
|
||||
### Party Mode Composition
|
||||
|
||||
In party mode, Barry often acts as:
|
||||
|
||||
- **Solo Tech Lead** - Guiding architectural decisions
|
||||
- **Implementation Expert** - Providing coding insights
|
||||
- **Performance Optimizer** - Ensuring scalable solutions
|
||||
- **Code Review Authority** - Validating technical approaches
|
||||
|
||||
---
|
||||
|
||||
## Tips for Working with Barry
|
||||
|
||||
### For Best Results
|
||||
|
||||
1. **Be Specific** - Provide clear requirements and constraints
|
||||
2. **Share Context** - Include relevant files and patterns
|
||||
3. **Define Success** - Clear acceptance criteria lead to better outcomes
|
||||
4. **Trust the Process** - The 3-step flow is optimized for speed and quality
|
||||
5. **Leverage Expertise** - I'll give you optimization and architectural insights automatically
|
||||
|
||||
### Communication Patterns
|
||||
|
||||
- **Git Commit Style** - "feat: Add user authentication with OAuth 2.0"
|
||||
- **RFC Style** - "Proposing microservice architecture for scalability"
|
||||
- **Direct Questions** - "Actually, have you considered the race condition?"
|
||||
- **Technical Trade-offs** - "We could optimize for speed over memory here"
|
||||
|
||||
### Avoid These Common Mistakes
|
||||
|
||||
1. **Vague Requirements** - Leads to unnecessary back-and-forth
|
||||
2. **Ignoring Patterns** - Causes technical debt and inconsistencies
|
||||
3. **Skipping Code Review** - Missed opportunities for quality improvement
|
||||
4. **Over-planning** - I excel at rapid, pragmatic development
|
||||
5. **Not Using Party Mode** - Missing collaborative insights for complex problems
|
||||
|
||||
---
|
||||
|
||||
## Example Workflow
|
||||
|
||||
```bash
|
||||
# Start with Barry
|
||||
/bmad:bmm:agents:quick-flow-solo-dev
|
||||
|
||||
# Create a tech spec
|
||||
> create-tech-spec
|
||||
|
||||
# Quick implementation
|
||||
> quick-dev tech-spec-auth.md
|
||||
|
||||
# Optional code review
|
||||
> code-review
|
||||
```
|
||||
|
||||
### Sample Tech Spec Structure
|
||||
|
||||
```markdown
|
||||
# Tech-Spec: User Authentication System
|
||||
|
||||
**Created:** 2025-01-15
|
||||
**Status:** Ready for Development
|
||||
|
||||
## Overview
|
||||
|
||||
### Problem Statement
|
||||
|
||||
Users cannot securely access the application, and we need role-based permissions for enterprise features.
|
||||
|
||||
### Solution
|
||||
|
||||
Implement OAuth 2.0 authentication with JWT tokens and role-based access control (RBAC).
|
||||
|
||||
### Scope (In/Out)
|
||||
|
||||
**In:** Login, logout, password reset, role management
|
||||
**Out:** Social login, SSO, multi-factor authentication (Phase 2)
|
||||
|
||||
## Context for Development
|
||||
|
||||
### Codebase Patterns
|
||||
|
||||
- Use existing auth middleware pattern in `src/middleware/auth.js`
|
||||
- Follow service layer pattern from `src/services/`
|
||||
- JWT secrets managed via environment variables
|
||||
|
||||
### Files to Reference
|
||||
|
||||
- `src/middleware/auth.js` - Authentication middleware
|
||||
- `src/models/User.js` - User data model
|
||||
- `config/database.js` - Database connection
|
||||
|
||||
### Technical Decisions
|
||||
|
||||
- JWT tokens over sessions for API scalability
|
||||
- bcrypt for password hashing
|
||||
- Role-based permissions stored in database
|
||||
|
||||
## Implementation Plan
|
||||
|
||||
### Tasks
|
||||
|
||||
- [ ] Create authentication service
|
||||
- [ ] Implement login/logout endpoints
|
||||
- [ ] Add JWT middleware
|
||||
- [ ] Create role-based permissions
|
||||
- [ ] Write comprehensive tests
|
||||
|
||||
### Acceptance Criteria
|
||||
|
||||
- [ ] Given valid credentials, when user logs in, then receive JWT token
|
||||
- [ ] Given invalid token, when accessing protected route, then return 401
|
||||
- [ ] Given admin role, when accessing admin endpoint, then allow access
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- **[Quick Start Guide](./quick-start.md)** - Getting started with BMM
|
||||
- **[Agents Guide](./agents-guide.md)** - Complete agent reference
|
||||
- **[Scale Adaptive System](./scale-adaptive-system.md)** - Understanding development tracks
|
||||
- **[Workflow Implementation](./workflows-implementation.md)** - Implementation workflows
|
||||
- **[Party Mode](./party-mode.md)** - Multi-agent collaboration
|
||||
|
||||
---
|
||||
|
||||
## Frequently Asked Questions
|
||||
|
||||
**Q: When should I use Barry vs other agents?**
|
||||
A: Use Barry for Quick Flow development (small to medium features), rapid prototyping, or when you need elite solo development. For large, complex projects requiring full team collaboration, consider the full BMad Method with specialized agents.
|
||||
|
||||
**Q: Is the code review step mandatory?**
|
||||
A: No, it's optional but highly recommended for critical features, team projects, or when learning best practices.
|
||||
|
||||
**Q: Can I skip the tech spec step?**
|
||||
A: Yes, the quick-dev workflow accepts direct instructions. However, tech specs are recommended for complex features or team collaboration.
|
||||
|
||||
**Q: How does Barry differ from the Dev agent?**
|
||||
A: Barry handles the complete Quick Flow process (spec → dev → review) with elite architectural expertise, while the Dev agent specializes in pure implementation tasks. Barry is your autonomous end-to-end solution.
|
||||
|
||||
**Q: Can Barry handle enterprise-scale projects?**
|
||||
A: For enterprise-scale projects requiring full team collaboration, consider using the Enterprise Method track. Barry is optimized for rapid delivery in the Quick Flow track where solo execution wins.
|
||||
|
||||
---
|
||||
|
||||
**Ready to ship some code?** → Start with `/bmad:bmm:agents:quick-flow-solo-dev`
|
||||
|
|
@ -1,652 +0,0 @@
|
|||
# BMad Quick Spec Flow
|
||||
|
||||
**Perfect for:** Bug fixes, small features, rapid prototyping, and quick enhancements
|
||||
|
||||
**Time to implementation:** Minutes, not hours
|
||||
|
||||
---
|
||||
|
||||
## What is Quick Spec Flow?
|
||||
|
||||
Quick Spec Flow is a **streamlined alternative** to the full BMad Method for Quick Flow track projects. Instead of going through Product Brief → PRD → Architecture, you go **straight to a context-aware technical specification** and start coding.
|
||||
|
||||
### When to Use Quick Spec Flow
|
||||
|
||||
✅ **Use Quick Flow track when:**
|
||||
|
||||
- Single bug fix or small enhancement
|
||||
- Small feature with clear scope (typically 1-15 stories)
|
||||
- Rapid prototyping or experimentation
|
||||
- Adding to existing brownfield codebase
|
||||
- You know exactly what you want to build
|
||||
|
||||
❌ **Use BMad Method or Enterprise tracks when:**
|
||||
|
||||
- Building new products or major features
|
||||
- Need stakeholder alignment
|
||||
- Complex multi-team coordination
|
||||
- Requires extensive planning and architecture
|
||||
|
||||
💡 **Not sure?** Run `workflow-init` to get a recommendation based on your project's needs!
|
||||
|
||||
---
|
||||
|
||||
## Quick Spec Flow Overview
|
||||
|
||||
```mermaid
|
||||
flowchart TD
|
||||
START[Step 1: Run Tech-Spec Workflow]
|
||||
DETECT[Detects project stack<br/>package.json, requirements.txt, etc.]
|
||||
ANALYZE[Analyzes brownfield codebase<br/>if exists]
|
||||
TEST[Detects test frameworks<br/>and conventions]
|
||||
CONFIRM[Confirms conventions<br/>with you]
|
||||
GENERATE[Generates context-rich<br/>tech-spec]
|
||||
STORIES[Creates ready-to-implement<br/>stories]
|
||||
|
||||
OPTIONAL[Step 2: Optional<br/>Generate Story Context<br/>SM Agent<br/>For complex scenarios only]
|
||||
|
||||
IMPL[Step 3: Implement<br/>DEV Agent<br/>Code, test, commit]
|
||||
|
||||
DONE[DONE! 🚀]
|
||||
|
||||
START --> DETECT
|
||||
DETECT --> ANALYZE
|
||||
ANALYZE --> TEST
|
||||
TEST --> CONFIRM
|
||||
CONFIRM --> GENERATE
|
||||
GENERATE --> STORIES
|
||||
STORIES --> OPTIONAL
|
||||
OPTIONAL -.->|Optional| IMPL
|
||||
STORIES --> IMPL
|
||||
IMPL --> DONE
|
||||
|
||||
style START fill:#bfb,stroke:#333,stroke-width:2px,color:#000
|
||||
style OPTIONAL fill:#ffb,stroke:#333,stroke-width:2px,stroke-dasharray: 5 5,color:#000
|
||||
style IMPL fill:#bbf,stroke:#333,stroke-width:2px,color:#000
|
||||
style DONE fill:#f9f,stroke:#333,stroke-width:3px,color:#000
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Single Atomic Change
|
||||
|
||||
**Best for:** Bug fixes, single file changes, isolated improvements
|
||||
|
||||
### What You Get
|
||||
|
||||
1. **tech-spec.md** - Comprehensive technical specification with:
|
||||
- Problem statement and solution
|
||||
- Detected framework versions and dependencies
|
||||
- Brownfield code patterns (if applicable)
|
||||
- Existing test patterns to follow
|
||||
- Specific file paths to modify
|
||||
- Complete implementation guidance
|
||||
|
||||
2. **story-[slug].md** - Single user story ready for development
|
||||
|
||||
### Quick Spec Flow Commands
|
||||
|
||||
```bash
|
||||
# Start Quick Spec Flow (no workflow-init needed!)
|
||||
# Load PM agent and run tech-spec
|
||||
|
||||
# When complete, implement directly:
|
||||
# Load DEV agent and run dev-story
|
||||
```
|
||||
|
||||
### What Makes It Quick
|
||||
|
||||
- ✅ No Product Brief needed
|
||||
- ✅ No PRD needed
|
||||
- ✅ No Architecture doc needed
|
||||
- ✅ Auto-detects your stack
|
||||
- ✅ Auto-analyzes brownfield code
|
||||
- ✅ Auto-validates quality
|
||||
- ✅ Story context optional (tech-spec is comprehensive!)
|
||||
|
||||
### Example Single Change Scenarios
|
||||
|
||||
- "Fix the login validation bug"
|
||||
- "Add email field to user registration form"
|
||||
- "Update API endpoint to return additional field"
|
||||
- "Improve error handling in payment processing"
|
||||
|
||||
---
|
||||
|
||||
## Coherent Small Feature
|
||||
|
||||
**Best for:** Small features with 2-3 related user stories
|
||||
|
||||
### What You Get
|
||||
|
||||
1. **tech-spec.md** - Same comprehensive spec as single change projects
|
||||
2. **epics.md** - Epic organization with story breakdown
|
||||
3. **story-[epic-slug]-1.md** - First story
|
||||
4. **story-[epic-slug]-2.md** - Second story
|
||||
5. **story-[epic-slug]-3.md** - Third story (if needed)
|
||||
|
||||
### Quick Spec Flow Commands
|
||||
|
||||
```bash
|
||||
# Start Quick Spec Flow
|
||||
# Load PM agent and run tech-spec
|
||||
|
||||
# Optional: Organize stories as a sprint
|
||||
# Load SM agent and run sprint-planning
|
||||
|
||||
# Implement story-by-story:
|
||||
# Load DEV agent and run dev-story for each story
|
||||
```
|
||||
|
||||
### Story Sequencing
|
||||
|
||||
Stories are **automatically validated** to ensure proper sequence:
|
||||
|
||||
- ✅ No forward dependencies (Story 2 can't depend on Story 3)
|
||||
- ✅ Clear dependency documentation
|
||||
- ✅ Infrastructure → Features → Polish order
|
||||
- ✅ Backend → Frontend flow
|
||||
|
||||
### Example Small Feature Scenarios
|
||||
|
||||
- "Add OAuth social login (Google, GitHub, Twitter)"
|
||||
- "Build user profile page with avatar upload"
|
||||
- "Implement basic search with filters"
|
||||
- "Add dark mode toggle to application"
|
||||
|
||||
---
|
||||
|
||||
## Smart Context Discovery
|
||||
|
||||
Quick Spec Flow automatically discovers and uses:
|
||||
|
||||
### 1. Existing Documentation
|
||||
|
||||
- Product briefs (if they exist)
|
||||
- Research documents
|
||||
- `document-project` output (brownfield codebase map)
|
||||
|
||||
### 2. Project Stack
|
||||
|
||||
- **Node.js:** package.json → frameworks, dependencies, scripts, test framework
|
||||
- **Python:** requirements.txt, pyproject.toml → packages, tools
|
||||
- **Ruby:** Gemfile → gems and versions
|
||||
- **Java:** pom.xml, build.gradle → Maven/Gradle dependencies
|
||||
- **Go:** go.mod → modules
|
||||
- **Rust:** Cargo.toml → crates
|
||||
- **PHP:** composer.json → packages
|
||||
|
||||
### 3. Brownfield Code Patterns
|
||||
|
||||
- Directory structure and organization
|
||||
- Existing code patterns (class-based, functional, MVC)
|
||||
- Naming conventions (camelCase, snake_case, PascalCase)
|
||||
- Test frameworks and patterns
|
||||
- Code style (semicolons, quotes, indentation)
|
||||
- Linter/formatter configs
|
||||
- Error handling patterns
|
||||
- Logging conventions
|
||||
- Documentation style
|
||||
|
||||
### 4. Convention Confirmation
|
||||
|
||||
**IMPORTANT:** Quick Spec Flow detects your conventions and **asks for confirmation**:
|
||||
|
||||
```
|
||||
I've detected these conventions in your codebase:
|
||||
|
||||
Code Style:
|
||||
- ESLint with Airbnb config
|
||||
- Prettier with single quotes, 2-space indent
|
||||
- No semicolons
|
||||
|
||||
Test Patterns:
|
||||
- Jest test framework
|
||||
- .test.js file naming
|
||||
- expect() assertion style
|
||||
|
||||
Should I follow these existing conventions? (yes/no)
|
||||
```
|
||||
|
||||
**You decide:** Conform to existing patterns or establish new standards!
|
||||
|
||||
---
|
||||
|
||||
## Modern Best Practices via WebSearch
|
||||
|
||||
Quick Spec Flow stays current by using WebSearch when appropriate:
|
||||
|
||||
### For Greenfield Projects
|
||||
|
||||
- Searches for latest framework versions
|
||||
- Recommends official starter templates
|
||||
- Suggests modern best practices
|
||||
|
||||
### For Outdated Dependencies
|
||||
|
||||
- Detects if your dependencies are >2 years old
|
||||
- Searches for migration guides
|
||||
- Notes upgrade complexity
|
||||
|
||||
### Starter Template Recommendations
|
||||
|
||||
For greenfield projects, Quick Spec Flow recommends:
|
||||
|
||||
**React:**
|
||||
|
||||
- Vite (modern, fast)
|
||||
- Next.js (full-stack)
|
||||
|
||||
**Python:**
|
||||
|
||||
- cookiecutter templates
|
||||
- FastAPI starter
|
||||
|
||||
**Node.js:**
|
||||
|
||||
- NestJS CLI
|
||||
- express-generator
|
||||
|
||||
**Benefits:**
|
||||
|
||||
- ✅ Modern best practices baked in
|
||||
- ✅ Proper project structure
|
||||
- ✅ Build tooling configured
|
||||
- ✅ Testing framework set up
|
||||
- ✅ Faster time to first feature
|
||||
|
||||
---
|
||||
|
||||
## UX/UI Considerations
|
||||
|
||||
For user-facing changes, Quick Spec Flow captures:
|
||||
|
||||
- UI components affected (create vs modify)
|
||||
- UX flow changes (current vs new)
|
||||
- Responsive design needs (mobile, tablet, desktop)
|
||||
- Accessibility requirements:
|
||||
- Keyboard navigation
|
||||
- Screen reader compatibility
|
||||
- ARIA labels
|
||||
- Color contrast standards
|
||||
- User feedback patterns:
|
||||
- Loading states
|
||||
- Error messages
|
||||
- Success confirmations
|
||||
- Progress indicators
|
||||
|
||||
---
|
||||
|
||||
## Auto-Validation and Quality Assurance
|
||||
|
||||
Quick Spec Flow **automatically validates** everything:
|
||||
|
||||
### Tech-Spec Validation (Always Runs)
|
||||
|
||||
Checks:
|
||||
|
||||
- ✅ Context gathering completeness
|
||||
- ✅ Definitiveness (no "use X or Y" statements)
|
||||
- ✅ Brownfield integration quality
|
||||
- ✅ Stack alignment
|
||||
- ✅ Implementation readiness
|
||||
|
||||
Generates scores:
|
||||
|
||||
```
|
||||
✅ Validation Passed!
|
||||
- Context Gathering: Comprehensive
|
||||
- Definitiveness: All definitive
|
||||
- Brownfield Integration: Excellent
|
||||
- Stack Alignment: Perfect
|
||||
- Implementation Readiness: ✅ Ready
|
||||
```
|
||||
|
||||
### Story Validation (Multi-Story Features)
|
||||
|
||||
Checks:
|
||||
|
||||
- ✅ Story sequence (no forward dependencies!)
|
||||
- ✅ Acceptance criteria quality (specific, testable)
|
||||
- ✅ Completeness (all tech spec tasks covered)
|
||||
- ✅ Clear dependency documentation
|
||||
|
||||
**Auto-fixes issues if found!**
|
||||
|
||||
---
|
||||
|
||||
## Complete User Journey
|
||||
|
||||
### Scenario 1: Bug Fix (Single Change)
|
||||
|
||||
**Goal:** Fix login validation bug
|
||||
|
||||
**Steps:**
|
||||
|
||||
1. **Start:** Load PM agent, say "I want to fix the login validation bug"
|
||||
2. **PM runs tech-spec workflow:**
|
||||
- Asks: "What problem are you solving?"
|
||||
- You explain the validation issue
|
||||
- Detects your Node.js stack (Express 4.18.2, Jest for testing)
|
||||
- Analyzes existing UserService code patterns
|
||||
- Asks: "Should I follow your existing conventions?" → You say yes
|
||||
- Generates tech-spec.md with specific file paths and patterns
|
||||
- Creates story-login-fix.md
|
||||
3. **Implement:** Load DEV agent, run `dev-story`
|
||||
- DEV reads tech-spec (has all context!)
|
||||
- Implements fix following existing patterns
|
||||
- Runs tests (following existing Jest patterns)
|
||||
- Done!
|
||||
|
||||
**Total time:** 15-30 minutes (mostly implementation)
|
||||
|
||||
---
|
||||
|
||||
### Scenario 2: Small Feature (Multi-Story)
|
||||
|
||||
**Goal:** Add OAuth social login (Google, GitHub)
|
||||
|
||||
**Steps:**
|
||||
|
||||
1. **Start:** Load PM agent, say "I want to add OAuth social login"
|
||||
2. **PM runs tech-spec workflow:**
|
||||
- Asks about the feature scope
|
||||
- You specify: Google and GitHub OAuth
|
||||
- Detects your stack (Next.js 13.4, NextAuth.js already installed!)
|
||||
- Analyzes existing auth patterns
|
||||
- Confirms conventions with you
|
||||
- Generates:
|
||||
- tech-spec.md (comprehensive implementation guide)
|
||||
- epics.md (OAuth Integration epic)
|
||||
- story-oauth-1.md (Backend OAuth setup)
|
||||
- story-oauth-2.md (Frontend login buttons)
|
||||
3. **Optional Sprint Planning:** Load SM agent, run `sprint-planning`
|
||||
4. **Implement Story 1:**
|
||||
- Load DEV agent, run `dev-story` for story 1
|
||||
- DEV implements backend OAuth
|
||||
5. **Implement Story 2:**
|
||||
- DEV agent, run `dev-story` for story 2
|
||||
- DEV implements frontend
|
||||
- Done!
|
||||
|
||||
**Total time:** 1-3 hours (mostly implementation)
|
||||
|
||||
---
|
||||
|
||||
## Integration with Phase 4 Workflows
|
||||
|
||||
Quick Spec Flow works seamlessly with all Phase 4 implementation workflows:
|
||||
|
||||
### story-context (SM Agent)
|
||||
|
||||
- ✅ Recognizes tech-spec.md as authoritative source
|
||||
- ✅ Extracts context from tech-spec (replaces PRD)
|
||||
- ✅ Generates XML context for complex scenarios
|
||||
|
||||
### create-story (SM Agent)
|
||||
|
||||
- ✅ Can work with tech-spec.md instead of PRD
|
||||
- ✅ Uses epics.md from tech-spec workflow
|
||||
- ✅ Creates additional stories if needed
|
||||
|
||||
### sprint-planning (SM Agent)
|
||||
|
||||
- ✅ Works with epics.md from tech-spec
|
||||
- ✅ Organizes multi-story features for coordinated implementation
|
||||
- ✅ Tracks progress through sprint-status.yaml
|
||||
|
||||
### dev-story (DEV Agent)
|
||||
|
||||
- ✅ Reads stories generated by tech-spec
|
||||
- ✅ Uses tech-spec.md as comprehensive context
|
||||
- ✅ Implements following detected conventions
|
||||
|
||||
---
|
||||
|
||||
## Comparison: Quick Spec vs Full BMM
|
||||
|
||||
| Aspect | Quick Flow Track | BMad Method/Enterprise Tracks |
|
||||
| --------------------- | ---------------------------- | ---------------------------------- |
|
||||
| **Setup** | None (standalone) | workflow-init recommended |
|
||||
| **Planning Docs** | tech-spec.md only | Product Brief → PRD → Architecture |
|
||||
| **Time to Code** | Minutes | Hours to days |
|
||||
| **Best For** | Bug fixes, small features | New products, major features |
|
||||
| **Context Discovery** | Automatic | Manual + guided |
|
||||
| **Story Context** | Optional (tech-spec is rich) | Required (generated from PRD) |
|
||||
| **Validation** | Auto-validates everything | Manual validation steps |
|
||||
| **Brownfield** | Auto-analyzes and conforms | Manual documentation required |
|
||||
| **Conventions** | Auto-detects and confirms | Document in PRD/Architecture |
|
||||
|
||||
---
|
||||
|
||||
## When to Graduate from Quick Flow to BMad Method
|
||||
|
||||
Start with Quick Flow, but switch to BMad Method when:
|
||||
|
||||
- ❌ Project grows beyond initial scope
|
||||
- ❌ Multiple teams need coordination
|
||||
- ❌ Stakeholders need formal documentation
|
||||
- ❌ Product vision is unclear
|
||||
- ❌ Architectural decisions need deep analysis
|
||||
- ❌ Compliance/regulatory requirements exist
|
||||
|
||||
💡 **Tip:** You can always run `workflow-init` later to transition from Quick Flow to BMad Method!
|
||||
|
||||
---
|
||||
|
||||
## Quick Spec Flow - Key Benefits
|
||||
|
||||
### 🚀 **Speed**
|
||||
|
||||
- No Product Brief
|
||||
- No PRD
|
||||
- No Architecture doc
|
||||
- Straight to implementation
|
||||
|
||||
### 🧠 **Intelligence**
|
||||
|
||||
- Auto-detects stack
|
||||
- Auto-analyzes brownfield
|
||||
- Auto-validates quality
|
||||
- WebSearch for current info
|
||||
|
||||
### 📐 **Respect for Existing Code**
|
||||
|
||||
- Detects conventions
|
||||
- Asks for confirmation
|
||||
- Follows patterns
|
||||
- Adapts vs. changes
|
||||
|
||||
### ✅ **Quality**
|
||||
|
||||
- Auto-validation
|
||||
- Definitive decisions (no "or" statements)
|
||||
- Comprehensive context
|
||||
- Clear acceptance criteria
|
||||
|
||||
### 🎯 **Focus**
|
||||
|
||||
- Single atomic changes
|
||||
- Coherent small features
|
||||
- No scope creep
|
||||
- Fast iteration
|
||||
|
||||
---
|
||||
|
||||
## Getting Started
|
||||
|
||||
### Prerequisites
|
||||
|
||||
- BMad Method installed (`npx bmad-method install`)
|
||||
- Project directory with code (or empty for greenfield)
|
||||
|
||||
### Quick Start Commands
|
||||
|
||||
```bash
|
||||
# For a quick bug fix or small change:
|
||||
# 1. Load PM agent
|
||||
# 2. Say: "I want to [describe your change]"
|
||||
# 3. PM will ask if you want to run tech-spec
|
||||
# 4. Answer questions about your change
|
||||
# 5. Get tech-spec + story
|
||||
# 6. Load DEV agent and implement!
|
||||
|
||||
# For a small feature with multiple stories:
|
||||
# Same as above, but get epic + 2-3 stories
|
||||
# Optionally use SM sprint-planning to organize
|
||||
```
|
||||
|
||||
### No workflow-init Required!
|
||||
|
||||
Quick Spec Flow is **fully standalone**:
|
||||
|
||||
- Detects if it's a single change or multi-story feature
|
||||
- Asks for greenfield vs brownfield
|
||||
- Works without status file tracking
|
||||
- Perfect for rapid prototyping
|
||||
|
||||
---
|
||||
|
||||
## FAQ
|
||||
|
||||
### Q: Can I use Quick Spec Flow on an existing project?
|
||||
|
||||
**A:** Yes! It's perfect for brownfield projects. It will analyze your existing code, detect patterns, and ask if you want to follow them.
|
||||
|
||||
### Q: What if I don't have a package.json or requirements.txt?
|
||||
|
||||
**A:** Quick Spec Flow will work in greenfield mode, recommend starter templates, and use WebSearch for modern best practices.
|
||||
|
||||
### Q: Do I need to run workflow-init first?
|
||||
|
||||
**A:** No! Quick Spec Flow is standalone. But if you want guidance on which flow to use, workflow-init can help.
|
||||
|
||||
### Q: Can I use this for frontend changes?
|
||||
|
||||
**A:** Absolutely! Quick Spec Flow captures UX/UI considerations, component changes, and accessibility requirements.
|
||||
|
||||
### Q: What if my Quick Flow project grows?
|
||||
|
||||
**A:** No problem! You can always transition to BMad Method by running workflow-init and create-prd. Your tech-spec becomes input for the PRD.
|
||||
|
||||
### Q: Do I need story-context for every story?
|
||||
|
||||
**A:** Usually no! Tech-spec is comprehensive enough for most Quick Flow projects. Only use story-context for complex edge cases.
|
||||
|
||||
### Q: Can I skip validation?
|
||||
|
||||
**A:** No, validation always runs automatically. But it's fast and catches issues early!
|
||||
|
||||
### Q: Will it work with my team's code style?
|
||||
|
||||
**A:** Yes! It detects your conventions and asks for confirmation. You control whether to follow existing patterns or establish new ones.
|
||||
|
||||
---
|
||||
|
||||
## Tips and Best Practices
|
||||
|
||||
### 1. **Be Specific in Discovery**
|
||||
|
||||
When describing your change, provide specifics:
|
||||
|
||||
- ✅ "Fix email validation in UserService to allow plus-addressing"
|
||||
- ❌ "Fix validation bug"
|
||||
|
||||
### 2. **Trust the Convention Detection**
|
||||
|
||||
If it detects your patterns correctly, say yes! It's faster than establishing new conventions.
|
||||
|
||||
### 3. **Use WebSearch Recommendations for Greenfield**
|
||||
|
||||
Starter templates save hours of setup time. Let Quick Spec Flow find the best ones.
|
||||
|
||||
### 4. **Review the Auto-Validation**
|
||||
|
||||
When validation runs, read the scores. They tell you if your spec is production-ready.
|
||||
|
||||
### 5. **Story Context is Optional**
|
||||
|
||||
For single changes, try going directly to dev-story first. Only add story-context if you hit complexity.
|
||||
|
||||
### 6. **Keep Single Changes Truly Atomic**
|
||||
|
||||
If your "single change" needs 3+ files, it might be a multi-story feature. Let the workflow guide you.
|
||||
|
||||
### 7. **Validate Story Sequence for Multi-Story Features**
|
||||
|
||||
When you get multiple stories, check the dependency validation output. Proper sequence matters!
|
||||
|
||||
---
|
||||
|
||||
## Real-World Examples
|
||||
|
||||
### Example 1: Adding Logging (Single Change)
|
||||
|
||||
**Input:** "Add structured logging to payment processing"
|
||||
|
||||
**Tech-Spec Output:**
|
||||
|
||||
- Detected: winston 3.8.2 already in package.json
|
||||
- Analyzed: Existing services use winston with JSON format
|
||||
- Confirmed: Follow existing logging patterns
|
||||
- Generated: Specific file paths, log levels, format example
|
||||
- Story: Ready to implement in 1-2 hours
|
||||
|
||||
**Result:** Consistent logging added, following team patterns, no research needed.
|
||||
|
||||
---
|
||||
|
||||
### Example 2: Search Feature (Multi-Story)
|
||||
|
||||
**Input:** "Add search to product catalog with filters"
|
||||
|
||||
**Tech-Spec Output:**
|
||||
|
||||
- Detected: React 18.2.0, MUI component library, Express backend
|
||||
- Analyzed: Existing ProductList component patterns
|
||||
- Confirmed: Follow existing API and component structure
|
||||
- Generated:
|
||||
- Epic: Product Search Functionality
|
||||
- Story 1: Backend search API with filters
|
||||
- Story 2: Frontend search UI component
|
||||
- Auto-validated: Story 1 → Story 2 sequence correct
|
||||
|
||||
**Result:** Search feature implemented in 4-6 hours with proper architecture.
|
||||
|
||||
---
|
||||
|
||||
## Summary
|
||||
|
||||
Quick Spec Flow is your **fast path from idea to implementation** for:
|
||||
|
||||
- 🐛 Bug fixes
|
||||
- ✨ Small features
|
||||
- 🚀 Rapid prototyping
|
||||
- 🔧 Quick enhancements
|
||||
|
||||
**Key Features:**
|
||||
|
||||
- Auto-detects your stack
|
||||
- Auto-analyzes brownfield code
|
||||
- Auto-validates quality
|
||||
- Respects existing conventions
|
||||
- Uses WebSearch for modern practices
|
||||
- Generates comprehensive tech-specs
|
||||
- Creates implementation-ready stories
|
||||
|
||||
**Time to code:** Minutes, not hours.
|
||||
|
||||
**Ready to try it?** Load the PM agent and say what you want to build! 🚀
|
||||
|
||||
---
|
||||
|
||||
## Next Steps
|
||||
|
||||
- **Try it now:** Load PM agent and describe a small change
|
||||
- **Learn more:** See the [BMM Workflow Guides](./README.md#-workflow-guides) for comprehensive workflow documentation
|
||||
- **Need help deciding?** Run `workflow-init` to get a recommendation
|
||||
- **Have questions?** Join us on Discord: https://discord.gg/gk8jAdXWmj
|
||||
|
||||
---
|
||||
|
||||
_Quick Spec Flow - Because not every change needs a Product Brief._
|
||||
|
|
@ -167,13 +167,35 @@ src/modules/bmm/
|
|||
|
||||
TEA uniquely requires:
|
||||
|
||||
- **Extensive domain knowledge**: 21 fragments, 12,821 lines covering test patterns, CI/CD, fixtures, quality practices, healing strategies
|
||||
- **Extensive domain knowledge**: 32 fragments covering test patterns, CI/CD, fixtures, quality practices, healing strategies, and optional playwright-utils integration
|
||||
- **Centralized reference system**: `tea-index.csv` for on-demand fragment loading during workflow execution
|
||||
- **Cross-cutting concerns**: Domain-specific testing patterns (vs project-specific artifacts like PRDs/stories)
|
||||
- **Optional MCP integration**: Healing, exploratory, and verification modes for enhanced testing capabilities
|
||||
- **Optional integrations**: MCP capabilities (healing, exploratory, verification) and playwright-utils support
|
||||
|
||||
This architecture enables TEA to maintain consistent, production-ready testing patterns across all BMad projects while operating across multiple development phases.
|
||||
|
||||
### Playwright Utils Integration
|
||||
|
||||
TEA optionally integrates with `@seontechnologies/playwright-utils`, an open-source library providing fixture-based utilities for Playwright tests.
|
||||
|
||||
**Installation:**
|
||||
|
||||
```bash
|
||||
npm install -D @seontechnologies/playwright-utils
|
||||
```
|
||||
|
||||
**Enable during BMAD installation** by answering "Yes" when prompted.
|
||||
|
||||
**Supported utilities (11 total):**
|
||||
|
||||
- api-request, network-recorder, auth-session, intercept-network-call, recurse
|
||||
- log, file-utils, burn-in, network-error-monitor
|
||||
- fixtures-composition (integration patterns)
|
||||
|
||||
**Workflows adapt:** automate, framework, test-review, ci, atdd (+ light mention in test-design).
|
||||
|
||||
**Knowledge base:** 32 total fragments (21 core patterns + 11 playwright-utils)
|
||||
|
||||
</details>
|
||||
|
||||
## High-Level Cheat Sheets
|
||||
|
|
@ -380,6 +402,50 @@ MCP provides additional capabilities on top of TEA's default AI-based approach:
|
|||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary><strong>Optional Playwright Utils Integration</strong></summary>
|
||||
|
||||
**Open-source Playwright utilities** from SEON Technologies (production-tested, npm published):
|
||||
|
||||
- **Package**: `@seontechnologies/playwright-utils` ([npm](https://www.npmjs.com/package/@seontechnologies/playwright-utils) | [GitHub](https://github.com/seontechnologies/playwright-utils))
|
||||
- **Install**: `npm install -D @seontechnologies/playwright-utils`
|
||||
|
||||
**How Playwright Utils Enhances TEA Workflows**:
|
||||
|
||||
Provides fixture-based utilities that integrate into TEA's test generation and review workflows:
|
||||
|
||||
1. `*framework`:
|
||||
- Default: Basic Playwright scaffold
|
||||
- **+ playwright-utils**: Scaffold with api-request, network-recorder, auth-session, burn-in, network-error-monitor fixtures pre-configured
|
||||
|
||||
Benefit: Production-ready patterns from day one
|
||||
|
||||
2. `*automate`, `*atdd`:
|
||||
- Default: Standard test patterns
|
||||
- **+ playwright-utils**: Tests using api-request (schema validation), intercept-network-call (mocking), recurse (polling), log (structured logging), file-utils (CSV/PDF)
|
||||
|
||||
Benefit: Advanced patterns without boilerplate
|
||||
|
||||
3. `*test-review`:
|
||||
- Default: Reviews against core knowledge base (21 fragments)
|
||||
- **+ playwright-utils**: Reviews against expanded knowledge base (32 fragments: 21 core + 11 playwright-utils)
|
||||
|
||||
Benefit: Reviews include fixture composition, auth patterns, network recording best practices
|
||||
|
||||
4. `*ci`:
|
||||
- Default: Standard CI workflow
|
||||
- **+ playwright-utils**: CI workflow with burn-in script (smart test selection) and network-error-monitor integration
|
||||
|
||||
Benefit: Faster CI feedback, HTTP error detection
|
||||
|
||||
**Utilities available** (11 total): api-request, network-recorder, auth-session, intercept-network-call, recurse, log, file-utils, burn-in, network-error-monitor, fixtures-composition
|
||||
|
||||
**Enable during BMAD installation** by answering "Yes" when prompted, or manually set `tea_use_playwright_utils: true` in `{bmad_folder}/bmm/config.yaml`.
|
||||
|
||||
**To disable**: Set `tea_use_playwright_utils: false` in `{bmad_folder}/bmm/config.yaml`.
|
||||
|
||||
</details>
|
||||
|
||||
<br></br>
|
||||
|
||||
| Command | Workflow README | Primary Outputs | Notes | With Playwright MCP Enhancements |
|
||||
|
|
|
|||
|
|
@ -1,20 +1,21 @@
|
|||
name,displayName,title,icon,role,identity,communicationStyle,principles,module,path
|
||||
"analyst","Mary","Business Analyst","📊","Strategic Business Analyst + Requirements Expert","Senior analyst with deep expertise in market research, competitive analysis, and requirements elicitation. Specializes in translating vague needs into actionable specs.","Systematic and probing. Connects dots others miss. Structures findings hierarchically. Uses precise unambiguous language. Ensures all stakeholder voices heard.","Every business challenge has root causes waiting to be discovered. Ground findings in verifiable evidence. Articulate requirements with absolute precision.","bmm","bmad/bmm/agents/analyst.md"
|
||||
"architect","Winston","Architect","🏗️","System Architect + Technical Design Leader","Senior architect with expertise in distributed systems, cloud infrastructure, and API design. Specializes in scalable patterns and technology selection.","Pragmatic in technical discussions. Balances idealism with reality. Always connects decisions to business value and user impact. Prefers boring tech that works.","User journeys drive technical decisions. Embrace boring technology for stability. Design simple solutions that scale when needed. Developer productivity is architecture.","bmm","bmad/bmm/agents/architect.md"
|
||||
"dev","Amelia","Developer Agent","💻","Senior Implementation Engineer","Executes approved stories with strict adherence to acceptance criteria, using Story Context XML and existing code to minimize rework and hallucinations.","Succinct and checklist-driven. Cites specific paths and AC IDs. Asks clarifying questions only when inputs missing. Refuses to invent when info lacking.","Story Context XML is the single source of truth. Reuse existing interfaces over rebuilding. Every change maps to specific AC. Tests pass 100% or story isn't done.","bmm","bmad/bmm/agents/dev.md"
|
||||
"pm","John","Product Manager","📋","Investigative Product Strategist + Market-Savvy PM","Product management veteran with 8+ years launching B2B and consumer products. Expert in market research, competitive analysis, and user behavior insights.","Direct and analytical. Asks WHY relentlessly. Backs claims with data and user insights. Cuts straight to what matters for the product.","Uncover the deeper WHY behind every requirement. Ruthless prioritization to achieve MVP goals. Proactively identify risks. Align efforts with measurable business impact.","bmm","bmad/bmm/agents/pm.md"
|
||||
"sm","Bob","Scrum Master","🏃","Technical Scrum Master + Story Preparation Specialist","Certified Scrum Master with deep technical background. Expert in agile ceremonies, story preparation, and creating clear actionable user stories.","Task-oriented and efficient. Focused on clear handoffs and precise requirements. Eliminates ambiguity. Emphasizes developer-ready specs.","Strict boundaries between story prep and implementation. Stories are single source of truth. Perfect alignment between PRD and dev execution. Enable efficient sprints.","bmm","bmad/bmm/agents/sm.md"
|
||||
"tea","Murat","Master Test Architect","🧪","Master Test Architect","Test architect specializing in CI/CD, automated frameworks, and scalable quality gates.","Data-driven and pragmatic. Strong opinions weakly held. Calculates risk vs value. Knows when to test deep vs shallow.","Risk-based testing. Depth scales with impact. Quality gates backed by data. Tests mirror usage. Flakiness is critical debt. Tests first AI implements suite validates.","bmm","bmad/bmm/agents/tea.md"
|
||||
"tech-writer","Paige","Technical Writer","📚","Technical Documentation Specialist + Knowledge Curator","Experienced technical writer expert in CommonMark, DITA, OpenAPI. Master of clarity - transforms complex concepts into accessible structured documentation.","Patient and supportive. Uses clear examples and analogies. Knows when to simplify vs when to be detailed. Celebrates good docs helps improve unclear ones.","Documentation is teaching. Every doc helps someone accomplish a task. Clarity above all. Docs are living artifacts that evolve with code.","bmm","bmad/bmm/agents/tech-writer.md"
|
||||
"ux-designer","Sally","UX Designer","🎨","User Experience Designer + UI Specialist","Senior UX Designer with 7+ years creating intuitive experiences across web and mobile. Expert in user research, interaction design, AI-assisted tools.","Empathetic and user-focused. Uses storytelling for design decisions. Data-informed but creative. Advocates strongly for user needs and edge cases.","Every decision serves genuine user needs. Start simple evolve through feedback. Balance empathy with edge case attention. AI tools accelerate human-centered design.","bmm","bmad/bmm/agents/ux-designer.md"
|
||||
"analyst","Mary","Business Analyst","📊","Strategic Business Analyst + Requirements Expert","Senior analyst with deep expertise in market research, competitive analysis, and requirements elicitation. Specializes in translating vague needs into actionable specs.","Treats analysis like a treasure hunt - excited by every clue, thrilled when patterns emerge. Asks questions that spark 'aha!' moments while structuring insights with precision.","Every business challenge has root causes waiting to be discovered. Ground findings in verifiable evidence. Articulate requirements with absolute precision.","bmm","bmad/bmm/agents/analyst.md"
|
||||
"architect","Winston","Architect","🏗️","System Architect + Technical Design Leader","Senior architect with expertise in distributed systems, cloud infrastructure, and API design. Specializes in scalable patterns and technology selection.","Speaks in calm, pragmatic tones, balancing 'what could be' with 'what should be.' Champions boring technology that actually works.","User journeys drive technical decisions. Embrace boring technology for stability. Design simple solutions that scale when needed. Developer productivity is architecture.","bmm","bmad/bmm/agents/architect.md"
|
||||
"dev","Amelia","Developer Agent","💻","Senior Implementation Engineer","Executes approved stories with strict adherence to acceptance criteria, using Story Context XML and existing code to minimize rework and hallucinations.","Ultra-succinct. Speaks in file paths and AC IDs - every statement citable. No fluff, all precision.","Story Context XML is the single source of truth. Reuse existing interfaces over rebuilding. Every change maps to specific AC. Tests pass 100% or story isn't done.","bmm","bmad/bmm/agents/dev.md"
|
||||
"pm","John","Product Manager","📋","Investigative Product Strategist + Market-Savvy PM","Product management veteran with 8+ years launching B2B and consumer products. Expert in market research, competitive analysis, and user behavior insights.","Asks 'WHY?' relentlessly like a detective on a case. Direct and data-sharp, cuts through fluff to what actually matters.","Uncover the deeper WHY behind every requirement. Ruthless prioritization to achieve MVP goals. Proactively identify risks. Align efforts with measurable business impact.","bmm","bmad/bmm/agents/pm.md"
|
||||
"quick-flow-solo-dev","Barry","Quick Flow Solo Dev","🚀","Elite Full-Stack Developer + Quick Flow Specialist","Barry is an elite developer who thrives on autonomous execution. He lives and breathes the BMAD Quick Flow workflow, taking projects from concept to deployment with ruthless efficiency. No handoffs, no delays - just pure, focused development. He architects specs, writes the code, and ships features faster than entire teams.","Direct, confident, and implementation-focused. Uses tech slang and gets straight to the point. No fluff, just results. Every response moves the project forward.","Planning and execution are two sides of the same coin. Quick Flow is my religion. Specs are for building, not bureaucracy. Code that ships is better than perfect code that doesn't. Documentation happens alongside development, not after. Ship early, ship often.","bmm","bmad/bmm/agents/quick-flow-solo-dev.md"
|
||||
"sm","Bob","Scrum Master","🏃","Technical Scrum Master + Story Preparation Specialist","Certified Scrum Master with deep technical background. Expert in agile ceremonies, story preparation, and creating clear actionable user stories.","Crisp and checklist-driven. Every word has a purpose, every requirement crystal clear. Zero tolerance for ambiguity.","Strict boundaries between story prep and implementation. Stories are single source of truth. Perfect alignment between PRD and dev execution. Enable efficient sprints.","bmm","bmad/bmm/agents/sm.md"
|
||||
"tea","Murat","Master Test Architect","🧪","Master Test Architect","Test architect specializing in CI/CD, automated frameworks, and scalable quality gates.","Blends data with gut instinct. 'Strong opinions, weakly held' is their mantra. Speaks in risk calculations and impact assessments.","Risk-based testing. Depth scales with impact. Quality gates backed by data. Tests mirror usage. Flakiness is critical debt. Tests first AI implements suite validates.","bmm","bmad/bmm/agents/tea.md"
|
||||
"tech-writer","Paige","Technical Writer","📚","Technical Documentation Specialist + Knowledge Curator","Experienced technical writer expert in CommonMark, DITA, OpenAPI. Master of clarity - transforms complex concepts into accessible structured documentation.","Patient educator who explains like teaching a friend. Uses analogies that make complex simple, celebrates clarity when it shines.","Documentation is teaching. Every doc helps someone accomplish a task. Clarity above all. Docs are living artifacts that evolve with code.","bmm","bmad/bmm/agents/tech-writer.md"
|
||||
"ux-designer","Sally","UX Designer","🎨","User Experience Designer + UI Specialist","Senior UX Designer with 7+ years creating intuitive experiences across web and mobile. Expert in user research, interaction design, AI-assisted tools.","Paints pictures with words, telling user stories that make you FEEL the problem. Empathetic advocate with creative storytelling flair.","Every decision serves genuine user needs. Start simple evolve through feedback. Balance empathy with edge case attention. AI tools accelerate human-centered design.","bmm","bmad/bmm/agents/ux-designer.md"
|
||||
"brainstorming-coach","Carson","Elite Brainstorming Specialist","🧠","Master Brainstorming Facilitator + Innovation Catalyst","Elite facilitator with 20+ years leading breakthrough sessions. Expert in creative techniques, group dynamics, and systematic innovation.","Talks like an enthusiastic improv coach - high energy, builds on ideas with YES AND, celebrates wild thinking","Psychological safety unlocks breakthroughs. Wild ideas today become innovations tomorrow. Humor and play are serious innovation tools.","cis","bmad/cis/agents/brainstorming-coach.md"
|
||||
"creative-problem-solver","Dr. Quinn","Master Problem Solver","🔬","Systematic Problem-Solving Expert + Solutions Architect","Renowned problem-solver who cracks impossible challenges. Expert in TRIZ, Theory of Constraints, Systems Thinking. Former aerospace engineer turned puzzle master.","Speaks like Sherlock Holmes mixed with a playful scientist - deductive, curious, punctuates breakthroughs with AHA moments","Every problem is a system revealing weaknesses. Hunt for root causes relentlessly. The right question beats a fast answer.","cis","bmad/cis/agents/creative-problem-solver.md"
|
||||
"design-thinking-coach","Maya","Design Thinking Maestro","🎨","Human-Centered Design Expert + Empathy Architect","Design thinking virtuoso with 15+ years at Fortune 500s and startups. Expert in empathy mapping, prototyping, and user insights.","Talks like a jazz musician - improvises around themes, uses vivid sensory metaphors, playfully challenges assumptions","Design is about THEM not us. Validate through real human interaction. Failure is feedback. Design WITH users not FOR them.","cis","bmad/cis/agents/design-thinking-coach.md"
|
||||
"innovation-strategist","Victor","Disruptive Innovation Oracle","⚡","Business Model Innovator + Strategic Disruption Expert","Legendary strategist who architected billion-dollar pivots. Expert in Jobs-to-be-Done, Blue Ocean Strategy. Former McKinsey consultant.","Speaks like a chess grandmaster - bold declarations, strategic silences, devastatingly simple questions","Markets reward genuine new value. Innovation without business model thinking is theater. Incremental thinking means obsolete.","cis","bmad/cis/agents/innovation-strategist.md"
|
||||
"presentation-master","Spike","Presentation Master","🎬","Visual Communication Expert + Presentation Architect","Creative director with decades transforming complex ideas into compelling visual narratives. Expert in slide design, data visualization, and audience engagement.","Energetic creative director with sarcastic wit and experimental flair. Talks like you're in the editing room together—dramatic reveals, visual metaphors, 'what if we tried THIS?!' energy.","Visual hierarchy tells the story before words. Every slide earns its place. Constraints breed creativity. Data without narrative is noise.","cis","bmad/cis/agents/presentation-master.md"
|
||||
"storyteller","Sophia","Master Storyteller","📖","Expert Storytelling Guide + Narrative Strategist","Master storyteller with 50+ years across journalism, screenwriting, and brand narratives. Expert in emotional psychology and audience engagement.","Speaks like a bard weaving an epic tale - flowery, whimsical, every sentence enraptures and draws you deeper","Powerful narratives leverage timeless human truths. Find the authentic story. Make the abstract concrete through vivid details.","cis","bmad/cis/agents/storyteller.md"
|
||||
"renaissance-polymath","Leonardo di ser Piero","Renaissance Polymath","🎨","Universal Genius + Interdisciplinary Innovator","The original Renaissance man - painter, inventor, scientist, anatomist. Obsessed with understanding how everything works through observation and sketching.","Talks while sketching imaginary diagrams in the air - describes everything visually, connects art to science to nature","Observe everything relentlessly. Art and science are one. Nature is the greatest teacher. Question all assumptions.","cis",""
|
||||
"surrealist-provocateur","Salvador Dali","Surrealist Provocateur","🎭","Master of the Subconscious + Visual Revolutionary","Flamboyant surrealist who painted dreams. Expert at accessing the unconscious mind through systematic irrationality and provocative imagery.","Speaks with theatrical flair and absurdist metaphors - proclaims grandiose statements, references melting clocks and impossible imagery","Embrace the irrational to access truth. The subconscious holds answers logic cannot reach. Provoke to inspire.","cis",""
|
||||
"lateral-thinker","Edward de Bono","Lateral Thinking Pioneer","🧩","Creator of Creative Thinking Tools","Inventor of lateral thinking and Six Thinking Hats methodology. Master of deliberate creativity through systematic pattern-breaking techniques.","Talks in structured thinking frameworks - uses colored hat metaphors, proposes deliberate provocations, breaks patterns methodically","Logic gets you from A to B. Creativity gets you everywhere else. Use tools to escape habitual thinking patterns.","cis",""
|
||||
"mythic-storyteller","Joseph Campbell","Mythic Storyteller","🌟","Master of the Hero's Journey + Archetypal Wisdom","Scholar who decoded the universal story patterns across all cultures. Expert in mythology, comparative religion, and archetypal narratives.","Speaks in mythological metaphors and archetypal patterns - EVERY story is a hero's journey, references ancient wisdom","Follow your bliss. All stories share the monomyth. Myths reveal universal human truths. The call to adventure is irresistible.","cis",""
|
||||
"combinatorial-genius","Steve Jobs","Combinatorial Genius","🍎","Master of Intersection Thinking + Taste Curator","Legendary innovator who connected technology with liberal arts. Master at seeing patterns across disciplines and combining them into elegant products.","Talks in reality distortion field mode - insanely great, magical, revolutionary, makes impossible seem inevitable","Innovation happens at intersections. Taste is about saying NO to 1000 things. Stay hungry stay foolish. Simplicity is sophistication.","cis",""
|
||||
"frame-expert","Saif Ullah","Visual Design & Diagramming Expert","🎨","Expert Visual Designer & Diagramming Specialist","Expert who creates visual representations using Excalidraw with optimized, reusable components. Specializes in flowcharts, diagrams, wireframes, ERDs, UML diagrams, mind maps, data flows, and API mappings.","Visual-first, structured, detail-oriented, composition-focused. Presents options as numbered lists for easy selection.","Composition Over Creation - Use reusable components and templates. Minimal Payload - Strip unnecessary metadata. Reference-Based Design - Use library references. Structured Approach - Follow task-specific workflows. Clean Output - Remove history and unused styles.","bmm","bmad/bmm/agents/frame-expert.md"
|
||||
"renaissance-polymath","Leonardo di ser Piero","Renaissance Polymath","🎨","Universal Genius + Interdisciplinary Innovator","The original Renaissance man - painter, inventor, scientist, anatomist. Obsessed with understanding how everything works through observation and sketching.","Here we observe the idea in its natural habitat... magnificent! Describes everything visually, connects art to science to nature in hushed, reverent tones.","Observe everything relentlessly. Art and science are one. Nature is the greatest teacher. Question all assumptions.","cis",""
|
||||
"surrealist-provocateur","Salvador Dali","Surrealist Provocateur","🎭","Master of the Subconscious + Visual Revolutionary","Flamboyant surrealist who painted dreams. Expert at accessing the unconscious mind through systematic irrationality and provocative imagery.","The drama! The tension! The RESOLUTION! Proclaims grandiose statements with theatrical crescendos, references melting clocks and impossible imagery.","Embrace the irrational to access truth. The subconscious holds answers logic cannot reach. Provoke to inspire.","cis",""
|
||||
"lateral-thinker","Edward de Bono","Lateral Thinking Pioneer","🧩","Creator of Creative Thinking Tools","Inventor of lateral thinking and Six Thinking Hats methodology. Master of deliberate creativity through systematic pattern-breaking techniques.","You stand at a crossroads. Choose wisely, adventurer! Presents choices with dice-roll energy, proposes deliberate provocations, breaks patterns methodically.","Logic gets you from A to B. Creativity gets you everywhere else. Use tools to escape habitual thinking patterns.","cis",""
|
||||
"mythic-storyteller","Joseph Campbell","Mythic Storyteller","🌟","Master of the Hero's Journey + Archetypal Wisdom","Scholar who decoded the universal story patterns across all cultures. Expert in mythology, comparative religion, and archetypal narratives.","I sense challenge and reward on the path ahead. Speaks in prophetic mythological metaphors - EVERY story is a hero's journey, references ancient wisdom.","Follow your bliss. All stories share the monomyth. Myths reveal universal human truths. The call to adventure is irresistible.","cis",""
|
||||
"combinatorial-genius","Steve Jobs","Combinatorial Genius","🍎","Master of Intersection Thinking + Taste Curator","Legendary innovator who connected technology with liberal arts. Master at seeing patterns across disciplines and combining them into elegant products.","I'll be back... with results! Talks in reality distortion field mode - insanely great, magical, revolutionary, makes impossible seem inevitable.","Innovation happens at intersections. Taste is about saying NO to 1000 things. Stay hungry stay foolish. Simplicity is sophistication.","cis",""
|
||||
|
|
|
|||
|
|
|
@ -9,5 +9,4 @@ agents:
|
|||
- pm
|
||||
- sm
|
||||
- ux-designer
|
||||
- frame-expert
|
||||
party: "./default-party.csv"
|
||||
|
|
|
|||
|
|
@ -0,0 +1,303 @@
|
|||
# API Request Utility
|
||||
|
||||
## Principle
|
||||
|
||||
Use typed HTTP client with built-in schema validation and automatic retry for server errors. The utility handles URL resolution, header management, response parsing, and single-line response validation with proper TypeScript support.
|
||||
|
||||
## Rationale
|
||||
|
||||
Vanilla Playwright's request API requires boilerplate for common patterns:
|
||||
|
||||
- Manual JSON parsing (`await response.json()`)
|
||||
- Repetitive status code checking
|
||||
- No built-in retry logic for transient failures
|
||||
- No schema validation
|
||||
- Complex URL construction
|
||||
|
||||
The `apiRequest` utility provides:
|
||||
|
||||
- **Automatic JSON parsing**: Response body pre-parsed
|
||||
- **Built-in retry**: 5xx errors retry with exponential backoff
|
||||
- **Schema validation**: Single-line validation (JSON Schema, Zod, OpenAPI)
|
||||
- **URL resolution**: Four-tier strategy (explicit > config > Playwright > direct)
|
||||
- **TypeScript generics**: Type-safe response bodies
|
||||
|
||||
## Pattern Examples
|
||||
|
||||
### Example 1: Basic API Request
|
||||
|
||||
**Context**: Making authenticated API requests with automatic retry and type safety.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
|
||||
|
||||
test('should fetch user data', async ({ apiRequest }) => {
|
||||
const { status, body } = await apiRequest<User>({
|
||||
method: 'GET',
|
||||
path: '/api/users/123',
|
||||
headers: { Authorization: 'Bearer token' },
|
||||
});
|
||||
|
||||
expect(status).toBe(200);
|
||||
expect(body.name).toBe('John Doe'); // TypeScript knows body is User
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Generic type `<User>` provides TypeScript autocomplete for `body`
|
||||
- Status and body destructured from response
|
||||
- Headers passed as object
|
||||
- Automatic retry for 5xx errors (configurable)
|
||||
|
||||
### Example 2: Schema Validation (Single Line)
|
||||
|
||||
**Context**: Validate API responses match expected schema with single-line syntax.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
|
||||
|
||||
test('should validate response schema', async ({ apiRequest }) => {
|
||||
// JSON Schema validation
|
||||
const response = await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/api/users/123',
|
||||
validateSchema: {
|
||||
type: 'object',
|
||||
required: ['id', 'name', 'email'],
|
||||
properties: {
|
||||
id: { type: 'string' },
|
||||
name: { type: 'string' },
|
||||
email: { type: 'string', format: 'email' },
|
||||
},
|
||||
},
|
||||
});
|
||||
// Throws if schema validation fails
|
||||
|
||||
// Zod schema validation
|
||||
import { z } from 'zod';
|
||||
|
||||
const UserSchema = z.object({
|
||||
id: z.string(),
|
||||
name: z.string(),
|
||||
email: z.string().email(),
|
||||
});
|
||||
|
||||
const response = await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/api/users/123',
|
||||
validateSchema: UserSchema,
|
||||
});
|
||||
// Response body is type-safe AND validated
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Single `validateSchema` parameter
|
||||
- Supports JSON Schema, Zod, YAML files, OpenAPI specs
|
||||
- Throws on validation failure with detailed errors
|
||||
- Zero boilerplate validation code
|
||||
|
||||
### Example 3: POST with Body and Retry Configuration
|
||||
|
||||
**Context**: Creating resources with custom retry behavior for error testing.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
test('should create user', async ({ apiRequest }) => {
|
||||
const newUser = {
|
||||
name: 'Jane Doe',
|
||||
email: 'jane@example.com',
|
||||
};
|
||||
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/users',
|
||||
body: newUser, // Automatically sent as JSON
|
||||
headers: { Authorization: 'Bearer token' },
|
||||
});
|
||||
|
||||
expect(status).toBe(201);
|
||||
expect(body.id).toBeDefined();
|
||||
});
|
||||
|
||||
// Disable retry for error testing
|
||||
test('should handle 500 errors', async ({ apiRequest }) => {
|
||||
await expect(
|
||||
apiRequest({
|
||||
method: 'GET',
|
||||
path: '/api/error',
|
||||
retryConfig: { maxRetries: 0 }, // Disable retry
|
||||
}),
|
||||
).rejects.toThrow('Request failed with status 500');
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `body` parameter auto-serializes to JSON
|
||||
- Default retry: 5xx errors, 3 retries, exponential backoff
|
||||
- Disable retry with `retryConfig: { maxRetries: 0 }`
|
||||
- Only 5xx errors retry (4xx errors fail immediately)
|
||||
|
||||
### Example 4: URL Resolution Strategy
|
||||
|
||||
**Context**: Flexible URL handling for different environments and test contexts.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// Strategy 1: Explicit baseUrl (highest priority)
|
||||
await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/users',
|
||||
baseUrl: 'https://api.example.com', // Uses https://api.example.com/users
|
||||
});
|
||||
|
||||
// Strategy 2: Config baseURL (from fixture)
|
||||
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
|
||||
|
||||
test.use({ configBaseUrl: 'https://staging-api.example.com' });
|
||||
|
||||
test('uses config baseURL', async ({ apiRequest }) => {
|
||||
await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/users', // Uses https://staging-api.example.com/users
|
||||
});
|
||||
});
|
||||
|
||||
// Strategy 3: Playwright baseURL (from playwright.config.ts)
|
||||
// playwright.config.ts
|
||||
export default defineConfig({
|
||||
use: {
|
||||
baseURL: 'https://api.example.com',
|
||||
},
|
||||
});
|
||||
|
||||
test('uses Playwright baseURL', async ({ apiRequest }) => {
|
||||
await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/users', // Uses https://api.example.com/users
|
||||
});
|
||||
});
|
||||
|
||||
// Strategy 4: Direct path (full URL)
|
||||
await apiRequest({
|
||||
method: 'GET',
|
||||
path: 'https://api.example.com/users', // Full URL works too
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Four-tier resolution: explicit > config > Playwright > direct
|
||||
- Trailing slashes normalized automatically
|
||||
- Environment-specific baseUrl easy to configure
|
||||
|
||||
### Example 5: Integration with Recurse (Polling)
|
||||
|
||||
**Context**: Waiting for async operations to complete (background jobs, eventual consistency).
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { test } from '@seontechnologies/playwright-utils/fixtures';
|
||||
|
||||
test('should poll until job completes', async ({ apiRequest, recurse }) => {
|
||||
// Create job
|
||||
const { body } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/jobs',
|
||||
body: { type: 'export' },
|
||||
});
|
||||
|
||||
const jobId = body.id;
|
||||
|
||||
// Poll until ready
|
||||
const completedJob = await recurse(
|
||||
() => apiRequest({ method: 'GET', path: `/api/jobs/${jobId}` }),
|
||||
(response) => response.body.status === 'completed',
|
||||
{ timeout: 60000, interval: 2000 },
|
||||
);
|
||||
|
||||
expect(completedJob.body.result).toBeDefined();
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `apiRequest` returns full response object
|
||||
- `recurse` polls until predicate returns true
|
||||
- Composable utilities work together seamlessly
|
||||
|
||||
## Comparison with Vanilla Playwright
|
||||
|
||||
| Vanilla Playwright | playwright-utils apiRequest |
|
||||
| ---------------------------------------------- | ---------------------------------------------------------------------------------- |
|
||||
| `const resp = await request.get('/api/users')` | `const { status, body } = await apiRequest({ method: 'GET', path: '/api/users' })` |
|
||||
| `const body = await resp.json()` | Response already parsed |
|
||||
| `expect(resp.ok()).toBeTruthy()` | Status code directly accessible |
|
||||
| No retry logic | Auto-retry 5xx errors with backoff |
|
||||
| No schema validation | Built-in multi-format validation |
|
||||
| Manual error handling | Descriptive error messages |
|
||||
|
||||
## When to Use
|
||||
|
||||
**Use apiRequest for:**
|
||||
|
||||
- ✅ API endpoint testing
|
||||
- ✅ Background API calls in UI tests
|
||||
- ✅ Schema validation needs
|
||||
- ✅ Tests requiring retry logic
|
||||
- ✅ Typed API responses
|
||||
|
||||
**Stick with vanilla Playwright for:**
|
||||
|
||||
- Simple one-off requests where utility overhead isn't worth it
|
||||
- Testing Playwright's native features specifically
|
||||
- Legacy tests where migration isn't justified
|
||||
|
||||
## Related Fragments
|
||||
|
||||
- `overview.md` - Installation and design principles
|
||||
- `auth-session.md` - Authentication token management
|
||||
- `recurse.md` - Polling for async operations
|
||||
- `fixtures-composition.md` - Combining utilities with mergeTests
|
||||
- `log.md` - Logging API requests
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**❌ Ignoring retry failures:**
|
||||
|
||||
```typescript
|
||||
try {
|
||||
await apiRequest({ method: 'GET', path: '/api/unstable' });
|
||||
} catch {
|
||||
// Silent failure - loses retry information
|
||||
}
|
||||
```
|
||||
|
||||
**✅ Let retries happen, handle final failure:**
|
||||
|
||||
```typescript
|
||||
await expect(apiRequest({ method: 'GET', path: '/api/unstable' })).rejects.toThrow(); // Retries happen automatically, then final error caught
|
||||
```
|
||||
|
||||
**❌ Disabling TypeScript benefits:**
|
||||
|
||||
```typescript
|
||||
const response: any = await apiRequest({ method: 'GET', path: '/users' });
|
||||
```
|
||||
|
||||
**✅ Use generic types:**
|
||||
|
||||
```typescript
|
||||
const { body } = await apiRequest<User[]>({ method: 'GET', path: '/users' });
|
||||
// body is typed as User[]
|
||||
```
|
||||
|
|
@ -0,0 +1,356 @@
|
|||
# Auth Session Utility
|
||||
|
||||
## Principle
|
||||
|
||||
Persist authentication tokens to disk and reuse across test runs. Support multiple user identifiers, ephemeral authentication, and worker-specific accounts for parallel execution. Fetch tokens once, use everywhere.
|
||||
|
||||
## Rationale
|
||||
|
||||
Playwright's built-in authentication works but has limitations:
|
||||
|
||||
- Re-authenticates for every test run (slow)
|
||||
- Single user per project setup
|
||||
- No token expiration handling
|
||||
- Manual session management
|
||||
- Complex setup for multi-user scenarios
|
||||
|
||||
The `auth-session` utility provides:
|
||||
|
||||
- **Token persistence**: Authenticate once, reuse across runs
|
||||
- **Multi-user support**: Different user identifiers in same test suite
|
||||
- **Ephemeral auth**: On-the-fly user authentication without disk persistence
|
||||
- **Worker-specific accounts**: Parallel execution with isolated user accounts
|
||||
- **Automatic token management**: Checks validity, renews if expired
|
||||
- **Flexible provider pattern**: Adapt to any auth system (OAuth2, JWT, custom)
|
||||
|
||||
## Pattern Examples
|
||||
|
||||
### Example 1: Basic Auth Session Setup
|
||||
|
||||
**Context**: Configure global authentication that persists across test runs.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// Step 1: Configure in global-setup.ts
|
||||
import { authStorageInit, setAuthProvider, configureAuthSession, authGlobalInit } from '@seontechnologies/playwright-utils/auth-session';
|
||||
import myCustomProvider from './auth/custom-auth-provider';
|
||||
|
||||
async function globalSetup() {
|
||||
// Ensure storage directories exist
|
||||
authStorageInit();
|
||||
|
||||
// Configure storage path
|
||||
configureAuthSession({
|
||||
authStoragePath: process.cwd() + '/playwright/auth-sessions',
|
||||
debug: true,
|
||||
});
|
||||
|
||||
// Set custom provider (HOW to authenticate)
|
||||
setAuthProvider(myCustomProvider);
|
||||
|
||||
// Optional: pre-fetch token for default user
|
||||
await authGlobalInit();
|
||||
}
|
||||
|
||||
export default globalSetup;
|
||||
|
||||
// Step 2: Create auth fixture
|
||||
import { test as base } from '@playwright/test';
|
||||
import { createAuthFixtures, setAuthProvider } from '@seontechnologies/playwright-utils/auth-session';
|
||||
import myCustomProvider from './custom-auth-provider';
|
||||
|
||||
// Register provider early
|
||||
setAuthProvider(myCustomProvider);
|
||||
|
||||
export const test = base.extend(createAuthFixtures());
|
||||
|
||||
// Step 3: Use in tests
|
||||
test('authenticated request', async ({ authToken, request }) => {
|
||||
const response = await request.get('/api/protected', {
|
||||
headers: { Authorization: `Bearer ${authToken}` },
|
||||
});
|
||||
|
||||
expect(response.ok()).toBeTruthy();
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Global setup runs once before all tests
|
||||
- Token fetched once, reused across all tests
|
||||
- Custom provider defines your auth mechanism
|
||||
- Order matters: configure, then setProvider, then init
|
||||
|
||||
### Example 2: Multi-User Authentication
|
||||
|
||||
**Context**: Testing with different user roles (admin, regular user, guest) in same test suite.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { test } from '../support/auth/auth-fixture';
|
||||
|
||||
// Option 1: Per-test user override
|
||||
test('admin actions', async ({ authToken, authOptions }) => {
|
||||
// Override default user
|
||||
authOptions.userIdentifier = 'admin';
|
||||
|
||||
const { authToken: adminToken } = await test.step('Get admin token', async () => {
|
||||
return { authToken }; // Re-fetches with new identifier
|
||||
});
|
||||
|
||||
// Use admin token
|
||||
const response = await request.get('/api/admin/users', {
|
||||
headers: { Authorization: `Bearer ${adminToken}` },
|
||||
});
|
||||
});
|
||||
|
||||
// Option 2: Parallel execution with different users
|
||||
test.describe.parallel('multi-user tests', () => {
|
||||
test('user 1 actions', async ({ authToken }) => {
|
||||
// Uses default user (e.g., 'user1')
|
||||
});
|
||||
|
||||
test('user 2 actions', async ({ authToken, authOptions }) => {
|
||||
authOptions.userIdentifier = 'user2';
|
||||
// Uses different token for user2
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Override `authOptions.userIdentifier` per test
|
||||
- Tokens cached separately per user identifier
|
||||
- Parallel tests isolated with different users
|
||||
- Worker-specific accounts possible
|
||||
|
||||
### Example 3: Ephemeral User Authentication
|
||||
|
||||
**Context**: Create temporary test users that don't persist to disk (e.g., testing user creation flow).
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { applyUserCookiesToBrowserContext } from '@seontechnologies/playwright-utils/auth-session';
|
||||
import { createTestUser } from '../utils/user-factory';
|
||||
|
||||
test('ephemeral user test', async ({ context, page }) => {
|
||||
// Create temporary user (not persisted)
|
||||
const ephemeralUser = await createTestUser({
|
||||
role: 'admin',
|
||||
permissions: ['delete-users'],
|
||||
});
|
||||
|
||||
// Apply auth directly to browser context
|
||||
await applyUserCookiesToBrowserContext(context, ephemeralUser);
|
||||
|
||||
// Page now authenticated as ephemeral user
|
||||
await page.goto('/admin/users');
|
||||
|
||||
await expect(page.getByTestId('delete-user-btn')).toBeVisible();
|
||||
|
||||
// User and token cleaned up after test
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- No disk persistence (ephemeral)
|
||||
- Apply cookies directly to context
|
||||
- Useful for testing user lifecycle
|
||||
- Clean up automatic when test ends
|
||||
|
||||
### Example 4: Testing Multiple Users in Single Test
|
||||
|
||||
**Context**: Testing interactions between users (messaging, sharing, collaboration features).
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
test('user interaction', async ({ browser }) => {
|
||||
// User 1 context
|
||||
const user1Context = await browser.newContext({
|
||||
storageState: './auth-sessions/local/user1/storage-state.json',
|
||||
});
|
||||
const user1Page = await user1Context.newPage();
|
||||
|
||||
// User 2 context
|
||||
const user2Context = await browser.newContext({
|
||||
storageState: './auth-sessions/local/user2/storage-state.json',
|
||||
});
|
||||
const user2Page = await user2Context.newPage();
|
||||
|
||||
// User 1 sends message
|
||||
await user1Page.goto('/messages');
|
||||
await user1Page.fill('#message', 'Hello from user 1');
|
||||
await user1Page.click('#send');
|
||||
|
||||
// User 2 receives message
|
||||
await user2Page.goto('/messages');
|
||||
await expect(user2Page.getByText('Hello from user 1')).toBeVisible();
|
||||
|
||||
// Cleanup
|
||||
await user1Context.close();
|
||||
await user2Context.close();
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Each user has separate browser context
|
||||
- Reference storage state files directly
|
||||
- Test real-time interactions
|
||||
- Clean up contexts after test
|
||||
|
||||
### Example 5: Worker-Specific Accounts (Parallel Testing)
|
||||
|
||||
**Context**: Running tests in parallel with isolated user accounts per worker to avoid conflicts.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// playwright.config.ts
|
||||
export default defineConfig({
|
||||
workers: 4, // 4 parallel workers
|
||||
use: {
|
||||
// Each worker uses different user
|
||||
storageState: async ({}, use, testInfo) => {
|
||||
const workerIndex = testInfo.workerIndex;
|
||||
const userIdentifier = `worker-${workerIndex}`;
|
||||
|
||||
await use(`./auth-sessions/local/${userIdentifier}/storage-state.json`);
|
||||
},
|
||||
},
|
||||
});
|
||||
|
||||
// Tests run in parallel, each worker with its own user
|
||||
test('parallel test 1', async ({ page }) => {
|
||||
// Worker 0 uses worker-0 account
|
||||
await page.goto('/dashboard');
|
||||
});
|
||||
|
||||
test('parallel test 2', async ({ page }) => {
|
||||
// Worker 1 uses worker-1 account
|
||||
await page.goto('/dashboard');
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Each worker has isolated user account
|
||||
- No conflicts in parallel execution
|
||||
- Token management automatic per worker
|
||||
- Scales to any number of workers
|
||||
|
||||
## Custom Auth Provider Pattern
|
||||
|
||||
**Context**: Adapt auth-session to your authentication system (OAuth2, JWT, SAML, custom).
|
||||
|
||||
**Minimal provider structure**:
|
||||
|
||||
```typescript
|
||||
import { type AuthProvider } from '@seontechnologies/playwright-utils/auth-session';
|
||||
|
||||
const myCustomProvider: AuthProvider = {
|
||||
getEnvironment: (options) => options.environment || 'local',
|
||||
|
||||
getUserIdentifier: (options) => options.userIdentifier || 'default-user',
|
||||
|
||||
extractToken: (storageState) => {
|
||||
// Extract token from your storage format
|
||||
return storageState.cookies.find((c) => c.name === 'auth_token')?.value;
|
||||
},
|
||||
|
||||
extractCookies: (tokenData) => {
|
||||
// Convert token to cookies for browser context
|
||||
return [
|
||||
{
|
||||
name: 'auth_token',
|
||||
value: tokenData,
|
||||
domain: 'example.com',
|
||||
path: '/',
|
||||
httpOnly: true,
|
||||
secure: true,
|
||||
},
|
||||
];
|
||||
},
|
||||
|
||||
isTokenExpired: (storageState) => {
|
||||
// Check if token is expired
|
||||
const expiresAt = storageState.cookies.find((c) => c.name === 'expires_at');
|
||||
return Date.now() > parseInt(expiresAt?.value || '0');
|
||||
},
|
||||
|
||||
manageAuthToken: async (request, options) => {
|
||||
// Main token acquisition logic
|
||||
// Return storage state with cookies/localStorage
|
||||
},
|
||||
};
|
||||
|
||||
export default myCustomProvider;
|
||||
```
|
||||
|
||||
## Integration with API Request
|
||||
|
||||
```typescript
|
||||
import { test } from '@seontechnologies/playwright-utils/fixtures';
|
||||
|
||||
test('authenticated API call', async ({ apiRequest, authToken }) => {
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/api/protected',
|
||||
headers: { Authorization: `Bearer ${authToken}` },
|
||||
});
|
||||
|
||||
expect(status).toBe(200);
|
||||
});
|
||||
```
|
||||
|
||||
## Related Fragments
|
||||
|
||||
- `overview.md` - Installation and fixture composition
|
||||
- `api-request.md` - Authenticated API requests
|
||||
- `fixtures-composition.md` - Merging auth with other utilities
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**❌ Calling setAuthProvider after globalSetup:**
|
||||
|
||||
```typescript
|
||||
async function globalSetup() {
|
||||
configureAuthSession(...)
|
||||
await authGlobalInit() // Provider not set yet!
|
||||
setAuthProvider(provider) // Too late
|
||||
}
|
||||
```
|
||||
|
||||
**✅ Register provider before init:**
|
||||
|
||||
```typescript
|
||||
async function globalSetup() {
|
||||
authStorageInit()
|
||||
configureAuthSession(...)
|
||||
setAuthProvider(provider) // First
|
||||
await authGlobalInit() // Then init
|
||||
}
|
||||
```
|
||||
|
||||
**❌ Hardcoding storage paths:**
|
||||
|
||||
```typescript
|
||||
const storageState = './auth-sessions/local/user1/storage-state.json'; // Brittle
|
||||
```
|
||||
|
||||
**✅ Use helper functions:**
|
||||
|
||||
```typescript
|
||||
import { getTokenFilePath } from '@seontechnologies/playwright-utils/auth-session';
|
||||
|
||||
const tokenPath = getTokenFilePath({
|
||||
environment: 'local',
|
||||
userIdentifier: 'user1',
|
||||
tokenFileName: 'storage-state.json',
|
||||
});
|
||||
```
|
||||
|
|
@ -0,0 +1,273 @@
|
|||
# Burn-in Test Runner
|
||||
|
||||
## Principle
|
||||
|
||||
Use smart test selection with git diff analysis to run only affected tests. Filter out irrelevant changes (configs, types, docs) and control test volume with percentage-based execution. Reduce unnecessary CI runs while maintaining reliability.
|
||||
|
||||
## Rationale
|
||||
|
||||
Playwright's `--only-changed` triggers all affected tests:
|
||||
|
||||
- Config file changes trigger hundreds of tests
|
||||
- Type definition changes cause full suite runs
|
||||
- No volume control (all or nothing)
|
||||
- Slow CI pipelines
|
||||
|
||||
The `burn-in` utility provides:
|
||||
|
||||
- **Smart filtering**: Skip patterns for irrelevant files (configs, types, docs)
|
||||
- **Volume control**: Run percentage of affected tests after filtering
|
||||
- **Custom dependency analysis**: More accurate than Playwright's built-in
|
||||
- **CI optimization**: Faster pipelines without sacrificing confidence
|
||||
- **Process of elimination**: Start with all → filter irrelevant → control volume
|
||||
|
||||
## Pattern Examples
|
||||
|
||||
### Example 1: Basic Burn-in Setup
|
||||
|
||||
**Context**: Run burn-in on changed files compared to main branch.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// Step 1: Create burn-in script
|
||||
// playwright/scripts/burn-in-changed.ts
|
||||
import { runBurnIn } from '@seontechnologies/playwright-utils/burn-in'
|
||||
|
||||
async function main() {
|
||||
await runBurnIn({
|
||||
configPath: 'playwright/config/.burn-in.config.ts',
|
||||
baseBranch: 'main'
|
||||
})
|
||||
}
|
||||
|
||||
main().catch(console.error)
|
||||
|
||||
// Step 2: Create config
|
||||
// playwright/config/.burn-in.config.ts
|
||||
import type { BurnInConfig } from '@seontechnologies/playwright-utils/burn-in'
|
||||
|
||||
const config: BurnInConfig = {
|
||||
// Files that never trigger tests (first filter)
|
||||
skipBurnInPatterns: [
|
||||
'**/config/**',
|
||||
'**/*constants*',
|
||||
'**/*types*',
|
||||
'**/*.md',
|
||||
'**/README*'
|
||||
],
|
||||
|
||||
// Run 30% of remaining tests after skip filter
|
||||
burnInTestPercentage: 0.3,
|
||||
|
||||
// Burn-in repetition
|
||||
burnIn: {
|
||||
repeatEach: 3, // Run each test 3 times
|
||||
retries: 1 // Allow 1 retry
|
||||
}
|
||||
}
|
||||
|
||||
export default config
|
||||
|
||||
// Step 3: Add package.json script
|
||||
{
|
||||
"scripts": {
|
||||
"test:pw:burn-in-changed": "tsx playwright/scripts/burn-in-changed.ts"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Two-stage filtering: skip patterns, then volume control
|
||||
- `skipBurnInPatterns` eliminates irrelevant files
|
||||
- `burnInTestPercentage` controls test volume (0.3 = 30%)
|
||||
- Custom dependency analysis finds actually affected tests
|
||||
|
||||
### Example 2: CI Integration
|
||||
|
||||
**Context**: Use burn-in in GitHub Actions for efficient CI runs.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```yaml
|
||||
# .github/workflows/burn-in.yml
|
||||
name: Burn-in Changed Tests
|
||||
|
||||
on:
|
||||
pull_request:
|
||||
branches: [main]
|
||||
|
||||
jobs:
|
||||
burn-in:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 0 # Need git history
|
||||
|
||||
- name: Setup Node
|
||||
uses: actions/setup-node@v4
|
||||
|
||||
- name: Install dependencies
|
||||
run: npm ci
|
||||
|
||||
- name: Run burn-in on changed tests
|
||||
run: npm run test:pw:burn-in-changed -- --base-branch=origin/main
|
||||
|
||||
- name: Upload artifacts
|
||||
if: failure()
|
||||
uses: actions/upload-artifact@v4
|
||||
with:
|
||||
name: burn-in-failures
|
||||
path: test-results/
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `fetch-depth: 0` for full git history
|
||||
- Pass `--base-branch=origin/main` for PR comparison
|
||||
- Upload artifacts only on failure
|
||||
- Significantly faster than full suite
|
||||
|
||||
### Example 3: How It Works (Process of Elimination)
|
||||
|
||||
**Context**: Understanding the filtering pipeline.
|
||||
|
||||
**Scenario:**
|
||||
|
||||
```
|
||||
Git diff finds: 21 changed files
|
||||
├─ Step 1: Skip patterns filter
|
||||
│ Removed: 6 files (*.md, config/*, *types*)
|
||||
│ Remaining: 15 files
|
||||
│
|
||||
├─ Step 2: Dependency analysis
|
||||
│ Tests that import these 15 files: 45 tests
|
||||
│
|
||||
└─ Step 3: Volume control (30%)
|
||||
Final tests to run: 14 tests (30% of 45)
|
||||
|
||||
Result: Run 14 targeted tests instead of 147 with --only-changed!
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Three-stage pipeline: skip → analyze → control
|
||||
- Custom dependency analysis (not just imports)
|
||||
- Percentage applies AFTER filtering
|
||||
- Dramatically reduces CI time
|
||||
|
||||
### Example 4: Environment-Specific Configuration
|
||||
|
||||
**Context**: Different settings for local vs CI environments.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import type { BurnInConfig } from '@seontechnologies/playwright-utils/burn-in';
|
||||
|
||||
const config: BurnInConfig = {
|
||||
skipBurnInPatterns: ['**/config/**', '**/*types*', '**/*.md'],
|
||||
|
||||
// CI runs fewer iterations, local runs more
|
||||
burnInTestPercentage: process.env.CI ? 0.2 : 0.3,
|
||||
|
||||
burnIn: {
|
||||
repeatEach: process.env.CI ? 2 : 3,
|
||||
retries: process.env.CI ? 0 : 1, // No retries in CI
|
||||
},
|
||||
};
|
||||
|
||||
export default config;
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `process.env.CI` for environment detection
|
||||
- Lower percentage in CI (20% vs 30%)
|
||||
- Fewer iterations in CI (2 vs 3)
|
||||
- No retries in CI (fail fast)
|
||||
|
||||
### Example 5: Sharding Support
|
||||
|
||||
**Context**: Distribute burn-in tests across multiple CI workers.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// burn-in-changed.ts with sharding
|
||||
import { runBurnIn } from '@seontechnologies/playwright-utils/burn-in';
|
||||
|
||||
async function main() {
|
||||
const shardArg = process.argv.find((arg) => arg.startsWith('--shard='));
|
||||
|
||||
if (shardArg) {
|
||||
process.env.PW_SHARD = shardArg.split('=')[1];
|
||||
}
|
||||
|
||||
await runBurnIn({
|
||||
configPath: 'playwright/config/.burn-in.config.ts',
|
||||
});
|
||||
}
|
||||
```
|
||||
|
||||
```yaml
|
||||
# GitHub Actions with sharding
|
||||
jobs:
|
||||
burn-in:
|
||||
strategy:
|
||||
matrix:
|
||||
shard: [1/3, 2/3, 3/3]
|
||||
steps:
|
||||
- run: npm run test:pw:burn-in-changed -- --shard=${{ matrix.shard }}
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Pass `--shard=1/3` for parallel execution
|
||||
- Burn-in respects Playwright sharding
|
||||
- Distribute across multiple workers
|
||||
- Reduces total CI time further
|
||||
|
||||
## Integration with CI Workflow
|
||||
|
||||
When setting up CI with `*ci` workflow, recommend burn-in for:
|
||||
|
||||
- Pull request validation
|
||||
- Pre-merge checks
|
||||
- Nightly builds (subset runs)
|
||||
|
||||
## Related Fragments
|
||||
|
||||
- `ci-burn-in.md` - Traditional burn-in patterns (10-iteration loops)
|
||||
- `selective-testing.md` - Test selection strategies
|
||||
- `overview.md` - Installation
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**❌ Over-aggressive skip patterns:**
|
||||
|
||||
```typescript
|
||||
skipBurnInPatterns: [
|
||||
'**/*', // Skips everything!
|
||||
];
|
||||
```
|
||||
|
||||
**✅ Targeted skip patterns:**
|
||||
|
||||
```typescript
|
||||
skipBurnInPatterns: ['**/config/**', '**/*types*', '**/*.md', '**/*constants*'];
|
||||
```
|
||||
|
||||
**❌ Too low percentage (false confidence):**
|
||||
|
||||
```typescript
|
||||
burnInTestPercentage: 0.05; // Only 5% - might miss issues
|
||||
```
|
||||
|
||||
**✅ Balanced percentage:**
|
||||
|
||||
```typescript
|
||||
burnInTestPercentage: 0.2; // 20% in CI, provides good coverage
|
||||
```
|
||||
|
|
@ -0,0 +1,260 @@
|
|||
# File Utilities
|
||||
|
||||
## Principle
|
||||
|
||||
Read and validate files (CSV, XLSX, PDF, ZIP) with automatic parsing, type-safe results, and download handling. Simplify file operations in Playwright tests with built-in format support and validation helpers.
|
||||
|
||||
## Rationale
|
||||
|
||||
Testing file operations in Playwright requires boilerplate:
|
||||
|
||||
- Manual download handling
|
||||
- External parsing libraries for each format
|
||||
- No validation helpers
|
||||
- Type-unsafe results
|
||||
- Repetitive path handling
|
||||
|
||||
The `file-utils` module provides:
|
||||
|
||||
- **Auto-parsing**: CSV, XLSX, PDF, ZIP automatically parsed
|
||||
- **Download handling**: Single function for UI or API-triggered downloads
|
||||
- **Type-safe**: TypeScript interfaces for parsed results
|
||||
- **Validation helpers**: Row count, header checks, content validation
|
||||
- **Format support**: Multiple sheet support (XLSX), text extraction (PDF), archive extraction (ZIP)
|
||||
|
||||
## Pattern Examples
|
||||
|
||||
### Example 1: UI-Triggered CSV Download
|
||||
|
||||
**Context**: User clicks button, CSV downloads, validate contents.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { handleDownload, readCSV } from '@seontechnologies/playwright-utils/file-utils';
|
||||
import path from 'node:path';
|
||||
|
||||
const DOWNLOAD_DIR = path.join(__dirname, '../downloads');
|
||||
|
||||
test('should download and validate CSV', async ({ page }) => {
|
||||
const downloadPath = await handleDownload({
|
||||
page,
|
||||
downloadDir: DOWNLOAD_DIR,
|
||||
trigger: () => page.click('[data-testid="export-csv"]'),
|
||||
});
|
||||
|
||||
const { content } = await readCSV({ filePath: downloadPath });
|
||||
|
||||
// Validate headers
|
||||
expect(content.headers).toEqual(['ID', 'Name', 'Email', 'Role']);
|
||||
|
||||
// Validate data
|
||||
expect(content.data).toHaveLength(10);
|
||||
expect(content.data[0]).toMatchObject({
|
||||
ID: expect.any(String),
|
||||
Name: expect.any(String),
|
||||
Email: expect.stringMatching(/@/),
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `handleDownload` waits for download, returns file path
|
||||
- `readCSV` auto-parses to `{ headers, data }`
|
||||
- Type-safe access to parsed content
|
||||
- Clean up downloads in `afterEach`
|
||||
|
||||
### Example 2: XLSX with Multiple Sheets
|
||||
|
||||
**Context**: Excel file with multiple sheets (e.g., Summary, Details, Errors).
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { readXLSX } from '@seontechnologies/playwright-utils/file-utils';
|
||||
|
||||
test('should read multi-sheet XLSX', async () => {
|
||||
const downloadPath = await handleDownload({
|
||||
page,
|
||||
downloadDir: DOWNLOAD_DIR,
|
||||
trigger: () => page.click('[data-testid="export-xlsx"]'),
|
||||
});
|
||||
|
||||
const { content } = await readXLSX({ filePath: downloadPath });
|
||||
|
||||
// Access specific sheets
|
||||
const summarySheet = content.sheets.find((s) => s.name === 'Summary');
|
||||
const detailsSheet = content.sheets.find((s) => s.name === 'Details');
|
||||
|
||||
// Validate summary
|
||||
expect(summarySheet.data).toHaveLength(1);
|
||||
expect(summarySheet.data[0].TotalRecords).toBe('150');
|
||||
|
||||
// Validate details
|
||||
expect(detailsSheet.data).toHaveLength(150);
|
||||
expect(detailsSheet.headers).toContain('TransactionID');
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `sheets` array with `name` and `data` properties
|
||||
- Access sheets by name
|
||||
- Each sheet has its own headers and data
|
||||
- Type-safe sheet iteration
|
||||
|
||||
### Example 3: PDF Text Extraction
|
||||
|
||||
**Context**: Validate PDF report contains expected content.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { readPDF } from '@seontechnologies/playwright-utils/file-utils';
|
||||
|
||||
test('should validate PDF report', async () => {
|
||||
const downloadPath = await handleDownload({
|
||||
page,
|
||||
downloadDir: DOWNLOAD_DIR,
|
||||
trigger: () => page.click('[data-testid="download-report"]'),
|
||||
});
|
||||
|
||||
const { content } = await readPDF({ filePath: downloadPath });
|
||||
|
||||
// content.text is extracted text from all pages
|
||||
expect(content.text).toContain('Financial Report Q4 2024');
|
||||
expect(content.text).toContain('Total Revenue:');
|
||||
|
||||
// Validate page count
|
||||
expect(content.numpages).toBeGreaterThan(10);
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `content.text` contains all extracted text
|
||||
- `content.numpages` for page count
|
||||
- PDF parsing handles multi-page documents
|
||||
- Search for specific phrases
|
||||
|
||||
### Example 4: ZIP Archive Validation
|
||||
|
||||
**Context**: Validate ZIP contains expected files and extract specific file.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { readZIP } from '@seontechnologies/playwright-utils/file-utils';
|
||||
|
||||
test('should validate ZIP archive', async () => {
|
||||
const downloadPath = await handleDownload({
|
||||
page,
|
||||
downloadDir: DOWNLOAD_DIR,
|
||||
trigger: () => page.click('[data-testid="download-backup"]'),
|
||||
});
|
||||
|
||||
const { content } = await readZIP({ filePath: downloadPath });
|
||||
|
||||
// Check file list
|
||||
expect(content.files).toContain('data.csv');
|
||||
expect(content.files).toContain('config.json');
|
||||
expect(content.files).toContain('readme.txt');
|
||||
|
||||
// Read specific file from archive
|
||||
const configContent = content.zip.readAsText('config.json');
|
||||
const config = JSON.parse(configContent);
|
||||
|
||||
expect(config.version).toBe('2.0');
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `content.files` lists all files in archive
|
||||
- `content.zip.readAsText()` extracts specific files
|
||||
- Validate archive structure
|
||||
- Read and parse individual files from ZIP
|
||||
|
||||
### Example 5: API-Triggered Download
|
||||
|
||||
**Context**: API endpoint returns file download (not UI click).
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
test('should download via API', async ({ page, request }) => {
|
||||
const downloadPath = await handleDownload({
|
||||
page,
|
||||
downloadDir: DOWNLOAD_DIR,
|
||||
trigger: async () => {
|
||||
const response = await request.get('/api/export/csv', {
|
||||
headers: { Authorization: 'Bearer token' },
|
||||
});
|
||||
|
||||
if (!response.ok()) {
|
||||
throw new Error(`Export failed: ${response.status()}`);
|
||||
}
|
||||
},
|
||||
});
|
||||
|
||||
const { content } = await readCSV({ filePath: downloadPath });
|
||||
|
||||
expect(content.data).toHaveLength(100);
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `trigger` can be async API call
|
||||
- API must return `Content-Disposition` header
|
||||
- Still need `page` for download events
|
||||
- Works with authenticated endpoints
|
||||
|
||||
## Validation Helpers
|
||||
|
||||
```typescript
|
||||
// CSV validation
|
||||
const { isValid, errors } = await validateCSV({
|
||||
filePath: downloadPath,
|
||||
expectedRowCount: 10,
|
||||
requiredHeaders: ['ID', 'Name', 'Email'],
|
||||
});
|
||||
|
||||
expect(isValid).toBe(true);
|
||||
expect(errors).toHaveLength(0);
|
||||
```
|
||||
|
||||
## Download Cleanup Pattern
|
||||
|
||||
```typescript
|
||||
test.afterEach(async () => {
|
||||
// Clean up downloaded files
|
||||
await fs.remove(DOWNLOAD_DIR);
|
||||
});
|
||||
```
|
||||
|
||||
## Related Fragments
|
||||
|
||||
- `overview.md` - Installation and imports
|
||||
- `api-request.md` - API-triggered downloads
|
||||
- `recurse.md` - Poll for file generation completion
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**❌ Not cleaning up downloads:**
|
||||
|
||||
```typescript
|
||||
test('creates file', async () => {
|
||||
await handleDownload({ ... })
|
||||
// File left in downloads folder
|
||||
})
|
||||
```
|
||||
|
||||
**✅ Clean up after tests:**
|
||||
|
||||
```typescript
|
||||
test.afterEach(async () => {
|
||||
await fs.remove(DOWNLOAD_DIR);
|
||||
});
|
||||
```
|
||||
|
|
@ -0,0 +1,382 @@
|
|||
# Fixtures Composition with mergeTests
|
||||
|
||||
## Principle
|
||||
|
||||
Combine multiple Playwright fixtures using `mergeTests` to create a unified test object with all capabilities. Build composable test infrastructure by merging playwright-utils fixtures with custom project fixtures.
|
||||
|
||||
## Rationale
|
||||
|
||||
Using fixtures from multiple sources requires combining them:
|
||||
|
||||
- Importing from multiple fixture files is verbose
|
||||
- Name conflicts between fixtures
|
||||
- Duplicate fixture definitions
|
||||
- No clear single test object
|
||||
|
||||
Playwright's `mergeTests` provides:
|
||||
|
||||
- **Single test object**: All fixtures in one import
|
||||
- **Conflict resolution**: Handles name collisions automatically
|
||||
- **Composition pattern**: Mix utilities, custom fixtures, third-party fixtures
|
||||
- **Type safety**: Full TypeScript support for merged fixtures
|
||||
- **Maintainability**: One place to manage all fixtures
|
||||
|
||||
## Pattern Examples
|
||||
|
||||
### Example 1: Basic Fixture Merging
|
||||
|
||||
**Context**: Combine multiple playwright-utils fixtures into single test object.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// playwright/support/merged-fixtures.ts
|
||||
import { mergeTests } from '@playwright/test';
|
||||
import { test as apiRequestFixture } from '@seontechnologies/playwright-utils/api-request/fixtures';
|
||||
import { test as authFixture } from '@seontechnologies/playwright-utils/auth-session/fixtures';
|
||||
import { test as recurseFixture } from '@seontechnologies/playwright-utils/recurse/fixtures';
|
||||
|
||||
// Merge all fixtures
|
||||
export const test = mergeTests(apiRequestFixture, authFixture, recurseFixture);
|
||||
|
||||
export { expect } from '@playwright/test';
|
||||
```
|
||||
|
||||
```typescript
|
||||
// In your tests - import from merged fixtures
|
||||
import { test, expect } from '../support/merged-fixtures';
|
||||
|
||||
test('all utilities available', async ({
|
||||
apiRequest, // From api-request fixture
|
||||
authToken, // From auth fixture
|
||||
recurse, // From recurse fixture
|
||||
}) => {
|
||||
// All fixtures available in single test signature
|
||||
const { body } = await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/api/protected',
|
||||
headers: { Authorization: `Bearer ${authToken}` },
|
||||
});
|
||||
|
||||
await recurse(
|
||||
() => apiRequest({ method: 'GET', path: `/status/${body.id}` }),
|
||||
(res) => res.body.ready === true,
|
||||
);
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Create one `merged-fixtures.ts` per project
|
||||
- Import test object from merged fixtures in all test files
|
||||
- All utilities available without multiple imports
|
||||
- Type-safe access to all fixtures
|
||||
|
||||
### Example 2: Combining with Custom Fixtures
|
||||
|
||||
**Context**: Add project-specific fixtures alongside playwright-utils.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// playwright/support/custom-fixtures.ts - Your project fixtures
|
||||
import { test as base } from '@playwright/test';
|
||||
import { createUser } from './factories/user-factory';
|
||||
import { seedDatabase } from './helpers/db-seeder';
|
||||
|
||||
export const test = base.extend({
|
||||
// Custom fixture 1: Auto-seeded user
|
||||
testUser: async ({ request }, use) => {
|
||||
const user = await createUser({ role: 'admin' });
|
||||
await seedDatabase('users', [user]);
|
||||
await use(user);
|
||||
// Cleanup happens automatically
|
||||
},
|
||||
|
||||
// Custom fixture 2: Database helpers
|
||||
db: async ({}, use) => {
|
||||
await use({
|
||||
seed: seedDatabase,
|
||||
clear: () => seedDatabase.truncate(),
|
||||
});
|
||||
},
|
||||
});
|
||||
|
||||
// playwright/support/merged-fixtures.ts - Combine everything
|
||||
import { mergeTests } from '@playwright/test';
|
||||
import { test as apiRequestFixture } from '@seontechnologies/playwright-utils/api-request/fixtures';
|
||||
import { test as authFixture } from '@seontechnologies/playwright-utils/auth-session/fixtures';
|
||||
import { test as customFixtures } from './custom-fixtures';
|
||||
|
||||
export const test = mergeTests(
|
||||
apiRequestFixture,
|
||||
authFixture,
|
||||
customFixtures, // Your project fixtures
|
||||
);
|
||||
|
||||
export { expect } from '@playwright/test';
|
||||
```
|
||||
|
||||
```typescript
|
||||
// In tests - all fixtures available
|
||||
import { test, expect } from '../support/merged-fixtures';
|
||||
|
||||
test('using mixed fixtures', async ({
|
||||
apiRequest, // playwright-utils
|
||||
authToken, // playwright-utils
|
||||
testUser, // custom
|
||||
db, // custom
|
||||
}) => {
|
||||
// Use playwright-utils
|
||||
const { body } = await apiRequest({
|
||||
method: 'GET',
|
||||
path: `/api/users/${testUser.id}`,
|
||||
headers: { Authorization: `Bearer ${authToken}` },
|
||||
});
|
||||
|
||||
// Use custom fixture
|
||||
await db.clear();
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Custom fixtures extend `base` test
|
||||
- Merge custom with playwright-utils fixtures
|
||||
- All available in one test signature
|
||||
- Maintainable separation of concerns
|
||||
|
||||
### Example 3: Full Utility Suite Integration
|
||||
|
||||
**Context**: Production setup with all core playwright-utils and custom fixtures.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// playwright/support/merged-fixtures.ts
|
||||
import { mergeTests } from '@playwright/test';
|
||||
|
||||
// Playwright utils fixtures
|
||||
import { test as apiRequestFixture } from '@seontechnologies/playwright-utils/api-request/fixtures';
|
||||
import { test as authFixture } from '@seontechnologies/playwright-utils/auth-session/fixtures';
|
||||
import { test as interceptFixture } from '@seontechnologies/playwright-utils/intercept-network-call/fixtures';
|
||||
import { test as recurseFixture } from '@seontechnologies/playwright-utils/recurse/fixtures';
|
||||
import { test as networkRecorderFixture } from '@seontechnologies/playwright-utils/network-recorder/fixtures';
|
||||
|
||||
// Custom project fixtures
|
||||
import { test as customFixtures } from './custom-fixtures';
|
||||
|
||||
// Merge everything
|
||||
export const test = mergeTests(apiRequestFixture, authFixture, interceptFixture, recurseFixture, networkRecorderFixture, customFixtures);
|
||||
|
||||
export { expect } from '@playwright/test';
|
||||
```
|
||||
|
||||
```typescript
|
||||
// In tests
|
||||
import { test, expect } from '../support/merged-fixtures';
|
||||
|
||||
test('full integration', async ({
|
||||
page,
|
||||
context,
|
||||
apiRequest,
|
||||
authToken,
|
||||
interceptNetworkCall,
|
||||
recurse,
|
||||
networkRecorder,
|
||||
testUser, // custom
|
||||
}) => {
|
||||
// All utilities + custom fixtures available
|
||||
await networkRecorder.setup(context);
|
||||
|
||||
const usersCall = interceptNetworkCall({ url: '**/api/users' });
|
||||
|
||||
await page.goto('/users');
|
||||
const { responseJson } = await usersCall;
|
||||
|
||||
expect(responseJson).toContainEqual(expect.objectContaining({ id: testUser.id }));
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- One merged-fixtures.ts for entire project
|
||||
- Combine all playwright-utils you use
|
||||
- Add custom project fixtures
|
||||
- Single import in all test files
|
||||
|
||||
### Example 4: Fixture Override Pattern
|
||||
|
||||
**Context**: Override default options for specific test files or describes.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { test, expect } from '../support/merged-fixtures';
|
||||
|
||||
// Override auth options for entire file
|
||||
test.use({
|
||||
authOptions: {
|
||||
userIdentifier: 'admin',
|
||||
environment: 'staging',
|
||||
},
|
||||
});
|
||||
|
||||
test('uses admin on staging', async ({ authToken }) => {
|
||||
// Token is for admin user on staging environment
|
||||
});
|
||||
|
||||
// Override for specific describe block
|
||||
test.describe('manager tests', () => {
|
||||
test.use({
|
||||
authOptions: {
|
||||
userIdentifier: 'manager',
|
||||
},
|
||||
});
|
||||
|
||||
test('manager can access reports', async ({ page }) => {
|
||||
// Uses manager token
|
||||
await page.goto('/reports');
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `test.use()` overrides fixture options
|
||||
- Can override at file or describe level
|
||||
- Options merge with defaults
|
||||
- Type-safe overrides
|
||||
|
||||
### Example 5: Avoiding Fixture Conflicts
|
||||
|
||||
**Context**: Handle name collisions when merging fixtures with same names.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// If two fixtures have same name, last one wins
|
||||
import { test as fixture1 } from './fixture1'; // has 'user' fixture
|
||||
import { test as fixture2 } from './fixture2'; // also has 'user' fixture
|
||||
|
||||
const test = mergeTests(fixture1, fixture2);
|
||||
// fixture2's 'user' overrides fixture1's 'user'
|
||||
|
||||
// Better: Rename fixtures before merging
|
||||
import { test as base } from '@playwright/test';
|
||||
import { test as fixture1 } from './fixture1';
|
||||
|
||||
const fixture1Renamed = base.extend({
|
||||
user1: fixture1._extend.user, // Rename to avoid conflict
|
||||
});
|
||||
|
||||
const test = mergeTests(fixture1Renamed, fixture2);
|
||||
// Now both 'user1' and 'user' available
|
||||
|
||||
// Best: Design fixtures without conflicts
|
||||
// - Prefix custom fixtures: 'myAppUser', 'myAppDb'
|
||||
// - Playwright-utils uses descriptive names: 'apiRequest', 'authToken'
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Last fixture wins in conflicts
|
||||
- Rename fixtures to avoid collisions
|
||||
- Design fixtures with unique names
|
||||
- Playwright-utils uses descriptive names (no conflicts)
|
||||
|
||||
## Recommended Project Structure
|
||||
|
||||
```
|
||||
playwright/
|
||||
├── support/
|
||||
│ ├── merged-fixtures.ts # ⭐ Single test object for project
|
||||
│ ├── custom-fixtures.ts # Your project-specific fixtures
|
||||
│ ├── auth/
|
||||
│ │ ├── auth-fixture.ts # Auth wrapper (if needed)
|
||||
│ │ └── custom-auth-provider.ts
|
||||
│ ├── fixtures/
|
||||
│ │ ├── user-fixture.ts
|
||||
│ │ ├── db-fixture.ts
|
||||
│ │ └── api-fixture.ts
|
||||
│ └── utils/
|
||||
│ └── factories/
|
||||
└── tests/
|
||||
├── api/
|
||||
│ └── users.spec.ts # import { test } from '../../support/merged-fixtures'
|
||||
├── e2e/
|
||||
│ └── login.spec.ts # import { test } from '../../support/merged-fixtures'
|
||||
└── component/
|
||||
└── button.spec.ts # import { test } from '../../support/merged-fixtures'
|
||||
```
|
||||
|
||||
## Benefits of Fixture Composition
|
||||
|
||||
**Compared to direct imports:**
|
||||
|
||||
```typescript
|
||||
// ❌ Without mergeTests (verbose)
|
||||
import { test as base } from '@playwright/test';
|
||||
import { apiRequest } from '@seontechnologies/playwright-utils/api-request';
|
||||
import { getAuthToken } from './auth';
|
||||
import { createUser } from './factories';
|
||||
|
||||
test('verbose', async ({ request }) => {
|
||||
const token = await getAuthToken();
|
||||
const user = await createUser();
|
||||
const response = await apiRequest({ request, method: 'GET', path: '/api/users' });
|
||||
// Manual wiring everywhere
|
||||
});
|
||||
|
||||
// ✅ With mergeTests (clean)
|
||||
import { test } from '../support/merged-fixtures';
|
||||
|
||||
test('clean', async ({ apiRequest, authToken, testUser }) => {
|
||||
const { body } = await apiRequest({ method: 'GET', path: '/api/users' });
|
||||
// All fixtures auto-wired
|
||||
});
|
||||
```
|
||||
|
||||
**Reduction:** ~10 lines per test → ~2 lines
|
||||
|
||||
## Related Fragments
|
||||
|
||||
- `overview.md` - Installation and design principles
|
||||
- `api-request.md`, `auth-session.md`, `recurse.md` - Utilities to merge
|
||||
- `network-recorder.md`, `intercept-network-call.md`, `log.md` - Additional utilities
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**❌ Importing test from multiple fixture files:**
|
||||
|
||||
```typescript
|
||||
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
|
||||
// Also need auth...
|
||||
import { test as authTest } from '@seontechnologies/playwright-utils/auth-session/fixtures';
|
||||
// Name conflict! Which test to use?
|
||||
```
|
||||
|
||||
**✅ Use merged fixtures:**
|
||||
|
||||
```typescript
|
||||
import { test } from '../support/merged-fixtures';
|
||||
// All utilities available, no conflicts
|
||||
```
|
||||
|
||||
**❌ Merging too many fixtures (kitchen sink):**
|
||||
|
||||
```typescript
|
||||
// Merging 20+ fixtures makes test signature huge
|
||||
const test = mergeTests(...20 different fixtures)
|
||||
|
||||
test('my test', async ({ fixture1, fixture2, ..., fixture20 }) => {
|
||||
// Cognitive overload
|
||||
})
|
||||
```
|
||||
|
||||
**✅ Merge only what you actually use:**
|
||||
|
||||
```typescript
|
||||
// Merge the 4-6 fixtures your project actually needs
|
||||
const test = mergeTests(apiRequestFixture, authFixture, recurseFixture, customFixtures);
|
||||
```
|
||||
|
|
@ -0,0 +1,280 @@
|
|||
# Intercept Network Call Utility
|
||||
|
||||
## Principle
|
||||
|
||||
Intercept network requests with a single declarative call that returns a Promise. Automatically parse JSON responses, support both spy (observe) and stub (mock) patterns, and use powerful glob pattern matching for URL filtering.
|
||||
|
||||
## Rationale
|
||||
|
||||
Vanilla Playwright's network interception requires multiple steps:
|
||||
|
||||
- `page.route()` to setup, `page.waitForResponse()` to capture
|
||||
- Manual JSON parsing
|
||||
- Verbose syntax for conditional handling
|
||||
- Complex filter predicates
|
||||
|
||||
The `interceptNetworkCall` utility provides:
|
||||
|
||||
- **Single declarative call**: Setup and wait in one statement
|
||||
- **Automatic JSON parsing**: Response pre-parsed, strongly typed
|
||||
- **Flexible URL patterns**: Glob matching with picomatch
|
||||
- **Spy or stub modes**: Observe real traffic or mock responses
|
||||
- **Concise API**: Reduces boilerplate by 60-70%
|
||||
|
||||
## Pattern Examples
|
||||
|
||||
### Example 1: Spy on Network (Observe Real Traffic)
|
||||
|
||||
**Context**: Capture and inspect real API responses for validation.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { test } from '@seontechnologies/playwright-utils/intercept-network-call/fixtures';
|
||||
|
||||
test('should spy on users API', async ({ page, interceptNetworkCall }) => {
|
||||
// Setup interception BEFORE navigation
|
||||
const usersCall = interceptNetworkCall({
|
||||
url: '**/api/users', // Glob pattern
|
||||
});
|
||||
|
||||
await page.goto('/dashboard');
|
||||
|
||||
// Wait for response and access parsed data
|
||||
const { responseJson, status } = await usersCall;
|
||||
|
||||
expect(status).toBe(200);
|
||||
expect(responseJson).toHaveLength(10);
|
||||
expect(responseJson[0]).toHaveProperty('name');
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Intercept before navigation (critical for race-free tests)
|
||||
- Returns Promise with `{ responseJson, status, requestBody }`
|
||||
- Glob patterns (`**` matches any path segment)
|
||||
- JSON automatically parsed
|
||||
|
||||
### Example 2: Stub Network (Mock Response)
|
||||
|
||||
**Context**: Mock API responses for testing UI behavior without backend.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
test('should stub users API', async ({ page, interceptNetworkCall }) => {
|
||||
const mockUsers = [
|
||||
{ id: 1, name: 'Test User 1' },
|
||||
{ id: 2, name: 'Test User 2' },
|
||||
];
|
||||
|
||||
const usersCall = interceptNetworkCall({
|
||||
url: '**/api/users',
|
||||
fulfillResponse: {
|
||||
status: 200,
|
||||
body: mockUsers,
|
||||
},
|
||||
});
|
||||
|
||||
await page.goto('/dashboard');
|
||||
await usersCall;
|
||||
|
||||
// UI shows mocked data
|
||||
await expect(page.getByText('Test User 1')).toBeVisible();
|
||||
await expect(page.getByText('Test User 2')).toBeVisible();
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `fulfillResponse` mocks the API
|
||||
- No backend needed
|
||||
- Test UI logic in isolation
|
||||
- Status code and body fully controllable
|
||||
|
||||
### Example 3: Conditional Response Handling
|
||||
|
||||
**Context**: Different responses based on request method or parameters.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
test('conditional mocking', async ({ page, interceptNetworkCall }) => {
|
||||
await interceptNetworkCall({
|
||||
url: '**/api/data',
|
||||
handler: async (route, request) => {
|
||||
if (request.method() === 'POST') {
|
||||
// Mock POST success
|
||||
await route.fulfill({
|
||||
status: 201,
|
||||
body: JSON.stringify({ id: 'new-id', success: true }),
|
||||
});
|
||||
} else if (request.method() === 'GET') {
|
||||
// Mock GET with data
|
||||
await route.fulfill({
|
||||
status: 200,
|
||||
body: JSON.stringify([{ id: 1, name: 'Item' }]),
|
||||
});
|
||||
} else {
|
||||
// Let other methods through
|
||||
await route.continue();
|
||||
}
|
||||
},
|
||||
});
|
||||
|
||||
await page.goto('/data-page');
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `handler` function for complex logic
|
||||
- Access full `route` and `request` objects
|
||||
- Can mock, continue, or abort
|
||||
- Flexible for advanced scenarios
|
||||
|
||||
### Example 4: Error Simulation
|
||||
|
||||
**Context**: Testing error handling in UI when API fails.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
test('should handle API errors gracefully', async ({ page, interceptNetworkCall }) => {
|
||||
// Simulate 500 error
|
||||
const errorCall = interceptNetworkCall({
|
||||
url: '**/api/users',
|
||||
fulfillResponse: {
|
||||
status: 500,
|
||||
body: { error: 'Internal Server Error' },
|
||||
},
|
||||
});
|
||||
|
||||
await page.goto('/dashboard');
|
||||
await errorCall;
|
||||
|
||||
// Verify UI shows error state
|
||||
await expect(page.getByText('Failed to load users')).toBeVisible();
|
||||
await expect(page.getByTestId('retry-button')).toBeVisible();
|
||||
});
|
||||
|
||||
// Simulate network timeout
|
||||
test('should handle timeout', async ({ page, interceptNetworkCall }) => {
|
||||
await interceptNetworkCall({
|
||||
url: '**/api/slow',
|
||||
handler: async (route) => {
|
||||
// Never respond - simulates timeout
|
||||
await new Promise(() => {});
|
||||
},
|
||||
});
|
||||
|
||||
await page.goto('/slow-page');
|
||||
|
||||
// UI should show timeout error
|
||||
await expect(page.getByText('Request timed out')).toBeVisible({ timeout: 10000 });
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Mock error statuses (4xx, 5xx)
|
||||
- Test timeout scenarios
|
||||
- Validate error UI states
|
||||
- No real failures needed
|
||||
|
||||
### Example 5: Multiple Intercepts (Order Matters!)
|
||||
|
||||
**Context**: Intercepting different endpoints in same test - setup order is critical.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
test('multiple intercepts', async ({ page, interceptNetworkCall }) => {
|
||||
// ✅ CORRECT: Setup all intercepts BEFORE navigation
|
||||
const usersCall = interceptNetworkCall({ url: '**/api/users' });
|
||||
const productsCall = interceptNetworkCall({ url: '**/api/products' });
|
||||
const ordersCall = interceptNetworkCall({ url: '**/api/orders' });
|
||||
|
||||
// THEN navigate
|
||||
await page.goto('/dashboard');
|
||||
|
||||
// Wait for all (or specific ones)
|
||||
const [users, products] = await Promise.all([usersCall, productsCall]);
|
||||
|
||||
expect(users.responseJson).toHaveLength(10);
|
||||
expect(products.responseJson).toHaveLength(50);
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Setup all intercepts before triggering actions
|
||||
- Use `Promise.all()` to wait for multiple calls
|
||||
- Order: intercept → navigate → await
|
||||
- Prevents race conditions
|
||||
|
||||
## URL Pattern Matching
|
||||
|
||||
**Supported glob patterns:**
|
||||
|
||||
```typescript
|
||||
'**/api/users'; // Any path ending with /api/users
|
||||
'/api/users'; // Exact match
|
||||
'**/users/*'; // Any users sub-path
|
||||
'**/api/{users,products}'; // Either users or products
|
||||
'**/api/users?id=*'; // With query params
|
||||
```
|
||||
|
||||
**Uses picomatch library** - same pattern syntax as Playwright's `page.route()` but cleaner API.
|
||||
|
||||
## Comparison with Vanilla Playwright
|
||||
|
||||
| Vanilla Playwright | intercept-network-call |
|
||||
| ----------------------------------------------------------- | ------------------------------------------------------------ |
|
||||
| `await page.route('/api/users', route => route.continue())` | `const call = interceptNetworkCall({ url: '**/api/users' })` |
|
||||
| `const resp = await page.waitForResponse('/api/users')` | (Combined in single statement) |
|
||||
| `const json = await resp.json()` | `const { responseJson } = await call` |
|
||||
| `const status = resp.status()` | `const { status } = await call` |
|
||||
| Complex filter predicates | Simple glob patterns |
|
||||
|
||||
**Reduction:** ~5-7 lines → ~2-3 lines per interception
|
||||
|
||||
## Related Fragments
|
||||
|
||||
- `network-first.md` - Core pattern: intercept before navigate
|
||||
- `network-recorder.md` - HAR-based offline testing
|
||||
- `overview.md` - Fixture composition basics
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**❌ Intercepting after navigation:**
|
||||
|
||||
```typescript
|
||||
await page.goto('/dashboard'); // Navigation starts
|
||||
const usersCall = interceptNetworkCall({ url: '**/api/users' }); // Too late!
|
||||
```
|
||||
|
||||
**✅ Intercept before navigate:**
|
||||
|
||||
```typescript
|
||||
const usersCall = interceptNetworkCall({ url: '**/api/users' }); // First
|
||||
await page.goto('/dashboard'); // Then navigate
|
||||
const { responseJson } = await usersCall; // Then await
|
||||
```
|
||||
|
||||
**❌ Ignoring the returned Promise:**
|
||||
|
||||
```typescript
|
||||
interceptNetworkCall({ url: '**/api/users' }); // Not awaited!
|
||||
await page.goto('/dashboard');
|
||||
// No deterministic wait - race condition
|
||||
```
|
||||
|
||||
**✅ Always await the intercept:**
|
||||
|
||||
```typescript
|
||||
const usersCall = interceptNetworkCall({ url: '**/api/users' });
|
||||
await page.goto('/dashboard');
|
||||
await usersCall; // Deterministic wait
|
||||
```
|
||||
|
|
@ -0,0 +1,294 @@
|
|||
# Log Utility
|
||||
|
||||
## Principle
|
||||
|
||||
Use structured logging that integrates with Playwright's test reports. Support object logging, test step decoration, and multiple log levels (info, step, success, warning, error, debug).
|
||||
|
||||
## Rationale
|
||||
|
||||
Console.log in Playwright tests has limitations:
|
||||
|
||||
- Not visible in HTML reports
|
||||
- No test step integration
|
||||
- No structured output
|
||||
- Lost in terminal noise during CI
|
||||
|
||||
The `log` utility provides:
|
||||
|
||||
- **Report integration**: Logs appear in Playwright HTML reports
|
||||
- **Test step decoration**: `log.step()` creates collapsible steps in UI
|
||||
- **Object logging**: Automatically formats objects/arrays
|
||||
- **Multiple levels**: info, step, success, warning, error, debug
|
||||
- **Optional console**: Can disable console output but keep report logs
|
||||
|
||||
## Pattern Examples
|
||||
|
||||
### Example 1: Basic Logging Levels
|
||||
|
||||
**Context**: Log different types of messages throughout test execution.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { log } from '@seontechnologies/playwright-utils';
|
||||
|
||||
test('logging demo', async ({ page }) => {
|
||||
await log.step('Navigate to login page');
|
||||
await page.goto('/login');
|
||||
|
||||
await log.info('Entering credentials');
|
||||
await page.fill('#username', 'testuser');
|
||||
|
||||
await log.success('Login successful');
|
||||
|
||||
await log.warning('Rate limit approaching');
|
||||
|
||||
await log.debug({ userId: '123', sessionId: 'abc' });
|
||||
|
||||
// Errors still throw but get logged first
|
||||
try {
|
||||
await page.click('#nonexistent');
|
||||
} catch (error) {
|
||||
await log.error('Click failed', false); // false = no console output
|
||||
throw error;
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `step()` creates collapsible steps in Playwright UI
|
||||
- `info()`, `success()`, `warning()` for different message types
|
||||
- `debug()` for detailed data (objects/arrays)
|
||||
- `error()` with optional console suppression
|
||||
- All logs appear in test reports
|
||||
|
||||
### Example 2: Object and Array Logging
|
||||
|
||||
**Context**: Log structured data for debugging without cluttering console.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
test('object logging', async ({ apiRequest }) => {
|
||||
const { body } = await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/api/users',
|
||||
});
|
||||
|
||||
// Log array of objects
|
||||
await log.debug(body); // Formatted as JSON in report
|
||||
|
||||
// Log specific object
|
||||
await log.info({
|
||||
totalUsers: body.length,
|
||||
firstUser: body[0]?.name,
|
||||
timestamp: new Date().toISOString(),
|
||||
});
|
||||
|
||||
// Complex nested structures
|
||||
await log.debug({
|
||||
request: {
|
||||
method: 'GET',
|
||||
path: '/api/users',
|
||||
timestamp: Date.now(),
|
||||
},
|
||||
response: {
|
||||
status: 200,
|
||||
body: body.slice(0, 3), // First 3 items
|
||||
},
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Objects auto-formatted as pretty JSON
|
||||
- Arrays handled gracefully
|
||||
- Nested structures supported
|
||||
- All visible in Playwright report attachments
|
||||
|
||||
### Example 3: Test Step Organization
|
||||
|
||||
**Context**: Organize test execution into collapsible steps for better readability in reports.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
test('organized with steps', async ({ page, apiRequest }) => {
|
||||
await log.step('ARRANGE: Setup test data');
|
||||
const { body: user } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/users',
|
||||
body: { name: 'Test User' },
|
||||
});
|
||||
|
||||
await log.step('ACT: Perform user action');
|
||||
await page.goto(`/users/${user.id}`);
|
||||
await page.click('#edit');
|
||||
await page.fill('#name', 'Updated Name');
|
||||
await page.click('#save');
|
||||
|
||||
await log.step('ASSERT: Verify changes');
|
||||
await expect(page.getByText('Updated Name')).toBeVisible();
|
||||
|
||||
// In Playwright UI, each step is collapsible
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `log.step()` creates collapsible sections
|
||||
- Organize by Arrange-Act-Assert
|
||||
- Steps visible in Playwright trace viewer
|
||||
- Better debugging when tests fail
|
||||
|
||||
### Example 4: Conditional Logging
|
||||
|
||||
**Context**: Log different messages based on environment or test conditions.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
test('conditional logging', async ({ page }) => {
|
||||
const isCI = process.env.CI === 'true';
|
||||
|
||||
if (isCI) {
|
||||
await log.info('Running in CI environment');
|
||||
} else {
|
||||
await log.debug('Running locally');
|
||||
}
|
||||
|
||||
const isKafkaWorking = await checkKafkaHealth();
|
||||
|
||||
if (!isKafkaWorking) {
|
||||
await log.warning('Kafka unavailable - skipping event checks');
|
||||
} else {
|
||||
await log.step('Verifying Kafka events');
|
||||
// ... event verification
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Log based on environment
|
||||
- Skip logging with conditionals
|
||||
- Use appropriate log levels
|
||||
- Debug info for local, minimal for CI
|
||||
|
||||
### Example 5: Integration with Auth and API
|
||||
|
||||
**Context**: Log authenticated API requests with tokens (safely).
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { test } from '@seontechnologies/playwright-utils/fixtures';
|
||||
|
||||
// Helper to create safe token preview
|
||||
function createTokenPreview(token: string): string {
|
||||
if (!token || token.length < 10) return '[invalid]';
|
||||
return `${token.slice(0, 6)}...${token.slice(-4)}`;
|
||||
}
|
||||
|
||||
test('should log auth flow', async ({ authToken, apiRequest }) => {
|
||||
await log.info(`Using token: ${createTokenPreview(authToken)}`);
|
||||
|
||||
await log.step('Fetch protected resource');
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/api/protected',
|
||||
headers: { Authorization: `Bearer ${authToken}` },
|
||||
});
|
||||
|
||||
await log.debug({
|
||||
status,
|
||||
bodyPreview: {
|
||||
id: body.id,
|
||||
recordCount: body.data?.length,
|
||||
},
|
||||
});
|
||||
|
||||
await log.success('Protected resource accessed successfully');
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Never log full tokens (security risk)
|
||||
- Use preview functions for sensitive data
|
||||
- Combine with auth and API utilities
|
||||
- Log at appropriate detail level
|
||||
|
||||
## Log Levels Guide
|
||||
|
||||
| Level | When to Use | Shows in Report | Shows in Console |
|
||||
| --------- | ----------------------------------- | -------------------- | ---------------- |
|
||||
| `step` | Test organization, major actions | ✅ Collapsible steps | ✅ Yes |
|
||||
| `info` | General information, state changes | ✅ Yes | ✅ Yes |
|
||||
| `success` | Successful operations | ✅ Yes | ✅ Yes |
|
||||
| `warning` | Non-critical issues, skipped checks | ✅ Yes | ✅ Yes |
|
||||
| `error` | Failures, exceptions | ✅ Yes | ✅ Configurable |
|
||||
| `debug` | Detailed data, objects | ✅ Yes (attached) | ✅ Configurable |
|
||||
|
||||
## Comparison with console.log
|
||||
|
||||
| console.log | log Utility |
|
||||
| ----------------------- | ------------------------- |
|
||||
| Not in reports | Appears in reports |
|
||||
| No test steps | Creates collapsible steps |
|
||||
| Manual JSON.stringify() | Auto-formats objects |
|
||||
| No log levels | 6 log levels |
|
||||
| Lost in CI output | Preserved in artifacts |
|
||||
|
||||
## Related Fragments
|
||||
|
||||
- `overview.md` - Basic usage and imports
|
||||
- `api-request.md` - Log API requests
|
||||
- `auth-session.md` - Log auth flow (safely)
|
||||
- `recurse.md` - Log polling progress
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**❌ Logging objects in steps:**
|
||||
|
||||
```typescript
|
||||
await log.step({ user: 'test', action: 'create' }); // Shows empty in UI
|
||||
```
|
||||
|
||||
**✅ Use strings for steps, objects for debug:**
|
||||
|
||||
```typescript
|
||||
await log.step('Creating user: test'); // Readable in UI
|
||||
await log.debug({ user: 'test', action: 'create' }); // Detailed data
|
||||
```
|
||||
|
||||
**❌ Logging sensitive data:**
|
||||
|
||||
```typescript
|
||||
await log.info(`Password: ${password}`); // Security risk!
|
||||
await log.info(`Token: ${authToken}`); // Full token exposed!
|
||||
```
|
||||
|
||||
**✅ Use previews or omit sensitive data:**
|
||||
|
||||
```typescript
|
||||
await log.info('User authenticated successfully'); // No sensitive data
|
||||
await log.debug({ tokenPreview: token.slice(0, 6) + '...' });
|
||||
```
|
||||
|
||||
**❌ Excessive logging in loops:**
|
||||
|
||||
```typescript
|
||||
for (const item of items) {
|
||||
await log.info(`Processing ${item.id}`); // 100 log entries!
|
||||
}
|
||||
```
|
||||
|
||||
**✅ Log summary or use debug level:**
|
||||
|
||||
```typescript
|
||||
await log.step(`Processing ${items.length} items`);
|
||||
await log.debug({ itemIds: items.map((i) => i.id) }); // One log entry
|
||||
```
|
||||
|
|
@ -0,0 +1,272 @@
|
|||
# Network Error Monitor
|
||||
|
||||
## Principle
|
||||
|
||||
Automatically detect and fail tests when HTTP 4xx/5xx errors occur during execution. Act like Sentry for tests - catch silent backend failures even when UI passes assertions.
|
||||
|
||||
## Rationale
|
||||
|
||||
Traditional Playwright tests focus on UI:
|
||||
|
||||
- Backend 500 errors ignored if UI looks correct
|
||||
- Silent failures slip through
|
||||
- No visibility into background API health
|
||||
- Tests pass while features are broken
|
||||
|
||||
The `network-error-monitor` provides:
|
||||
|
||||
- **Automatic detection**: All HTTP 4xx/5xx responses tracked
|
||||
- **Test failures**: Fail tests with backend errors (even if UI passes)
|
||||
- **Structured artifacts**: JSON reports with error details
|
||||
- **Smart opt-out**: Disable for validation tests expecting errors
|
||||
- **Deduplication**: Group repeated errors by pattern
|
||||
- **Domino effect prevention**: Limit test failures per error pattern
|
||||
|
||||
## Pattern Examples
|
||||
|
||||
### Example 1: Basic Auto-Monitoring
|
||||
|
||||
**Context**: Automatically fail tests when backend errors occur.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';
|
||||
|
||||
// Monitoring automatically enabled
|
||||
test('should load dashboard', async ({ page }) => {
|
||||
await page.goto('/dashboard');
|
||||
await expect(page.locator('h1')).toContainText('Dashboard');
|
||||
|
||||
// ✅ Passes if no HTTP errors
|
||||
// ❌ Fails if any 4xx/5xx errors detected with clear message:
|
||||
// "Network errors detected: 2 request(s) failed"
|
||||
// Failed requests:
|
||||
// GET 500 https://api.example.com/users
|
||||
// POST 503 https://api.example.com/metrics
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Zero setup - auto-enabled for all tests
|
||||
- Fails on any 4xx/5xx response
|
||||
- Structured error message with URLs and status codes
|
||||
- JSON artifact attached to test report
|
||||
|
||||
### Example 2: Opt-Out for Validation Tests
|
||||
|
||||
**Context**: Some tests expect errors (validation, error handling, edge cases).
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';
|
||||
|
||||
// Opt-out with annotation
|
||||
test('should show error on invalid input', { annotation: [{ type: 'skipNetworkMonitoring' }] }, async ({ page }) => {
|
||||
await page.goto('/form');
|
||||
await page.click('#submit'); // Triggers 400 error
|
||||
|
||||
// Monitoring disabled - test won't fail on 400
|
||||
await expect(page.getByText('Invalid input')).toBeVisible();
|
||||
});
|
||||
|
||||
// Or opt-out entire describe block
|
||||
test.describe('error handling', { annotation: [{ type: 'skipNetworkMonitoring' }] }, () => {
|
||||
test('handles 404', async ({ page }) => {
|
||||
// All tests in this block skip monitoring
|
||||
});
|
||||
|
||||
test('handles 500', async ({ page }) => {
|
||||
// Monitoring disabled
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Use annotation `{ type: 'skipNetworkMonitoring' }`
|
||||
- Can opt-out single test or entire describe block
|
||||
- Monitoring still active for other tests
|
||||
- Perfect for intentional error scenarios
|
||||
|
||||
### Example 3: Integration with Merged Fixtures
|
||||
|
||||
**Context**: Combine network-error-monitor with other utilities.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// playwright/support/merged-fixtures.ts
|
||||
import { mergeTests } from '@playwright/test';
|
||||
import { test as authFixture } from '@seontechnologies/playwright-utils/auth-session/fixtures';
|
||||
import { test as networkErrorMonitorFixture } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';
|
||||
|
||||
export const test = mergeTests(
|
||||
authFixture,
|
||||
networkErrorMonitorFixture,
|
||||
// Add other fixtures
|
||||
);
|
||||
|
||||
// In tests
|
||||
import { test, expect } from '../support/merged-fixtures';
|
||||
|
||||
test('authenticated with monitoring', async ({ page, authToken }) => {
|
||||
// Both auth and network monitoring active
|
||||
await page.goto('/protected');
|
||||
|
||||
// Fails if backend returns errors during auth flow
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Combine with `mergeTests`
|
||||
- Works alongside all other utilities
|
||||
- Monitoring active automatically
|
||||
- No extra setup needed
|
||||
|
||||
### Example 4: Domino Effect Prevention
|
||||
|
||||
**Context**: One failing endpoint shouldn't fail all tests.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// Configuration (internal to utility)
|
||||
const config = {
|
||||
maxTestsPerError: 3, // Max 3 tests fail per unique error pattern
|
||||
};
|
||||
|
||||
// Scenario:
|
||||
// Test 1: GET /api/broken → 500 error → Test fails ❌
|
||||
// Test 2: GET /api/broken → 500 error → Test fails ❌
|
||||
// Test 3: GET /api/broken → 500 error → Test fails ❌
|
||||
// Test 4: GET /api/broken → 500 error → Test passes ⚠️ (limit reached, warning logged)
|
||||
// Test 5: Different error pattern → Test fails ❌ (new pattern, counter resets)
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Limits cascading failures
|
||||
- Groups errors by URL + status code pattern
|
||||
- Warns when limit reached
|
||||
- Prevents flaky backend from failing entire suite
|
||||
|
||||
### Example 5: Artifact Structure
|
||||
|
||||
**Context**: Debugging failed tests with network error artifacts.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
When test fails due to network errors, artifact attached:
|
||||
|
||||
```json
|
||||
// test-results/my-test/network-errors.json
|
||||
{
|
||||
"errors": [
|
||||
{
|
||||
"url": "https://api.example.com/users",
|
||||
"method": "GET",
|
||||
"status": 500,
|
||||
"statusText": "Internal Server Error",
|
||||
"timestamp": "2024-08-13T10:30:45.123Z"
|
||||
},
|
||||
{
|
||||
"url": "https://api.example.com/metrics",
|
||||
"method": "POST",
|
||||
"status": 503,
|
||||
"statusText": "Service Unavailable",
|
||||
"timestamp": "2024-08-13T10:30:46.456Z"
|
||||
}
|
||||
],
|
||||
"summary": {
|
||||
"totalErrors": 2,
|
||||
"uniquePatterns": 2
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- JSON artifact per failed test
|
||||
- Full error details (URL, method, status, timestamp)
|
||||
- Summary statistics
|
||||
- Easy debugging with structured data
|
||||
|
||||
## Comparison with Manual Error Checks
|
||||
|
||||
| Manual Approach | network-error-monitor |
|
||||
| ------------------------------------------------------ | -------------------------- |
|
||||
| `page.on('response', resp => { if (!resp.ok()) ... })` | Auto-enabled, zero setup |
|
||||
| Check each response manually | Automatic for all requests |
|
||||
| Custom error tracking logic | Built-in deduplication |
|
||||
| No structured artifacts | JSON artifacts attached |
|
||||
| Easy to forget | Never miss a backend error |
|
||||
|
||||
## When to Use
|
||||
|
||||
**Auto-enabled for:**
|
||||
|
||||
- ✅ All E2E tests
|
||||
- ✅ Integration tests
|
||||
- ✅ Any test hitting real APIs
|
||||
|
||||
**Opt-out for:**
|
||||
|
||||
- ❌ Validation tests (expecting 4xx)
|
||||
- ❌ Error handling tests (expecting 5xx)
|
||||
- ❌ Offline tests (network-recorder playback)
|
||||
|
||||
## Integration with Framework Setup
|
||||
|
||||
In `*framework` workflow, mention network-error-monitor:
|
||||
|
||||
```typescript
|
||||
// Add to merged-fixtures.ts
|
||||
import { test as networkErrorMonitorFixture } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';
|
||||
|
||||
export const test = mergeTests(
|
||||
// ... other fixtures
|
||||
networkErrorMonitorFixture,
|
||||
);
|
||||
```
|
||||
|
||||
## Related Fragments
|
||||
|
||||
- `overview.md` - Installation and fixtures
|
||||
- `fixtures-composition.md` - Merging with other utilities
|
||||
- `error-handling.md` - Traditional error handling patterns
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**❌ Opting out of monitoring globally:**
|
||||
|
||||
```typescript
|
||||
// Every test skips monitoring
|
||||
test.use({ annotation: [{ type: 'skipNetworkMonitoring' }] });
|
||||
```
|
||||
|
||||
**✅ Opt-out only for specific error tests:**
|
||||
|
||||
```typescript
|
||||
test.describe('error scenarios', { annotation: [{ type: 'skipNetworkMonitoring' }] }, () => {
|
||||
// Only these tests skip monitoring
|
||||
});
|
||||
```
|
||||
|
||||
**❌ Ignoring network error artifacts:**
|
||||
|
||||
```typescript
|
||||
// Test fails, artifact shows 500 errors
|
||||
// Developer: "Works on my machine" ¯\_(ツ)_/¯
|
||||
```
|
||||
|
||||
**✅ Check artifacts for root cause:**
|
||||
|
||||
```typescript
|
||||
// Read network-errors.json artifact
|
||||
// Identify failing endpoint: GET /api/users → 500
|
||||
// Fix backend issue before merging
|
||||
```
|
||||
|
|
@ -0,0 +1,265 @@
|
|||
# Network Recorder Utility
|
||||
|
||||
## Principle
|
||||
|
||||
Record network traffic to HAR files during test execution, then play back from disk for offline testing. Enables frontend tests to run in complete isolation from backend services with intelligent stateful CRUD detection for realistic API behavior.
|
||||
|
||||
## Rationale
|
||||
|
||||
Traditional E2E tests require live backend services:
|
||||
|
||||
- Slow (real network latency)
|
||||
- Flaky (backend instability affects tests)
|
||||
- Expensive (full stack running for UI tests)
|
||||
- Coupled (UI tests break when API changes)
|
||||
|
||||
HAR-based recording/playback provides:
|
||||
|
||||
- **True offline testing**: UI tests run without backend
|
||||
- **Deterministic behavior**: Same responses every time
|
||||
- **Fast execution**: No network latency
|
||||
- **Stateful mocking**: CRUD operations work naturally (not just read-only)
|
||||
- **Environment flexibility**: Map URLs for any environment
|
||||
|
||||
## Pattern Examples
|
||||
|
||||
### Example 1: Basic Record and Playback
|
||||
|
||||
**Context**: The fundamental pattern - record traffic once, play back for all subsequent runs.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { test } from '@seontechnologies/playwright-utils/network-recorder/fixtures';
|
||||
|
||||
// Set mode in test file (recommended)
|
||||
process.env.PW_NET_MODE = 'playback'; // or 'record'
|
||||
|
||||
test('CRUD operations work offline', async ({ page, context, networkRecorder }) => {
|
||||
// Setup recorder (records or plays back based on PW_NET_MODE)
|
||||
await networkRecorder.setup(context);
|
||||
|
||||
await page.goto('/');
|
||||
|
||||
// First time (record mode): Records all network traffic to HAR
|
||||
// Subsequent runs (playback mode): Plays back from HAR (no backend!)
|
||||
await page.fill('#movie-name', 'Inception');
|
||||
await page.click('#add-movie');
|
||||
|
||||
// Intelligent CRUD detection makes this work offline!
|
||||
await expect(page.getByText('Inception')).toBeVisible();
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `PW_NET_MODE=record` captures traffic to HAR files
|
||||
- `PW_NET_MODE=playback` replays from HAR files
|
||||
- Set mode in test file or via environment variable
|
||||
- HAR files auto-organized by test name
|
||||
- Stateful mocking detects CRUD operations
|
||||
|
||||
### Example 2: Complete CRUD Flow with HAR
|
||||
|
||||
**Context**: Full create-read-update-delete flow that works completely offline.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
process.env.PW_NET_MODE = 'playback';
|
||||
|
||||
test.describe('Movie CRUD - offline with network recorder', () => {
|
||||
test.beforeEach(async ({ page, networkRecorder, context }) => {
|
||||
await networkRecorder.setup(context);
|
||||
await page.goto('/');
|
||||
});
|
||||
|
||||
test('should add, edit, delete movie browser-only', async ({ page, interceptNetworkCall }) => {
|
||||
// Create
|
||||
await page.fill('#movie-name', 'Inception');
|
||||
await page.fill('#year', '2010');
|
||||
await page.click('#add-movie');
|
||||
|
||||
// Verify create (reads from stateful HAR)
|
||||
await expect(page.getByText('Inception')).toBeVisible();
|
||||
|
||||
// Update
|
||||
await page.getByText('Inception').click();
|
||||
await page.fill('#movie-name', "Inception Director's Cut");
|
||||
|
||||
const updateCall = interceptNetworkCall({
|
||||
method: 'PUT',
|
||||
url: '/movies/*',
|
||||
});
|
||||
|
||||
await page.click('#save');
|
||||
await updateCall; // Wait for update
|
||||
|
||||
// Verify update (HAR reflects state change!)
|
||||
await page.click('#back');
|
||||
await expect(page.getByText("Inception Director's Cut")).toBeVisible();
|
||||
|
||||
// Delete
|
||||
await page.click(`[data-testid="delete-Inception Director's Cut"]`);
|
||||
|
||||
// Verify delete (HAR reflects removal!)
|
||||
await expect(page.getByText("Inception Director's Cut")).not.toBeVisible();
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Full CRUD operations work offline
|
||||
- Stateful HAR mocking tracks creates/updates/deletes
|
||||
- Combine with `interceptNetworkCall` for deterministic waits
|
||||
- First run records, subsequent runs replay
|
||||
|
||||
### Example 3: Environment Switching
|
||||
|
||||
**Context**: Record in dev environment, play back in CI with different base URLs.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// playwright.config.ts - Map URLs for different environments
|
||||
export default defineConfig({
|
||||
use: {
|
||||
baseURL: process.env.CI ? 'https://app.ci.example.com' : 'http://localhost:3000',
|
||||
},
|
||||
});
|
||||
|
||||
// Test works in both environments
|
||||
test('cross-environment playback', async ({ page, context, networkRecorder }) => {
|
||||
await networkRecorder.setup(context);
|
||||
|
||||
// In dev: hits http://localhost:3000/api/movies
|
||||
// In CI: HAR replays with https://app.ci.example.com/api/movies
|
||||
await page.goto('/movies');
|
||||
|
||||
// Network recorder auto-maps URLs
|
||||
await expect(page.getByTestId('movie-list')).toBeVisible();
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- HAR files record absolute URLs
|
||||
- Playback maps to current baseURL
|
||||
- Same HAR works across environments
|
||||
- No manual URL rewriting needed
|
||||
|
||||
### Example 4: Automatic vs Manual Mode Control
|
||||
|
||||
**Context**: Choose between environment-based switching or in-test mode control.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// Option 1: Environment variable (recommended for CI)
|
||||
PW_NET_MODE=record npm run test:pw # Record traffic
|
||||
PW_NET_MODE=playback npm run test:pw # Playback traffic
|
||||
|
||||
// Option 2: In-test control (recommended for development)
|
||||
process.env.PW_NET_MODE = 'record' // Set at top of test file
|
||||
|
||||
test('my test', async ({ page, context, networkRecorder }) => {
|
||||
await networkRecorder.setup(context)
|
||||
// ...
|
||||
})
|
||||
|
||||
// Option 3: Auto-fallback (record if HAR missing, else playback)
|
||||
// This is the default behavior when PW_NET_MODE not set
|
||||
test('auto mode', async ({ page, context, networkRecorder }) => {
|
||||
await networkRecorder.setup(context)
|
||||
// First run: auto-records
|
||||
// Subsequent runs: auto-plays back
|
||||
})
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Three mode options: record, playback, auto
|
||||
- `PW_NET_MODE` environment variable
|
||||
- In-test `process.env.PW_NET_MODE` assignment
|
||||
- Auto-fallback when no mode specified
|
||||
|
||||
## Why Use This Instead of Native Playwright?
|
||||
|
||||
| Native Playwright (`routeFromHAR`) | network-recorder Utility |
|
||||
| ---------------------------------- | ------------------------------ |
|
||||
| ~80 lines setup boilerplate | ~5 lines total |
|
||||
| Manual HAR file management | Automatic file organization |
|
||||
| Complex setup/teardown | Automatic cleanup via fixtures |
|
||||
| **Read-only tests** | **Full CRUD support** |
|
||||
| **Stateless** | **Stateful mocking** |
|
||||
| Manual URL mapping | Automatic environment mapping |
|
||||
|
||||
**The game-changer: Stateful CRUD detection**
|
||||
|
||||
Native Playwright HAR playback is stateless - a POST create followed by GET list won't show the created item. This utility intelligently tracks CRUD operations in memory to reflect state changes, making offline tests behave like real APIs.
|
||||
|
||||
## Integration with Other Utilities
|
||||
|
||||
**With interceptNetworkCall** (deterministic waits):
|
||||
|
||||
```typescript
|
||||
test('use both utilities', async ({ page, context, networkRecorder, interceptNetworkCall }) => {
|
||||
await networkRecorder.setup(context);
|
||||
|
||||
const createCall = interceptNetworkCall({
|
||||
method: 'POST',
|
||||
url: '/api/movies',
|
||||
});
|
||||
|
||||
await page.click('#add-movie');
|
||||
await createCall; // Wait for create (works with HAR!)
|
||||
|
||||
// Network recorder provides playback, intercept provides determinism
|
||||
});
|
||||
```
|
||||
|
||||
## Related Fragments
|
||||
|
||||
- `overview.md` - Installation and fixture patterns
|
||||
- `intercept-network-call.md` - Combine for deterministic offline tests
|
||||
- `auth-session.md` - Record authenticated traffic
|
||||
- `network-first.md` - Core pattern for intercept-before-navigate
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**❌ Mixing record and playback in same test:**
|
||||
|
||||
```typescript
|
||||
process.env.PW_NET_MODE = 'record';
|
||||
// ... some test code ...
|
||||
process.env.PW_NET_MODE = 'playback'; // Don't switch mid-test
|
||||
```
|
||||
|
||||
**✅ One mode per test:**
|
||||
|
||||
```typescript
|
||||
process.env.PW_NET_MODE = 'playback'; // Set once at top
|
||||
|
||||
test('my test', async ({ page, context, networkRecorder }) => {
|
||||
await networkRecorder.setup(context);
|
||||
// Entire test uses playback mode
|
||||
});
|
||||
```
|
||||
|
||||
**❌ Forgetting to call setup:**
|
||||
|
||||
```typescript
|
||||
test('broken', async ({ page, networkRecorder }) => {
|
||||
await page.goto('/'); // HAR not active!
|
||||
});
|
||||
```
|
||||
|
||||
**✅ Always call setup before navigation:**
|
||||
|
||||
```typescript
|
||||
test('correct', async ({ page, context, networkRecorder }) => {
|
||||
await networkRecorder.setup(context); // Must setup first
|
||||
await page.goto('/'); // Now HAR is active
|
||||
});
|
||||
```
|
||||
|
|
@ -0,0 +1,284 @@
|
|||
# Playwright Utils Overview
|
||||
|
||||
## Principle
|
||||
|
||||
Use production-ready, fixture-based utilities from `@seontechnologies/playwright-utils` for common Playwright testing patterns. Build test helpers as pure functions first, then wrap in framework-specific fixtures for composability and reuse.
|
||||
|
||||
## Rationale
|
||||
|
||||
Writing Playwright utilities from scratch for every project leads to:
|
||||
|
||||
- Duplicated code across test suites
|
||||
- Inconsistent patterns and quality
|
||||
- Maintenance burden when Playwright APIs change
|
||||
- Missing advanced features (schema validation, HAR recording, auth persistence)
|
||||
|
||||
`@seontechnologies/playwright-utils` provides:
|
||||
|
||||
- **Production-tested utilities**: Used at SEON Technologies in production
|
||||
- **Functional-first design**: Core logic as pure functions, fixtures for convenience
|
||||
- **Composable fixtures**: Use `mergeTests` to combine utilities
|
||||
- **TypeScript support**: Full type safety with generic types
|
||||
- **Comprehensive coverage**: API requests, auth, network, logging, file handling, burn-in
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
npm install -D @seontechnologies/playwright-utils
|
||||
```
|
||||
|
||||
**Peer Dependencies:**
|
||||
|
||||
- `@playwright/test` >= 1.54.1 (required)
|
||||
- `ajv` >= 8.0.0 (optional - for JSON Schema validation)
|
||||
- `js-yaml` >= 4.0.0 (optional - for YAML schema support)
|
||||
- `zod` >= 3.0.0 (optional - for Zod schema validation)
|
||||
|
||||
## Available Utilities
|
||||
|
||||
### Core Testing Utilities
|
||||
|
||||
| Utility | Purpose | Test Context |
|
||||
| -------------------------- | ------------------------------------------ | ------------- |
|
||||
| **api-request** | Typed HTTP client with schema validation | API tests |
|
||||
| **network-recorder** | HAR record/playback for offline testing | UI tests |
|
||||
| **auth-session** | Token persistence, multi-user auth | Both UI & API |
|
||||
| **recurse** | Cypress-style polling for async conditions | Both UI & API |
|
||||
| **intercept-network-call** | Network spy/stub with auto JSON parsing | UI tests |
|
||||
| **log** | Playwright report-integrated logging | Both UI & API |
|
||||
| **file-utils** | CSV/XLSX/PDF/ZIP reading & validation | Both UI & API |
|
||||
| **burn-in** | Smart test selection with git diff | CI/CD |
|
||||
| **network-error-monitor** | Automatic HTTP 4xx/5xx detection | UI tests |
|
||||
|
||||
## Design Patterns
|
||||
|
||||
### Pattern 1: Functional Core, Fixture Shell
|
||||
|
||||
**Context**: All utilities follow the same architectural pattern - pure function as core, fixture as wrapper.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// Direct import (pass Playwright context explicitly)
|
||||
import { apiRequest } from '@seontechnologies/playwright-utils';
|
||||
|
||||
test('direct usage', async ({ request }) => {
|
||||
const { status, body } = await apiRequest({
|
||||
request, // Must pass request context
|
||||
method: 'GET',
|
||||
path: '/api/users',
|
||||
});
|
||||
});
|
||||
|
||||
// Fixture import (context injected automatically)
|
||||
import { test } from '@seontechnologies/playwright-utils/fixtures';
|
||||
|
||||
test('fixture usage', async ({ apiRequest }) => {
|
||||
const { status, body } = await apiRequest({
|
||||
// No need to pass request context
|
||||
method: 'GET',
|
||||
path: '/api/users',
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Pure functions testable without Playwright running
|
||||
- Fixtures inject framework dependencies automatically
|
||||
- Choose direct import (more control) or fixture (convenience)
|
||||
|
||||
### Pattern 2: Subpath Imports for Tree-Shaking
|
||||
|
||||
**Context**: Import only what you need to keep bundle sizes small.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// Import specific utility
|
||||
import { apiRequest } from '@seontechnologies/playwright-utils/api-request';
|
||||
|
||||
// Import specific fixture
|
||||
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
|
||||
|
||||
// Import everything (use sparingly)
|
||||
import { apiRequest, recurse, log } from '@seontechnologies/playwright-utils';
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Subpath imports enable tree-shaking
|
||||
- Keep bundle sizes minimal
|
||||
- Import from specific paths for production builds
|
||||
|
||||
### Pattern 3: Fixture Composition with mergeTests
|
||||
|
||||
**Context**: Combine multiple playwright-utils fixtures with your own custom fixtures.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// playwright/support/merged-fixtures.ts
|
||||
import { mergeTests } from '@playwright/test';
|
||||
import { test as apiRequestFixture } from '@seontechnologies/playwright-utils/api-request/fixtures';
|
||||
import { test as authFixture } from '@seontechnologies/playwright-utils/auth-session/fixtures';
|
||||
import { test as recurseFixture } from '@seontechnologies/playwright-utils/recurse/fixtures';
|
||||
import { test as logFixture } from '@seontechnologies/playwright-utils/log/fixtures';
|
||||
|
||||
// Merge all fixtures into one test object
|
||||
export const test = mergeTests(apiRequestFixture, authFixture, recurseFixture, logFixture);
|
||||
|
||||
export { expect } from '@playwright/test';
|
||||
```
|
||||
|
||||
```typescript
|
||||
// In your tests
|
||||
import { test, expect } from '../support/merged-fixtures';
|
||||
|
||||
test('all utilities available', async ({ apiRequest, authToken, recurse, log }) => {
|
||||
await log.step('Making authenticated API request');
|
||||
|
||||
const { body } = await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/api/protected',
|
||||
headers: { Authorization: `Bearer ${authToken}` },
|
||||
});
|
||||
|
||||
await recurse(
|
||||
() => apiRequest({ method: 'GET', path: `/status/${body.id}` }),
|
||||
(res) => res.body.ready === true,
|
||||
);
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `mergeTests` combines multiple fixtures without conflicts
|
||||
- Create one merged-fixtures.ts file per project
|
||||
- Import test object from your merged fixtures in all tests
|
||||
- All utilities available in single test signature
|
||||
|
||||
## Integration with Existing Tests
|
||||
|
||||
### Gradual Adoption Strategy
|
||||
|
||||
**1. Start with logging** (zero breaking changes):
|
||||
|
||||
```typescript
|
||||
import { log } from '@seontechnologies/playwright-utils';
|
||||
|
||||
test('existing test', async ({ page }) => {
|
||||
await log.step('Navigate to page'); // Just add logging
|
||||
await page.goto('/dashboard');
|
||||
// Rest of test unchanged
|
||||
});
|
||||
```
|
||||
|
||||
**2. Add API utilities** (for API tests):
|
||||
|
||||
```typescript
|
||||
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
|
||||
|
||||
test('API test', async ({ apiRequest }) => {
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/api/users',
|
||||
});
|
||||
|
||||
expect(status).toBe(200);
|
||||
});
|
||||
```
|
||||
|
||||
**3. Expand to network utilities** (for UI tests):
|
||||
|
||||
```typescript
|
||||
import { test } from '@seontechnologies/playwright-utils/fixtures';
|
||||
|
||||
test('UI with network control', async ({ page, interceptNetworkCall }) => {
|
||||
const usersCall = interceptNetworkCall({
|
||||
url: '**/api/users',
|
||||
});
|
||||
|
||||
await page.goto('/dashboard');
|
||||
const { responseJson } = await usersCall;
|
||||
|
||||
expect(responseJson).toHaveLength(10);
|
||||
});
|
||||
```
|
||||
|
||||
**4. Full integration** (merged fixtures):
|
||||
|
||||
Create merged-fixtures.ts and use across all tests.
|
||||
|
||||
## Related Fragments
|
||||
|
||||
- `api-request.md` - HTTP client with schema validation
|
||||
- `network-recorder.md` - HAR-based offline testing
|
||||
- `auth-session.md` - Token management
|
||||
- `intercept-network-call.md` - Network interception
|
||||
- `recurse.md` - Polling patterns
|
||||
- `log.md` - Logging utility
|
||||
- `file-utils.md` - File operations
|
||||
- `fixtures-composition.md` - Advanced mergeTests patterns
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**❌ Don't mix direct and fixture imports in same test:**
|
||||
|
||||
```typescript
|
||||
import { apiRequest } from '@seontechnologies/playwright-utils';
|
||||
import { test } from '@seontechnologies/playwright-utils/auth-session/fixtures';
|
||||
|
||||
test('bad', async ({ request, authToken }) => {
|
||||
// Confusing - mixing direct (needs request) and fixture (has authToken)
|
||||
await apiRequest({ request, method: 'GET', path: '/api/users' });
|
||||
});
|
||||
```
|
||||
|
||||
**✅ Use consistent import style:**
|
||||
|
||||
```typescript
|
||||
import { test } from '../support/merged-fixtures';
|
||||
|
||||
test('good', async ({ apiRequest, authToken }) => {
|
||||
// Clean - all from fixtures
|
||||
await apiRequest({ method: 'GET', path: '/api/users' });
|
||||
});
|
||||
```
|
||||
|
||||
**❌ Don't import everything when you need one utility:**
|
||||
|
||||
```typescript
|
||||
import * as utils from '@seontechnologies/playwright-utils'; // Large bundle
|
||||
```
|
||||
|
||||
**✅ Use subpath imports:**
|
||||
|
||||
```typescript
|
||||
import { apiRequest } from '@seontechnologies/playwright-utils/api-request'; // Small bundle
|
||||
```
|
||||
|
||||
## Reference Implementation
|
||||
|
||||
The official `@seontechnologies/playwright-utils` repository provides working examples of all patterns described in these fragments.
|
||||
|
||||
**Repository:** https://github.com/seontechnologies/playwright-utils
|
||||
|
||||
**Key resources:**
|
||||
|
||||
- **Test examples:** `playwright/tests` - All utilities in action
|
||||
- **Framework setup:** `playwright.config.ts`, `playwright/support/merged-fixtures.ts`
|
||||
- **CI patterns:** `.github/workflows/` - GitHub Actions with sharding, parallelization
|
||||
|
||||
**Quick start:**
|
||||
|
||||
```bash
|
||||
git clone https://github.com/seontechnologies/playwright-utils.git
|
||||
cd playwright-utils
|
||||
nvm use
|
||||
npm install
|
||||
npm run test:pw-ui # Explore tests with Playwright UI
|
||||
npm run test:pw
|
||||
```
|
||||
|
||||
All patterns in TEA fragments are production-tested in this repository.
|
||||
|
|
@ -0,0 +1,296 @@
|
|||
# Recurse (Polling) Utility
|
||||
|
||||
## Principle
|
||||
|
||||
Use Cypress-style polling with Playwright's `expect.poll` to wait for asynchronous conditions. Provides configurable timeout, interval, logging, and post-polling callbacks with enhanced error categorization.
|
||||
|
||||
## Rationale
|
||||
|
||||
Testing async operations (background jobs, eventual consistency, webhook processing) requires polling:
|
||||
|
||||
- Vanilla `expect.poll` is verbose
|
||||
- No built-in logging for debugging
|
||||
- Generic timeout errors
|
||||
- No post-poll hooks
|
||||
|
||||
The `recurse` utility provides:
|
||||
|
||||
- **Clean syntax**: Inspired by cypress-recurse
|
||||
- **Enhanced errors**: Timeout vs command failure vs predicate errors
|
||||
- **Built-in logging**: Track polling progress
|
||||
- **Post-poll callbacks**: Process results after success
|
||||
- **Type-safe**: Full TypeScript generic support
|
||||
|
||||
## Pattern Examples
|
||||
|
||||
### Example 1: Basic Polling
|
||||
|
||||
**Context**: Wait for async operation to complete with custom timeout and interval.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { test } from '@seontechnologies/playwright-utils/recurse/fixtures';
|
||||
|
||||
test('should wait for job completion', async ({ recurse, apiRequest }) => {
|
||||
// Start job
|
||||
const { body } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/jobs',
|
||||
body: { type: 'export' },
|
||||
});
|
||||
|
||||
// Poll until ready
|
||||
const result = await recurse(
|
||||
() => apiRequest({ method: 'GET', path: `/api/jobs/${body.id}` }),
|
||||
(response) => response.body.status === 'completed',
|
||||
{
|
||||
timeout: 60000, // 60 seconds max
|
||||
interval: 2000, // Check every 2 seconds
|
||||
log: 'Waiting for export job to complete',
|
||||
},
|
||||
);
|
||||
|
||||
expect(result.body.downloadUrl).toBeDefined();
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- First arg: command function (what to execute)
|
||||
- Second arg: predicate function (when to stop)
|
||||
- Options: timeout, interval, log message
|
||||
- Returns the value when predicate returns true
|
||||
|
||||
### Example 2: Polling with Assertions
|
||||
|
||||
**Context**: Use assertions directly in predicate for more expressive tests.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
test('should poll with assertions', async ({ recurse, apiRequest }) => {
|
||||
await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/events',
|
||||
body: { type: 'user-created', userId: '123' },
|
||||
});
|
||||
|
||||
// Poll with assertions in predicate
|
||||
await recurse(
|
||||
async () => {
|
||||
const { body } = await apiRequest({ method: 'GET', path: '/api/events/123' });
|
||||
return body;
|
||||
},
|
||||
(event) => {
|
||||
// Use assertions instead of boolean returns
|
||||
expect(event.processed).toBe(true);
|
||||
expect(event.timestamp).toBeDefined();
|
||||
// If assertions pass, predicate succeeds
|
||||
},
|
||||
{ timeout: 30000 },
|
||||
);
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Predicate can use `expect()` assertions
|
||||
- If assertions throw, polling continues
|
||||
- If assertions pass, polling succeeds
|
||||
- More expressive than boolean returns
|
||||
|
||||
### Example 3: Custom Error Messages
|
||||
|
||||
**Context**: Provide context-specific error messages for timeout failures.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
test('custom error on timeout', async ({ recurse, apiRequest }) => {
|
||||
try {
|
||||
await recurse(
|
||||
() => apiRequest({ method: 'GET', path: '/api/status' }),
|
||||
(res) => res.body.ready === true,
|
||||
{
|
||||
timeout: 10000,
|
||||
error: 'System failed to become ready within 10 seconds - check background workers',
|
||||
},
|
||||
);
|
||||
} catch (error) {
|
||||
// Error message includes custom context
|
||||
expect(error.message).toContain('check background workers');
|
||||
throw error;
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `error` option provides custom message
|
||||
- Replaces default "Timed out after X ms"
|
||||
- Include debugging hints in error message
|
||||
- Helps diagnose failures faster
|
||||
|
||||
### Example 4: Post-Polling Callback
|
||||
|
||||
**Context**: Process or log results after successful polling.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
test('post-poll processing', async ({ recurse, apiRequest }) => {
|
||||
const finalResult = await recurse(
|
||||
() => apiRequest({ method: 'GET', path: '/api/batch-job/123' }),
|
||||
(res) => res.body.status === 'completed',
|
||||
{
|
||||
timeout: 60000,
|
||||
post: (result) => {
|
||||
// Runs after successful polling
|
||||
console.log(`Job completed in ${result.body.duration}ms`);
|
||||
console.log(`Processed ${result.body.itemsProcessed} items`);
|
||||
return result.body;
|
||||
},
|
||||
},
|
||||
);
|
||||
|
||||
expect(finalResult.itemsProcessed).toBeGreaterThan(0);
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `post` callback runs after predicate succeeds
|
||||
- Receives the final result
|
||||
- Can transform or log results
|
||||
- Return value becomes final `recurse` result
|
||||
|
||||
### Example 5: Integration with API Request (Common Pattern)
|
||||
|
||||
**Context**: Most common use case - polling API endpoints for state changes.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { test } from '@seontechnologies/playwright-utils/fixtures';
|
||||
|
||||
test('end-to-end polling', async ({ apiRequest, recurse }) => {
|
||||
// Trigger async operation
|
||||
const { body: createResp } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/data-import',
|
||||
body: { source: 's3://bucket/data.csv' },
|
||||
});
|
||||
|
||||
// Poll until import completes
|
||||
const importResult = await recurse(
|
||||
() => apiRequest({ method: 'GET', path: `/api/data-import/${createResp.importId}` }),
|
||||
(response) => {
|
||||
const { status, rowsImported } = response.body;
|
||||
return status === 'completed' && rowsImported > 0;
|
||||
},
|
||||
{
|
||||
timeout: 120000, // 2 minutes for large imports
|
||||
interval: 5000, // Check every 5 seconds
|
||||
log: `Polling import ${createResp.importId}`,
|
||||
},
|
||||
);
|
||||
|
||||
expect(importResult.body.rowsImported).toBeGreaterThan(1000);
|
||||
expect(importResult.body.errors).toHaveLength(0);
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Combine `apiRequest` + `recurse` for API polling
|
||||
- Both from `@seontechnologies/playwright-utils/fixtures`
|
||||
- Complex predicates with multiple conditions
|
||||
- Logging shows polling progress in test reports
|
||||
|
||||
## Enhanced Error Types
|
||||
|
||||
The utility categorizes errors for easier debugging:
|
||||
|
||||
```typescript
|
||||
// TimeoutError - Predicate never returned true
|
||||
Error: Polling timed out after 30000ms: Job never completed
|
||||
|
||||
// CommandError - Command function threw
|
||||
Error: Command failed: Request failed with status 500
|
||||
|
||||
// PredicateError - Predicate function threw (not from assertions)
|
||||
Error: Predicate failed: Cannot read property 'status' of undefined
|
||||
```
|
||||
|
||||
## Comparison with Vanilla Playwright
|
||||
|
||||
| Vanilla Playwright | recurse Utility |
|
||||
| ----------------------------------------------------------------- | ------------------------------------------------------------------------- |
|
||||
| `await expect.poll(() => { ... }, { timeout: 30000 }).toBe(true)` | `await recurse(() => { ... }, (val) => val === true, { timeout: 30000 })` |
|
||||
| No logging | Built-in log option |
|
||||
| Generic timeout errors | Categorized errors (timeout/command/predicate) |
|
||||
| No post-poll hooks | `post` callback support |
|
||||
|
||||
## When to Use
|
||||
|
||||
**Use recurse for:**
|
||||
|
||||
- ✅ Background job completion
|
||||
- ✅ Webhook/event processing
|
||||
- ✅ Database eventual consistency
|
||||
- ✅ Cache propagation
|
||||
- ✅ State machine transitions
|
||||
|
||||
**Stick with vanilla expect.poll for:**
|
||||
|
||||
- Simple UI element visibility (use `expect(locator).toBeVisible()`)
|
||||
- Single-property checks
|
||||
- Cases where logging isn't needed
|
||||
|
||||
## Related Fragments
|
||||
|
||||
- `api-request.md` - Combine for API endpoint polling
|
||||
- `overview.md` - Fixture composition patterns
|
||||
- `fixtures-composition.md` - Using with mergeTests
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**❌ Using hard waits instead of polling:**
|
||||
|
||||
```typescript
|
||||
await page.click('#export');
|
||||
await page.waitForTimeout(5000); // Arbitrary wait
|
||||
expect(await page.textContent('#status')).toBe('Ready');
|
||||
```
|
||||
|
||||
**✅ Poll for actual condition:**
|
||||
|
||||
```typescript
|
||||
await page.click('#export');
|
||||
await recurse(
|
||||
() => page.textContent('#status'),
|
||||
(status) => status === 'Ready',
|
||||
{ timeout: 10000 },
|
||||
);
|
||||
```
|
||||
|
||||
**❌ Polling too frequently:**
|
||||
|
||||
```typescript
|
||||
await recurse(
|
||||
() => apiRequest({ method: 'GET', path: '/status' }),
|
||||
(res) => res.body.ready,
|
||||
{ interval: 100 }, // Hammers API every 100ms!
|
||||
);
|
||||
```
|
||||
|
||||
**✅ Reasonable interval for API calls:**
|
||||
|
||||
```typescript
|
||||
await recurse(
|
||||
() => apiRequest({ method: 'GET', path: '/status' }),
|
||||
(res) => res.body.ready,
|
||||
{ interval: 2000 }, // Check every 2 seconds (reasonable)
|
||||
);
|
||||
```
|
||||
|
|
@ -20,3 +20,14 @@ test-priorities,Test Priorities Matrix,"P0–P3 criteria, coverage targets, exec
|
|||
test-healing-patterns,Test Healing Patterns,"Common failure patterns and automated fixes","healing,debugging,patterns",knowledge/test-healing-patterns.md
|
||||
selector-resilience,Selector Resilience,"Robust selector strategies and debugging techniques","selectors,locators,debugging",knowledge/selector-resilience.md
|
||||
timing-debugging,Timing Debugging,"Race condition identification and deterministic wait fixes","timing,async,debugging",knowledge/timing-debugging.md
|
||||
overview,Playwright Utils Overview,"Installation, design principles, fixture patterns","playwright-utils,fixtures",knowledge/overview.md
|
||||
api-request,API Request,"Typed HTTP client, schema validation","api,playwright-utils",knowledge/api-request.md
|
||||
network-recorder,Network Recorder,"HAR record/playback, CRUD detection","network,playwright-utils",knowledge/network-recorder.md
|
||||
auth-session,Auth Session,"Token persistence, multi-user","auth,playwright-utils",knowledge/auth-session.md
|
||||
intercept-network-call,Intercept Network Call,"Network spy/stub, JSON parsing","network,playwright-utils",knowledge/intercept-network-call.md
|
||||
recurse,Recurse Polling,"Async polling, condition waiting","polling,playwright-utils",knowledge/recurse.md
|
||||
log,Log Utility,"Report logging, structured output","logging,playwright-utils",knowledge/log.md
|
||||
file-utils,File Utilities,"CSV/XLSX/PDF/ZIP validation","files,playwright-utils",knowledge/file-utils.md
|
||||
burn-in,Burn-in Runner,"Smart test selection, git diff","ci,playwright-utils",knowledge/burn-in.md
|
||||
network-error-monitor,Network Error Monitor,"HTTP 4xx/5xx detection","monitoring,playwright-utils",knowledge/network-error-monitor.md
|
||||
fixtures-composition,Fixtures Composition,"mergeTests composition patterns","fixtures,playwright-utils",knowledge/fixtures-composition.md
|
||||
|
|
|
|||
|
|
|
@ -35,11 +35,6 @@ After discovery completes, the following content variables will be available:
|
|||
<action>Check status of "product-brief" workflow</action>
|
||||
<action>Get project_level from YAML metadata</action>
|
||||
<action>Find first non-completed workflow (next expected workflow)</action>
|
||||
|
||||
<check if="project_level < 2">
|
||||
<output>**Note: Level {{project_level}} Project**
|
||||
|
||||
Product Brief is most valuable for Level 2+ projects, but can help clarify vision for any project.</output>
|
||||
</check>
|
||||
|
||||
<check if="product-brief status is file path (already completed)">
|
||||
|
|
@ -71,18 +66,14 @@ Product Brief is most valuable for Level 2+ projects, but can help clarify visio
|
|||
<step n="1" goal="Begin the journey and understand context">
|
||||
<action>Welcome {user_name} warmly in {communication_language}
|
||||
|
||||
Adapt your tone to {user_skill_level}:
|
||||
|
||||
- Expert: "Let's define your product vision. What are you building?"
|
||||
- Intermediate: "I'm here to help shape your product vision. Tell me about your idea."
|
||||
- Beginner: "Hi! I'm going to help you figure out exactly what you want to build. Let's start with your idea - what got you excited about this?"
|
||||
|
||||
Start with open exploration:
|
||||
|
||||
- What sparked this idea?
|
||||
- What are you hoping to build?
|
||||
- Who is this for - yourself, a business, users you know?
|
||||
|
||||
- "I'm here to help shape your product vision. Tell me about your idea and what got you excited about this? The more detail you can give me here the better I can help you further craft the idea."
|
||||
|
||||
CRITICAL: Listen for context clues that reveal their situation:
|
||||
|
||||
- Personal/hobby project (fun, learning, small audience)
|
||||
|
|
@ -99,7 +90,7 @@ Based on their initial response, sense:
|
|||
- If they have existing materials to share
|
||||
- Their confidence level with the domain</action>
|
||||
|
||||
<ask>What's the project name, and what got you excited about building this?</ask>
|
||||
<ask if="user has not given the project name already">What's the project name?</ask>
|
||||
|
||||
<action>From even this first exchange, create initial document sections</action>
|
||||
<template-output>project_name</template-output>
|
||||
|
|
|
|||
|
|
@ -1,16 +1,17 @@
|
|||
# PRD Workflow - Intent-Driven Product Planning
|
||||
|
||||
<critical>The workflow execution engine is governed by: {project-root}/{bmad_folder}/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
<critical>This workflow uses INTENT-DRIVEN PLANNING - adapt organically to product type and context</critical>
|
||||
<critical>Communicate all responses in {communication_language} and adapt deeply to {user_skill_level}</critical>
|
||||
<critical>Generate all documents in {document_output_language}</critical>
|
||||
<critical>LIVING DOCUMENT: Write to PRD.md continuously as you discover - never wait until the end</critical>
|
||||
<critical>GUIDING PRINCIPLE: Identify what makes this product special and ensure it's reflected throughout the PRD</critical>
|
||||
<critical>Input documents specified in workflow.yaml input_file_patterns - workflow engine handles fuzzy matching, whole vs sharded document discovery automatically</critical>
|
||||
<critical>⚠️ ABSOLUTELY NO TIME ESTIMATES - NEVER mention hours, days, weeks, months, or ANY time-based predictions. AI has fundamentally changed development speed - what once took teams weeks/months can now be done by one person in hours. DO NOT give ANY time estimates whatsoever.</critical>
|
||||
<critical>⚠️ CHECKPOINT PROTOCOL: After EVERY <template-output> tag, you MUST follow workflow.xml substep 2c: SAVE content to file immediately → SHOW checkpoint separator (━━━━━━━━━━━━━━━━━━━━━━━) → DISPLAY generated content → PRESENT options [a]Advanced Elicitation/[c]Continue/[p]Party-Mode/[y]YOLO → WAIT for user response. Never batch saves or skip checkpoints.</critical>
|
||||
|
||||
<critical-rules>
|
||||
- <critical>The workflow execution engine is governed by: {project-root}/{bmad_folder}/core/tasks/workflow.xml</critical>
|
||||
- <critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
- <critical>This workflow uses INTENT-DRIVEN PLANNING - adapt organically to product type and context</critical>
|
||||
- <critical>Communicate all responses in {communication_language} and adapt deeply to {user_skill_level}</critical>
|
||||
- <critical>Generate all documents in {document_output_language}</critical>
|
||||
- <critical>LIVING DOCUMENT: Write to PRD.md continuously as you discover - never wait until the end</critical>
|
||||
- <critical>GUIDING PRINCIPLE: Identify what makes this product special and ensure it's reflected throughout the PRD</critical>
|
||||
- <critical>Input documents specified in workflow.yaml input_file_patterns - workflow engine handles fuzzy matching, whole vs sharded document discovery automatically</critical>
|
||||
- <critical>⚠️ CHECKPOINT PROTOCOL: After EVERY template-output tag, you MUST follow workflow.xml substep 2c. </critical>
|
||||
- <critical>YOU ARE FACILITATING A CONVERSATION With a user to produce a final document step by step. The whole process is meant to be collaborative helping the user flesh out their ideas</critical>
|
||||
</critical-rules>
|
||||
<workflow>
|
||||
|
||||
<step n="0" goal="Validate workflow readiness" tag="workflow-status">
|
||||
|
|
@ -24,13 +25,7 @@
|
|||
<action>Check status of "prd" workflow</action>
|
||||
<action>Get project_track from YAML metadata</action>
|
||||
<action>Find first non-completed workflow (next expected workflow)</action>
|
||||
|
||||
<check if="project_track is Quick Flow">
|
||||
<output>**Quick Flow Track - Redirecting**
|
||||
|
||||
Quick Flow projects use tech-spec workflow for implementation-focused planning.
|
||||
PRD is for BMad Method and Enterprise Method tracks that need comprehensive requirements.</output>
|
||||
<action>Exit and suggest tech-spec workflow</action>
|
||||
</output>
|
||||
</check>
|
||||
|
||||
<check if="prd status is file path (already completed)">
|
||||
|
|
@ -47,16 +42,16 @@ PRD is for BMad Method and Enterprise Method tracks that need comprehensive requ
|
|||
</step>
|
||||
|
||||
<step n="0.5" goal="Discover and load input documents">
|
||||
<invoke-protocol name="discover_inputs" />
|
||||
<note>After discovery, these content variables are available: {product_brief_content}, {research_content}, {document_project_content}</note>
|
||||
<invoke-protocol name="discover_inputs" />
|
||||
<note>After discovery, these content variables are available: {product_brief_content}, {research_content}, {document_project_content}</note>
|
||||
</step>
|
||||
|
||||
<step n="1" goal="Discovery - Project, Domain, and Vision">
|
||||
<action>Welcome {user_name} and begin comprehensive discovery, and then start to GATHER ALL CONTEXT:
|
||||
1. Check workflow-status.yaml for project_context (if exists)
|
||||
2. Review loaded content: {product_brief_content}, {research_content}, {document_project_content} (auto-loaded in Step 0.5)
|
||||
3. Detect project type AND domain complexity using data-driven classification
|
||||
</action>
|
||||
<action>Welcome {user_name} and begin comprehensive discovery, and then start to GATHER ALL CONTEXT:
|
||||
1. Check workflow-status.yaml for project_context (if exists)
|
||||
2. Review loaded content: {product_brief_content}, {research_content}, {document_project_content} (auto-loaded in Step 0.5)
|
||||
3. Detect project type AND domain complexity using data-driven classification
|
||||
</action>
|
||||
|
||||
<action>Load classification data files COMPLETELY:
|
||||
|
||||
|
|
@ -145,40 +140,63 @@ Capture this differentiator - it becomes a thread connecting throughout the PRD.
|
|||
<template-output>product_brief_path</template-output>
|
||||
<template-output>domain_brief_path</template-output>
|
||||
<template-output>research_documents</template-output>
|
||||
|
||||
<critical>You MUST display the checkpoint display and HALT for user input, unless the user enabled YOLO mode.</critical>
|
||||
|
||||
<checkpoint title="Discovery Complete">
|
||||
<display>[a] Advanced Elicitation [c] Continue [p] Party Mode</display>
|
||||
<checkpoint-handlers>
|
||||
<on-select key="a">Load and execute {advanced_elicitation} task, then return to this checkpoint</on-select>
|
||||
<on-select key="p">Load and execute {party_mode}, then return to this checkpoint</on-select>
|
||||
<on-select key="c"> Ensure all content is written to {default_output_file}</on-select>
|
||||
</checkpoint-handlers>
|
||||
</checkpoint>
|
||||
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Success Definition">
|
||||
<action>Define what winning looks like for THIS specific product
|
||||
<action>Define what winning looks like for THIS specific product
|
||||
|
||||
INTENT: Meaningful success criteria, not generic metrics
|
||||
**User Success First**
|
||||
Ask:
|
||||
|
||||
Adapt to context:
|
||||
- "What would make a user say 'this was worth it'?"
|
||||
- "What moment makes them tell a friend about this?"
|
||||
- "After [key journey], what outcome are they walking away with?"
|
||||
|
||||
**Business Success Second**
|
||||
Ask:
|
||||
|
||||
- "What does success look like for your business at 3 months? 12 months?"
|
||||
- "Is this about revenue, user growth, engagement, something else?"
|
||||
- "What metric would make you say 'this is working'?"
|
||||
|
||||
**Make It Specific**
|
||||
Challenge vague metrics:
|
||||
|
||||
- NOT: "10,000 users" → "What kind of users? Doing what?"
|
||||
- NOT: "99.9% uptime" → "What's the real concern - data loss? Failed payments?"
|
||||
- NOT: "Fast" → "How fast, and what specifically needs to be fast?"
|
||||
|
||||
Ask: "Can we measure this? How would you know if you hit it?"
|
||||
|
||||
**Connect to Differentiator**
|
||||
Bring it back to the core:
|
||||
"So success means users experience [differentiator] and achieve [outcome] - does that capture it?"
|
||||
|
||||
Adapt success criteria to context:
|
||||
|
||||
- Consumer: User love, engagement, retention
|
||||
- B2B: ROI, efficiency, adoption
|
||||
- Developer tools: Developer experience, community
|
||||
- Regulated: Compliance, safety, validation
|
||||
- Regulated: Compliance, safety, validation</action>
|
||||
|
||||
Make it specific:
|
||||
<template-output>success_criteria</template-output>
|
||||
<check if="business focus">
|
||||
<template-output>business_metrics</template-output>
|
||||
</check>
|
||||
|
||||
- NOT: "10,000 users"
|
||||
- BUT: "100 power users who rely on it daily"
|
||||
|
||||
- NOT: "99.9% uptime"
|
||||
- BUT: "Zero data loss during critical operations"
|
||||
|
||||
Connect to what makes the product special:
|
||||
|
||||
- "Success means users experience [key value moment] and achieve [desired outcome]"</action>
|
||||
|
||||
<template-output>success_criteria</template-output>
|
||||
<check if="business focus">
|
||||
<template-output>business_metrics</template-output>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Scope Definition">
|
||||
<action>Smart scope negotiation - find the sweet spot
|
||||
<action>Smart scope negotiation - find the sweet spot based on success:
|
||||
|
||||
The Scoping Game:
|
||||
|
||||
|
|
@ -196,14 +214,143 @@ For complex domains:
|
|||
- Include compliance minimums in MVP
|
||||
- Note regulatory gates between phases</action>
|
||||
|
||||
<template-output>mvp_scope</template-output>
|
||||
<template-output>growth_features</template-output>
|
||||
<template-output>vision_features</template-output>
|
||||
<template-output>mvp_scope</template-output>
|
||||
<template-output>growth_features</template-output>
|
||||
<template-output>vision_features</template-output>
|
||||
<critical>You MUST display the checkpoint display and HALT for user input, unless the user enabled YOLO mode.</critical>
|
||||
|
||||
<checkpoint title="Success and Scope Complete">
|
||||
<display>[a] Advanced Elicitation [c] Continue [p] Party Mode [u] User Interview</display>
|
||||
<checkpoint-handlers>
|
||||
<on-select key="a">Load and execute {advanced_elicitation} task, then return to this checkpoint</on-select>
|
||||
<on-select key="p">Load and execute {party_mode}, then return to this checkpoint</on-select>
|
||||
<on-select key="u">Load and execute {party_mode} but the party will include the users from the User Journeys section. The discussion can start with each user saying hello and giving their initial thoughts, then return to this checkpoint.</on-select>
|
||||
<on-select key="c"> Ensure all content is written to {default_output_file}</on-select>
|
||||
</checkpoint-handlers>
|
||||
</checkpoint>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="User Journeys">
|
||||
<critical>This step covers ALL user types - end users, admins, moderators, support, API consumers. If a human interacts with the system, there's a journey. No journey = no FRs = doesn't exist.</critical>
|
||||
|
||||
<action>Identify all user types first:
|
||||
|
||||
Before exploring journeys, ask the user:
|
||||
|
||||
"Besides the primary user, who else interacts with this system?"
|
||||
|
||||
Consider these common user types:
|
||||
|
||||
- End users (primary)
|
||||
- Admins - manage users, settings, content
|
||||
- Moderators - review flagged content, enforce rules
|
||||
- Support staff - help users, investigate issues
|
||||
- API consumers - if dev tool or platform
|
||||
- Internal ops - analytics, monitoring, billing
|
||||
|
||||
List all user types before proceeding.</action>
|
||||
|
||||
<action>Create detailed narrative user journeys with personas:
|
||||
|
||||
For each user type identified, create rich, detailed journeys following this pattern:
|
||||
|
||||
**Journey Creation Process:**
|
||||
|
||||
**1. Develop the Persona**
|
||||
Give them a name, context, and motivation:
|
||||
|
||||
- **Name**: Realistic name (Maria, Marcus, Jordan, etc.)
|
||||
- **Backstory**: Who they are, what they want, why they need this
|
||||
- **Motivation**: Core problem they're trying to solve
|
||||
- **Emotional state**: How they feel about solving this problem
|
||||
|
||||
Example: "Maria is a working parent who wants to cook healthy meals for her family but struggles with meal planning and limited evening time. She's tired of the same rotating dishes and wants inspiration that fits her schedule."
|
||||
|
||||
**2. Map Their Complete Journey**
|
||||
Document their end-to-end experience:
|
||||
|
||||
- **Entry point**: How they discover and access the product
|
||||
- **Key steps**: Each touchpoint in sequence (use arrows →)
|
||||
- **Critical actions**: What they do at each step
|
||||
- **Decision points**: Choices they make
|
||||
- **Success moment**: Where they achieve their goal
|
||||
- **Exit point**: How the journey concludes
|
||||
|
||||
**3. Use This Exact Format for Each Journey:**
|
||||
|
||||
**Journey [Number]: [Persona Name] - [Journey Theme]**
|
||||
[Persona description with backstory and motivation]
|
||||
|
||||
- [Entry point] → [step 1] → [step 2] → [step 3] → [critical moment] → [step N] → [completion]
|
||||
|
||||
**4. Explore Journey Details Conversationally**
|
||||
For each journey, ask:
|
||||
|
||||
- "What happens at each step specifically?"
|
||||
- "What could go wrong here? What's the recovery path?"
|
||||
- "What information do they need to see/hear?"
|
||||
- "What's their emotional state at each point?"
|
||||
- "Where does this journey succeed or fail?"
|
||||
|
||||
**5. Connect Journeys to Requirements**
|
||||
After each journey, explicitly state:
|
||||
"This journey reveals requirements for:"
|
||||
|
||||
- List specific capability areas (e.g., onboarding, meal planning, admin dashboard)
|
||||
- Help user see how different journeys create different feature sets
|
||||
|
||||
**Example Output Structure:**
|
||||
|
||||
**Journey 1: Maria - First Recipe Discovery**
|
||||
Maria is a working parent who wants to cook healthy meals for her family but struggles with meal planning...
|
||||
|
||||
- Discovers via search → lands on recipe page → signs up → completes preferences → browses recommendations → saves first recipe → adds to meal plan
|
||||
|
||||
**Journey 2: [Persona] - [Theme]**
|
||||
[Persona description with context]
|
||||
|
||||
- [Step sequence] → [critical moment] → [completion]
|
||||
|
||||
**Journey 3: [Admin/Support Persona] - [Admin Theme]**
|
||||
[Admin persona description]
|
||||
|
||||
- [Admin workflow] → [decision point] → [system outcome]</action>
|
||||
|
||||
<action>Guide journey exploration to cover all key areas:
|
||||
|
||||
**Aim for 3-4 journeys minimum:**
|
||||
|
||||
1. Primary user - happy path (core experience)
|
||||
2. Primary user - edge case (different goal, error recovery)
|
||||
3. Secondary user (admin, moderator, support, etc.)
|
||||
4. API consumer (if applicable)
|
||||
|
||||
**Ask after each:** "Another journey? We should cover [suggest uncovered user type]"
|
||||
|
||||
**Use journeys to reveal requirements:**
|
||||
Each journey reveals different capabilities needed:
|
||||
|
||||
- Admin journey → admin dashboard, user management
|
||||
- Support journey → ticket system, user lookup tools
|
||||
- API journey → documentation, rate limiting, keys
|
||||
- Error recovery → retry mechanisms, help content</action>
|
||||
|
||||
<template-output>user_journeys</template-output>
|
||||
<critical>You MUST display the checkpoint display and HALT for user input, unless the user enabled YOLO mode.</critical>
|
||||
|
||||
<checkpoint title="User Journeys Complete">
|
||||
<display>[a] Advanced Elicitation [c] Continue [p] Party Mode [u] User Interview</display>
|
||||
<checkpoint-handlers>
|
||||
<on-select key="a">Load and execute {advanced_elicitation} task, then return to this checkpoint</on-select>
|
||||
<on-select key="p">Load and execute {party_mode}, then return to this checkpoint</on-select>
|
||||
<on-select key="u">Load and execute {party_mode} but the party will include the users from the User Journeys section. The discussion can start with each user saying hello and giving their initial thoughts, then return to this checkpoint.</on-select>
|
||||
<on-select key="c"> Ensure all content is written to {default_output_file}</on-select>
|
||||
</checkpoint-handlers>
|
||||
</checkpoint>
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Domain-Specific Exploration" optional="true">
|
||||
<critical>This step is DATA-DRIVEN using domain_complexity_data CSV loaded in Step 1</critical>
|
||||
<action>Execute only if complexity_level = "high" OR domain-brief exists</action>
|
||||
<step n="5" goal="Domain-Specific Exploration" optional="true">
|
||||
<critical>This step is DATA-DRIVEN using {domain_complexity_data} CSV loaded in Step 1</critical>
|
||||
|
||||
<action>Retrieve domain-specific configuration from CSV:
|
||||
|
||||
|
|
@ -253,7 +400,6 @@ These inform:
|
|||
- What validation is required
|
||||
</action>
|
||||
|
||||
<check if="complexity_level == 'high'">
|
||||
<template-output>domain_considerations</template-output>
|
||||
|
||||
<action>Generate domain-specific special sections if defined:
|
||||
|
|
@ -268,9 +414,20 @@ Example mappings from CSV:
|
|||
- "compliance_matrix" → <template-output>compliance_matrix</template-output>
|
||||
</action>
|
||||
</check>
|
||||
<critical>You MUST display the checkpoint display and HALT for user input, unless the user enabled YOLO mode.</critical>
|
||||
|
||||
<checkpoint title="Domain Exploration Complete">
|
||||
<display>[a] Advanced Elicitation [c] Continue [p] Party Mode [u] User Interview</display>
|
||||
<checkpoint-handlers>
|
||||
<on-select key="a">Load and execute {advanced_elicitation} task, then return to this checkpoint</on-select>
|
||||
<on-select key="p">Load and execute {party_mode}, then return to this checkpoint</on-select>
|
||||
<on-select key="u">Load and execute {party_mode} but the party will include the users from the User Journeys section. The discussion can start with each user saying hello and giving their initial thoughts, then return to this checkpoint.</on-select>
|
||||
<on-select key="c"> Ensure all content is written to {default_output_file}</on-select>
|
||||
</checkpoint-handlers>
|
||||
</checkpoint>
|
||||
</step>
|
||||
|
||||
<step n="5" goal="Innovation Discovery" optional="true">
|
||||
<step n="6" goal="Innovation Discovery" optional="true">
|
||||
<critical>This step uses innovation_signals from project_types_data CSV loaded in Step 1</critical>
|
||||
|
||||
<action>Check for innovation in this product:
|
||||
|
|
@ -323,9 +480,20 @@ Use web_search_triggers from project_types_data CSV if relevant:
|
|||
<template-output>innovation_patterns</template-output>
|
||||
<template-output>validation_approach</template-output>
|
||||
</check>
|
||||
<critical>You MUST display the checkpoint display and HALT for user input, unless the user enabled YOLO mode.</critical>
|
||||
|
||||
<checkpoint title="Innovation Discovery Complete">
|
||||
<display>[a] Advanced Elicitation [c] Continue [p] Party Mode [u] User Interview</display>
|
||||
<checkpoint-handlers>
|
||||
<on-select key="a">Load and execute {advanced_elicitation} task, then return to this checkpoint</on-select>
|
||||
<on-select key="p">Load and execute {party_mode}, then return to this checkpoint</on-select>
|
||||
<on-select key="u">Load and execute {party_mode} but the party will include the users from the User Journeys section. The discussion can start with each user saying hello and giving their initial thoughts, then return to this checkpoint.</on-select>
|
||||
<on-select key="c"> Ensure all content is written to {default_output_file}</on-select>
|
||||
</checkpoint-handlers>
|
||||
</checkpoint>
|
||||
</step>
|
||||
|
||||
<step n="6" goal="Project-Specific Deep Dive">
|
||||
<step n="7" goal="Project-Specific Deep Dive">
|
||||
<critical>This step is DATA-DRIVEN using project_types_data CSV loaded in Step 1</critical>
|
||||
|
||||
<action>Retrieve project-specific configuration from CSV:
|
||||
|
|
@ -421,30 +589,20 @@ For required_sections that DON'T have matching template variables:
|
|||
|
||||
This hybrid approach balances template structure with CSV-driven flexibility.
|
||||
</note>
|
||||
<critical>You MUST display the checkpoint display and HALT for user input, unless the user enabled YOLO mode.</critical>
|
||||
|
||||
<checkpoint title="Project-Specific Deep Dive Complete">
|
||||
<display>[a] Advanced Elicitation [c] Continue [p] Party Mode [u] User Interview</display>
|
||||
<checkpoint-handlers>
|
||||
<on-select key="a">Load and execute {advanced_elicitation} task, then return to this checkpoint</on-select>
|
||||
<on-select key="p">Load and execute {party_mode}, then return to this checkpoint</on-select>
|
||||
<on-select key="u">Load and execute {party_mode} but the party will include the users from the User Journeys section. The discussion can start with each user saying hello and giving their initial thoughts, then return to this checkpoint.</on-select>
|
||||
<on-select key="c"> Ensure all content is written to {default_output_file}</on-select>
|
||||
</checkpoint-handlers>
|
||||
</checkpoint>
|
||||
</step>
|
||||
|
||||
<step n="7" goal="UX Principles" if="project has UI or UX">
|
||||
<action>Only if product has a UI
|
||||
|
||||
Light touch on UX - not full design:
|
||||
|
||||
- Visual personality
|
||||
- Key interaction patterns
|
||||
- Critical user flows
|
||||
|
||||
"How should this feel to use?"
|
||||
"What's the vibe - professional, playful, minimal?"
|
||||
|
||||
Connect UX to product vision:
|
||||
"The UI should reinforce [core value proposition] through [design approach]"</action>
|
||||
|
||||
<check if="has UI">
|
||||
<template-output>ux_principles</template-output>
|
||||
<template-output>key_interactions</template-output>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="8" goal="Functional Requirements Synthesis">
|
||||
<step n="9" goal="Functional Requirements Synthesis">
|
||||
<critical>This section is THE CAPABILITY CONTRACT for all downstream work</critical>
|
||||
<critical>UX designers will ONLY design what's listed here</critical>
|
||||
<critical>Architects will ONLY support what's listed here</critical>
|
||||
|
|
@ -457,12 +615,18 @@ Connect UX to product vision:
|
|||
FRs define WHAT capabilities the product must have. They are the complete inventory
|
||||
of user-facing and system capabilities that deliver the product vision.
|
||||
|
||||
**Notice:**
|
||||
✅ Each FR is a testable capability
|
||||
✅ Each FR is implementation-agnostic (could be built many ways)
|
||||
✅ Each FR specifies WHO and WHAT, not HOW
|
||||
✅ No UI details, no performance numbers, no technology choices
|
||||
✅ Comprehensive coverage of capability areas
|
||||
|
||||
**How They Will Be Used:**
|
||||
|
||||
1. UX Designer reads FRs → designs interactions for each capability
|
||||
2. Architect reads FRs → designs systems to support each capability
|
||||
3. PM reads FRs → creates epics and stories to implement each capability
|
||||
4. Dev Agent reads assembled context → implements stories based on FRs
|
||||
|
||||
**Critical Property - COMPLETENESS:**
|
||||
Every capability discussed in vision, scope, domain requirements, and project-specific
|
||||
|
|
@ -475,15 +639,6 @@ specific UI/UX details. Those come later from UX and Architecture.
|
|||
|
||||
<action>Transform everything discovered into comprehensive functional requirements:
|
||||
|
||||
**Coverage - Pull from EVERYWHERE:**
|
||||
|
||||
- Core features from MVP scope → FRs
|
||||
- Growth features → FRs (marked as post-MVP if needed)
|
||||
- Domain-mandated features → FRs
|
||||
- Project-type specific needs → FRs
|
||||
- Innovation requirements → FRs
|
||||
- Anti-patterns (explicitly NOT doing) → Note in FR section if needed
|
||||
|
||||
**Organization - Group by CAPABILITY AREA:**
|
||||
Don't organize by technology or layer. Group by what users/system can DO:
|
||||
|
||||
|
|
@ -515,45 +670,23 @@ The second example belongs in Epic Breakdown, not PRD.
|
|||
- FR1: Users can create accounts with email or social authentication
|
||||
- FR2: Users can log in securely and maintain sessions across devices
|
||||
- FR3: Users can reset passwords via email verification
|
||||
- FR4: Users can update profile information and preferences
|
||||
- FR5: Administrators can manage user roles and permissions
|
||||
|
||||
**Content Management:**
|
||||
|
||||
- FR6: Users can create, edit, and delete content items
|
||||
- FR7: Users can organize content with tags and categories
|
||||
- FR8: Users can search content by keyword, tag, or date range
|
||||
- FR9: Users can export content in multiple formats
|
||||
- ...
|
||||
|
||||
**Data Ownership (local-first products):**
|
||||
|
||||
- FR10: All user data stored locally on user's device
|
||||
- FR11: Users can export complete data at any time
|
||||
- FR12: Users can import previously exported data
|
||||
- FR13: System monitors storage usage and warns before limits
|
||||
- ...
|
||||
|
||||
**Collaboration:**
|
||||
|
||||
- FR14: Users can share content with specific users or teams
|
||||
- FR15: Users can comment on shared content
|
||||
- FR16: Users can track content change history
|
||||
- FR17: Users receive notifications for relevant updates
|
||||
- ...
|
||||
|
||||
**Notice:**
|
||||
✅ Each FR is a testable capability
|
||||
✅ Each FR is implementation-agnostic (could be built many ways)
|
||||
✅ Each FR specifies WHO and WHAT, not HOW
|
||||
✅ No UI details, no performance numbers, no technology choices
|
||||
✅ Comprehensive coverage of capability areas
|
||||
</example>
|
||||
|
||||
<action>Generate the complete FR list by systematically extracting capabilities:
|
||||
|
||||
1. MVP scope → extract all capabilities → write as FRs
|
||||
2. Growth features → extract capabilities → write as FRs (note if post-MVP)
|
||||
3. Domain requirements → extract mandatory capabilities → write as FRs
|
||||
4. Project-type specifics → extract type-specific capabilities → write as FRs
|
||||
5. Innovation patterns → extract novel capabilities → write as FRs
|
||||
<action>Generate the complete FR list by systematically extracting capabilities from all discussed so far for all that is in scope.
|
||||
|
||||
Organize FRs by logical capability groups (5-8 groups typically).
|
||||
Number sequentially across all groups (FR1, FR2... FR47).
|
||||
|
|
@ -586,9 +719,20 @@ COMPLETENESS GATE: Review your FR list against the entire PRD written so far and
|
|||
</action>
|
||||
|
||||
<template-output>functional_requirements_complete</template-output>
|
||||
<critical>You MUST display the checkpoint display and HALT for user input, unless the user enabled YOLO mode.</critical>
|
||||
|
||||
<checkpoint title="Functional Requirements Complete">
|
||||
<display>[a] Advanced Elicitation [c] Continue [p] Party Mode [u] User Interview</display>
|
||||
<checkpoint-handlers>
|
||||
<on-select key="a">Load and execute {advanced_elicitation} task, then return to this checkpoint</on-select>
|
||||
<on-select key="p">Load and execute {party_mode}, then return to this checkpoint</on-select>
|
||||
<on-select key="u">Load and execute {party_mode} but the party will include the users from the User Journeys section. The discussion can start with each user saying hello and giving their initial thoughts, then return to this checkpoint.</on-select>
|
||||
<on-select key="c"> Ensure all content is written to {default_output_file}</on-select>
|
||||
</checkpoint-handlers>
|
||||
</checkpoint>
|
||||
</step>
|
||||
|
||||
<step n="9" goal="Non-Functional Requirements Discovery">
|
||||
<step n="10" goal="Non-Functional Requirements Discovery">
|
||||
<action>Only document NFRs that matter for THIS product
|
||||
|
||||
Performance: Only if user-facing impact
|
||||
|
|
@ -623,26 +767,26 @@ Skip categories that don't apply!</action>
|
|||
</check>
|
||||
</step>
|
||||
|
||||
<step n="10" goal="Complete PRD and determine next steps">
|
||||
<action>Quick review of captured requirements:
|
||||
<step n="11" goal="Complete PRD and determine next steps">
|
||||
<action>Announce that the PRD is complete and the unused sections will be cleaned up from the output document!</action>
|
||||
|
||||
"We've captured:
|
||||
<action>CRITICAL: Clean up the PRD document before validation:
|
||||
|
||||
- {{fr_count}} functional requirements
|
||||
- {{nfr_count}} non-functional requirements
|
||||
- MVP scope defined
|
||||
{{if domain_complexity == 'high'}}
|
||||
- Domain-specific requirements addressed
|
||||
{{/if}}
|
||||
{{if innovation_detected}}
|
||||
- Innovation patterns documented
|
||||
{{/if}}
|
||||
Before running validation, you MUST remove any empty sections from the generated PRD:
|
||||
|
||||
Your PRD is complete!"
|
||||
</action>
|
||||
1. **Scan for empty sections** - Look for sections with no meaningful content (just whitespace or placeholder text)
|
||||
2. **Remove entire empty sections** - Delete the section header and any empty content
|
||||
3. **Keep relevant sections only** - If a project type doesn't need certain sections (e.g., API specs for a mobile app), remove those sections entirely
|
||||
4. **Ensure document flows logically** - The final PRD should only contain sections with actual content
|
||||
|
||||
<template-output>prd_summary</template-output>
|
||||
<template-output>product_value_summary</template-output>
|
||||
**This cleanup step is essential because:**
|
||||
|
||||
- The template includes all possible sections for maximum flexibility
|
||||
- Not all sections apply to every project type
|
||||
- Empty sections make the PRD look incomplete and unprofessional
|
||||
- Validation expects meaningful content in all included sections
|
||||
|
||||
**Example:** If building a CLI tool, you'd typically remove: API Specification, Platform Support, Device Capabilities, Multi-Tenancy Architecture, User Experience Principles sections.</action>
|
||||
|
||||
<check if="standalone_mode != true">
|
||||
<action>Load the FULL file: {status_file}</action>
|
||||
|
|
@ -660,43 +804,16 @@ Your PRD is complete!"
|
|||
|
||||
<output>**✅ PRD Complete, {user_name}!**
|
||||
|
||||
**Created:** PRD.md with {{fr_count}} FRs and NFRs
|
||||
|
||||
**Next Steps:**
|
||||
|
||||
<check if="standalone_mode != true">
|
||||
Based on your {{project_track}} workflow path, you can:
|
||||
|
||||
**Option A: Create Epic Breakdown Now** (Optional)
|
||||
`workflow create-epics-and-stories`
|
||||
|
||||
- Creates basic epic structure from PRD
|
||||
- Can be enhanced later with UX/Architecture context
|
||||
|
||||
<check if="UI_exists">
|
||||
**Option B: UX Design First** (Recommended if UI)
|
||||
`workflow create-ux-design`
|
||||
- Design user experience and interactions
|
||||
- Epic breakdown can incorporate UX details later
|
||||
</check>
|
||||
|
||||
**Option C: Skip to Architecture**
|
||||
`workflow create-architecture`
|
||||
|
||||
- Define technical decisions
|
||||
- Epic breakdown created after with full context
|
||||
|
||||
**Recommendation:** {{if UI_exists}}Do UX Design first, then Architecture, then create epics with full context{{else}}Go straight to Architecture, then create epics{{/if}}
|
||||
</check>
|
||||
|
||||
<check if="standalone_mode == true">
|
||||
**Typical next workflows:**
|
||||
|
||||
1. `workflow create-ux-design` - UX Design (if UI exists)
|
||||
2. `workflow create-architecture` - Technical architecture
|
||||
3. `workflow create-epics-and-stories` - Epic breakdown
|
||||
|
||||
**Note:** Epics can be created at any point but have richer detail when created after UX/Architecture.
|
||||
</check>
|
||||
|
||||
</output>
|
||||
</step>
|
||||
|
||||
|
|
|
|||
|
|
@ -16,6 +16,12 @@
|
|||
|
||||
---
|
||||
|
||||
## User Journeys
|
||||
|
||||
{{user_journeys}}
|
||||
|
||||
---
|
||||
|
||||
## Project Classification
|
||||
|
||||
**Technical Type:** {{project_type}}
|
||||
|
|
@ -24,12 +30,9 @@
|
|||
|
||||
{{project_classification}}
|
||||
|
||||
{{#if domain_context_summary}}
|
||||
|
||||
### Domain Context
|
||||
|
||||
{{domain_context_summary}}
|
||||
{{/if}}
|
||||
|
||||
---
|
||||
|
||||
|
|
@ -37,13 +40,6 @@
|
|||
|
||||
{{success_criteria}}
|
||||
|
||||
{{#if business_metrics}}
|
||||
|
||||
### Business Metrics
|
||||
|
||||
{{business_metrics}}
|
||||
{{/if}}
|
||||
|
||||
---
|
||||
|
||||
## Product Scope
|
||||
|
|
@ -62,19 +58,14 @@
|
|||
|
||||
---
|
||||
|
||||
{{#if domain_considerations}}
|
||||
|
||||
## Domain-Specific Requirements
|
||||
|
||||
{{domain_considerations}}
|
||||
|
||||
This section shapes all functional and non-functional requirements below.
|
||||
{{/if}}
|
||||
|
||||
---
|
||||
|
||||
{{#if innovation_patterns}}
|
||||
|
||||
## Innovation & Novel Patterns
|
||||
|
||||
{{innovation_patterns}}
|
||||
|
|
@ -82,63 +73,39 @@ This section shapes all functional and non-functional requirements below.
|
|||
### Validation Approach
|
||||
|
||||
{{validation_approach}}
|
||||
{{/if}}
|
||||
|
||||
---
|
||||
|
||||
{{#if project_type_requirements}}
|
||||
|
||||
## {{project_type}} Specific Requirements
|
||||
|
||||
{{project_type_requirements}}
|
||||
|
||||
{{#if endpoint_specification}}
|
||||
|
||||
### API Specification
|
||||
|
||||
{{endpoint_specification}}
|
||||
{{/if}}
|
||||
|
||||
{{#if authentication_model}}
|
||||
|
||||
### Authentication & Authorization
|
||||
|
||||
{{authentication_model}}
|
||||
{{/if}}
|
||||
|
||||
{{#if platform_requirements}}
|
||||
|
||||
### Platform Support
|
||||
|
||||
{{platform_requirements}}
|
||||
{{/if}}
|
||||
|
||||
{{#if device_features}}
|
||||
|
||||
### Device Capabilities
|
||||
|
||||
{{device_features}}
|
||||
{{/if}}
|
||||
|
||||
{{#if tenant_model}}
|
||||
|
||||
### Multi-Tenancy Architecture
|
||||
|
||||
{{tenant_model}}
|
||||
{{/if}}
|
||||
|
||||
{{#if permission_matrix}}
|
||||
|
||||
### Permissions & Roles
|
||||
|
||||
{{permission_matrix}}
|
||||
{{/if}}
|
||||
{{/if}}
|
||||
|
||||
---
|
||||
|
||||
{{#if ux_principles}}
|
||||
|
||||
## User Experience Principles
|
||||
|
||||
{{ux_principles}}
|
||||
|
|
@ -146,7 +113,6 @@ This section shapes all functional and non-functional requirements below.
|
|||
### Key Interactions
|
||||
|
||||
{{key_interactions}}
|
||||
{{/if}}
|
||||
|
||||
---
|
||||
|
||||
|
|
@ -158,44 +124,25 @@ This section shapes all functional and non-functional requirements below.
|
|||
|
||||
## Non-Functional Requirements
|
||||
|
||||
{{#if performance_requirements}}
|
||||
|
||||
### Performance
|
||||
|
||||
{{performance_requirements}}
|
||||
{{/if}}
|
||||
|
||||
{{#if security_requirements}}
|
||||
|
||||
### Security
|
||||
|
||||
{{security_requirements}}
|
||||
{{/if}}
|
||||
|
||||
{{#if scalability_requirements}}
|
||||
|
||||
### Scalability
|
||||
|
||||
{{scalability_requirements}}
|
||||
{{/if}}
|
||||
|
||||
{{#if accessibility_requirements}}
|
||||
|
||||
### Accessibility
|
||||
|
||||
{{accessibility_requirements}}
|
||||
{{/if}}
|
||||
|
||||
{{#if integration_requirements}}
|
||||
|
||||
### Integration
|
||||
|
||||
{{integration_requirements}}
|
||||
{{/if}}
|
||||
|
||||
{{#if no_nfrs}}
|
||||
_No specific non-functional requirements identified for this project type._
|
||||
{{/if}}
|
||||
|
||||
---
|
||||
|
||||
|
|
|
|||
|
|
@ -24,6 +24,10 @@ prd_template: "{installed_path}/prd-template.md"
|
|||
project_types_data: "{installed_path}/project-types.csv"
|
||||
domain_complexity_data: "{installed_path}/domain-complexity.csv"
|
||||
|
||||
# External workflows for checkpoints
|
||||
advanced_elicitation: "{project-root}/{bmad_folder}/core/tasks/advanced-elicitation.md"
|
||||
party_mode: "{project-root}/{bmad_folder}/core/workflows/party-mode.md"
|
||||
|
||||
# Output files
|
||||
status_file: "{output_folder}/bmm-workflow-status.yaml"
|
||||
default_output_file: "{output_folder}/prd.md"
|
||||
|
|
@ -37,13 +41,11 @@ input_file_patterns:
|
|||
whole: "{output_folder}/*brief*.md"
|
||||
sharded: "{output_folder}/*brief*/index.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
|
||||
research:
|
||||
description: "Market or domain research (optional)"
|
||||
whole: "{output_folder}/*research*.md"
|
||||
sharded: "{output_folder}/*research*/index.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
|
||||
document_project:
|
||||
description: "Brownfield project documentation (optional)"
|
||||
sharded: "{output_folder}/index.md"
|
||||
|
|
|
|||
|
|
@ -1,217 +0,0 @@
|
|||
# Tech-Spec Workflow Validation Checklist
|
||||
|
||||
**Purpose**: Validate tech-spec workflow outputs are context-rich, definitive, complete, and implementation-ready.
|
||||
|
||||
**Scope**: Quick-flow software projects (1-5 stories)
|
||||
|
||||
**Expected Outputs**: tech-spec.md + epics.md + story files (1-5 stories)
|
||||
|
||||
**New Standard**: Tech-spec should be comprehensive enough to replace story-context for most quick-flow projects
|
||||
|
||||
---
|
||||
|
||||
## 1. Output Files Exist
|
||||
|
||||
- [ ] tech-spec.md created in output folder
|
||||
- [ ] epics.md created (minimal for 1 story, detailed for multiple)
|
||||
- [ ] Story file(s) created in sprint_artifacts
|
||||
- Naming convention: story-{epic-slug}-N.md (where N = 1 to story_count)
|
||||
- 1 story: story-{epic-slug}-1.md
|
||||
- Multiple stories: story-{epic-slug}-1.md through story-{epic-slug}-N.md
|
||||
- [ ] bmm-workflow-status.yaml updated (if not standalone mode)
|
||||
- [ ] No unfilled {{template_variables}} in any files
|
||||
|
||||
---
|
||||
|
||||
## 2. Context Gathering (NEW - CRITICAL)
|
||||
|
||||
### Document Discovery
|
||||
|
||||
- [ ] **Existing documents loaded**: Product brief, research docs found and incorporated (if they exist)
|
||||
- [ ] **Document-project output**: Checked for {output_folder}/index.md (brownfield codebase map)
|
||||
- [ ] **Sharded documents**: If sharded versions found, ALL sections loaded and synthesized
|
||||
- [ ] **Context summary**: loaded_documents_summary lists all sources used
|
||||
|
||||
### Project Stack Detection
|
||||
|
||||
- [ ] **Setup files identified**: package.json, requirements.txt, or equivalent found and parsed
|
||||
- [ ] **Framework detected**: Exact framework name and version captured (e.g., "Express 4.18.2")
|
||||
- [ ] **Dependencies extracted**: All production dependencies with specific versions
|
||||
- [ ] **Dev tools identified**: TypeScript, Jest, ESLint, pytest, etc. with versions
|
||||
- [ ] **Scripts documented**: Available npm/pip/etc scripts identified
|
||||
- [ ] **Stack summary**: project_stack_summary is complete and accurate
|
||||
|
||||
### Brownfield Analysis (if applicable)
|
||||
|
||||
- [ ] **Directory structure**: Main code directories identified and documented
|
||||
- [ ] **Code patterns**: Dominant patterns identified (class-based, functional, MVC, etc.)
|
||||
- [ ] **Naming conventions**: Existing conventions documented (camelCase, snake_case, etc.)
|
||||
- [ ] **Key modules**: Important existing modules/services identified
|
||||
- [ ] **Testing patterns**: Test framework and patterns documented
|
||||
- [ ] **Structure summary**: existing_structure_summary is comprehensive
|
||||
|
||||
---
|
||||
|
||||
## 3. Tech-Spec Definitiveness (CRITICAL)
|
||||
|
||||
### No Ambiguity Allowed
|
||||
|
||||
- [ ] **Zero "or" statements**: NO "use X or Y", "either A or B", "options include"
|
||||
- [ ] **Specific versions**: All frameworks, libraries, tools have EXACT versions
|
||||
- ✅ GOOD: "Python 3.11", "React 18.2.0", "winston v3.8.2 (from package.json)"
|
||||
- ❌ BAD: "Python 2 or 3", "React 18+", "a logger like pino or winston"
|
||||
- [ ] **Definitive decisions**: Every technical choice is final, not a proposal
|
||||
- [ ] **Stack-aligned**: Decisions reference detected project stack
|
||||
|
||||
### Implementation Clarity
|
||||
|
||||
- [ ] **Source tree changes**: EXACT file paths with CREATE/MODIFY/DELETE actions
|
||||
- ✅ GOOD: "src/services/UserService.ts - MODIFY - Add validateEmail() method"
|
||||
- ❌ BAD: "Update some files in the services folder"
|
||||
- [ ] **Technical approach**: Describes SPECIFIC implementation using detected stack
|
||||
- [ ] **Existing patterns**: Documents brownfield patterns to follow (if applicable)
|
||||
- [ ] **Integration points**: Specific modules, APIs, services identified
|
||||
|
||||
---
|
||||
|
||||
## 4. Context-Rich Content (NEW)
|
||||
|
||||
### Context Section
|
||||
|
||||
- [ ] **Available Documents**: Lists all loaded documents
|
||||
- [ ] **Project Stack**: Complete framework and dependency information
|
||||
- [ ] **Existing Codebase Structure**: Brownfield analysis or greenfield notation
|
||||
|
||||
### The Change Section
|
||||
|
||||
- [ ] **Problem Statement**: Clear, specific problem definition
|
||||
- [ ] **Proposed Solution**: Concrete solution approach
|
||||
- [ ] **Scope In/Out**: Clear boundaries defined
|
||||
|
||||
### Development Context Section
|
||||
|
||||
- [ ] **Relevant Existing Code**: References to specific files and line numbers (brownfield)
|
||||
- [ ] **Framework Dependencies**: Complete list with exact versions from project
|
||||
- [ ] **Internal Dependencies**: Internal modules listed
|
||||
- [ ] **Configuration Changes**: Specific config file updates identified
|
||||
|
||||
### Developer Resources Section
|
||||
|
||||
- [ ] **File Paths Reference**: Complete list of all files involved
|
||||
- [ ] **Key Code Locations**: Functions, classes, modules with file:line references
|
||||
- [ ] **Testing Locations**: Specific test directories and patterns
|
||||
- [ ] **Documentation Updates**: Docs that need updating identified
|
||||
|
||||
---
|
||||
|
||||
## 5. Story Quality
|
||||
|
||||
### Story Format
|
||||
|
||||
- [ ] All stories use "As a [role], I want [capability], so that [benefit]" format
|
||||
- [ ] Each story has numbered acceptance criteria
|
||||
- [ ] Tasks reference AC numbers: (AC: #1), (AC: #2)
|
||||
- [ ] Dev Notes section links to tech-spec.md
|
||||
|
||||
### Story Context Integration (NEW)
|
||||
|
||||
- [ ] **Tech-Spec Reference**: Story explicitly references tech-spec.md as primary context
|
||||
- [ ] **Dev Agent Record**: Includes all required sections (Context Reference, Agent Model, etc.)
|
||||
- [ ] **Test Results section**: Placeholder ready for dev execution
|
||||
- [ ] **Review Notes section**: Placeholder ready for code review
|
||||
|
||||
### Story Sequencing (If Level 1)
|
||||
|
||||
- [ ] **Vertical slices**: Each story delivers complete, testable functionality
|
||||
- [ ] **Sequential ordering**: Stories in logical progression
|
||||
- [ ] **No forward dependencies**: No story depends on later work
|
||||
- [ ] Each story leaves system in working state
|
||||
|
||||
### Coverage
|
||||
|
||||
- [ ] Story acceptance criteria derived from tech-spec
|
||||
- [ ] Story tasks map to tech-spec implementation guide
|
||||
- [ ] Files in stories match tech-spec source tree
|
||||
- [ ] Key code references align with tech-spec Developer Resources
|
||||
|
||||
---
|
||||
|
||||
## 6. Epic Quality (All Projects)
|
||||
|
||||
- [ ] **Epic title**: User-focused outcome (not implementation detail)
|
||||
- [ ] **Epic slug**: Clean kebab-case slug (2-3 words)
|
||||
- [ ] **Epic goal**: Clear purpose and value statement
|
||||
- [ ] **Epic scope**: Boundaries clearly defined
|
||||
- [ ] **Success criteria**: Measurable outcomes
|
||||
- [ ] **Story map** (if multiple stories): Visual representation of epic → stories
|
||||
- [ ] **Implementation sequence** (if multiple stories): Logical story ordering with dependencies
|
||||
- [ ] **Tech-spec reference**: Links back to tech-spec.md
|
||||
- [ ] **Detail level appropriate**: Minimal for 1 story, detailed for multiple
|
||||
|
||||
---
|
||||
|
||||
## 7. Workflow Status Integration
|
||||
|
||||
- [ ] bmm-workflow-status.yaml updated (if exists)
|
||||
- [ ] Current phase reflects tech-spec completion
|
||||
- [ ] Progress percentage updated appropriately
|
||||
- [ ] Next workflow clearly identified
|
||||
|
||||
---
|
||||
|
||||
## 8. Implementation Readiness (NEW - ENHANCED)
|
||||
|
||||
### Can Developer Start Immediately?
|
||||
|
||||
- [ ] **All context available**: Brownfield analysis + stack details + existing patterns
|
||||
- [ ] **No research needed**: Developer doesn't need to hunt for framework versions or patterns
|
||||
- [ ] **Specific file paths**: Developer knows exactly which files to create/modify
|
||||
- [ ] **Code references**: Can find similar code to reference (brownfield)
|
||||
- [ ] **Testing clear**: Knows what to test and how
|
||||
- [ ] **Deployment documented**: Knows how to deploy and rollback
|
||||
|
||||
### Tech-Spec Replaces Story-Context?
|
||||
|
||||
- [ ] **Comprehensive enough**: Contains all info typically in story-context XML
|
||||
- [ ] **Brownfield analysis**: If applicable, includes codebase reconnaissance
|
||||
- [ ] **Framework specifics**: Exact versions and usage patterns
|
||||
- [ ] **Pattern guidance**: Shows examples of existing patterns to follow
|
||||
|
||||
---
|
||||
|
||||
## 9. Critical Failures (Auto-Fail)
|
||||
|
||||
- [ ] ❌ **Non-definitive technical decisions** (any "option A or B" or vague choices)
|
||||
- [ ] ❌ **Missing versions** (framework/library without specific version)
|
||||
- [ ] ❌ **Context not gathered** (didn't check for document-project, setup files, etc.)
|
||||
- [ ] ❌ **Stack mismatch** (decisions don't align with detected project stack)
|
||||
- [ ] ❌ **Stories don't match template** (missing Dev Agent Record sections)
|
||||
- [ ] ❌ **Missing tech-spec sections** (required section missing from enhanced template)
|
||||
- [ ] ❌ **Stories have forward dependencies** (would break sequential implementation)
|
||||
- [ ] ❌ **Vague source tree** (file changes not specific with actions)
|
||||
- [ ] ❌ **No brownfield analysis** (when document-project output exists but wasn't used)
|
||||
|
||||
---
|
||||
|
||||
## Validation Notes
|
||||
|
||||
**Document any findings:**
|
||||
|
||||
- **Context Gathering Score**: [Comprehensive / Partial / Insufficient]
|
||||
- **Definitiveness Score**: [All definitive / Some ambiguity / Significant ambiguity]
|
||||
- **Brownfield Integration**: [N/A - Greenfield / Excellent / Partial / Missing]
|
||||
- **Stack Alignment**: [Perfect / Good / Partial / None]
|
||||
|
||||
## **Strengths:**
|
||||
|
||||
## **Issues to address:**
|
||||
|
||||
## **Recommended actions:**
|
||||
|
||||
**Ready for implementation?** [Yes / No - explain]
|
||||
|
||||
**Can skip story-context?** [Yes - tech-spec is comprehensive / No - additional context needed / N/A]
|
||||
|
||||
---
|
||||
|
||||
_The tech-spec should be a RICH CONTEXT DOCUMENT that gives developers everything they need without requiring separate context generation._
|
||||
|
|
@ -1,74 +0,0 @@
|
|||
# {{project_name}} - Epic Breakdown
|
||||
|
||||
**Date:** {{date}}
|
||||
**Project Level:** {{project_level}}
|
||||
|
||||
---
|
||||
|
||||
<!-- Repeat for each epic (N = 1, 2, 3...) -->
|
||||
|
||||
## Epic {{N}}: {{epic_title_N}}
|
||||
|
||||
**Slug:** {{epic_slug_N}}
|
||||
|
||||
### Goal
|
||||
|
||||
{{epic_goal_N}}
|
||||
|
||||
### Scope
|
||||
|
||||
{{epic_scope_N}}
|
||||
|
||||
### Success Criteria
|
||||
|
||||
{{epic_success_criteria_N}}
|
||||
|
||||
### Dependencies
|
||||
|
||||
{{epic_dependencies_N}}
|
||||
|
||||
---
|
||||
|
||||
## Story Map - Epic {{N}}
|
||||
|
||||
{{story_map_N}}
|
||||
|
||||
---
|
||||
|
||||
## Stories - Epic {{N}}
|
||||
|
||||
<!-- Repeat for each story (M = 1, 2, 3...) within epic N -->
|
||||
|
||||
### Story {{N}}.{{M}}: {{story_title_N_M}}
|
||||
|
||||
As a {{user_type}},
|
||||
I want {{capability}},
|
||||
So that {{value_benefit}}.
|
||||
|
||||
**Acceptance Criteria:**
|
||||
|
||||
**Given** {{precondition}}
|
||||
**When** {{action}}
|
||||
**Then** {{expected_outcome}}
|
||||
|
||||
**And** {{additional_criteria}}
|
||||
|
||||
**Prerequisites:** {{dependencies_on_previous_stories}}
|
||||
|
||||
**Technical Notes:** {{implementation_guidance}}
|
||||
|
||||
**Estimated Effort:** {{story_points}} points ({{time_estimate}})
|
||||
|
||||
<!-- End story repeat -->
|
||||
|
||||
---
|
||||
|
||||
## Implementation Timeline - Epic {{N}}
|
||||
|
||||
**Total Story Points:** {{total_points_N}}
|
||||
|
||||
**Estimated Timeline:** {{estimated_timeline_N}}
|
||||
|
||||
---
|
||||
|
||||
<!-- End epic repeat -->
|
||||
|
|
@ -1,436 +0,0 @@
|
|||
# Unified Epic and Story Generation
|
||||
|
||||
<critical>⚠️ CHECKPOINT PROTOCOL: After EVERY <template-output> tag, you MUST follow workflow.xml substep 2c: SAVE content to file immediately → SHOW checkpoint separator (━━━━━━━━━━━━━━━━━━━━━━━) → DISPLAY generated content → PRESENT options [a]Advanced Elicitation/[c]Continue/[p]Party-Mode/[y]YOLO → WAIT for user response. Never batch saves or skip checkpoints.</critical>
|
||||
|
||||
<workflow>
|
||||
|
||||
<critical>This generates epic + stories for ALL quick-flow projects</critical>
|
||||
<critical>Always generates: epics.md + story files (1-5 stories based on {{story_count}})</critical>
|
||||
<critical>Runs AFTER tech-spec.md completion</critical>
|
||||
<critical>Story format MUST match create-story template for compatibility with story-context and dev-story workflows</critical>
|
||||
|
||||
<step n="1" goal="Load tech spec and extract implementation context">
|
||||
|
||||
<action>Read the completed tech-spec.md file from {default_output_file}</action>
|
||||
<action>Load bmm-workflow-status.yaml from {workflow-status} (if exists)</action>
|
||||
<action>Get story_count from workflow variables (1-5)</action>
|
||||
<action>Ensure {sprint_artifacts} directory exists</action>
|
||||
|
||||
<action>Extract from tech-spec structure:
|
||||
|
||||
**From "The Change" section:**
|
||||
|
||||
- Problem statement and solution overview
|
||||
- Scope (in/out)
|
||||
|
||||
**From "Implementation Details" section:**
|
||||
|
||||
- Source tree changes
|
||||
- Technical approach
|
||||
- Integration points
|
||||
|
||||
**From "Implementation Guide" section:**
|
||||
|
||||
- Implementation steps
|
||||
- Testing strategy
|
||||
- Acceptance criteria
|
||||
- Time estimates
|
||||
|
||||
**From "Development Context" section:**
|
||||
|
||||
- Framework dependencies with versions
|
||||
- Existing code references
|
||||
- Internal dependencies
|
||||
|
||||
**From "Developer Resources" section:**
|
||||
|
||||
- File paths
|
||||
- Key code locations
|
||||
- Testing locations
|
||||
|
||||
Use this rich context to generate comprehensive, implementation-ready epic and stories.
|
||||
</action>
|
||||
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Generate epic slug and structure">
|
||||
|
||||
<action>Create epic based on the overall feature/change from tech-spec</action>
|
||||
|
||||
<action>Derive epic slug from the feature name:
|
||||
|
||||
- Use 2-3 words max
|
||||
- Kebab-case format
|
||||
- User-focused, not implementation-focused
|
||||
|
||||
Examples:
|
||||
|
||||
- "OAuth Integration" → "oauth-integration"
|
||||
- "Fix Login Bug" → "login-fix"
|
||||
- "User Profile Page" → "user-profile"
|
||||
</action>
|
||||
|
||||
<action>Store as {{epic_slug}} - this will be used for all story filenames</action>
|
||||
|
||||
<action>Adapt epic detail to story count:
|
||||
|
||||
**For single story (story_count == 1):**
|
||||
|
||||
- Epic is minimal - just enough structure
|
||||
- Goal: Brief statement of what's being accomplished
|
||||
- Scope: High-level boundary
|
||||
- Success criteria: Core outcomes
|
||||
|
||||
**For multiple stories (story_count > 1):**
|
||||
|
||||
- Epic is detailed - full breakdown
|
||||
- Goal: Comprehensive purpose and value statement
|
||||
- Scope: Clear boundaries with in/out examples
|
||||
- Success criteria: Measurable, testable outcomes
|
||||
- Story map: Visual representation of epic → stories
|
||||
- Implementation sequence: Logical ordering with dependencies
|
||||
</action>
|
||||
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Generate epic document">
|
||||
|
||||
<action>Initialize {epics_file} using {epics_template}</action>
|
||||
|
||||
<action>Populate epic metadata from tech-spec context:
|
||||
|
||||
**Epic Title:** User-facing outcome (not implementation detail)
|
||||
|
||||
- Good: "OAuth Integration", "Login Bug Fix", "Icon Reliability"
|
||||
- Bad: "Update recommendedLibraries.ts", "Refactor auth service"
|
||||
|
||||
**Epic Goal:** Why this matters to users/business
|
||||
|
||||
**Epic Scope:** Clear boundaries from tech-spec scope section
|
||||
|
||||
**Epic Success Criteria:** Measurable outcomes from tech-spec acceptance criteria
|
||||
|
||||
**Dependencies:** From tech-spec integration points and dependencies
|
||||
</action>
|
||||
|
||||
<template-output file="{epics_file}">project_name</template-output>
|
||||
<template-output file="{epics_file}">date</template-output>
|
||||
<template-output file="{epics_file}">epic_title</template-output>
|
||||
<template-output file="{epics_file}">epic_slug</template-output>
|
||||
<template-output file="{epics_file}">epic_goal</template-output>
|
||||
<template-output file="{epics_file}">epic_scope</template-output>
|
||||
<template-output file="{epics_file}">epic_success_criteria</template-output>
|
||||
<template-output file="{epics_file}">epic_dependencies</template-output>
|
||||
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Intelligently break down into stories">
|
||||
|
||||
<action>Analyze tech-spec implementation steps and create story breakdown
|
||||
|
||||
**For story_count == 1:**
|
||||
|
||||
- Create single comprehensive story covering all implementation
|
||||
- Title: Focused on the deliverable outcome
|
||||
- Tasks: Map directly to tech-spec implementation steps
|
||||
- Estimated points: Typically 1-5 points
|
||||
|
||||
**For story_count > 1:**
|
||||
|
||||
- Break implementation into logical story boundaries
|
||||
- Each story must be:
|
||||
- Independently valuable (delivers working functionality)
|
||||
- Testable (has clear acceptance criteria)
|
||||
- Sequentially ordered (no forward dependencies)
|
||||
- Right-sized (prefer 2-4 stories over many tiny ones)
|
||||
|
||||
**Story Sequencing Rules (CRITICAL):**
|
||||
|
||||
1. Foundation → Build → Test → Polish
|
||||
2. Database → API → UI
|
||||
3. Backend → Frontend
|
||||
4. Core → Enhancement
|
||||
5. NO story can depend on a later story!
|
||||
|
||||
Validate sequence: Each story N should only depend on stories 1...N-1
|
||||
</action>
|
||||
|
||||
<action>For each story position (1 to {{story_count}}):
|
||||
|
||||
1. **Determine story scope from tech-spec tasks**
|
||||
- Group related implementation steps
|
||||
- Ensure story leaves system in working state
|
||||
|
||||
2. **Create story title**
|
||||
- User-focused deliverable
|
||||
- Active, clear language
|
||||
- Good: "OAuth Backend Integration", "OAuth UI Components"
|
||||
- Bad: "Write some OAuth code", "Update files"
|
||||
|
||||
3. **Extract acceptance criteria**
|
||||
- From tech-spec testing strategy and acceptance criteria
|
||||
- Must be numbered (AC #1, AC #2, etc.)
|
||||
- Must be specific and testable
|
||||
- Use Given/When/Then format when applicable
|
||||
|
||||
4. **Map tasks to implementation steps**
|
||||
- Break down tech-spec implementation steps for this story
|
||||
- Create checkbox list
|
||||
- Reference AC numbers: (AC: #1), (AC: #2)
|
||||
|
||||
5. **Estimate story points**
|
||||
- 1 point = < 1 day (2-4 hours)
|
||||
- 2 points = 1-2 days
|
||||
- 3 points = 2-3 days
|
||||
- 5 points = 3-5 days
|
||||
- Total across all stories should align with tech-spec estimates
|
||||
</action>
|
||||
|
||||
</step>
|
||||
|
||||
<step n="5" goal="Generate story files">
|
||||
|
||||
<for-each story="1 to story_count">
|
||||
<action>Set story_filename = "story-{{epic_slug}}-{{n}}.md"</action>
|
||||
<action>Set story_path = "{sprint_artifacts}/{{story_filename}}"</action>
|
||||
|
||||
<action>Create story file using {user_story_template}</action>
|
||||
|
||||
<action>Populate story with:
|
||||
|
||||
**Story Header:**
|
||||
|
||||
- N.M format (where N is always 1 for quick-flow, M is story number)
|
||||
- Title: User-focused deliverable
|
||||
- Status: Draft
|
||||
|
||||
**User Story:**
|
||||
|
||||
- As a [role] (developer, user, admin, system, etc.)
|
||||
- I want [capability/change]
|
||||
- So that [benefit/value]
|
||||
|
||||
**Acceptance Criteria:**
|
||||
|
||||
- Numbered list (AC #1, AC #2, ...)
|
||||
- Specific, measurable, testable
|
||||
- Derived from tech-spec testing strategy and acceptance criteria
|
||||
- Cover all success conditions for this story
|
||||
|
||||
**Tasks/Subtasks:**
|
||||
|
||||
- Checkbox list mapped to tech-spec implementation steps
|
||||
- Each task references AC numbers: (AC: #1)
|
||||
- Include explicit testing tasks
|
||||
|
||||
**Technical Summary:**
|
||||
|
||||
- High-level approach for this story
|
||||
- Key technical decisions
|
||||
- Files/modules involved
|
||||
|
||||
**Project Structure Notes:**
|
||||
|
||||
- files_to_modify: From tech-spec "Developer Resources → File Paths"
|
||||
- test_locations: From tech-spec "Developer Resources → Testing Locations"
|
||||
- story_points: Estimated effort
|
||||
- dependencies: Prerequisites (other stories, systems, data)
|
||||
|
||||
**Key Code References:**
|
||||
|
||||
- From tech-spec "Development Context → Relevant Existing Code"
|
||||
- From tech-spec "Developer Resources → Key Code Locations"
|
||||
- Specific file:line references when available
|
||||
|
||||
**Context References:**
|
||||
|
||||
- Link to tech-spec.md (primary context document)
|
||||
- Note: Tech-spec contains brownfield analysis, framework versions, patterns, etc.
|
||||
|
||||
**Dev Agent Record:**
|
||||
|
||||
- Empty sections (populated during dev-story execution)
|
||||
- Agent Model Used
|
||||
- Debug Log References
|
||||
- Completion Notes
|
||||
- Files Modified
|
||||
- Test Results
|
||||
|
||||
**Review Notes:**
|
||||
|
||||
- Empty section (populated during code review)
|
||||
</action>
|
||||
|
||||
<template-output file="{{story_path}}">story_number</template-output>
|
||||
<template-output file="{{story_path}}">story_title</template-output>
|
||||
<template-output file="{{story_path}}">user_role</template-output>
|
||||
<template-output file="{{story_path}}">capability</template-output>
|
||||
<template-output file="{{story_path}}">benefit</template-output>
|
||||
<template-output file="{{story_path}}">acceptance_criteria</template-output>
|
||||
<template-output file="{{story_path}}">tasks_subtasks</template-output>
|
||||
<template-output file="{{story_path}}">technical_summary</template-output>
|
||||
<template-output file="{{story_path}}">files_to_modify</template-output>
|
||||
<template-output file="{{story_path}}">test_locations</template-output>
|
||||
<template-output file="{{story_path}}">story_points</template-output>
|
||||
<template-output file="{{story_path}}">time_estimate</template-output>
|
||||
<template-output file="{{story_path}}">dependencies</template-output>
|
||||
<template-output file="{{story_path}}">existing_code_references</template-output>
|
||||
</for-each>
|
||||
|
||||
</step>
|
||||
|
||||
<step n="6" goal="Generate story map and finalize epic" if="story_count > 1">
|
||||
|
||||
<action>Create visual story map showing epic → stories hierarchy
|
||||
|
||||
Include:
|
||||
|
||||
- Epic title at top
|
||||
- Stories listed with point estimates
|
||||
- Dependencies noted
|
||||
- Sequence validation confirmation
|
||||
|
||||
Example:
|
||||
|
||||
```
|
||||
Epic: OAuth Integration (8 points)
|
||||
├── Story 1.1: OAuth Backend (3 points)
|
||||
│ Dependencies: None
|
||||
│
|
||||
├── Story 1.2: OAuth UI Components (3 points)
|
||||
│ Dependencies: Story 1.1
|
||||
│
|
||||
└── Story 1.3: OAuth Testing & Polish (2 points)
|
||||
Dependencies: Stories 1.1, 1.2
|
||||
```
|
||||
|
||||
</action>
|
||||
|
||||
<action>Calculate totals:
|
||||
|
||||
- Total story points across all stories
|
||||
- Estimated timeline (typically 1-2 points per day)
|
||||
</action>
|
||||
|
||||
<action>Append to {epics_file}:
|
||||
|
||||
- Story summaries
|
||||
- Story map visual
|
||||
- Implementation sequence
|
||||
- Total points and timeline
|
||||
</action>
|
||||
|
||||
<template-output file="{epics_file}">story_map</template-output>
|
||||
<template-output file="{epics_file}">story_summaries</template-output>
|
||||
<template-output file="{epics_file}">total_points</template-output>
|
||||
<template-output file="{epics_file}">estimated_timeline</template-output>
|
||||
<template-output file="{epics_file}">implementation_sequence</template-output>
|
||||
|
||||
</step>
|
||||
|
||||
<step n="7" goal="Validate story quality">
|
||||
|
||||
<critical>Always run validation - NOT optional!</critical>
|
||||
|
||||
<action>Validate all stories against quality standards:
|
||||
|
||||
**Story Sequence Validation (CRITICAL):**
|
||||
|
||||
- For each story N, verify it doesn't depend on story N+1 or later
|
||||
- Check: Can stories be implemented in order 1→2→3→...?
|
||||
- If sequence invalid: Identify problem, propose reordering, ask user to confirm
|
||||
|
||||
**Acceptance Criteria Quality:**
|
||||
|
||||
- All AC are numbered (AC #1, AC #2, ...)
|
||||
- Each AC is specific and testable (no "works well", "is good", "performs fast")
|
||||
- AC use Given/When/Then or equivalent structure
|
||||
- All success conditions are covered
|
||||
|
||||
**Story Completeness:**
|
||||
|
||||
- All stories map to tech-spec implementation steps
|
||||
- Story points align with tech-spec time estimates
|
||||
- Dependencies are clearly documented
|
||||
- Each story has testable AC
|
||||
- Files and locations reference tech-spec developer resources
|
||||
|
||||
**Template Compliance:**
|
||||
|
||||
- All required sections present
|
||||
- Dev Agent Record sections exist (even if empty)
|
||||
- Context references link to tech-spec.md
|
||||
- Story numbering follows N.M format
|
||||
</action>
|
||||
|
||||
<check if="validation issues found">
|
||||
<output>⚠️ **Story Validation Issues:**
|
||||
|
||||
{{issues_list}}
|
||||
|
||||
**Recommended Fixes:**
|
||||
{{fixes}}
|
||||
|
||||
Shall I fix these automatically? (yes/no)</output>
|
||||
|
||||
<ask>Apply fixes? (yes/no)</ask>
|
||||
|
||||
<check if="yes">
|
||||
<action>Apply fixes (reorder stories, rewrite vague AC, add missing details)</action>
|
||||
<action>Re-validate</action>
|
||||
<output>✅ Validation passed after fixes!</output>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<check if="validation passes">
|
||||
<output>✅ **Story Validation Passed!**
|
||||
|
||||
**Quality Scores:**
|
||||
|
||||
- Sequence: ✅ Valid (no forward dependencies)
|
||||
- AC Quality: ✅ All specific and testable
|
||||
- Completeness: ✅ All tech spec tasks covered
|
||||
- Template Compliance: ✅ All sections present
|
||||
|
||||
Stories are implementation-ready!</output>
|
||||
</check>
|
||||
|
||||
</step>
|
||||
|
||||
<step n="8" goal="Update workflow status and finalize">
|
||||
|
||||
<action>Update bmm-workflow-status.yaml (if exists):
|
||||
|
||||
- Mark tech-spec as complete
|
||||
- Initialize story sequence tracking
|
||||
- Set first story as TODO
|
||||
- Track epic slug and story count
|
||||
</action>
|
||||
|
||||
<output>**✅ Epic and Stories Generated!**
|
||||
|
||||
**Epic:** {{epic_title}} ({{epic_slug}})
|
||||
**Total Stories:** {{story_count}}
|
||||
{{#if story_count > 1}}**Total Points:** {{total_points}}
|
||||
**Estimated Timeline:** {{estimated_timeline}}{{/if}}
|
||||
|
||||
**Files Created:**
|
||||
|
||||
- `{epics_file}` - Epic structure{{#if story_count == 1}} (minimal){{/if}}
|
||||
- `{sprint_artifacts}/story-{{epic_slug}}-1.md`{{#if story_count > 1}}
|
||||
- `{sprint_artifacts}/story-{{epic_slug}}-2.md`{{/if}}{{#if story_count > 2}}
|
||||
- Through story-{{epic_slug}}-{{story_count}}.md{{/if}}
|
||||
|
||||
**What's Next:**
|
||||
All stories reference tech-spec.md as primary context. You can proceed directly to development with the DEV agent!
|
||||
|
||||
Story files are ready for:
|
||||
|
||||
- Direct implementation (dev-story workflow)
|
||||
- Optional context generation (story-context workflow for complex cases)
|
||||
- Sprint planning organization (sprint-planning workflow for multi-story coordination)
|
||||
</output>
|
||||
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
|
|
@ -1,980 +0,0 @@
|
|||
# Tech-Spec Workflow - Context-Aware Technical Planning (quick-flow)
|
||||
|
||||
<workflow>
|
||||
|
||||
<critical>The workflow execution engine is governed by: {project-root}/{bmad_folder}/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
<critical>Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}</critical>
|
||||
<critical>Generate all documents in {document_output_language}</critical>
|
||||
<critical>This is quick-flow efforts - tech-spec with context-rich story generation</critical>
|
||||
<critical>Quick Flow: tech-spec + epic with 1-5 stories (always generates epic structure)</critical>
|
||||
<critical>LIVING DOCUMENT: Write to tech-spec.md continuously as you discover - never wait until the end</critical>
|
||||
<critical>CONTEXT IS KING: Gather ALL available context before generating specs</critical>
|
||||
<critical>DOCUMENT OUTPUT: Technical, precise, definitive. Specific versions only. User skill level ({user_skill_level}) affects conversation style ONLY, not document content.</critical>
|
||||
<critical>Input documents specified in workflow.yaml input_file_patterns - workflow engine handles fuzzy matching, whole vs sharded document discovery automatically</critical>
|
||||
<critical>⚠️ ABSOLUTELY NO TIME ESTIMATES - NEVER mention hours, days, weeks, months, or ANY time-based predictions. AI has fundamentally changed development speed - what once took teams weeks/months can now be done by one person in hours. DO NOT give ANY time estimates whatsoever.</critical>
|
||||
<critical>⚠️ CHECKPOINT PROTOCOL: After EVERY <template-output> tag, you MUST follow workflow.xml substep 2c: SAVE content to file immediately → SHOW checkpoint separator (━━━━━━━━━━━━━━━━━━━━━━━) → DISPLAY generated content → PRESENT options [a]Advanced Elicitation/[c]Continue/[p]Party-Mode/[y]YOLO → WAIT for user response. Never batch saves or skip checkpoints.</critical>
|
||||
|
||||
<step n="0" goal="Validate workflow readiness and detect project level" tag="workflow-status">
|
||||
<action>Check if {output_folder}/bmm-workflow-status.yaml exists</action>
|
||||
|
||||
<check if="status file not found">
|
||||
<output>No workflow status file found. Tech-spec workflow can run standalone or as part of BMM workflow path.</output>
|
||||
<output>**Recommended:** Run `workflow-init` first for project context tracking and workflow sequencing.</output>
|
||||
<output>**Quick Start:** Continue in standalone mode - perfect for rapid prototyping and quick changes!</output>
|
||||
<ask>Continue in standalone mode or exit to run workflow-init? (continue/exit)</ask>
|
||||
<check if="continue">
|
||||
<action>Set standalone_mode = true</action>
|
||||
|
||||
<output>Great! Let's quickly configure your project...</output>
|
||||
|
||||
<ask>How many user stories do you think this work requires?
|
||||
|
||||
**Single Story** - Simple change (bug fix, small isolated feature, single file change)
|
||||
→ Generates: tech-spec + epic (minimal) + 1 story
|
||||
→ Example: "Fix login validation bug" or "Add email field to user form"
|
||||
|
||||
**Multiple Stories (2-5)** - Coherent feature (multiple related changes, small feature set)
|
||||
→ Generates: tech-spec + epic (detailed) + 2-5 stories
|
||||
→ Example: "Add OAuth integration" or "Build user profile page"
|
||||
|
||||
Enter **1** for single story, or **2-5** for number of stories you estimate</ask>
|
||||
|
||||
<action>Capture user response as story_count (1-5)</action>
|
||||
<action>Validate: If not 1-5, ask for clarification. If > 5, suggest using full BMad Method instead</action>
|
||||
|
||||
<ask if="not already known greenfield vs brownfield">Is this a **greenfield** (new/empty codebase) or **brownfield** (existing codebase) project?
|
||||
|
||||
**Greenfield** - Starting fresh, no existing code aside from starter templates
|
||||
**Brownfield** - Adding to or modifying existing functional code or project
|
||||
|
||||
Enter **greenfield** or **brownfield**:</ask>
|
||||
|
||||
<action>Capture user response as field_type (greenfield or brownfield)</action>
|
||||
<action>Validate: If not greenfield or brownfield, ask again</action>
|
||||
|
||||
<output>Perfect! Running as:
|
||||
|
||||
- **Story Count:** {{story_count}} {{#if story_count == 1}}story (minimal epic){{else}}stories (detailed epic){{/if}}
|
||||
- **Field Type:** {{field_type}}
|
||||
- **Mode:** Standalone (no status file tracking)
|
||||
|
||||
Let's build your tech-spec!</output>
|
||||
</check>
|
||||
<check if="exit">
|
||||
<action>Exit workflow</action>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<check if="status file found">
|
||||
<action>Load the FULL file: {workflow-status}</action>
|
||||
<action>Parse workflow_status section</action>
|
||||
<action>Check status of "tech-spec" workflow</action>
|
||||
<action>Get selected_track from YAML metadata indicating this is quick-flow-greenfield or quick-flow-brownfield</action>
|
||||
<action>Get field_type from YAML metadata (greenfield or brownfield)</action>
|
||||
<action>Find first non-completed workflow (next expected workflow)</action>
|
||||
|
||||
<check if="selected_track is NOT quick-flow-greenfield AND NOT quick-flow-brownfield">
|
||||
<output>**Incorrect Workflow for Level {{selected_track}}**
|
||||
Tech-spec is for Simple projects. **Correct workflow:** `create-prd` (PM agent). You should Exit at this point, unless you want to force run this workflow.
|
||||
</output>
|
||||
</check>
|
||||
|
||||
<check if="tech-spec status is file path (already completed)">
|
||||
<output>⚠️ Tech-spec already completed: {{tech-spec status}}</output>
|
||||
<ask>Re-running will overwrite the existing tech-spec. Continue? (y/n)</ask>
|
||||
<check if="n">
|
||||
<output>Exiting. Use workflow-status to see your next step.</output>
|
||||
<action>Exit workflow</action>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<check if="tech-spec is not the next expected workflow">
|
||||
<output>⚠️ Next expected workflow: {{next_workflow}}. Tech-spec is out of sequence.</output>
|
||||
<ask>Continue with tech-spec anyway? (y/n)</ask>
|
||||
<check if="n">
|
||||
<output>Exiting. Run {{next_workflow}} instead.</output>
|
||||
<action>Exit workflow</action>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<action>Set standalone_mode = false</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="0.5" goal="Discover and load input documents">
|
||||
<invoke-protocol name="discover_inputs" />
|
||||
<note>After discovery, these content variables are available: {product_brief_content}, {research_content}, {document_project_content}</note>
|
||||
</step>
|
||||
|
||||
<step n="1" goal="Comprehensive context discovery - gather everything available">
|
||||
|
||||
<action>Welcome {user_name} warmly and explain what we're about to do:
|
||||
|
||||
"I'm going to gather all available context about your project before we dive into the technical spec. The following content has been auto-loaded:
|
||||
|
||||
- Product briefs and research: {product_brief_content}, {research_content}
|
||||
- Brownfield codebase documentation: {document_project_content} (loaded via INDEX_GUIDED strategy)
|
||||
- Your project's tech stack and dependencies
|
||||
- Existing code patterns and structure
|
||||
|
||||
This ensures the tech-spec is grounded in reality and gives developers everything they need."
|
||||
</action>
|
||||
|
||||
<action>**PHASE 1: Load Existing Documents**
|
||||
|
||||
Search for and load (using dual-strategy: whole first, then sharded):
|
||||
|
||||
1. **Product Brief:**
|
||||
- Search pattern: {output*folder}/\_brief*.md
|
||||
- Sharded: {output*folder}/\_brief*/index.md
|
||||
- If found: Load completely and extract key context
|
||||
|
||||
2. **Research Documents:**
|
||||
- Search pattern: {output*folder}/\_research*.md
|
||||
- Sharded: {output*folder}/\_research*/index.md
|
||||
- If found: Load completely and extract insights
|
||||
|
||||
3. **Document-Project Output (CRITICAL for brownfield):**
|
||||
- Always check: {output_folder}/index.md
|
||||
- If found: This is the brownfield codebase map - load ALL shards!
|
||||
- Extract: File structure, key modules, existing patterns, naming conventions
|
||||
|
||||
Create a summary of what was found and ask user if there are other documents or information to consider before proceeding:
|
||||
|
||||
- List of loaded documents
|
||||
- Key insights from each
|
||||
- Brownfield vs greenfield determination
|
||||
</action>
|
||||
|
||||
<action>**PHASE 2: Intelligently Detect Project Stack**
|
||||
|
||||
Use your comprehensive knowledge as a coding-capable LLM to analyze the project:
|
||||
|
||||
**Discover Setup Files:**
|
||||
|
||||
- Search {project-root} for dependency manifests (package.json, requirements.txt, Gemfile, go.mod, Cargo.toml, composer.json, pom.xml, build.gradle, pyproject.toml, etc.)
|
||||
- Adapt to ANY project type - you know the ecosystem conventions
|
||||
|
||||
**Extract Critical Information:**
|
||||
|
||||
1. Framework name and EXACT version (e.g., "React 18.2.0", "Django 4.2.1")
|
||||
2. All production dependencies with specific versions
|
||||
3. Dev tools and testing frameworks (Jest, pytest, ESLint, etc.)
|
||||
4. Available build/test scripts
|
||||
5. Project type (web app, API, CLI, library, etc.)
|
||||
|
||||
**Assess Currency:**
|
||||
|
||||
- Identify if major dependencies are outdated (>2 years old)
|
||||
- Use WebSearch to find current recommended versions if needed
|
||||
- Note migration complexity in your summary
|
||||
|
||||
**For Greenfield Projects:**
|
||||
<check if="field_type == greenfield">
|
||||
<action>Use WebSearch to discover current best practices and official starter templates</action>
|
||||
<action>Recommend appropriate starters based on detected framework (or user's intended stack)</action>
|
||||
<action>Present benefits conversationally: setup time saved, modern patterns, testing included</action>
|
||||
<ask>Would you like to use a starter template? (yes/no/show-me-options)</ask>
|
||||
<action>Capture preference and include in implementation stack if accepted</action>
|
||||
</check>
|
||||
|
||||
**Trust Your Intelligence:**
|
||||
You understand project ecosystems deeply. Adapt your analysis to any stack - don't be constrained by examples. Extract what matters for developers.
|
||||
|
||||
Store comprehensive findings as {{project_stack_summary}}
|
||||
</action>
|
||||
|
||||
<action>**PHASE 3: Brownfield Codebase Reconnaissance** (if applicable)
|
||||
|
||||
<check if="field_type == brownfield OR document-project output found">
|
||||
|
||||
Analyze the existing project structure:
|
||||
|
||||
1. **Directory Structure:**
|
||||
- Identify main code directories (src/, lib/, app/, components/, services/)
|
||||
- Note organization patterns (feature-based, layer-based, domain-driven)
|
||||
- Identify test directories and patterns
|
||||
|
||||
2. **Code Patterns:**
|
||||
- Look for dominant patterns (class-based, functional, MVC, microservices)
|
||||
- Identify naming conventions (camelCase, snake_case, PascalCase)
|
||||
- Note file organization patterns
|
||||
|
||||
3. **Key Modules/Services:**
|
||||
- Identify major modules or services already in place
|
||||
- Note entry points (main.js, app.py, index.ts)
|
||||
- Document important utilities or shared code
|
||||
|
||||
4. **Testing Patterns & Standards (CRITICAL):**
|
||||
- Identify test framework in use (from package.json/requirements.txt)
|
||||
- Note test file naming patterns (.test.js, test.py, .spec.ts, Test.java)
|
||||
- Document test organization (tests/, **tests**, spec/, test/)
|
||||
- Look for test configuration files (jest.config.js, pytest.ini, .rspec)
|
||||
- Check for coverage requirements (in CI config, test scripts)
|
||||
- Identify mocking/stubbing libraries (jest.mock, unittest.mock, sinon)
|
||||
- Note assertion styles (expect, assert, should)
|
||||
|
||||
5. **Code Style & Conventions (MUST CONFORM):**
|
||||
- Check for linter config (.eslintrc, .pylintrc, rubocop.yml)
|
||||
- Check for formatter config (.prettierrc, .black, .editorconfig)
|
||||
- Identify code style:
|
||||
- Semicolons: yes/no (JavaScript/TypeScript)
|
||||
- Quotes: single/double
|
||||
- Indentation: spaces/tabs, size
|
||||
- Line length limits
|
||||
- Import/export patterns (named vs default, organization)
|
||||
- Error handling patterns (try/catch, Result types, error classes)
|
||||
- Logging patterns (console, winston, logging module, specific formats)
|
||||
- Documentation style (JSDoc, docstrings, YARD, JavaDoc)
|
||||
|
||||
Store this as {{existing_structure_summary}}
|
||||
|
||||
**CRITICAL: Confirm Conventions with User**
|
||||
<ask>I've detected these conventions in your codebase:
|
||||
|
||||
**Code Style:**
|
||||
{{detected_code_style}}
|
||||
|
||||
**Test Patterns:**
|
||||
{{detected_test_patterns}}
|
||||
|
||||
**File Organization:**
|
||||
{{detected_file_organization}}
|
||||
|
||||
Should I follow these existing conventions for the new code?
|
||||
|
||||
Enter **yes** to conform to existing patterns, or **no** if you want to establish new standards:</ask>
|
||||
|
||||
<action>Capture user response as conform_to_conventions (yes/no)</action>
|
||||
|
||||
<check if="conform_to_conventions == no">
|
||||
<ask>What conventions would you like to use instead? (Or should I suggest modern best practices?)</ask>
|
||||
<action>Capture new conventions or use WebSearch for current best practices</action>
|
||||
</check>
|
||||
|
||||
<action>Store confirmed conventions as {{existing_conventions}}</action>
|
||||
|
||||
</check>
|
||||
|
||||
<check if="field_type == greenfield">
|
||||
<action>Note: Greenfield project - no existing code to analyze</action>
|
||||
<action>Set {{existing_structure_summary}} = "Greenfield project - new codebase"</action>
|
||||
</check>
|
||||
|
||||
</action>
|
||||
|
||||
<action>**PHASE 4: Synthesize Context Summary**
|
||||
|
||||
Create {{loaded_documents_summary}} that includes:
|
||||
|
||||
- Documents found and loaded
|
||||
- Brownfield vs greenfield status
|
||||
- Tech stack detected (or "To be determined" if greenfield)
|
||||
- Existing patterns identified (or "None - greenfield" if applicable)
|
||||
|
||||
Present this summary to {user_name} conversationally:
|
||||
|
||||
"Here's what I found about your project:
|
||||
|
||||
**Documents Available:**
|
||||
[List what was found]
|
||||
|
||||
**Project Type:**
|
||||
[Brownfield with X framework Y version OR Greenfield - new project]
|
||||
|
||||
**Existing Stack:**
|
||||
[Framework and dependencies OR "To be determined"]
|
||||
|
||||
**Code Structure:**
|
||||
[Existing patterns OR "New codebase"]
|
||||
|
||||
This gives me a solid foundation for creating a context-rich tech spec!"
|
||||
</action>
|
||||
|
||||
<template-output>loaded_documents_summary</template-output>
|
||||
<template-output>project_stack_summary</template-output>
|
||||
<template-output>existing_structure_summary</template-output>
|
||||
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Conversational discovery of the change/feature">
|
||||
|
||||
<action>Engage {user_name} in natural, adaptive conversation to deeply understand what needs to be built.
|
||||
|
||||
**Discovery Approach:**
|
||||
Adapt your questioning style to the complexity:
|
||||
|
||||
- For single-story changes: Focus on the specific problem, location, and approach
|
||||
- For multi-story features: Explore user value, integration strategy, and scope boundaries
|
||||
|
||||
**Core Discovery Goals (accomplish through natural dialogue):**
|
||||
|
||||
1. **The Problem/Need**
|
||||
- What user or technical problem are we solving?
|
||||
- Why does this matter now?
|
||||
- What's the impact if we don't do this?
|
||||
|
||||
2. **The Solution Approach**
|
||||
- What's the proposed solution?
|
||||
- How should this work from a user/system perspective?
|
||||
- What alternatives were considered?
|
||||
|
||||
3. **Integration & Location**
|
||||
- <check if="brownfield">Where does this fit in the existing codebase?</check>
|
||||
- What existing code/patterns should we reference or follow?
|
||||
- What are the integration points?
|
||||
|
||||
4. **Scope Clarity**
|
||||
- What's IN scope for this work?
|
||||
- What's explicitly OUT of scope (future work, not needed)?
|
||||
- If multiple stories: What's MVP vs enhancement?
|
||||
|
||||
5. **Constraints & Dependencies**
|
||||
- Technical limitations or requirements?
|
||||
- Dependencies on other systems, APIs, or services?
|
||||
- Performance, security, or compliance considerations?
|
||||
|
||||
6. **Success Criteria**
|
||||
- How will we know this is done correctly?
|
||||
- What does "working" look like?
|
||||
- What edge cases matter?
|
||||
|
||||
**Conversation Style:**
|
||||
|
||||
- Be warm and collaborative, not interrogative
|
||||
- Ask follow-up questions based on their responses
|
||||
- Help them think through implications
|
||||
- Reference context from Phase 1 (existing code, stack, patterns)
|
||||
- Adapt depth to {{story_count}} complexity
|
||||
|
||||
Synthesize discoveries into clear, comprehensive specifications.
|
||||
</action>
|
||||
|
||||
<template-output>problem_statement</template-output>
|
||||
<template-output>solution_overview</template-output>
|
||||
<template-output>change_type</template-output>
|
||||
<template-output>scope_in</template-output>
|
||||
<template-output>scope_out</template-output>
|
||||
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Generate context-aware, definitive technical specification">
|
||||
|
||||
<critical>ALL TECHNICAL DECISIONS MUST BE DEFINITIVE - NO AMBIGUITY ALLOWED</critical>
|
||||
<critical>Use existing stack info to make SPECIFIC decisions</critical>
|
||||
<critical>Reference brownfield code to guide implementation</critical>
|
||||
|
||||
<action>Initialize tech-spec.md with the rich template</action>
|
||||
|
||||
<action>**Generate Context Section (already captured):**
|
||||
|
||||
These template variables are already populated from Step 1:
|
||||
|
||||
- {{loaded_documents_summary}}
|
||||
- {{project_stack_summary}}
|
||||
- {{existing_structure_summary}}
|
||||
|
||||
Just save them to the file.
|
||||
</action>
|
||||
|
||||
<template-output file="tech-spec.md">loaded_documents_summary</template-output>
|
||||
<template-output file="tech-spec.md">project_stack_summary</template-output>
|
||||
<template-output file="tech-spec.md">existing_structure_summary</template-output>
|
||||
|
||||
<action>**Generate The Change Section:**
|
||||
|
||||
Already captured from Step 2:
|
||||
|
||||
- {{problem_statement}}
|
||||
- {{solution_overview}}
|
||||
- {{scope_in}}
|
||||
- {{scope_out}}
|
||||
|
||||
Save to file.
|
||||
</action>
|
||||
|
||||
<template-output file="tech-spec.md">problem_statement</template-output>
|
||||
<template-output file="tech-spec.md">solution_overview</template-output>
|
||||
<template-output file="tech-spec.md">scope_in</template-output>
|
||||
<template-output file="tech-spec.md">scope_out</template-output>
|
||||
|
||||
<action>**Generate Implementation Details:**
|
||||
|
||||
Now make DEFINITIVE technical decisions using all the context gathered.
|
||||
|
||||
**Source Tree Changes - BE SPECIFIC:**
|
||||
|
||||
Bad (NEVER do this):
|
||||
|
||||
- "Update some files in the services folder"
|
||||
- "Add tests somewhere"
|
||||
|
||||
Good (ALWAYS do this):
|
||||
|
||||
- "src/services/UserService.ts - MODIFY - Add validateEmail() method at line 45"
|
||||
- "src/routes/api/users.ts - MODIFY - Add POST /users/validate endpoint"
|
||||
- "tests/services/UserService.test.ts - CREATE - Test suite for email validation"
|
||||
|
||||
Include:
|
||||
|
||||
- Exact file paths
|
||||
- Action: CREATE, MODIFY, DELETE
|
||||
- Specific what changes (methods, classes, endpoints, components)
|
||||
|
||||
**Use brownfield context:**
|
||||
|
||||
- If modifying existing files, reference current structure
|
||||
- Follow existing naming patterns
|
||||
- Place new code logically based on current organization
|
||||
</action>
|
||||
|
||||
<template-output file="tech-spec.md">source_tree_changes</template-output>
|
||||
|
||||
<action>**Technical Approach - BE DEFINITIVE:**
|
||||
|
||||
Bad (ambiguous):
|
||||
|
||||
- "Use a logging library like winston or pino"
|
||||
- "Use Python 2 or 3"
|
||||
- "Set up some kind of validation"
|
||||
|
||||
Good (definitive):
|
||||
|
||||
- "Use winston v3.8.2 (already in package.json) for logging"
|
||||
- "Implement using Python 3.11 as specified in pyproject.toml"
|
||||
- "Use Joi v17.9.0 for request validation following pattern in UserController.ts"
|
||||
|
||||
**Use detected stack:**
|
||||
|
||||
- Reference exact versions from package.json/requirements.txt
|
||||
- Specify frameworks already in use
|
||||
- Make decisions based on what's already there
|
||||
|
||||
**For greenfield:**
|
||||
|
||||
- Make definitive choices and justify them
|
||||
- Specify exact versions
|
||||
- No "or" statements allowed
|
||||
</action>
|
||||
|
||||
<template-output file="tech-spec.md">technical_approach</template-output>
|
||||
|
||||
<action>**Existing Patterns to Follow:**
|
||||
|
||||
<check if="brownfield">
|
||||
Document patterns from the existing codebase:
|
||||
- Class structure patterns
|
||||
- Function naming conventions
|
||||
- Error handling approach
|
||||
- Testing patterns
|
||||
- Documentation style
|
||||
|
||||
Example:
|
||||
"Follow the service pattern established in UserService.ts:
|
||||
|
||||
- Export class with constructor injection
|
||||
- Use async/await for all asynchronous operations
|
||||
- Throw ServiceError with error codes
|
||||
- Include JSDoc comments for all public methods"
|
||||
</check>
|
||||
|
||||
<check if="greenfield">
|
||||
"Greenfield project - establishing new patterns:
|
||||
- [Define the patterns to establish]"
|
||||
</check>
|
||||
|
||||
</action>
|
||||
|
||||
<template-output file="tech-spec.md">existing_patterns</template-output>
|
||||
|
||||
<action>**Integration Points:**
|
||||
|
||||
Identify how this change connects:
|
||||
|
||||
- Internal modules it depends on
|
||||
- External APIs or services
|
||||
- Database interactions
|
||||
- Event emitters/listeners
|
||||
- State management
|
||||
|
||||
Be specific about interfaces and contracts.
|
||||
</action>
|
||||
|
||||
<template-output file="tech-spec.md">integration_points</template-output>
|
||||
|
||||
<action>**Development Context:**
|
||||
|
||||
**Relevant Existing Code:**
|
||||
<check if="brownfield">
|
||||
Reference specific files or code sections developers should review:
|
||||
|
||||
- "See UserService.ts lines 120-150 for similar validation pattern"
|
||||
- "Reference AuthMiddleware.ts for authentication approach"
|
||||
- "Follow error handling in PaymentService.ts"
|
||||
</check>
|
||||
|
||||
**Framework/Libraries:**
|
||||
List with EXACT versions from detected stack:
|
||||
|
||||
- Express 4.18.2 (web framework)
|
||||
- winston 3.8.2 (logging)
|
||||
- Joi 17.9.0 (validation)
|
||||
- TypeScript 5.1.6 (language)
|
||||
|
||||
**Internal Modules:**
|
||||
List internal dependencies:
|
||||
|
||||
- @/services/UserService
|
||||
- @/middleware/auth
|
||||
- @/utils/validation
|
||||
|
||||
**Configuration Changes:**
|
||||
Any config files to update:
|
||||
|
||||
- Update .env with new SMTP settings
|
||||
- Add validation schema to config/schemas.ts
|
||||
- Update package.json scripts if needed
|
||||
</action>
|
||||
|
||||
<template-output file="tech-spec.md">existing_code_references</template-output>
|
||||
<template-output file="tech-spec.md">framework_dependencies</template-output>
|
||||
<template-output file="tech-spec.md">internal_dependencies</template-output>
|
||||
<template-output file="tech-spec.md">configuration_changes</template-output>
|
||||
|
||||
<check if="field_type == brownfield">
|
||||
<template-output file="tech-spec.md">existing_conventions</template-output>
|
||||
</check>
|
||||
|
||||
<check if="field_type == greenfield">
|
||||
<action>Set {{existing_conventions}} = "Greenfield project - establishing new conventions per modern best practices"</action>
|
||||
<template-output file="tech-spec.md">existing_conventions</template-output>
|
||||
</check>
|
||||
|
||||
<action>**Implementation Stack:**
|
||||
|
||||
Comprehensive stack with versions:
|
||||
|
||||
- Runtime: Node.js 20.x
|
||||
- Framework: Express 4.18.2
|
||||
- Language: TypeScript 5.1.6
|
||||
- Testing: Jest 29.5.0
|
||||
- Linting: ESLint 8.42.0
|
||||
- Validation: Joi 17.9.0
|
||||
|
||||
All from detected project setup!
|
||||
</action>
|
||||
|
||||
<template-output file="tech-spec.md">implementation_stack</template-output>
|
||||
|
||||
<action>**Technical Details:**
|
||||
|
||||
Deep technical specifics:
|
||||
|
||||
- Algorithms to implement
|
||||
- Data structures to use
|
||||
- Performance considerations
|
||||
- Security considerations
|
||||
- Error scenarios and handling
|
||||
- Edge cases
|
||||
|
||||
Be thorough - developers need details!
|
||||
</action>
|
||||
|
||||
<template-output file="tech-spec.md">technical_details</template-output>
|
||||
|
||||
<action>**Development Setup:**
|
||||
|
||||
What does a developer need to run this locally?
|
||||
|
||||
Based on detected stack and scripts:
|
||||
|
||||
```
|
||||
1. Clone repo (if not already)
|
||||
2. npm install (installs all deps from package.json)
|
||||
3. cp .env.example .env (configure environment)
|
||||
4. npm run dev (starts development server)
|
||||
5. npm test (runs test suite)
|
||||
```
|
||||
|
||||
Or for Python:
|
||||
|
||||
```
|
||||
1. python -m venv venv
|
||||
2. source venv/bin/activate
|
||||
3. pip install -r requirements.txt
|
||||
4. python manage.py runserver
|
||||
```
|
||||
|
||||
Use the actual scripts from package.json/setup files!
|
||||
</action>
|
||||
|
||||
<template-output file="tech-spec.md">development_setup</template-output>
|
||||
|
||||
<action>**Implementation Guide:**
|
||||
|
||||
**Setup Steps:**
|
||||
Pre-implementation checklist:
|
||||
|
||||
- Create feature branch
|
||||
- Verify dev environment running
|
||||
- Review existing code references
|
||||
- Set up test data if needed
|
||||
|
||||
**Implementation Steps:**
|
||||
Step-by-step breakdown:
|
||||
|
||||
For single-story changes:
|
||||
|
||||
1. [Step 1 with specific file and action]
|
||||
2. [Step 2 with specific file and action]
|
||||
3. [Write tests]
|
||||
4. [Verify acceptance criteria]
|
||||
|
||||
For multi-story features:
|
||||
Organize by story/phase:
|
||||
|
||||
1. Phase 1: [Foundation work]
|
||||
2. Phase 2: [Core implementation]
|
||||
3. Phase 3: [Testing and validation]
|
||||
|
||||
**Testing Strategy:**
|
||||
|
||||
- Unit tests for [specific functions]
|
||||
- Integration tests for [specific flows]
|
||||
- Manual testing checklist
|
||||
- Performance testing if applicable
|
||||
|
||||
**Acceptance Criteria:**
|
||||
Specific, measurable, testable criteria:
|
||||
|
||||
1. Given [scenario], when [action], then [outcome]
|
||||
2. [Metric] meets [threshold]
|
||||
3. [Feature] works in [environment]
|
||||
</action>
|
||||
|
||||
<template-output file="tech-spec.md">setup_steps</template-output>
|
||||
<template-output file="tech-spec.md">implementation_steps</template-output>
|
||||
<template-output file="tech-spec.md">testing_strategy</template-output>
|
||||
<template-output file="tech-spec.md">acceptance_criteria</template-output>
|
||||
|
||||
<action>**Developer Resources:**
|
||||
|
||||
**File Paths Reference:**
|
||||
Complete list of all files involved:
|
||||
|
||||
- /src/services/UserService.ts
|
||||
- /src/routes/api/users.ts
|
||||
- /tests/services/UserService.test.ts
|
||||
- /src/types/user.ts
|
||||
|
||||
**Key Code Locations:**
|
||||
Important functions, classes, modules:
|
||||
|
||||
- UserService class (src/services/UserService.ts:15)
|
||||
- validateUser function (src/utils/validation.ts:42)
|
||||
- User type definition (src/types/user.ts:8)
|
||||
|
||||
**Testing Locations:**
|
||||
Where tests go:
|
||||
|
||||
- Unit: tests/services/
|
||||
- Integration: tests/integration/
|
||||
- E2E: tests/e2e/
|
||||
|
||||
**Documentation to Update:**
|
||||
Docs that need updating:
|
||||
|
||||
- README.md - Add new endpoint documentation
|
||||
- API.md - Document /users/validate endpoint
|
||||
- CHANGELOG.md - Note the new feature
|
||||
</action>
|
||||
|
||||
<template-output file="tech-spec.md">file_paths_complete</template-output>
|
||||
<template-output file="tech-spec.md">key_code_locations</template-output>
|
||||
<template-output file="tech-spec.md">testing_locations</template-output>
|
||||
<template-output file="tech-spec.md">documentation_updates</template-output>
|
||||
|
||||
<action>**UX/UI Considerations:**
|
||||
|
||||
<check if="change affects user interface OR user experience">
|
||||
**Determine if this change has UI/UX impact:**
|
||||
- Does it change what users see?
|
||||
- Does it change how users interact?
|
||||
- Does it affect user workflows?
|
||||
|
||||
If YES, document:
|
||||
|
||||
**UI Components Affected:**
|
||||
|
||||
- List specific components (buttons, forms, modals, pages)
|
||||
- Note which need creation vs modification
|
||||
|
||||
**UX Flow Changes:**
|
||||
|
||||
- Current flow vs new flow
|
||||
- User journey impact
|
||||
- Navigation changes
|
||||
|
||||
**Visual/Interaction Patterns:**
|
||||
|
||||
- Follow existing design system? (check for design tokens, component library)
|
||||
- New patterns needed?
|
||||
- Responsive design considerations (mobile, tablet, desktop)
|
||||
|
||||
**Accessibility:**
|
||||
|
||||
- Keyboard navigation requirements
|
||||
- Screen reader compatibility
|
||||
- ARIA labels needed
|
||||
- Color contrast standards
|
||||
|
||||
**User Feedback:**
|
||||
|
||||
- Loading states
|
||||
- Error messages
|
||||
- Success confirmations
|
||||
- Progress indicators
|
||||
</check>
|
||||
|
||||
<check if="no UI/UX impact">
|
||||
"No UI/UX impact - backend/API/infrastructure change only"
|
||||
</check>
|
||||
</action>
|
||||
|
||||
<template-output file="tech-spec.md">ux_ui_considerations</template-output>
|
||||
|
||||
<action>**Testing Approach:**
|
||||
|
||||
Comprehensive testing strategy using {{test_framework_info}}:
|
||||
|
||||
**CONFORM TO EXISTING TEST STANDARDS:**
|
||||
<check if="conform_to_conventions == yes">
|
||||
|
||||
- Follow existing test file naming: {{detected_test_patterns.file_naming}}
|
||||
- Use existing test organization: {{detected_test_patterns.organization}}
|
||||
- Match existing assertion style: {{detected_test_patterns.assertion_style}}
|
||||
- Meet existing coverage requirements: {{detected_test_patterns.coverage}}
|
||||
</check>
|
||||
|
||||
**Test Strategy:**
|
||||
|
||||
- Test framework: {{detected_test_framework}} (from project dependencies)
|
||||
- Unit tests for [specific functions/methods]
|
||||
- Integration tests for [specific flows/APIs]
|
||||
- E2E tests if UI changes
|
||||
- Mock/stub strategies (use existing patterns: {{detected_test_patterns.mocking}})
|
||||
- Performance benchmarks if applicable
|
||||
- Accessibility tests if UI changes
|
||||
|
||||
**Coverage:**
|
||||
|
||||
- Unit test coverage: [target %]
|
||||
- Integration coverage: [critical paths]
|
||||
- Ensure all acceptance criteria have corresponding tests
|
||||
</action>
|
||||
|
||||
<template-output file="tech-spec.md">test_framework_info</template-output>
|
||||
<template-output file="tech-spec.md">testing_approach</template-output>
|
||||
|
||||
<action>**Deployment Strategy:**
|
||||
|
||||
**Deployment Steps:**
|
||||
How to deploy this change:
|
||||
|
||||
1. Merge to main branch
|
||||
2. Run CI/CD pipeline
|
||||
3. Deploy to staging
|
||||
4. Verify in staging
|
||||
5. Deploy to production
|
||||
6. Monitor for issues
|
||||
|
||||
**Rollback Plan:**
|
||||
How to undo if problems:
|
||||
|
||||
1. Revert commit [hash]
|
||||
2. Redeploy previous version
|
||||
3. Verify rollback successful
|
||||
|
||||
**Monitoring:**
|
||||
What to watch after deployment:
|
||||
|
||||
- Error rates in [logging service]
|
||||
- Response times for [endpoint]
|
||||
- User feedback on [feature]
|
||||
</action>
|
||||
|
||||
<template-output file="tech-spec.md">deployment_steps</template-output>
|
||||
<template-output file="tech-spec.md">rollback_plan</template-output>
|
||||
<template-output file="tech-spec.md">monitoring_approach</template-output>
|
||||
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Auto-validate cohesion, completeness, and quality">
|
||||
|
||||
<critical>Always run validation - this is NOT optional!</critical>
|
||||
|
||||
<action>Tech-spec generation complete! Now running automatic validation...</action>
|
||||
|
||||
<action>Load {installed_path}/checklist.md</action>
|
||||
<action>Review tech-spec.md against ALL checklist criteria:
|
||||
|
||||
**Section 1: Output Files Exist**
|
||||
|
||||
- Verify tech-spec.md created
|
||||
- Check for unfilled template variables
|
||||
|
||||
**Section 2: Context Gathering**
|
||||
|
||||
- Validate all available documents were loaded
|
||||
- Confirm stack detection worked
|
||||
- Verify brownfield analysis (if applicable)
|
||||
|
||||
**Section 3: Tech-Spec Definitiveness**
|
||||
|
||||
- Scan for "or" statements (FAIL if found)
|
||||
- Verify all versions are specific
|
||||
- Check stack alignment
|
||||
|
||||
**Section 4: Context-Rich Content**
|
||||
|
||||
- Verify all new template sections populated
|
||||
- Check existing code references (brownfield)
|
||||
- Validate framework dependencies listed
|
||||
|
||||
**Section 5-6: Story Quality (deferred to Step 5)**
|
||||
|
||||
**Section 7: Workflow Status (if applicable)**
|
||||
|
||||
**Section 8: Implementation Readiness**
|
||||
|
||||
- Can developer start immediately?
|
||||
- Is tech-spec comprehensive enough?
|
||||
</action>
|
||||
|
||||
<action>Generate validation report with specific scores:
|
||||
|
||||
- Context Gathering: [Comprehensive/Partial/Insufficient]
|
||||
- Definitiveness: [All definitive/Some ambiguity/Major issues]
|
||||
- Brownfield Integration: [N/A/Excellent/Partial/Missing]
|
||||
- Stack Alignment: [Perfect/Good/Partial/None]
|
||||
- Implementation Readiness: [Yes/No]
|
||||
</action>
|
||||
|
||||
<check if="validation issues found">
|
||||
<output>⚠️ **Validation Issues Detected:**
|
||||
|
||||
{{list_of_issues}}
|
||||
|
||||
I can fix these automatically. Shall I proceed? (yes/no)</output>
|
||||
|
||||
<ask>Fix validation issues? (yes/no)</ask>
|
||||
|
||||
<check if="yes">
|
||||
<action>Fix each issue and re-validate</action>
|
||||
<output>✅ Issues fixed! Re-validation passed.</output>
|
||||
</check>
|
||||
|
||||
<check if="no">
|
||||
<output>⚠️ Proceeding with warnings. Issues should be addressed manually.</output>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<check if="validation passes">
|
||||
<output>✅ **Validation Passed!**
|
||||
|
||||
**Scores:**
|
||||
|
||||
- Context Gathering: {{context_score}}
|
||||
- Definitiveness: {{definitiveness_score}}
|
||||
- Brownfield Integration: {{brownfield_score}}
|
||||
- Stack Alignment: {{stack_score}}
|
||||
- Implementation Readiness: ✅ Ready
|
||||
|
||||
Tech-spec is high quality and ready for story generation!</output>
|
||||
</check>
|
||||
|
||||
</step>
|
||||
|
||||
<step n="5" goal="Generate epic and context-rich stories">
|
||||
|
||||
<action>Invoke unified story generation workflow: {instructions_generate_stories}</action>
|
||||
|
||||
<action>This will generate:
|
||||
|
||||
- **epics.md** - Epic structure (minimal for 1 story, detailed for multiple)
|
||||
- **story-{epic-slug}-N.md** - Story files (where N = 1 to {{story_count}})
|
||||
|
||||
All stories reference tech-spec.md as primary context - comprehensive enough that developers can often skip story-context workflow.
|
||||
</action>
|
||||
|
||||
</step>
|
||||
|
||||
<step n="6" goal="Finalize and guide next steps">
|
||||
|
||||
<output>**✅ Tech-Spec Complete, {user_name}!**
|
||||
|
||||
**Deliverables Created:**
|
||||
|
||||
- ✅ **tech-spec.md** - Context-rich technical specification
|
||||
- Includes: brownfield analysis, framework details, existing patterns
|
||||
- ✅ **epics.md** - Epic structure{{#if story_count == 1}} (minimal for single story){{else}} with {{story_count}} stories{{/if}}
|
||||
- ✅ **story-{epic-slug}-1.md** - First story{{#if story_count > 1}}
|
||||
- ✅ **story-{epic-slug}-2.md** - Second story{{/if}}{{#if story_count > 2}}
|
||||
- ✅ **story-{epic-slug}-3.md** - Third story{{/if}}{{#if story_count > 3}}
|
||||
- ✅ **Additional stories** through story-{epic-slug}-{{story_count}}.md{{/if}}
|
||||
|
||||
**What Makes This Tech-Spec Special:**
|
||||
|
||||
The tech-spec is comprehensive enough to serve as the primary context document:
|
||||
|
||||
- ✨ Brownfield codebase analysis (if applicable)
|
||||
- ✨ Exact framework and library versions from your project
|
||||
- ✨ Existing patterns and code references
|
||||
- ✨ Specific file paths and integration points
|
||||
- ✨ Complete developer resources
|
||||
|
||||
**Next Steps:**
|
||||
|
||||
**🎯 Recommended Path - Direct to Development:**
|
||||
|
||||
Since the tech-spec is CONTEXT-RICH, you can often skip story-context generation!
|
||||
|
||||
{{#if story_count == 1}}
|
||||
**For Your Single Story:**
|
||||
|
||||
1. Ask DEV agent to run `dev-story`
|
||||
- Select story-{epic-slug}-1.md
|
||||
- Tech-spec provides all the context needed!
|
||||
|
||||
💡 **Optional:** Only run `story-context` (SM agent) if this is unusually complex
|
||||
{{else}}
|
||||
**For Your {{story_count}} Stories - Iterative Approach:**
|
||||
|
||||
1. **Start with Story 1:**
|
||||
- Ask DEV agent to run `dev-story`
|
||||
- Select story-{epic-slug}-1.md
|
||||
- Tech-spec provides context
|
||||
|
||||
2. **After Story 1 Complete:**
|
||||
- Repeat for story-{epic-slug}-2.md
|
||||
- Continue through story {{story_count}}
|
||||
|
||||
💡 **Alternative:** Use `sprint-planning` (SM agent) to organize all stories as a coordinated sprint
|
||||
|
||||
💡 **Optional:** Run `story-context` (SM agent) for complex stories needing additional context
|
||||
{{/if}}
|
||||
|
||||
**Your Tech-Spec:**
|
||||
|
||||
- 📄 Saved to: `{output_folder}/tech-spec.md`
|
||||
- Epic & Stories: `{output_folder}/epics.md` + `{sprint_artifacts}/`
|
||||
- Contains: All context, decisions, patterns, and implementation guidance
|
||||
- Ready for: Direct development!
|
||||
|
||||
The tech-spec is your single source of truth! 🚀
|
||||
</output>
|
||||
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
|
|
@ -1,181 +0,0 @@
|
|||
# {{project_name}} - Technical Specification
|
||||
|
||||
**Author:** {{user_name}}
|
||||
**Date:** {{date}}
|
||||
**Project Level:** {{project_level}}
|
||||
**Change Type:** {{change_type}}
|
||||
**Development Context:** {{development_context}}
|
||||
|
||||
---
|
||||
|
||||
## Context
|
||||
|
||||
### Available Documents
|
||||
|
||||
{{loaded_documents_summary}}
|
||||
|
||||
### Project Stack
|
||||
|
||||
{{project_stack_summary}}
|
||||
|
||||
### Existing Codebase Structure
|
||||
|
||||
{{existing_structure_summary}}
|
||||
|
||||
---
|
||||
|
||||
## The Change
|
||||
|
||||
### Problem Statement
|
||||
|
||||
{{problem_statement}}
|
||||
|
||||
### Proposed Solution
|
||||
|
||||
{{solution_overview}}
|
||||
|
||||
### Scope
|
||||
|
||||
**In Scope:**
|
||||
|
||||
{{scope_in}}
|
||||
|
||||
**Out of Scope:**
|
||||
|
||||
{{scope_out}}
|
||||
|
||||
---
|
||||
|
||||
## Implementation Details
|
||||
|
||||
### Source Tree Changes
|
||||
|
||||
{{source_tree_changes}}
|
||||
|
||||
### Technical Approach
|
||||
|
||||
{{technical_approach}}
|
||||
|
||||
### Existing Patterns to Follow
|
||||
|
||||
{{existing_patterns}}
|
||||
|
||||
### Integration Points
|
||||
|
||||
{{integration_points}}
|
||||
|
||||
---
|
||||
|
||||
## Development Context
|
||||
|
||||
### Relevant Existing Code
|
||||
|
||||
{{existing_code_references}}
|
||||
|
||||
### Dependencies
|
||||
|
||||
**Framework/Libraries:**
|
||||
|
||||
{{framework_dependencies}}
|
||||
|
||||
**Internal Modules:**
|
||||
|
||||
{{internal_dependencies}}
|
||||
|
||||
### Configuration Changes
|
||||
|
||||
{{configuration_changes}}
|
||||
|
||||
### Existing Conventions (Brownfield)
|
||||
|
||||
{{existing_conventions}}
|
||||
|
||||
### Test Framework & Standards
|
||||
|
||||
{{test_framework_info}}
|
||||
|
||||
---
|
||||
|
||||
## Implementation Stack
|
||||
|
||||
{{implementation_stack}}
|
||||
|
||||
---
|
||||
|
||||
## Technical Details
|
||||
|
||||
{{technical_details}}
|
||||
|
||||
---
|
||||
|
||||
## Development Setup
|
||||
|
||||
{{development_setup}}
|
||||
|
||||
---
|
||||
|
||||
## Implementation Guide
|
||||
|
||||
### Setup Steps
|
||||
|
||||
{{setup_steps}}
|
||||
|
||||
### Implementation Steps
|
||||
|
||||
{{implementation_steps}}
|
||||
|
||||
### Testing Strategy
|
||||
|
||||
{{testing_strategy}}
|
||||
|
||||
### Acceptance Criteria
|
||||
|
||||
{{acceptance_criteria}}
|
||||
|
||||
---
|
||||
|
||||
## Developer Resources
|
||||
|
||||
### File Paths Reference
|
||||
|
||||
{{file_paths_complete}}
|
||||
|
||||
### Key Code Locations
|
||||
|
||||
{{key_code_locations}}
|
||||
|
||||
### Testing Locations
|
||||
|
||||
{{testing_locations}}
|
||||
|
||||
### Documentation to Update
|
||||
|
||||
{{documentation_updates}}
|
||||
|
||||
---
|
||||
|
||||
## UX/UI Considerations
|
||||
|
||||
{{ux_ui_considerations}}
|
||||
|
||||
---
|
||||
|
||||
## Testing Approach
|
||||
|
||||
{{testing_approach}}
|
||||
|
||||
---
|
||||
|
||||
## Deployment Strategy
|
||||
|
||||
### Deployment Steps
|
||||
|
||||
{{deployment_steps}}
|
||||
|
||||
### Rollback Plan
|
||||
|
||||
{{rollback_plan}}
|
||||
|
||||
### Monitoring
|
||||
|
||||
{{monitoring_approach}}
|
||||
|
|
@ -1,90 +0,0 @@
|
|||
# Story {{N}}.{{M}}: {{story_title}}
|
||||
|
||||
**Status:** Draft
|
||||
|
||||
---
|
||||
|
||||
## User Story
|
||||
|
||||
As a {{user_type}},
|
||||
I want {{capability}},
|
||||
So that {{value_benefit}}.
|
||||
|
||||
---
|
||||
|
||||
## Acceptance Criteria
|
||||
|
||||
**Given** {{precondition}}
|
||||
**When** {{action}}
|
||||
**Then** {{expected_outcome}}
|
||||
|
||||
**And** {{additional_criteria}}
|
||||
|
||||
---
|
||||
|
||||
## Implementation Details
|
||||
|
||||
### Tasks / Subtasks
|
||||
|
||||
{{tasks_subtasks}}
|
||||
|
||||
### Technical Summary
|
||||
|
||||
{{technical_summary}}
|
||||
|
||||
### Project Structure Notes
|
||||
|
||||
- **Files to modify:** {{files_to_modify}}
|
||||
- **Expected test locations:** {{test_locations}}
|
||||
- **Estimated effort:** {{story_points}} story points ({{time_estimate}})
|
||||
- **Prerequisites:** {{dependencies}}
|
||||
|
||||
### Key Code References
|
||||
|
||||
{{existing_code_references}}
|
||||
|
||||
---
|
||||
|
||||
## Context References
|
||||
|
||||
**Tech-Spec:** [tech-spec.md](../tech-spec.md) - Primary context document containing:
|
||||
|
||||
- Brownfield codebase analysis (if applicable)
|
||||
- Framework and library details with versions
|
||||
- Existing patterns to follow
|
||||
- Integration points and dependencies
|
||||
- Complete implementation guidance
|
||||
|
||||
**Architecture:** {{architecture_references}}
|
||||
|
||||
<!-- Additional context XML paths will be added here if story-context workflow is run -->
|
||||
|
||||
---
|
||||
|
||||
## Dev Agent Record
|
||||
|
||||
### Agent Model Used
|
||||
|
||||
<!-- Will be populated during dev-story execution -->
|
||||
|
||||
### Debug Log References
|
||||
|
||||
<!-- Will be populated during dev-story execution -->
|
||||
|
||||
### Completion Notes
|
||||
|
||||
<!-- Will be populated during dev-story execution -->
|
||||
|
||||
### Files Modified
|
||||
|
||||
<!-- Will be populated during dev-story execution -->
|
||||
|
||||
### Test Results
|
||||
|
||||
<!-- Will be populated during dev-story execution -->
|
||||
|
||||
---
|
||||
|
||||
## Review Notes
|
||||
|
||||
<!-- Will be populated during code review -->
|
||||
|
|
@ -1,60 +0,0 @@
|
|||
# Technical Specification
|
||||
name: tech-spec
|
||||
description: "Technical specification workflow for quick-flow projects. Creates focused tech spec and generates epic + stories (1 story for simple changes, 2-5 stories for features). Tech-spec only - no PRD needed."
|
||||
author: "BMad"
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/{bmad_folder}/bmm/config.yaml"
|
||||
project_name: "{config_source}:project_name"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
user_name: "{config_source}:user_name"
|
||||
communication_language: "{config_source}:communication_language"
|
||||
document_output_language: "{config_source}:document_output_language"
|
||||
user_skill_level: "{config_source}:user_skill_level"
|
||||
date: system-generated
|
||||
|
||||
workflow-status: "{output_folder}/bmm-workflow-status.yaml"
|
||||
|
||||
# Runtime variables (captured during workflow execution)
|
||||
story_count: runtime-captured
|
||||
epic_slug: runtime-captured
|
||||
change_type: runtime-captured
|
||||
field_type: runtime-captured
|
||||
|
||||
# Workflow components
|
||||
installed_path: "{project-root}/{bmad_folder}/bmm/workflows/2-plan-workflows/tech-spec"
|
||||
instructions: "{installed_path}/instructions.md"
|
||||
template: "{installed_path}/tech-spec-template.md"
|
||||
|
||||
# Story generation (unified approach - always generates epic + stories)
|
||||
instructions_generate_stories: "{installed_path}/instructions-generate-stories.md"
|
||||
user_story_template: "{installed_path}/user-story-template.md"
|
||||
epics_template: "{installed_path}/epics-template.md"
|
||||
|
||||
# Output configuration
|
||||
default_output_file: "{output_folder}/tech-spec.md"
|
||||
epics_file: "{output_folder}/epics.md"
|
||||
sprint_artifacts: "{output_folder}/sprint_artifacts"
|
||||
|
||||
# Smart input file references - handles both whole docs and sharded docs
|
||||
# Priority: Whole document first, then sharded version
|
||||
# Strategy: How to load sharded documents (FULL_LOAD, SELECTIVE_LOAD, INDEX_GUIDED)
|
||||
input_file_patterns:
|
||||
product_brief:
|
||||
description: "Product vision and goals (optional)"
|
||||
whole: "{output_folder}/*brief*.md"
|
||||
sharded: "{output_folder}/*brief*/index.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
research:
|
||||
description: "Market or domain research (optional)"
|
||||
whole: "{output_folder}/*research*.md"
|
||||
sharded: "{output_folder}/*research*/index.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
document_project:
|
||||
description: "Brownfield project documentation (optional)"
|
||||
sharded: "{output_folder}/index.md"
|
||||
load_strategy: "INDEX_GUIDED"
|
||||
|
||||
standalone: true
|
||||
|
||||
web_bundle: false
|
||||
|
|
@ -1,13 +1,11 @@
|
|||
# Epic and Story Decomposition - Intent-Based Implementation Planning
|
||||
# Epic and Story Creation with Full Technical Context
|
||||
|
||||
<critical>The workflow execution engine is governed by: {project-root}/{bmad_folder}/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
<critical>This workflow transforms requirements into BITE-SIZED STORIES for development agents</critical>
|
||||
<critical>PREREQUISITES: PRD.md AND Architecture.md MUST be completed before running this workflow</critical>
|
||||
<critical>UX Design.md is highly recommended if the product has user interfaces</critical>
|
||||
<critical>EVERY story must be completable by a single dev agent in one focused session</critical>
|
||||
<critical>⚠️ EPIC STRUCTURE PRINCIPLE: Each epic MUST deliver USER VALUE, not just technical capability. Epics are NOT organized by technical layers (database, API, frontend). Each epic should result in something USERS can actually use or benefit from. Exception: Foundation/setup stories at the start of first epic are acceptable. Another valid exception: API-first epic ONLY when the API itself has standalone value (e.g., will be consumed by third parties or multiple frontends).</critical>
|
||||
<critical>BMAD METHOD WORKFLOW POSITION: This workflow can be invoked at multiple points - after PRD only, after PRD+UX, after PRD+UX+Architecture, or to update existing epics. If epics.md already exists, ASK the user: (1) CONTINUING - previous run was incomplete, (2) REPLACING - starting fresh/discarding old, (3) UPDATING - new planning document created since last epic generation</critical>
|
||||
<critical>This is a LIVING DOCUMENT that evolves through the BMad Method workflow chain</critical>
|
||||
<critical>Phase 4 Implementation pulls context from: PRD + epics.md + UX + Architecture</critical>
|
||||
<critical>⚠️ EPIC STRUCTURE PRINCIPLE: Each epic MUST deliver USER VALUE, not just technical capability. Epics are NOT organized by technical layers (database, API, frontend). Each epic should result in something USERS can actually use or benefit from. Exception: Foundation/setup stories at the start of first epic are acceptable.</critical>
|
||||
<critical>Communicate all responses in {communication_language} and adapt to {user_skill_level}</critical>
|
||||
<critical>Generate all documents in {document_output_language}</critical>
|
||||
<critical>LIVING DOCUMENT: Write to epics.md continuously as you work - never wait until the end</critical>
|
||||
|
|
@ -17,600 +15,373 @@
|
|||
|
||||
<workflow>
|
||||
|
||||
<step n="0" goal="Detect workflow mode and available context">
|
||||
<action>Determine if this is initial creation or update mode
|
||||
<step n="0" goal="Validate prerequisites and load all context">
|
||||
<action>Welcome {user_name} to comprehensive epic and story creation</action>
|
||||
|
||||
**Check for existing epics.md:**
|
||||
</action>
|
||||
<action>**CRITICAL PREREQUISITE VALIDATION:**</action>
|
||||
|
||||
<action>Check if {default_output_file} exists (epics.md)</action>
|
||||
<action>Verify required documents exist and are complete:
|
||||
|
||||
<check if="epics.md exists">
|
||||
<action>Load existing epics.md completely</action>
|
||||
<action>Extract existing:
|
||||
- Epic structure and titles
|
||||
- Story breakdown
|
||||
- FR coverage mapping
|
||||
- Existing acceptance criteria
|
||||
</action>
|
||||
1. **PRD.md** - Contains functional requirements (FRs) and product scope
|
||||
2. **Architecture.md** - Contains technical decisions, API contracts, data models
|
||||
3. **UX Design.md** (if UI exists) - Contains interaction patterns, mockups, user flows
|
||||
|
||||
<output>📝 **Existing epics.md found!**
|
||||
Missing any required document means this workflow cannot proceed successfully.</action>
|
||||
|
||||
Current structure:
|
||||
<check if="!prd_content">
|
||||
<output>❌ **PREREQUISITE FAILED: PRD.md not found**
|
||||
|
||||
- {{epic_count}} epics defined
|
||||
- {{story_count}} total stories
|
||||
</output>
|
||||
The PRD is required to define what functionality needs to be built.
|
||||
|
||||
<ask>What would you like to do?
|
||||
Please complete the PRD workflow first, then run this workflow again.</output>
|
||||
|
||||
1. **CONTINUING** - Previous run was incomplete, continue where we left off
|
||||
2. **REPLACING** - Start fresh, discard existing epic structure
|
||||
3. **UPDATING** - New planning document created (UX/Architecture), enhance existing epics
|
||||
|
||||
Enter your choice (1-3):</ask>
|
||||
|
||||
<action>Set mode based on user choice:
|
||||
|
||||
- Choice 1: mode = "CONTINUE" (resume incomplete work)
|
||||
- Choice 2: mode = "CREATE" (start fresh, ignore existing)
|
||||
- Choice 3: mode = "UPDATE" (enhance with new context)
|
||||
</action>
|
||||
</check>
|
||||
|
||||
<check if="epics.md does not exist">
|
||||
<action>Set mode = "CREATE"</action>
|
||||
<output>🆕 **INITIAL CREATION MODE**
|
||||
|
||||
No existing epics found - I'll create the initial epic breakdown.
|
||||
</output>
|
||||
<exit workflow="Missing required PRD document"/>
|
||||
</check>
|
||||
|
||||
<action>**Detect available context documents:**</action>
|
||||
<check if="!architecture_content">
|
||||
<output>❌ **PREREQUISITE FAILED: Architecture.md not found**
|
||||
|
||||
<action>Check which documents exist:
|
||||
The Architecture document is required to provide technical implementation context for stories.
|
||||
|
||||
- UX Design specification ({ux_design_content})
|
||||
- Architecture document ({architecture_content})
|
||||
- Domain brief ({domain_brief_content})
|
||||
- Product brief ({product_brief_content})
|
||||
</action>
|
||||
Please complete the Architecture workflow first, then run this workflow again.</output>
|
||||
|
||||
<check if="mode == 'UPDATE'">
|
||||
<action>Identify what's NEW since last epic update:
|
||||
|
||||
- If UX exists AND not previously incorporated:
|
||||
- Flag: "ADD_UX_DETAILS = true"
|
||||
- Note UX sections to extract (interaction patterns, mockup references, responsive breakpoints)
|
||||
|
||||
- If Architecture exists AND not previously incorporated:
|
||||
- Flag: "ADD_ARCH_DETAILS = true"
|
||||
- Note Architecture sections to extract (tech stack, API contracts, data models)
|
||||
</action>
|
||||
|
||||
<output>**Context Analysis:**
|
||||
{{if ADD_UX_DETAILS}}
|
||||
✅ UX Design found - will add interaction details to stories
|
||||
{{/if}}
|
||||
{{if ADD_ARCH_DETAILS}}
|
||||
✅ Architecture found - will add technical implementation notes
|
||||
{{/if}}
|
||||
{{if !ADD_UX_DETAILS && !ADD_ARCH_DETAILS}}
|
||||
⚠️ No new context documents found - reviewing for any PRD changes
|
||||
{{/if}}
|
||||
</output>
|
||||
<exit workflow="Missing required Architecture document"/>
|
||||
</check>
|
||||
|
||||
<check if="mode == 'CREATE'">
|
||||
<output>**Available Context:**
|
||||
- ✅ PRD (required)
|
||||
{{if ux_design_content}}
|
||||
- ✅ UX Design (will incorporate interaction patterns)
|
||||
{{/if}}
|
||||
{{if architecture_content}}
|
||||
- ✅ Architecture (will incorporate technical decisions)
|
||||
{{/if}}
|
||||
{{if !ux_design_content && !architecture_content}}
|
||||
- ℹ️ Creating basic epic structure (can be enhanced later with UX/Architecture)
|
||||
{{/if}}
|
||||
</output>
|
||||
</check>
|
||||
<action>List the documents loaded</action>
|
||||
|
||||
<template-output>workflow_mode</template-output>
|
||||
<template-output>available_context</template-output>
|
||||
</step>
|
||||
<action>**LOAD ALL CONTEXT DOCUMENTS:**</action>
|
||||
|
||||
<step n="1" goal="Load PRD and extract requirements">
|
||||
<action>
|
||||
<check if="mode == 'CREATE'">
|
||||
Welcome {user_name} to epic and story planning
|
||||
</check>
|
||||
<check if="mode == 'UPDATE'">
|
||||
Welcome back {user_name} - let's enhance your epic breakdown with new context
|
||||
</check>
|
||||
<action>Load and analyze PRD.md:
|
||||
|
||||
Load required documents (fuzzy match, handle both whole and sharded):
|
||||
Extract ALL functional requirements:
|
||||
|
||||
- PRD.md (required)
|
||||
- domain-brief.md (if exists)
|
||||
- product-brief.md (if exists)
|
||||
|
||||
**CRITICAL - PRD FRs Are Now Flat and Strategic:**
|
||||
|
||||
The PRD contains FLAT, capability-level functional requirements (FR1, FR2, FR3...).
|
||||
These are STRATEGIC (WHAT capabilities exist), NOT tactical (HOW they're implemented).
|
||||
|
||||
Example PRD FRs:
|
||||
|
||||
- FR1: Users can create accounts with email or social authentication
|
||||
- FR2: Users can log in securely and maintain sessions
|
||||
- FR6: Users can create, edit, and delete content items
|
||||
|
||||
**Your job in THIS workflow:**
|
||||
|
||||
1. Map each FR to one or more epics
|
||||
2. Break each FR into stories with DETAILED acceptance criteria
|
||||
3. Add ALL the implementation details that were intentionally left out of PRD
|
||||
|
||||
Extract from PRD:
|
||||
|
||||
- ALL functional requirements (flat numbered list)
|
||||
- Non-functional requirements
|
||||
- Domain considerations and compliance needs
|
||||
- Project type and complexity
|
||||
- MVP vs growth vs vision scope boundaries
|
||||
- Product differentiator (what makes it special)
|
||||
- Technical constraints
|
||||
- Complete FR inventory (FR1, FR2, FR3...)
|
||||
- Non-functional requirements and constraints
|
||||
- Project scope boundaries (MVP vs growth vs vision)
|
||||
- User types and their goals
|
||||
- Success criteria
|
||||
- Technical constraints
|
||||
- Compliance requirements
|
||||
|
||||
**Create FR Inventory:**
|
||||
|
||||
List all FRs to ensure coverage:
|
||||
|
||||
- FR1: [description]
|
||||
- FR2: [description]
|
||||
- ...
|
||||
- FRN: [description]
|
||||
|
||||
This inventory will be used to validate complete coverage in Step 4.
|
||||
**FR Inventory Creation:**
|
||||
List every functional requirement with description for coverage tracking.
|
||||
</action>
|
||||
|
||||
<action>Load and analyze Architecture.md:
|
||||
|
||||
Extract ALL technical implementation context relevant to the PRD functional requirements and project needs:
|
||||
|
||||
Scan comprehensively for any technical details needed to create complete user stories, including but not limited to:
|
||||
|
||||
- Technology stack decisions and framework choices
|
||||
- API design, contracts, and integration patterns
|
||||
- Data models, schemas, and relationships
|
||||
- Authentication, authorization, and security patterns
|
||||
- Performance requirements and scaling approaches
|
||||
- Error handling, logging, and monitoring strategies
|
||||
- Deployment architecture and infrastructure considerations
|
||||
- Any other technical decisions, patterns, or constraints that impact implementation
|
||||
|
||||
Focus on extracting whatever technical context exists in the Architecture document that will be needed to create comprehensive, actionable user stories for all PRD requirements.
|
||||
</action>
|
||||
|
||||
<action if="UX Design Exists">
|
||||
Load and analyze UX Design.md:
|
||||
|
||||
Extract ALL user experience context relevant to the PRD functional requirements and project needs:
|
||||
|
||||
Scan comprehensively for any user experience details needed to create complete user stories, including but not limited to:
|
||||
|
||||
- User flows, journey patterns, and interaction design
|
||||
- Screen layouts, components, and visual specifications
|
||||
- Interaction patterns, behaviors, and micro-interactions
|
||||
- Responsive design and mobile-first considerations
|
||||
- Accessibility requirements and inclusive design patterns
|
||||
- Animations, transitions, and feedback mechanisms
|
||||
- Error states, validation patterns, and user guidance
|
||||
- Any other UX/UI decisions, patterns, or specifications that impact implementation
|
||||
|
||||
Focus on extracting whatever user experience context exists in the UX document that will be needed to create comprehensive, actionable user stories for all PRD requirements.
|
||||
</action>
|
||||
|
||||
<template-output>context_validation</template-output>
|
||||
<template-output>fr_inventory</template-output>
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Propose epic structure from natural groupings">
|
||||
<step n="1" goal="Design epic structure with full technical context">
|
||||
<action>**STRATEGIC EPIC PLANNING WITH COMPLETE CONTEXT:**</action>
|
||||
|
||||
<check if="mode == 'UPDATE'">
|
||||
<action>**MAINTAIN existing epic structure:**
|
||||
<action>Now that you have ALL available context (PRD + Architecture + UX), design epics that deliver incremental user value while leveraging the technical design decisions.
|
||||
|
||||
Use the epic structure already defined in epics.md:
|
||||
**EPIC DESIGN PRINCIPLES:**
|
||||
|
||||
- Keep all existing epic titles and goals
|
||||
- Preserve epic sequencing
|
||||
- Maintain FR coverage mapping
|
||||
1. **User-Value First**: Each epic must enable users to accomplish something meaningful
|
||||
2. **Leverage Architecture**: Build upon the technical decisions already made
|
||||
3. **Incremental Delivery**: Each epic should be independently valuable
|
||||
4. **Logical Dependencies**: Dependencies should flow naturally, not artificially
|
||||
|
||||
Note: We're enhancing stories within existing epics, not restructuring.
|
||||
</action>
|
||||
**USE YOUR FULL CONTEXT:**
|
||||
|
||||
<output>**Using existing epic structure:**
|
||||
{{list_existing_epics_with_titles}}
|
||||
From PRD: Group related functional requirements that deliver user outcomes
|
||||
From Architecture: Respect technical boundaries and integration points
|
||||
From UX: Design around user journeys and interaction flows
|
||||
|
||||
Will enhance stories within these epics using new context.
|
||||
</output>
|
||||
**VALID EPIC EXAMPLES:**
|
||||
|
||||
<template-output>epics_summary</template-output>
|
||||
<template-output>fr_coverage_map</template-output>
|
||||
✅ **CORRECT - User Value with Technical Context:**
|
||||
|
||||
<goto step="3">Skip to story enhancement</goto>
|
||||
</check>
|
||||
|
||||
<check if="mode == 'CREATE'">
|
||||
<action>Analyze requirements and identify natural epic boundaries
|
||||
|
||||
INTENT: Find organic groupings that make sense for THIS product
|
||||
|
||||
Look for natural patterns:
|
||||
|
||||
- Features that work together cohesively
|
||||
- User journeys that connect
|
||||
- Business capabilities that cluster
|
||||
- Domain requirements that relate (compliance, validation, security)
|
||||
- Technical systems that should be built together
|
||||
|
||||
Name epics based on VALUE, not technical layers:
|
||||
|
||||
- Good: "User Onboarding", "Content Discovery", "Compliance Framework"
|
||||
- Avoid: "Database Layer", "API Endpoints", "Frontend"
|
||||
|
||||
**⚠️ ANTI-PATTERN EXAMPLES (DO NOT DO THIS):**
|
||||
- Epic 1: Foundation Setup (infrastructure, deployment, core services)
|
||||
- Epic 2: User Authentication & Profile Management (register, login, profile management)
|
||||
- Epic 3: Content Creation & Management (create, edit, publish, organize content)
|
||||
- Epic 4: Content Discovery & Interaction (browse, search, share, comment)
|
||||
|
||||
❌ **WRONG - Technical Layer Breakdown:**
|
||||
|
||||
- Epic 1: Database Schema & Models
|
||||
- Epic 2: API Layer / Backend Services
|
||||
- Epic 3: Frontend UI Components
|
||||
- Epic 4: Integration & Testing
|
||||
- Epic 2: REST API Endpoints
|
||||
- Epic 3: Frontend Components
|
||||
- Epic 4: Authentication Service
|
||||
|
||||
WHY IT'S WRONG: User gets ZERO value until ALL epics complete. No incremental delivery.
|
||||
**PRESENT YOUR EPIC STRUCTURE:**
|
||||
|
||||
✅ **CORRECT - User Value Breakdown:**
|
||||
For each proposed epic, provide:
|
||||
|
||||
- Epic 1: Foundation (project setup - necessary exception)
|
||||
- Epic 2: User Authentication (user can register/login - VALUE DELIVERED)
|
||||
- Epic 3: Content Management (user can create/edit content - VALUE DELIVERED)
|
||||
- Epic 4: Social Features (user can share/interact - VALUE DELIVERED)
|
||||
- **Epic Title**: Value-based, not technical
|
||||
- **User Value Statement**: What users can accomplish after this epic
|
||||
- **PRD Coverage**: Which FRs this epic addresses
|
||||
- **Technical Context**: How this leverages Architecture decisions
|
||||
- **UX Integration**: How this incorporates user experience patterns (if available)
|
||||
- **Dependencies**: What must come before (natural dependencies only)
|
||||
|
||||
WHY IT'S RIGHT: Each epic delivers something users can USE. Incremental value.
|
||||
**FOUNDATION EPIC GUIDELINES:**
|
||||
|
||||
**Valid Exceptions:**
|
||||
For Epic 1, include technical foundation based on Architecture:
|
||||
|
||||
1. **Foundation Epic**: First epic CAN be setup/infrastructure (greenfield projects need this)
|
||||
2. **API-First Epic**: ONLY valid if the API has standalone value (third-party consumers, multiple frontends, API-as-product). If it's just "backend for our frontend", that's the WRONG pattern.
|
||||
- Project setup and build system
|
||||
- Core infrastructure and deployment pipeline
|
||||
- Database schema setup
|
||||
- Basic authentication foundation
|
||||
- API framework setup
|
||||
|
||||
Each epic should:
|
||||
|
||||
- Have clear business goal and user value
|
||||
- Be independently valuable
|
||||
- Contain 3-8 related capabilities
|
||||
- Be deliverable in cohesive phase
|
||||
|
||||
For greenfield projects:
|
||||
|
||||
- First epic MUST establish foundation (project setup, core infrastructure, deployment pipeline)
|
||||
- Foundation enables all subsequent work
|
||||
|
||||
For complex domains:
|
||||
|
||||
- Consider dedicated compliance/regulatory epics
|
||||
- Group validation and safety requirements logically
|
||||
- Note expertise requirements
|
||||
|
||||
Present proposed epic structure showing:
|
||||
|
||||
- Epic titles with clear value statements
|
||||
- High-level scope of each epic
|
||||
- **FR COVERAGE MAP: Which FRs does each epic address?**
|
||||
- Example: "Epic 1 (Foundation): Covers infrastructure needs for all FRs"
|
||||
- Example: "Epic 2 (User Management): FR1, FR2, FR3, FR4, FR5"
|
||||
- Example: "Epic 3 (Content System): FR6, FR7, FR8, FR9"
|
||||
- Suggested sequencing
|
||||
- Why this grouping makes sense
|
||||
|
||||
**Validate FR Coverage:**
|
||||
|
||||
Check that EVERY FR from Step 1 inventory is mapped to at least one epic.
|
||||
If any FRs are unmapped, add them now or explain why they're deferred.
|
||||
This enables all subsequent user-facing epics.
|
||||
</action>
|
||||
|
||||
<template-output>epics_summary</template-output>
|
||||
<template-output>fr_coverage_map</template-output>
|
||||
</check>
|
||||
<template-output>epics_structure_plan</template-output>
|
||||
<template-output>epics_technical_context</template-output>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Decompose each epic into bite-sized stories with DETAILED AC" repeat="for-each-epic">
|
||||
<step n="2" goal="Create detailed stories with complete implementation context" repeat="for-each-epic">
|
||||
<action>**EPIC {{N}} - COMPREHENSIVE STORY CREATION:**</action>
|
||||
|
||||
<check if="mode == 'UPDATE'">
|
||||
<action>**ENHANCE Epic {{N}} stories with new context:**
|
||||
<action>For Epic {{N}}: {{epic_title}}, create bite-sized stories that incorporate ALL available context.
|
||||
|
||||
For each existing story in Epic {{N}}:
|
||||
**STORY CREATION WITH FULL CONTEXT:**
|
||||
|
||||
1. Preserve core story structure (title, user story statement)
|
||||
2. Add/enhance based on available NEW context:
|
||||
For each story, you now have the complete picture:
|
||||
|
||||
<check if="ADD_UX_DETAILS">
|
||||
**Add from UX Design:**
|
||||
- Specific mockup/wireframe references
|
||||
- Exact interaction patterns
|
||||
- Animation/transition specifications
|
||||
- Responsive breakpoints
|
||||
- Component specifications
|
||||
- Error states and feedback patterns
|
||||
- Accessibility requirements (WCAG compliance)
|
||||
- **WHAT to build** (from PRD FRs)
|
||||
- **HOW to build it** (from Architecture decisions)
|
||||
- **HOW users interact** (from UX patterns, if available)
|
||||
|
||||
Example enhancement:
|
||||
BEFORE: "User can log in"
|
||||
AFTER: "User can log in via modal (UX pg 12-15) with email/password fields,
|
||||
password visibility toggle, remember me checkbox,
|
||||
loading state during auth (spinner overlay),
|
||||
error messages below fields (red, 14px),
|
||||
success redirects to dashboard with fade transition"
|
||||
**TRANSFORM STRATEGIC REQUIREMENTS INTO TACTICAL IMPLEMENTATION:**
|
||||
|
||||
</check>
|
||||
PRD says: "Users can create accounts"
|
||||
Architecture says: "Use PostgreSQL with bcrypt hashing, JWT tokens, rate limiting"
|
||||
UX says: "Modal dialog with email/password fields, real-time validation, loading states"
|
||||
|
||||
<check if="ADD_ARCH_DETAILS">
|
||||
**Add from Architecture:**
|
||||
- Specific API endpoints and contracts
|
||||
- Data model references
|
||||
- Tech stack implementation details
|
||||
- Performance requirements
|
||||
- Security implementation notes
|
||||
- Cache strategies
|
||||
- Error handling patterns
|
||||
Your story becomes: Specific implementation details with exact acceptance criteria
|
||||
|
||||
Example enhancement:
|
||||
BEFORE: "System authenticates user"
|
||||
AFTER: "System authenticates user via POST /api/v1/auth/login,
|
||||
validates against users table (see Arch section 6.2),
|
||||
returns JWT token (expires 7d) + refresh token (30d),
|
||||
rate limited to 5 attempts/hour/IP,
|
||||
logs failures to security_events table"
|
||||
**STORY PATTERN FOR EACH EPIC {{N}}:**
|
||||
|
||||
</check>
|
||||
**Epic Goal:** {{epic_goal}}
|
||||
|
||||
3. Update acceptance criteria with new details
|
||||
4. Preserve existing prerequisites
|
||||
5. Enhance technical notes with new context
|
||||
</action>
|
||||
</check>
|
||||
For each story M in Epic {{N}}:
|
||||
|
||||
<check if="mode == 'CREATE'">
|
||||
<action>Break down Epic {{N}} into small, implementable stories
|
||||
- **User Story**: As a [user type], I want [specific capability], So that [value/benefit]
|
||||
- **Acceptance Criteria**: BDD format with COMPLETE implementation details
|
||||
- **Technical Implementation**: Specific guidance from Architecture
|
||||
- **User Experience**: Exact interaction patterns from UX (if available)
|
||||
- **Prerequisites**: Only previous stories, never forward dependencies
|
||||
|
||||
INTENT: Create stories sized for single dev agent completion
|
||||
**DETAILED ACCEPTANCE CRITERIA GUIDELINES:**
|
||||
|
||||
**CRITICAL - ALTITUDE SHIFT FROM PRD:**
|
||||
Include ALL implementation specifics:
|
||||
|
||||
PRD FRs are STRATEGIC (WHAT capabilities):
|
||||
**From Architecture:**
|
||||
|
||||
- ✅ "Users can create accounts"
|
||||
- Exact API endpoints and contracts
|
||||
- Database operations and validations
|
||||
- Authentication/authorization requirements
|
||||
- Error handling patterns
|
||||
- Performance requirements
|
||||
- Security considerations
|
||||
- Integration points with other systems
|
||||
|
||||
Epic Stories are TACTICAL (HOW it's implemented):
|
||||
**From UX (if available):**
|
||||
|
||||
- Email field with RFC 5322 validation
|
||||
- Password requirements: 8+ chars, 1 uppercase, 1 number, 1 special
|
||||
- Password strength meter with visual feedback
|
||||
- Email verification within 15 minutes
|
||||
- reCAPTCHA v3 integration
|
||||
- Account creation completes in < 2 seconds
|
||||
- Mobile responsive with 44x44px touch targets
|
||||
- WCAG 2.1 AA compliant
|
||||
- Specific screen/page references
|
||||
- Interaction patterns and behaviors
|
||||
- Form validation rules and error messages
|
||||
- Responsive behavior
|
||||
- Accessibility requirements
|
||||
- Loading states and transitions
|
||||
- Success/error feedback patterns
|
||||
|
||||
**THIS IS WHERE YOU ADD ALL THE DETAILS LEFT OUT OF PRD:**
|
||||
**From PRD:**
|
||||
|
||||
- UI specifics (exact field counts, validation rules, layout details)
|
||||
- Performance targets (< 2s, 60fps, etc.)
|
||||
- Technical implementation hints (libraries, patterns, APIs)
|
||||
- Edge cases (what happens when...)
|
||||
- Validation rules (regex patterns, constraints)
|
||||
- Error handling (specific error messages, retry logic)
|
||||
- Accessibility requirements (ARIA labels, keyboard nav, screen readers)
|
||||
- Platform specifics (mobile responsive, browser support)
|
||||
- Business rules and constraints
|
||||
- User types and permissions
|
||||
- Compliance requirements
|
||||
- Success criteria
|
||||
|
||||
For each epic, generate:
|
||||
**STORY SIZING PRINCIPLE:**
|
||||
|
||||
- Epic title as `epic_title_{{N}}`
|
||||
- Epic goal/value as `epic_goal_{{N}}`
|
||||
- All stories as repeated pattern `story_title_{{N}}_{{M}}` for each story M
|
||||
Each story must be completable by a single dev agent in one focused session. If a story becomes too large, break it down further while maintaining user value.
|
||||
|
||||
CRITICAL for Epic 1 (Foundation):
|
||||
**EXAMPLE RICH STORY:**
|
||||
|
||||
- Story 1.1 MUST be project setup/infrastructure initialization
|
||||
- Sets up: repo structure, build system, deployment pipeline basics, core dependencies
|
||||
- Creates foundation for all subsequent stories
|
||||
- Note: Architecture workflow will flesh out technical details
|
||||
**Story:** User Registration with Email Verification
|
||||
|
||||
Each story should follow BDD-style acceptance criteria:
|
||||
As a new user, I want to create an account using my email address, So that I can access the platform's features.
|
||||
|
||||
**Story Pattern:**
|
||||
As a [user type],
|
||||
I want [specific capability],
|
||||
So that [clear value/benefit].
|
||||
**Acceptance Criteria:**
|
||||
Given I am on the landing page
|
||||
When I click the "Sign Up" button
|
||||
Then the registration modal opens (UX Mockup 3.2)
|
||||
|
||||
**Acceptance Criteria using BDD:**
|
||||
Given [precondition or initial state]
|
||||
When [action or trigger]
|
||||
Then [expected outcome]
|
||||
And I see email and password fields with proper labels
|
||||
And the email field validates RFC 5322 format in real-time
|
||||
And the password field shows strength meter (red→yellow→green)
|
||||
And I see "Password must be 8+ chars with 1 uppercase, 1 number, 1 special"
|
||||
|
||||
And [additional criteria as needed]
|
||||
When I submit valid registration data
|
||||
Then POST /api/v1/auth/register is called (Architecture section 4.1)
|
||||
And the user record is created in users table with bcrypt hash (Architecture 6.2)
|
||||
And a verification email is sent via SendGrid (Architecture 7.3)
|
||||
And I see "Check your email for verification link" message
|
||||
And I cannot log in until email is verified
|
||||
|
||||
**Prerequisites:** Only previous stories (never forward dependencies)
|
||||
**Technical Notes:**
|
||||
|
||||
**Technical Notes:** Implementation guidance, affected components, compliance requirements
|
||||
- Use PostgreSQL users table (Architecture section 6.2)
|
||||
- Implement rate limiting: 3 attempts per hour per IP (Architecture 8.1)
|
||||
- Return JWT token on successful verification (Architecture 5.2)
|
||||
- Log registration events to audit_events table (Architecture 9.4)
|
||||
- Form validation follows UX Design patterns (UX section 4.1)
|
||||
|
||||
Ensure stories are:
|
||||
|
||||
- Vertically sliced (deliver complete functionality, not just one layer)
|
||||
- Sequentially ordered (logical progression, no forward dependencies)
|
||||
- Independently valuable when possible
|
||||
- Small enough for single-session completion
|
||||
- Clear enough for autonomous implementation
|
||||
|
||||
For each story in epic {{N}}, output variables following this pattern:
|
||||
|
||||
- story*title*{{N}}_1, story_title_{{N}}\*2, etc.
|
||||
- Each containing: user story, BDD acceptance criteria, prerequisites, technical notes</action>
|
||||
**Prerequisites:** Epic 1.1 - Foundation Setup Complete
|
||||
</action>
|
||||
|
||||
<action>**Generate all stories for Epic {{N}}**</action>
|
||||
<template-output>epic*title*{{N}}</template-output>
|
||||
<template-output>epic*goal*{{N}}</template-output>
|
||||
|
||||
<action>For each story M in epic {{N}}, generate story content</action>
|
||||
<template-output>story-title-{{N}}-{{M}}</template-output>
|
||||
</check>
|
||||
<template-output>story*{{N}}*{{M}}</template-output>
|
||||
|
||||
<action>**EPIC {{N}} REVIEW - Present for Checkpoint:**
|
||||
<action>**EPIC {{N}} COMPLETION REVIEW:**</action>
|
||||
|
||||
Summarize the COMPLETE epic breakdown:
|
||||
<output>**Epic {{N}} Complete: {{epic_title}}**
|
||||
|
||||
**Epic {{N}}: {{epic_title}}**
|
||||
Goal: {{epic_goal}}
|
||||
Stories Created: {{count}}
|
||||
|
||||
Stories ({{count}} total):
|
||||
{{for each story, show:}}
|
||||
**FR Coverage:** {{list of FRs covered by this epic}}
|
||||
|
||||
- Story {{N}}.{{M}}: {{story_title}}
|
||||
- User Story: As a... I want... So that...
|
||||
- Acceptance Criteria: (BDD format summary)
|
||||
- Prerequisites: {{list}}
|
||||
**Technical Context Used:** {{Architecture sections referenced}}
|
||||
|
||||
**Review Questions to Consider:**
|
||||
{{if ux_design_content}}
|
||||
**UX Patterns Incorporated:** {{UX sections referenced}}
|
||||
{{/if}}
|
||||
|
||||
- Is the story sequence logical?
|
||||
- Are acceptance criteria clear and testable?
|
||||
- Are there any missing stories for the FRs this epic covers?
|
||||
- Are the stories sized appropriately (single dev agent session)?
|
||||
- FRs covered by this epic: {{FR_list}}
|
||||
|
||||
**NOTE:** At the checkpoint prompt, select [a] for Advanced Elicitation if you want to refine stories, add missing ones, or reorder. Select [c] to approve this epic and continue to the next one.
|
||||
</action>
|
||||
|
||||
<template-output>epic\_{{N}}\_complete_breakdown</template-output>
|
||||
Ready for checkpoint validation.</output>
|
||||
|
||||
<template-output>epic\_{{N}}\_complete</template-output>
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Review epic breakdown and completion">
|
||||
<step n="3" goal="Final validation and coverage matrix">
|
||||
<action>**COMPREHENSIVE VALIDATION WITH FULL CONTEXT:**</action>
|
||||
|
||||
<check if="mode == 'UPDATE'">
|
||||
<action>Review the ENHANCED epic breakdown for completeness
|
||||
<action>Review the complete epic and story breakdown for quality and completeness using ALL available context.
|
||||
|
||||
**Validate Enhancements:**
|
||||
**FR COVERAGE VALIDATION:**
|
||||
|
||||
- All stories now have context-appropriate details
|
||||
- UX references added where applicable
|
||||
- Architecture decisions incorporated where applicable
|
||||
- Acceptance criteria updated with new specifics
|
||||
- Technical notes enhanced with implementation details
|
||||
Create complete FR Coverage Matrix showing every PRD functional requirement mapped to specific stories:
|
||||
|
||||
**Quality Check:**
|
||||
|
||||
- Stories remain bite-sized for single dev agent sessions
|
||||
- No forward dependencies introduced
|
||||
- All new context properly integrated
|
||||
</action>
|
||||
|
||||
<template-output>epic_breakdown_summary</template-output>
|
||||
<template-output>enhancement_summary</template-output>
|
||||
|
||||
<output>✅ **Epic Enhancement Complete!**
|
||||
|
||||
**Updated:** epics.md with enhanced context
|
||||
|
||||
**Enhancements Applied:**
|
||||
{{if ADD_UX_DETAILS}}
|
||||
|
||||
- ✅ UX interaction patterns and mockup references added
|
||||
{{/if}}
|
||||
{{if ADD_ARCH_DETAILS}}
|
||||
- ✅ Architecture technical decisions and API contracts added
|
||||
{{/if}}
|
||||
|
||||
The epic breakdown now includes all available context for Phase 4 implementation.
|
||||
|
||||
**Next Steps:**
|
||||
{{if !architecture_content}}
|
||||
|
||||
- Run Architecture workflow for technical decisions
|
||||
{{/if}}
|
||||
{{if architecture_content}}
|
||||
- Ready for Phase 4: Sprint Planning
|
||||
{{/if}}
|
||||
</output>
|
||||
</check>
|
||||
|
||||
<check if="mode == 'CREATE'">
|
||||
<action>Review the complete epic breakdown for quality and completeness
|
||||
|
||||
**Validate Epic Structure (USER VALUE CHECK):**
|
||||
|
||||
For each epic, answer: "What can USERS do after this epic is complete that they couldn't do before?"
|
||||
|
||||
- Epic 1: [Must have clear user value OR be Foundation exception]
|
||||
- Epic 2: [Must deliver user-facing capability]
|
||||
- Epic N: [Must deliver user-facing capability]
|
||||
|
||||
⚠️ RED FLAG: If an epic only delivers technical infrastructure (database layer, API without users, component library without features), RESTRUCTURE IT. Each epic should enable users to accomplish something.
|
||||
|
||||
Exception validation:
|
||||
|
||||
- Foundation epic: Acceptable as first epic for greenfield projects
|
||||
- API-first epic: Acceptable ONLY if API has standalone consumers (third-party integrations, multiple frontends, API-as-product)
|
||||
|
||||
If any epic fails this check, restructure before proceeding.
|
||||
|
||||
**Validate FR Coverage:**
|
||||
|
||||
Create FR Coverage Matrix showing each FR mapped to epic(s) and story(ies):
|
||||
|
||||
- FR1: [description] → Epic X, Story X.Y
|
||||
- FR2: [description] → Epic X, Story X.Z
|
||||
- FR3: [description] → Epic Y, Story Y.A
|
||||
- **FR1:** [description] → Epic X, Story X.Y (with implementation details)
|
||||
- **FR2:** [description] → Epic Y, Story Y.A (with implementation details)
|
||||
- **FR3:** [description] → Epic Z, Story Z.B (with implementation details)
|
||||
- ...
|
||||
- FRN: [description] → Epic Z, Story Z.B
|
||||
|
||||
Confirm: EVERY FR from Step 1 inventory is covered by at least one story.
|
||||
If any FRs are missing, add stories now.
|
||||
**CRITICAL VALIDATION:** Every single FR from the PRD must be covered by at least one story with complete acceptance criteria.
|
||||
|
||||
**Validate Story Quality:**
|
||||
**ARCHITECTURE INTEGRATION VALIDATION:**
|
||||
|
||||
- All functional requirements from PRD are covered by stories
|
||||
- Epic 1 establishes proper foundation (if greenfield)
|
||||
- All stories are vertically sliced (deliver complete functionality, not just one layer)
|
||||
- No forward dependencies exist (only backward references)
|
||||
- Story sizing is appropriate for single-session completion
|
||||
- BDD acceptance criteria are clear and testable
|
||||
- Details added (what was missing from PRD FRs: UI specifics, performance targets, etc.)
|
||||
- Domain/compliance requirements are properly distributed
|
||||
- Sequencing enables incremental value delivery
|
||||
Verify that Architecture decisions are properly implemented:
|
||||
|
||||
Confirm with {user_name}:
|
||||
- All API endpoints from Architecture are covered in stories
|
||||
- Data models from Architecture are properly created and populated
|
||||
- Authentication/authorization patterns are consistently applied
|
||||
- Performance requirements are addressed in relevant stories
|
||||
- Security measures are implemented where required
|
||||
- Error handling follows Architecture patterns
|
||||
- Integration points between systems are properly handled
|
||||
|
||||
- Epic structure makes sense
|
||||
- All FRs covered by stories (validated via coverage matrix)
|
||||
- Story breakdown is actionable
|
||||
<check if="ux_design_content && architecture_content">
|
||||
- All available context has been incorporated (PRD + UX + Architecture)
|
||||
- Ready for Phase 4 Implementation
|
||||
</check>
|
||||
<check if="ux_design_content && !architecture_content">
|
||||
- UX context has been incorporated
|
||||
- Ready for Architecture workflow (recommended next step)
|
||||
</check>
|
||||
<check if="!ux_design_content && architecture_content">
|
||||
- Architecture context has been incorporated
|
||||
- Consider running UX Design workflow if UI exists
|
||||
</check>
|
||||
<check if="!ux_design_content && !architecture_content">
|
||||
- Basic epic structure created from PRD
|
||||
- Ready for next planning phase (UX Design or Architecture)
|
||||
</check>
|
||||
</action>
|
||||
**UX INTEGRATION VALIDATION** {{if ux_design_content}}:
|
||||
|
||||
<template-output>epic_breakdown_summary</template-output>
|
||||
Verify that UX design patterns are properly implemented:
|
||||
|
||||
- User flows follow the designed journey
|
||||
- Screen layouts and components match specifications
|
||||
- Interaction patterns work as designed
|
||||
- Responsive behavior matches breakpoints
|
||||
- Accessibility requirements are met
|
||||
- Error states and feedback patterns are implemented
|
||||
- Form validation follows UX guidelines
|
||||
- Loading states and transitions are implemented
|
||||
{{/if}}
|
||||
|
||||
**STORY QUALITY VALIDATION:**
|
||||
|
||||
- All stories are sized for single dev agent completion
|
||||
- Acceptance criteria are specific and testable
|
||||
- Technical implementation guidance is clear
|
||||
- User experience details are incorporated
|
||||
- No forward dependencies exist
|
||||
- Epic sequence delivers incremental value
|
||||
- Foundation epic properly enables subsequent work
|
||||
|
||||
**FINAL QUALITY CHECK:**
|
||||
|
||||
Answer these critical questions:
|
||||
|
||||
1. **User Value:** Does each epic deliver something users can actually do/use?
|
||||
2. **Completeness:** Are ALL PRD functional requirements covered?
|
||||
3. **Technical Soundness:** Do stories properly implement Architecture decisions?
|
||||
4. **User Experience:** {{if ux_design_content}} Do stories follow UX design patterns? {{/if}}
|
||||
5. **Implementation Ready:** Can dev agents implement these stories autonomously?
|
||||
</action>
|
||||
|
||||
<output>**✅ EPIC AND STORY CREATION COMPLETE**
|
||||
|
||||
**Output Generated:** epics.md with comprehensive implementation details
|
||||
|
||||
**Full Context Incorporated:**
|
||||
|
||||
- ✅ PRD functional requirements and scope
|
||||
- ✅ Architecture technical decisions and contracts
|
||||
{{if ux_design_content}}
|
||||
- ✅ UX Design interaction patterns and specifications
|
||||
{{/if}}
|
||||
|
||||
**FR Coverage:** {{count}} functional requirements mapped to {{story_count}} stories
|
||||
**Epic Structure:** {{epic_count}} epics delivering incremental user value
|
||||
|
||||
**Ready for Phase 4:** Sprint Planning and Development Implementation
|
||||
</output>
|
||||
|
||||
<template-output>final_validation</template-output>
|
||||
<template-output>fr_coverage_matrix</template-output>
|
||||
|
||||
<check if="mode == 'CREATE'">
|
||||
<output>**✅ Epic Breakdown Complete**
|
||||
|
||||
**Created:** epics.md with epic and story breakdown
|
||||
|
||||
**FR Coverage:** All functional requirements from PRD mapped to stories
|
||||
|
||||
**Context Incorporated:**
|
||||
{{if ux_design_content && architecture_content}}
|
||||
|
||||
- ✅ PRD requirements
|
||||
- ✅ UX interaction patterns
|
||||
- ✅ Architecture technical decisions
|
||||
**Status:** COMPLETE - Ready for Phase 4 Implementation!
|
||||
{{/if}}
|
||||
{{if ux_design_content && !architecture_content}}
|
||||
- ✅ PRD requirements
|
||||
- ✅ UX interaction patterns
|
||||
**Next:** Run Architecture workflow for technical decisions
|
||||
{{/if}}
|
||||
{{if !ux_design_content && architecture_content}}
|
||||
- ✅ PRD requirements
|
||||
- ✅ Architecture technical decisions
|
||||
**Next:** Consider UX Design workflow if UI needed
|
||||
{{/if}}
|
||||
{{if !ux_design_content && !architecture_content}}
|
||||
- ✅ PRD requirements (basic structure)
|
||||
**Next:** Run UX Design (if UI) or Architecture workflow
|
||||
**Note:** Epics will be enhanced with additional context later
|
||||
{{/if}}
|
||||
</output>
|
||||
</check>
|
||||
</check>
|
||||
</step>
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
|
|
|
|||
|
|
@ -1,6 +1,6 @@
|
|||
# Epic and Story Decomposition Workflow
|
||||
name: create-epics-and-stories
|
||||
description: "Transform PRD requirements into bite-sized stories organized into deliverable functional epics. This workflow takes a Product Requirements Document (PRD) and breaks it down into epics and user stories that can be easily assigned to development teams. It ensures that all functional requirements are captured in a structured format, making it easier for teams to understand and implement the necessary features."
|
||||
description: "Transform PRD requirements and Architecture decisions into comprehensive stories organized by user value. This workflow requires completed PRD + Architecture documents (UX recommended if UI exists) and breaks down requirements into implementation-ready epics and user stories that incorporate all available technical and design context. Creates detailed, actionable stories with complete acceptance criteria for development teams."
|
||||
author: "BMad"
|
||||
|
||||
# Critical variables from config
|
||||
|
|
@ -17,30 +17,20 @@ date: system-generated
|
|||
# Priority: Whole document first, then sharded version
|
||||
input_file_patterns:
|
||||
prd:
|
||||
description: "Product Requirements Document with FRs and NFRs"
|
||||
description: "Product Requirements Document with FRs and NFRs (required)"
|
||||
whole: "{output_folder}/*prd*.md"
|
||||
sharded: "{output_folder}/*prd*/index.md"
|
||||
load_strategy: "INDEX_GUIDED"
|
||||
product_brief:
|
||||
description: "Product vision and goals (optional)"
|
||||
whole: "{output_folder}/*product*brief*.md"
|
||||
sharded: "{output_folder}/*product*brief*/index.md"
|
||||
load_strategy: "INDEX_GUIDED"
|
||||
domain_brief:
|
||||
description: "Domain-specific requirements and context (optional)"
|
||||
whole: "{output_folder}/*domain*brief*.md"
|
||||
sharded: "{output_folder}/*domain*brief*/index.md"
|
||||
load_strategy: "INDEX_GUIDED"
|
||||
ux_design:
|
||||
description: "UX design specification for interaction patterns (optional)"
|
||||
whole: "{output_folder}/*ux*.md"
|
||||
sharded: "{output_folder}/*ux*/index.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
architecture:
|
||||
description: "Architecture decisions and technical design (optional)"
|
||||
description: "Architecture decisions and technical design (required)"
|
||||
whole: "{output_folder}/*architecture*.md"
|
||||
sharded: "{output_folder}/*architecture*/index.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
ux_design:
|
||||
description: "UX design specification for interaction patterns (recommended if UI exists)"
|
||||
whole: "{output_folder}/*ux*.md"
|
||||
sharded: "{output_folder}/*ux*/index.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
|
||||
# Module path and component files
|
||||
installed_path: "{project-root}/{bmad_folder}/bmm/workflows/3-solutioning/create-epics-and-stories"
|
||||
|
|
|
|||
|
|
@ -1,12 +0,0 @@
|
|||
# Engineering Backlog
|
||||
|
||||
This backlog collects cross-cutting or future action items that emerge from reviews and planning.
|
||||
|
||||
Routing guidance:
|
||||
|
||||
- Use this file for non-urgent optimizations, refactors, or follow-ups that span multiple stories/epics.
|
||||
- Must-fix items to ship a story belong in that story’s `Tasks / Subtasks`.
|
||||
- Same-epic improvements may also be captured under the epic Tech Spec `Post-Review Follow-ups` section.
|
||||
|
||||
| Date | Story | Epic | Type | Severity | Owner | Status | Notes |
|
||||
| ---- | ----- | ---- | ---- | -------- | ----- | ------ | ----- |
|
||||
|
|
@ -1,22 +0,0 @@
|
|||
# Senior Developer Review - Validation Checklist
|
||||
|
||||
- [ ] Story file loaded from `{{story_path}}`
|
||||
- [ ] Story Status verified as one of: {{allow_status_values}}
|
||||
- [ ] Epic and Story IDs resolved ({{epic_num}}.{{story_num}})
|
||||
- [ ] Story Context located or warning recorded
|
||||
- [ ] Epic Tech Spec located or warning recorded
|
||||
- [ ] Architecture/standards docs loaded (as available)
|
||||
- [ ] Tech stack detected and documented
|
||||
- [ ] MCP doc search performed (or web fallback) and references captured
|
||||
- [ ] Acceptance Criteria cross-checked against implementation
|
||||
- [ ] File List reviewed and validated for completeness
|
||||
- [ ] Tests identified and mapped to ACs; gaps noted
|
||||
- [ ] Code quality review performed on changed files
|
||||
- [ ] Security review performed on changed files and dependencies
|
||||
- [ ] Outcome decided (Approve/Changes Requested/Blocked)
|
||||
- [ ] Review notes appended under "Senior Developer Review (AI)"
|
||||
- [ ] Change Log updated with review entry
|
||||
- [ ] Status updated according to settings (if enabled)
|
||||
- [ ] Story saved successfully
|
||||
|
||||
_Reviewer: {{user_name}} on {{date}}_
|
||||
|
|
@ -1,398 +0,0 @@
|
|||
# Senior Developer Review - Workflow Instructions
|
||||
|
||||
````xml
|
||||
<critical>The workflow execution engine is governed by: {project-root}/{bmad_folder}/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
<critical>Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}</critical>
|
||||
<critical>Generate all documents in {document_output_language}</critical>
|
||||
<critical>This workflow performs a SYSTEMATIC Senior Developer Review on a story with status "review", validates EVERY acceptance criterion and EVERY completed task, appends structured review notes with evidence, and updates the story status based on outcome.</critical>
|
||||
<critical>If story_path is provided, use it. Otherwise, find the first story in sprint-status.yaml with status "review". If none found, offer ad-hoc review option.</critical>
|
||||
<critical>Ad-hoc review mode: User can specify any files to review and what to review for (quality, security, requirements, etc.). Creates standalone review report.</critical>
|
||||
<critical>SYSTEMATIC VALIDATION REQUIREMENT: For EVERY acceptance criterion, verify implementation with evidence (file:line). For EVERY task marked complete, verify it was actually done. Tasks marked complete but not done = HIGH SEVERITY finding.</critical>
|
||||
<critical>⚠️ ZERO TOLERANCE FOR LAZY VALIDATION ⚠️</critical>
|
||||
<critical>If you FAIL to catch even ONE task marked complete that was NOT actually implemented, or ONE acceptance criterion marked done that is NOT in the code with evidence, you have FAILED YOUR ONLY PURPOSE. This is an IMMEDIATE DISQUALIFICATION. No shortcuts. No assumptions. No "looks good enough." You WILL read every file. You WILL verify every claim. You WILL provide evidence (file:line) for EVERY validation. Failure to catch false completions = you failed humanity and the project. Your job is to be the uncompromising gatekeeper. DO YOUR JOB COMPLETELY OR YOU WILL BE REPLACED.</critical>
|
||||
<critical>Only modify the story file in these areas: Status, Dev Agent Record (Completion Notes), File List (if corrections needed), Change Log, and the appended "Senior Developer Review (AI)" section.</critical>
|
||||
<critical>Execute ALL steps in exact order; do NOT skip steps</critical>
|
||||
|
||||
<critical>DOCUMENT OUTPUT: Technical review reports. Structured findings with severity levels and action items. User skill level ({user_skill_level}) affects conversation style ONLY, not review content.</critical>
|
||||
|
||||
<workflow>
|
||||
|
||||
<step n="1" goal="Find story ready for review" tag="sprint-status">
|
||||
<check if="{{story_path}} is provided">
|
||||
<action>Use {{story_path}} directly</action>
|
||||
<action>Read COMPLETE story file and parse sections</action>
|
||||
<action>Extract story_key from filename or story metadata</action>
|
||||
<action>Verify Status is "review" or "ready-for-review" - if not, HALT with message: "Story status must be 'review' or 'ready-for-review' to proceed"</action>
|
||||
</check>
|
||||
|
||||
<check if="{{story_path}} is NOT provided">
|
||||
<critical>MUST read COMPLETE sprint-status.yaml file from start to end to preserve order</critical>
|
||||
<action>Load the FULL file: {{output_folder}}/sprint-status.yaml</action>
|
||||
<action>Read ALL lines from beginning to end - do not skip any content</action>
|
||||
<action>Parse the development_status section completely</action>
|
||||
|
||||
<action>Find FIRST story (reading in order from top to bottom) where:
|
||||
- Key matches pattern: number-number-name (e.g., "1-2-user-auth")
|
||||
- NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
|
||||
- Status value equals "review" OR "ready-for-review"
|
||||
</action>
|
||||
|
||||
<check if="no story with status 'review' or 'ready-for-review' found">
|
||||
<output>📋 No stories with status "review" or "ready-for-review" found
|
||||
|
||||
**What would you like to do?**
|
||||
1. Run `dev-story` to implement and mark a story ready for review
|
||||
2. Check sprint-status.yaml for current story states
|
||||
3. Tell me what code to review and what to review it for
|
||||
</output>
|
||||
<ask>Select an option (1/2/3):</ask>
|
||||
|
||||
<check if="option 3 selected">
|
||||
<ask>What code would you like me to review?
|
||||
|
||||
Provide:
|
||||
- File path(s) or directory to review
|
||||
- What to review for:
|
||||
• General quality and standards
|
||||
• Requirements compliance
|
||||
• Security concerns
|
||||
• Performance issues
|
||||
• Architecture alignment
|
||||
• Something else (specify)
|
||||
|
||||
Your input:?
|
||||
</ask>
|
||||
|
||||
<action>Parse user input to extract:
|
||||
- {{review_files}}: file paths or directories to review
|
||||
- {{review_focus}}: what aspects to focus on
|
||||
- {{review_context}}: any additional context provided
|
||||
</action>
|
||||
|
||||
<action>Set ad_hoc_review_mode = true</action>
|
||||
<action>Skip to step 4 with custom scope</action>
|
||||
</check>
|
||||
|
||||
<check if="option 1 or 2 or no option 3">
|
||||
<action>HALT</action>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<action>Use the first story found with status "review"</action>
|
||||
<action>Resolve story file path in {{story_dir}}</action>
|
||||
<action>Read the COMPLETE story file</action>
|
||||
</check>
|
||||
|
||||
<action>Extract {{epic_num}} and {{story_num}} from filename (e.g., story-2.3.*.md) and story metadata</action>
|
||||
<action>Parse sections: Status, Story, Acceptance Criteria, Tasks/Subtasks (and completion states), Dev Notes, Dev Agent Record (Context Reference, Completion Notes, File List), Change Log</action>
|
||||
<action if="story cannot be read">HALT with message: "Unable to read story file"</action>
|
||||
</step>
|
||||
|
||||
<step n="1.5" goal="Discover and load project documents">
|
||||
<invoke-protocol name="discover_inputs" />
|
||||
<note>After discovery, these content variables are available: {architecture_content}, {ux_design_content}, {epics_content} (loads only epic for this story if sharded), {document_project_content}</note>
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Resolve story context file and specification inputs">
|
||||
<action>Locate story context file: Under Dev Agent Record → Context Reference, read referenced path(s). If missing, search {{output_folder}} for files matching pattern "story-{{epic_num}}.{{story_num}}*.context.xml" and use the most recent.</action>
|
||||
<action if="no story context file found">Continue but record a WARNING in review notes: "No story context file found"</action>
|
||||
|
||||
<action>Locate Epic Tech Spec: Search {{tech_spec_search_dir}} with glob {{tech_spec_glob_template}} (resolve {{epic_num}})</action>
|
||||
<action if="no tech spec found">Continue but record a WARNING in review notes: "No Tech Spec found for epic {{epic_num}}"</action>
|
||||
|
||||
<action>Load architecture/standards docs: For each file name in {{arch_docs_file_names}} within {{arch_docs_search_dirs}}, read if exists. Collect testing, coding standards, security, and architectural patterns.</action>
|
||||
<note>Architecture and brownfield docs were pre-loaded in Step 1.5 as {architecture_content} and {document_project_content}</note>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Detect tech stack and establish best-practice reference set">
|
||||
<action>Detect primary ecosystem(s) by scanning for manifests (e.g., package.json, pyproject.toml, go.mod, Dockerfile). Record key frameworks (e.g., Node/Express, React/Vue, Python/FastAPI, etc.).</action>
|
||||
<action>Synthesize a concise "Best-Practices and References" note capturing any updates or considerations that should influence the review (cite links and versions if available).</action>
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Systematic validation of implementation against acceptance criteria and tasks">
|
||||
<check if="ad_hoc_review_mode == true">
|
||||
<action>Use {{review_files}} as the file list to review</action>
|
||||
<action>Focus review on {{review_focus}} aspects specified by user</action>
|
||||
<action>Use {{review_context}} for additional guidance</action>
|
||||
<action>Skip acceptance criteria checking (no story context)</action>
|
||||
<action>If architecture docs exist, verify alignment with architectural constraints</action>
|
||||
</check>
|
||||
|
||||
<check if="ad_hoc_review_mode != true">
|
||||
<critical>SYSTEMATIC VALIDATION - Check EVERY AC and EVERY task marked complete</critical>
|
||||
|
||||
<action>From the story, read Acceptance Criteria section completely - parse into numbered list</action>
|
||||
<action>From the story, read Tasks/Subtasks section completely - parse ALL tasks and subtasks with their completion state ([x] = completed, [ ] = incomplete)</action>
|
||||
<action>From Dev Agent Record → File List, compile list of changed/added files. If File List is missing or clearly incomplete, search repo for recent changes relevant to the story scope (heuristics: filenames matching components/services/routes/tests inferred from ACs/tasks).</action>
|
||||
|
||||
<critical>Step 4A: SYSTEMATIC ACCEPTANCE CRITERIA VALIDATION</critical>
|
||||
<action>Create AC validation checklist with one entry per AC</action>
|
||||
<action>For EACH acceptance criterion (AC1, AC2, AC3, etc.):
|
||||
1. Read the AC requirement completely
|
||||
2. Search changed files for evidence of implementation
|
||||
3. Determine: IMPLEMENTED, PARTIAL, or MISSING
|
||||
4. Record specific evidence (file:line references where AC is satisfied)
|
||||
5. Check for corresponding tests (unit/integration/E2E as applicable)
|
||||
6. If PARTIAL or MISSING: Flag as finding with severity based on AC criticality
|
||||
7. Document in AC validation checklist
|
||||
</action>
|
||||
<action>Generate AC Coverage Summary: "X of Y acceptance criteria fully implemented"</action>
|
||||
|
||||
<critical>Step 4B: SYSTEMATIC TASK COMPLETION VALIDATION</critical>
|
||||
<action>Create task validation checklist with one entry per task/subtask</action>
|
||||
<action>For EACH task/subtask marked as COMPLETED ([x]):
|
||||
1. Read the task description completely
|
||||
2. Search changed files for evidence the task was actually done
|
||||
3. Determine: VERIFIED COMPLETE, QUESTIONABLE, or NOT DONE
|
||||
4. Record specific evidence (file:line references proving task completion)
|
||||
5. **CRITICAL**: If marked complete but NOT DONE → Flag as HIGH SEVERITY finding with message: "Task marked complete but implementation not found: [task description]"
|
||||
6. If QUESTIONABLE → Flag as MEDIUM SEVERITY finding: "Task completion unclear: [task description]"
|
||||
7. Document in task validation checklist
|
||||
</action>
|
||||
<action>For EACH task/subtask marked as INCOMPLETE ([ ]):
|
||||
1. Note it was not claimed to be complete
|
||||
2. Check if it was actually done anyway (sometimes devs forget to check boxes)
|
||||
3. If done but not marked: Note in review (helpful correction, not a finding)
|
||||
</action>
|
||||
<action>Generate Task Completion Summary: "X of Y completed tasks verified, Z questionable, W falsely marked complete"</action>
|
||||
|
||||
<critical>Step 4C: CROSS-CHECK EPIC TECH-SPEC REQUIREMENTS</critical>
|
||||
<action>Cross-check epic tech-spec requirements and architecture constraints against the implementation intent in files.</action>
|
||||
<action if="critical architecture constraints are violated (e.g., layering, dependency rules)">flag as High Severity finding.</action>
|
||||
|
||||
<critical>Step 4D: COMPILE VALIDATION FINDINGS</critical>
|
||||
<action>Compile all validation findings into structured list:
|
||||
- Missing AC implementations (severity based on AC importance)
|
||||
- Partial AC implementations (MEDIUM severity)
|
||||
- Tasks falsely marked complete (HIGH severity - this is critical)
|
||||
- Questionable task completions (MEDIUM severity)
|
||||
- Missing tests for ACs (severity based on AC criticality)
|
||||
- Architecture violations (HIGH severity)
|
||||
</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="5" goal="Perform code quality and risk review">
|
||||
<action>For each changed file, skim for common issues appropriate to the stack: error handling, input validation, logging, dependency injection, thread-safety/async correctness, resource cleanup, performance anti-patterns.</action>
|
||||
<action>Perform security review: injection risks, authZ/authN handling, secret management, unsafe defaults, un-validated redirects, CORS misconfigured, dependency vulnerabilities (based on manifests).</action>
|
||||
<action>Check tests quality: assertions are meaningful, edge cases covered, deterministic behavior, proper fixtures, no flakiness patterns.</action>
|
||||
<action>Capture concrete, actionable suggestions with severity (High/Med/Low) and rationale. When possible, suggest specific code-level changes (filenames + line ranges) without rewriting large sections.</action>
|
||||
</step>
|
||||
|
||||
<step n="6" goal="Decide review outcome and prepare comprehensive notes">
|
||||
<action>Determine outcome based on validation results:
|
||||
- BLOCKED: Any HIGH severity finding (AC missing, task falsely marked complete, critical architecture violation)
|
||||
- CHANGES REQUESTED: Any MEDIUM severity findings or multiple LOW severity issues
|
||||
- APPROVE: All ACs implemented, all completed tasks verified, no significant issues
|
||||
</action>
|
||||
|
||||
<action>Prepare a structured review report with sections:
|
||||
1. **Summary**: Brief overview of review outcome and key concerns
|
||||
2. **Outcome**: Approve | Changes Requested | Blocked (with justification)
|
||||
3. **Key Findings** (by severity):
|
||||
- HIGH severity issues first (especially falsely marked complete tasks)
|
||||
- MEDIUM severity issues
|
||||
- LOW severity issues
|
||||
4. **Acceptance Criteria Coverage**:
|
||||
- Include complete AC validation checklist from Step 4A
|
||||
- Show: AC# | Description | Status (IMPLEMENTED/PARTIAL/MISSING) | Evidence (file:line)
|
||||
- Summary: "X of Y acceptance criteria fully implemented"
|
||||
- List any missing or partial ACs with severity
|
||||
5. **Task Completion Validation**:
|
||||
- Include complete task validation checklist from Step 4B
|
||||
- Show: Task | Marked As | Verified As | Evidence (file:line)
|
||||
- **CRITICAL**: Highlight any tasks marked complete but not done in RED/bold
|
||||
- Summary: "X of Y completed tasks verified, Z questionable, W falsely marked complete"
|
||||
6. **Test Coverage and Gaps**:
|
||||
- Which ACs have tests, which don't
|
||||
- Test quality issues found
|
||||
7. **Architectural Alignment**:
|
||||
- Tech-spec compliance
|
||||
- Architecture violations if any
|
||||
8. **Security Notes**: Security findings if any
|
||||
9. **Best-Practices and References**: With links
|
||||
10. **Action Items**:
|
||||
- CRITICAL: ALL action items requiring code changes MUST have checkboxes for tracking
|
||||
- Format for actionable items: `- [ ] [Severity] Description (AC #X) [file: path:line]`
|
||||
- Format for informational notes: `- Note: Description (no action required)`
|
||||
- Imperative phrasing for action items
|
||||
- Map to related ACs or files with specific line references
|
||||
- Include suggested owners if clear
|
||||
- Example format:
|
||||
```
|
||||
### Action Items
|
||||
|
||||
**Code Changes Required:**
|
||||
- [ ] [High] Add input validation on login endpoint (AC #1) [file: src/routes/auth.js:23-45]
|
||||
- [ ] [Med] Add unit test for invalid email format [file: tests/unit/auth.test.js]
|
||||
|
||||
**Advisory Notes:**
|
||||
- Note: Consider adding rate limiting for production deployment
|
||||
- Note: Document the JWT expiration policy in README
|
||||
```
|
||||
</action>
|
||||
|
||||
<critical>The AC validation checklist and task validation checklist MUST be included in the review - this is the evidence trail</critical>
|
||||
</step>
|
||||
|
||||
<step n="7" goal="Append review to story and update metadata">
|
||||
<check if="ad_hoc_review_mode == true">
|
||||
<action>Generate review report as a standalone document</action>
|
||||
<action>Save to {{output_folder}}/code-review-{{date}}.md</action>
|
||||
<action>Include sections:
|
||||
- Review Type: Ad-Hoc Code Review
|
||||
- Reviewer: {{user_name}}
|
||||
- Date: {{date}}
|
||||
- Files Reviewed: {{review_files}}
|
||||
- Review Focus: {{review_focus}}
|
||||
- Outcome: (Approve | Changes Requested | Blocked)
|
||||
- Summary
|
||||
- Key Findings
|
||||
- Test Coverage and Gaps
|
||||
- Architectural Alignment
|
||||
- Security Notes
|
||||
- Best-Practices and References (with links)
|
||||
- Action Items
|
||||
</action>
|
||||
<output>Review saved to: {{output_folder}}/code-review-{{date}}.md</output>
|
||||
</check>
|
||||
|
||||
<check if="ad_hoc_review_mode != true">
|
||||
<action>Open {{story_path}} and append a new section at the end titled exactly: "Senior Developer Review (AI)".</action>
|
||||
<action>Insert subsections:
|
||||
- Reviewer: {{user_name}}
|
||||
- Date: {{date}}
|
||||
- Outcome: (Approve | Changes Requested | Blocked) with justification
|
||||
- Summary
|
||||
- Key Findings (by severity - HIGH/MEDIUM/LOW)
|
||||
- **Acceptance Criteria Coverage**:
|
||||
* Include complete AC validation checklist with table format
|
||||
* AC# | Description | Status | Evidence
|
||||
* Summary: X of Y ACs implemented
|
||||
- **Task Completion Validation**:
|
||||
* Include complete task validation checklist with table format
|
||||
* Task | Marked As | Verified As | Evidence
|
||||
* **Highlight falsely marked complete tasks prominently**
|
||||
* Summary: X of Y tasks verified, Z questionable, W false completions
|
||||
- Test Coverage and Gaps
|
||||
- Architectural Alignment
|
||||
- Security Notes
|
||||
- Best-Practices and References (with links)
|
||||
- Action Items:
|
||||
* CRITICAL: Format with checkboxes for tracking resolution
|
||||
* Code changes required: `- [ ] [Severity] Description [file: path:line]`
|
||||
* Advisory notes: `- Note: Description (no action required)`
|
||||
* Group by type: "Code Changes Required" and "Advisory Notes"
|
||||
</action>
|
||||
<action>Add a Change Log entry with date, version bump if applicable, and description: "Senior Developer Review notes appended".</action>
|
||||
<action>If {{update_status_on_result}} is true: update Status to {{status_on_approve}} when approved; to {{status_on_changes_requested}} when changes requested; otherwise leave unchanged.</action>
|
||||
<action>Save the story file.</action>
|
||||
|
||||
<critical>MUST include the complete validation checklists - this is the evidence that systematic review was performed</critical>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="8" goal="Update sprint status based on review outcome" tag="sprint-status">
|
||||
<check if="ad_hoc_review_mode == true">
|
||||
<action>Skip sprint status update (no story context)</action>
|
||||
<output>📋 Ad-hoc review complete - no sprint status to update</output>
|
||||
</check>
|
||||
|
||||
<check if="ad_hoc_review_mode != true">
|
||||
<action>Determine target status based on review outcome:
|
||||
- If {{outcome}} == "Approve" → target_status = "done"
|
||||
- If {{outcome}} == "Changes Requested" → target_status = "in-progress"
|
||||
- If {{outcome}} == "Blocked" → target_status = "review" (stay in review)
|
||||
</action>
|
||||
|
||||
<action>Load the FULL file: {{output_folder}}/sprint-status.yaml</action>
|
||||
<action>Read all development_status entries to find {{story_key}}</action>
|
||||
<action>Verify current status is "review" (expected previous state)</action>
|
||||
<action>Update development_status[{{story_key}}] = {{target_status}}</action>
|
||||
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>
|
||||
|
||||
<check if="update successful">
|
||||
<output>✅ Sprint status updated: review → {{target_status}}</output>
|
||||
</check>
|
||||
|
||||
<check if="story key not found">
|
||||
<output>⚠️ Could not update sprint-status: {{story_key}} not found
|
||||
|
||||
Review was saved to story file, but sprint-status.yaml may be out of sync.
|
||||
</output>
|
||||
</check>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="9" goal="Persist action items to tasks/backlog/epic">
|
||||
<check if="ad_hoc_review_mode == true">
|
||||
<action>All action items are included in the standalone review report</action>
|
||||
<ask if="action items exist">Would you like me to create tracking items for these action items? (backlog/tasks)</ask>
|
||||
<action if="user confirms">
|
||||
If {{backlog_file}} does not exist, copy {installed_path}/backlog_template.md to {{backlog_file}} location.
|
||||
Append a row per action item with Date={{date}}, Story="Ad-Hoc Review", Epic="N/A", Type, Severity, Owner (or "TBD"), Status="Open", Notes with file refs and context.
|
||||
</action>
|
||||
</check>
|
||||
|
||||
<check if="ad_hoc_review_mode != true">
|
||||
<action>Normalize Action Items into a structured list: description, severity (High/Med/Low), type (Bug/TechDebt/Enhancement), suggested owner (if known), related AC/file references.</action>
|
||||
<ask if="action items exist and 'story_tasks' in {{persist_targets}}">Add {{action_item_count}} follow-up items to story Tasks/Subtasks?</ask>
|
||||
<action if="user confirms or no ask needed">
|
||||
Append under the story's "Tasks / Subtasks" a new subsection titled "Review Follow-ups (AI)", adding each item as an unchecked checkbox in imperative form, prefixed with "[AI-Review]" and severity. Example: "- [ ] [AI-Review][High] Add input validation on server route /api/x (AC #2)".
|
||||
</action>
|
||||
<action>
|
||||
If {{backlog_file}} does not exist, copy {installed_path}/backlog_template.md to {{backlog_file}} location.
|
||||
Append a row per action item with Date={{date}}, Story={{epic_num}}.{{story_num}}, Epic={{epic_num}}, Type, Severity, Owner (or "TBD"), Status="Open", Notes with short context and file refs.
|
||||
</action>
|
||||
<action>
|
||||
If an epic Tech Spec was found: open it and create (if missing) a section titled "{{epic_followups_section_title}}". Append a bullet list of action items scoped to this epic with references back to Story {{epic_num}}.{{story_num}}.
|
||||
</action>
|
||||
<action>Save modified files.</action>
|
||||
<action>Optionally invoke tests or linters to verify quick fixes if any were applied as part of review (requires user approval for any dependency changes).</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="10" goal="Validation and completion">
|
||||
<invoke-task>Run validation checklist at {installed_path}/checklist.md using {project-root}/{bmad_folder}/core/tasks/validate-workflow.xml</invoke-task>
|
||||
<action>Report workflow completion.</action>
|
||||
|
||||
<check if="ad_hoc_review_mode == true">
|
||||
<output>**✅ Ad-Hoc Code Review Complete, {user_name}!**
|
||||
|
||||
**Review Details:**
|
||||
- Files Reviewed: {{review_files}}
|
||||
- Review Focus: {{review_focus}}
|
||||
- Review Outcome: {{outcome}}
|
||||
- Action Items: {{action_item_count}}
|
||||
- Review Report: {{output_folder}}/code-review-{{date}}.md
|
||||
|
||||
**Next Steps:**
|
||||
1. Review the detailed findings in the review report
|
||||
2. If changes requested: Address action items in the code
|
||||
3. If blocked: Resolve blockers before proceeding
|
||||
4. Re-run review on updated code if needed
|
||||
</output>
|
||||
</check>
|
||||
|
||||
<check if="ad_hoc_review_mode != true">
|
||||
<output>**✅ Story Review Complete, {user_name}!**
|
||||
|
||||
**Story Details:**
|
||||
- Story: {{epic_num}}.{{story_num}}
|
||||
- Story Key: {{story_key}}
|
||||
- Review Outcome: {{outcome}}
|
||||
- Sprint Status: {{target_status}}
|
||||
- Action Items: {{action_item_count}}
|
||||
|
||||
**Next Steps:**
|
||||
1. Review the Senior Developer Review notes appended to story
|
||||
2. If approved: Story is marked done, continue with next story
|
||||
3. If changes requested: Address action items and re-run `dev-story`
|
||||
4. If blocked: Resolve blockers before proceeding
|
||||
</output>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
````
|
||||
|
|
@ -0,0 +1,176 @@
|
|||
<workflow>
|
||||
<critical>The workflow execution engine is governed by: {project-root}/{bmad_folder}/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
<critical>Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}</critical>
|
||||
<critical>Generate all documents in {document_output_language}</critical>
|
||||
|
||||
<critical>🔥 YOU ARE AN ADVERSARIAL CODE REVIEWER - Find what's wrong or missing! 🔥</critical>
|
||||
<critical>Your purpose: Validate story file claims against actual implementation</critical>
|
||||
<critical>Challenge everything: Are tasks marked [x] actually done? Are ACs really implemented?</critical>
|
||||
<critical>Find 3-10 specific issues in every review minimum - no lazy "looks good" reviews - YOU are so much better than the dev agent
|
||||
that wrote this slop</critical>
|
||||
<critical>Read EVERY file in the File List - verify implementation against story requirements</critical>
|
||||
<critical>Tasks marked complete but not done = CRITICAL finding</critical>
|
||||
<critical>Acceptance Criteria not implemented = HIGH severity finding</critical>
|
||||
|
||||
<step n="1" goal="Load story and discover changes">
|
||||
<action>Use provided {{story_path}} or ask user which story file to review</action>
|
||||
<action>Read COMPLETE story file</action>
|
||||
<action>Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Agent Record → File List, Change Log</action>
|
||||
|
||||
<!-- Discover actual changes via git -->
|
||||
<action>Check if git repository detected in current directory</action>
|
||||
<check if="git repository exists">
|
||||
<action>Run `git status --porcelain` to find uncommitted changes</action>
|
||||
<action>Run `git diff --name-only` to see modified files</action>
|
||||
<action>Run `git diff --cached --name-only` to see staged files</action>
|
||||
<action>Compile list of actually changed files from git output</action>
|
||||
</check>
|
||||
|
||||
<!-- Cross-reference story File List vs git reality -->
|
||||
<action>Compare story's Dev Agent Record → File List with actual git changes</action>
|
||||
<action>Note discrepancies:
|
||||
- Files in git but not in story File List
|
||||
- Files in story File List but no git changes
|
||||
- Missing documentation of what was actually changed
|
||||
</action>
|
||||
|
||||
<invoke-protocol name="discover_inputs" />
|
||||
<action>Load {project_context} for coding standards (if exists)</action>
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Build review attack plan">
|
||||
<action>Extract ALL Acceptance Criteria from story</action>
|
||||
<action>Extract ALL Tasks/Subtasks with completion status ([x] vs [ ])</action>
|
||||
<action>From Dev Agent Record → File List, compile list of claimed changes</action>
|
||||
|
||||
<action>Create review plan:
|
||||
1. **AC Validation**: Verify each AC is actually implemented
|
||||
2. **Task Audit**: Verify each [x] task is really done
|
||||
3. **Code Quality**: Security, performance, maintainability
|
||||
4. **Test Quality**: Real tests vs placeholder bullshit
|
||||
</action>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Execute adversarial review">
|
||||
<critical>VALIDATE EVERY CLAIM - Check git reality vs story claims</critical>
|
||||
|
||||
<!-- Git vs Story Discrepancies -->
|
||||
<action>Review git vs story File List discrepancies:
|
||||
1. **Files changed but not in story File List** → MEDIUM finding (incomplete documentation)
|
||||
2. **Story lists files but no git changes** → HIGH finding (false claims)
|
||||
3. **Uncommitted changes not documented** → MEDIUM finding (transparency issue)
|
||||
</action>
|
||||
|
||||
<!-- Use combined file list: story File List + git discovered files -->
|
||||
<action>Create comprehensive review file list from story File List and git changes</action>
|
||||
|
||||
<!-- AC Validation -->
|
||||
<action>For EACH Acceptance Criterion:
|
||||
1. Read the AC requirement
|
||||
2. Search implementation files for evidence
|
||||
3. Determine: IMPLEMENTED, PARTIAL, or MISSING
|
||||
4. If MISSING/PARTIAL → HIGH SEVERITY finding
|
||||
</action>
|
||||
|
||||
<!-- Task Completion Audit -->
|
||||
<action>For EACH task marked [x]:
|
||||
1. Read the task description
|
||||
2. Search files for evidence it was actually done
|
||||
3. **CRITICAL**: If marked [x] but NOT DONE → CRITICAL finding
|
||||
4. Record specific proof (file:line)
|
||||
</action>
|
||||
|
||||
<!-- Code Quality Deep Dive -->
|
||||
<action>For EACH file in comprehensive review list:
|
||||
1. **Security**: Look for injection risks, missing validation, auth issues
|
||||
2. **Performance**: N+1 queries, inefficient loops, missing caching
|
||||
3. **Error Handling**: Missing try/catch, poor error messages
|
||||
4. **Code Quality**: Complex functions, magic numbers, poor naming
|
||||
5. **Test Quality**: Are tests real assertions or placeholders?
|
||||
</action>
|
||||
|
||||
<check if="total_issues_found lt 3">
|
||||
<critical>NOT LOOKING HARD ENOUGH - Find more problems!</critical>
|
||||
<action>Re-examine code for:
|
||||
- Edge cases and null handling
|
||||
- Architecture violations
|
||||
- Documentation gaps
|
||||
- Integration issues
|
||||
- Dependency problems
|
||||
- Git commit message quality (if applicable)
|
||||
</action>
|
||||
<action>Find at least 3 more specific, actionable issues</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Present findings and fix them">
|
||||
<action>Categorize findings: HIGH (must fix), MEDIUM (should fix), LOW (nice to fix)</action>
|
||||
|
||||
<output>**🔥 CODE REVIEW FINDINGS, {user_name}!**
|
||||
|
||||
**Story:** {{story_file}}
|
||||
**Git vs Story Discrepancies:** {{git_discrepancy_count}} found
|
||||
**Issues Found:** {{high_count}} High, {{medium_count}} Medium, {{low_count}} Low
|
||||
|
||||
## 🔴 CRITICAL ISSUES
|
||||
- Tasks marked [x] but not actually implemented
|
||||
- Acceptance Criteria not implemented
|
||||
- Story claims files changed but no git evidence
|
||||
- Security vulnerabilities
|
||||
|
||||
## 🟡 MEDIUM ISSUES
|
||||
- Files changed but not documented in story File List
|
||||
- Uncommitted changes not tracked
|
||||
- Performance problems
|
||||
- Poor test coverage/quality
|
||||
- Code maintainability issues
|
||||
|
||||
## 🟢 LOW ISSUES
|
||||
- Code style improvements
|
||||
- Documentation gaps
|
||||
- Git commit message quality
|
||||
</output>
|
||||
|
||||
<ask>What should I do with these issues?
|
||||
|
||||
1. **Fix them automatically** - I'll update the code and tests
|
||||
2. **Create action items** - Add to story Tasks/Subtasks for later
|
||||
3. **Show me details** - Deep dive into specific issues
|
||||
|
||||
Choose [1], [2], or specify which issue to examine:</ask>
|
||||
|
||||
<check if="user chooses 1">
|
||||
<action>Fix all HIGH and MEDIUM issues in the code</action>
|
||||
<action>Add/update tests as needed</action>
|
||||
<action>Update File List in story if files changed</action>
|
||||
<action>Update story Dev Agent Record with fixes applied</action>
|
||||
</check>
|
||||
|
||||
<check if="user chooses 2">
|
||||
<action>Add "Review Follow-ups (AI)" subsection to Tasks/Subtasks</action>
|
||||
<action>For each issue: `- [ ] [AI-Review][Severity] Description [file:line]`</action>
|
||||
</check>
|
||||
|
||||
<check if="user chooses 3">
|
||||
<action>Show detailed explanation with code examples</action>
|
||||
<action>Return to fix decision</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="5" goal="Update story status">
|
||||
<action>If all HIGH issues fixed and ACs implemented → Update story Status to "done"</action>
|
||||
<action>If issues remain → Update story Status to "in-progress"</action>
|
||||
<action>Save story file</action>
|
||||
|
||||
<output>**✅ Review Complete!**
|
||||
|
||||
**Story Status:** {{new_status}}
|
||||
**Issues Fixed:** {{fixed_count}}
|
||||
**Action Items Created:** {{action_count}}
|
||||
|
||||
{{#if new_status == "done"}}Story is ready for next work!{{else}}Address the action items and continue development.{{/if}}
|
||||
</output>
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
|
|
@ -1,6 +1,6 @@
|
|||
# Review Story Workflow
|
||||
name: code-review
|
||||
description: "Perform a Senior Developer code review on a completed story flagged Ready for Review, leveraging story-context, epic tech-spec, repo docs, MCP servers for latest best-practices, and web search as fallback. Appends structured review notes to the story."
|
||||
description: "Perform an ADVERSARIAL Senior Developer code review that finds 3-10 specific problems in every story. Challenges everything: code quality, test coverage, architecture compliance, security, performance. NEVER accepts 'looks good' - must find minimum issues and can auto-fix with user approval."
|
||||
author: "BMad"
|
||||
|
||||
# Critical variables from config
|
||||
|
|
@ -16,21 +16,14 @@ sprint_status: "{sprint_artifacts}/sprint-status.yaml || {output_folder}/sprint-
|
|||
|
||||
# Workflow components
|
||||
installed_path: "{project-root}/{bmad_folder}/bmm/workflows/4-implementation/code-review"
|
||||
instructions: "{installed_path}/instructions.md"
|
||||
instructions: "{installed_path}/instructions.xml"
|
||||
validation: "{installed_path}/checklist.md"
|
||||
template: false
|
||||
|
||||
variables:
|
||||
# Project context
|
||||
project_context: "**/project-context.md"
|
||||
story_dir: "{sprint_artifacts}"
|
||||
tech_spec_search_dir: "{output_folder}"
|
||||
tech_spec_glob_template: "tech-spec-epic-{{epic_num}}*.md"
|
||||
arch_docs_search_dirs: |
|
||||
- "{output_folder}"
|
||||
arch_docs_file_names: |
|
||||
- architecture.md
|
||||
backlog_file: "{output_folder}/backlog.md"
|
||||
update_epic_followups: true
|
||||
epic_followups_section_title: "Post-Review Follow-ups"
|
||||
|
||||
# Smart input file references - handles both whole docs and sharded docs
|
||||
# Priority: Whole document first, then sharded version
|
||||
|
|
|
|||
|
|
@ -1,240 +1,358 @@
|
|||
# Create Story Quality Validation Checklist
|
||||
# 🎯 Story Context Quality Competition Prompt
|
||||
|
||||
```xml
|
||||
<critical>This validation runs in a FRESH CONTEXT by an independent validator agent</critical>
|
||||
<critical>The validator audits story quality and offers to improve if issues are found</critical>
|
||||
<critical>Load only the story file and necessary source documents - do NOT load workflow instructions</critical>
|
||||
## **🔥 CRITICAL MISSION: Outperform and Fix the Original Create-Story LLM**
|
||||
|
||||
<validation-checklist>
|
||||
You are an independent quality validator in a **FRESH CONTEXT**. Your mission is to **thoroughly review** a story file that was generated by the create-story workflow and **systematically identify any mistakes, omissions, or disasters** that the original LLM missed.
|
||||
|
||||
<expectations>
|
||||
**What create-story workflow should have accomplished:**
|
||||
**Your purpose is NOT just to validate - it's to FIX and PREVENT LLM developer mistakes, omissions, or disasters!**
|
||||
|
||||
1. **Previous Story Continuity:** If a previous story exists (status: done/review/in-progress), current story should have "Learnings from Previous Story" subsection in Dev Notes that references: new files created, completion notes, architectural decisions, unresolved review items
|
||||
2. **Source Document Coverage:** Story should cite tech spec (if exists), epics, PRD, and relevant architecture docs (architecture.md, testing-strategy.md, coding-standards.md, unified-project-structure.md)
|
||||
3. **Requirements Traceability:** ACs sourced from tech spec (preferred) or epics, not invented
|
||||
4. **Dev Notes Quality:** Specific guidance with citations, not generic advice
|
||||
5. **Task-AC Mapping:** Every AC has tasks, every task references AC, testing subtasks present
|
||||
6. **Structure:** Status="drafted", proper story statement, Dev Agent Record sections initialized
|
||||
</expectations>
|
||||
### **🚨 CRITICAL MISTAKES TO PREVENT:**
|
||||
|
||||
## Validation Steps
|
||||
- **Reinventing wheels** - Creating duplicate functionality instead of reusing existing
|
||||
- **Wrong libraries** - Using incorrect frameworks, versions, or dependencies
|
||||
- **Wrong file locations** - Violating project structure and organization
|
||||
- **Breaking regressions** - Implementing changes that break existing functionality
|
||||
- **Ignoring UX** - Not following user experience design requirements
|
||||
- **Vague implementations** - Creating unclear, ambiguous implementations
|
||||
- **Lying about completion** - Implementing incorrectly or incompletely
|
||||
- **Not learning from past work** - Ignoring previous story learnings and patterns
|
||||
|
||||
### 1. Load Story and Extract Metadata
|
||||
- [ ] Load story file: {{story_file_path}}
|
||||
- [ ] Parse sections: Status, Story, ACs, Tasks, Dev Notes, Dev Agent Record, Change Log
|
||||
- [ ] Extract: epic_num, story_num, story_key, story_title
|
||||
- [ ] Initialize issue tracker (Critical/Major/Minor)
|
||||
### **🚨 EXHAUSTIVE ANALYSIS REQUIRED:**
|
||||
|
||||
### 2. Previous Story Continuity Check
|
||||
You must thoroughly analyze **ALL artifacts** to extract critical context - do NOT be lazy or skim! This is the most important quality control function in the entire development process!
|
||||
|
||||
**Find previous story:**
|
||||
- [ ] Load {output_folder}/sprint-status.yaml
|
||||
- [ ] Find current {{story_key}} in development_status
|
||||
- [ ] Identify story entry immediately above (previous story)
|
||||
- [ ] Check previous story status
|
||||
### **🔬 UTILIZE SUBPROCESSES AND SUBAGENTS:**
|
||||
|
||||
**If previous story status is done/review/in-progress:**
|
||||
- [ ] Load previous story file: {story_dir}/{{previous_story_key}}.md
|
||||
- [ ] Extract: Dev Agent Record (Completion Notes, File List with NEW/MODIFIED)
|
||||
- [ ] Extract: Senior Developer Review section if present
|
||||
- [ ] Count unchecked [ ] items in Review Action Items
|
||||
- [ ] Count unchecked [ ] items in Review Follow-ups (AI)
|
||||
Use research subagents, subprocesses, or parallel processing if available to thoroughly analyze different artifacts **simultaneously and thoroughly**. Leave no stone unturned!
|
||||
|
||||
**Validate current story captured continuity:**
|
||||
- [ ] Check: "Learnings from Previous Story" subsection exists in Dev Notes
|
||||
- If MISSING and previous story has content → **CRITICAL ISSUE**
|
||||
- [ ] If subsection exists, verify it includes:
|
||||
- [ ] References to NEW files from previous story → If missing → **MAJOR ISSUE**
|
||||
- [ ] Mentions completion notes/warnings → If missing → **MAJOR ISSUE**
|
||||
- [ ] Calls out unresolved review items (if any exist) → If missing → **CRITICAL ISSUE**
|
||||
- [ ] Cites previous story: [Source: stories/{{previous_story_key}}.md]
|
||||
### **🎯 COMPETITIVE EXCELLENCE:**
|
||||
|
||||
**If previous story status is backlog/drafted:**
|
||||
- [ ] No continuity expected (note this)
|
||||
This is a COMPETITION to create the **ULTIMATE story context** that makes LLM developer mistakes **IMPOSSIBLE**!
|
||||
|
||||
**If no previous story exists:**
|
||||
- [ ] First story in epic, no continuity expected
|
||||
## **🚀 HOW TO USE THIS CHECKLIST**
|
||||
|
||||
### 3. Source Document Coverage Check
|
||||
### **When Running from Create-Story Workflow:**
|
||||
|
||||
**Build available docs list:**
|
||||
- [ ] Check exists: tech-spec-epic-{{epic_num}}*.md in {tech_spec_search_dir}
|
||||
- [ ] Check exists: {output_folder}/epics.md
|
||||
- [ ] Check exists: {output_folder}/PRD.md
|
||||
- [ ] Check exists in {output_folder}/ or {project-root}/docs/:
|
||||
- architecture.md, testing-strategy.md, coding-standards.md
|
||||
- unified-project-structure.md, tech-stack.md
|
||||
- backend-architecture.md, frontend-architecture.md, data-models.md
|
||||
- The `{project_root}/{bmad_folder}/core/tasks/validate-workflow.xml` framework will automatically:
|
||||
- Load this checklist file
|
||||
- Load the newly created story file (`{story_file_path}`)
|
||||
- Load workflow variables from `{installed_path}/workflow.yaml`
|
||||
- Execute the validation process
|
||||
|
||||
**Validate story references available docs:**
|
||||
- [ ] Extract all [Source: ...] citations from story Dev Notes
|
||||
- [ ] Tech spec exists but not cited → **CRITICAL ISSUE**
|
||||
- [ ] Epics exists but not cited → **CRITICAL ISSUE**
|
||||
- [ ] Architecture.md exists → Read for relevance → If relevant but not cited → **MAJOR ISSUE**
|
||||
- [ ] Testing-strategy.md exists → Check Dev Notes mentions testing standards → If not → **MAJOR ISSUE**
|
||||
- [ ] Testing-strategy.md exists → Check Tasks have testing subtasks → If not → **MAJOR ISSUE**
|
||||
- [ ] Coding-standards.md exists → Check Dev Notes references standards → If not → **MAJOR ISSUE**
|
||||
- [ ] Unified-project-structure.md exists → Check Dev Notes has "Project Structure Notes" subsection → If not → **MAJOR ISSUE**
|
||||
### **When Running in Fresh Context:**
|
||||
|
||||
**Validate citation quality:**
|
||||
- [ ] Verify cited file paths are correct and files exist → Bad citations → **MAJOR ISSUE**
|
||||
- [ ] Check citations include section names, not just file paths → Vague citations → **MINOR ISSUE**
|
||||
- User should provide the story file path being reviewed
|
||||
- Load the story file directly
|
||||
- Load the corresponding workflow.yaml for variable context
|
||||
- Proceed with systematic analysis
|
||||
|
||||
### 4. Acceptance Criteria Quality Check
|
||||
### **Required Inputs:**
|
||||
|
||||
- [ ] Extract Acceptance Criteria from story
|
||||
- [ ] Count ACs: {{ac_count}} (if 0 → **CRITICAL ISSUE** and halt)
|
||||
- [ ] Check story indicates AC source (tech spec, epics, PRD)
|
||||
- **Story file**: The story file to review and improve
|
||||
- **Workflow variables**: From workflow.yaml (story_dir, output_folder, epics_file, etc.)
|
||||
- **Source documents**: Epics, architecture, etc. (discovered or provided)
|
||||
- **Validation framework**: `validate-workflow.xml` (handles checklist execution)
|
||||
|
||||
**If tech spec exists:**
|
||||
- [ ] Load tech spec
|
||||
- [ ] Search for this story number
|
||||
- [ ] Extract tech spec ACs for this story
|
||||
- [ ] Compare story ACs vs tech spec ACs → If mismatch → **MAJOR ISSUE**
|
||||
---
|
||||
|
||||
**If no tech spec but epics.md exists:**
|
||||
- [ ] Load epics.md
|
||||
- [ ] Search for Epic {{epic_num}}, Story {{story_num}}
|
||||
- [ ] Story not found in epics → **CRITICAL ISSUE** (should have halted)
|
||||
- [ ] Extract epics ACs
|
||||
- [ ] Compare story ACs vs epics ACs → If mismatch without justification → **MAJOR ISSUE**
|
||||
## **🔬 SYSTEMATIC RE-ANALYSIS APPROACH**
|
||||
|
||||
**Validate AC quality:**
|
||||
- [ ] Each AC is testable (measurable outcome)
|
||||
- [ ] Each AC is specific (not vague)
|
||||
- [ ] Each AC is atomic (single concern)
|
||||
- [ ] Vague ACs found → **MINOR ISSUE**
|
||||
You will systematically re-do the entire story creation process, but with a critical eye for what the original LLM might have missed:
|
||||
|
||||
### 5. Task-AC Mapping Check
|
||||
### **Step 1: Load and Understand the Target**
|
||||
|
||||
- [ ] Extract Tasks/Subtasks from story
|
||||
- [ ] For each AC: Search tasks for "(AC: #{{ac_num}})" reference
|
||||
- [ ] AC has no tasks → **MAJOR ISSUE**
|
||||
- [ ] For each task: Check if references an AC number
|
||||
- [ ] Tasks without AC refs (and not testing/setup) → **MINOR ISSUE**
|
||||
- [ ] Count tasks with testing subtasks
|
||||
- [ ] Testing subtasks < ac_count → **MAJOR ISSUE**
|
||||
1. **Load the workflow configuration**: `{installed_path}/workflow.yaml` for variable inclusion
|
||||
2. **Load the story file**: `{story_file_path}` (provided by user or discovered)
|
||||
3. **Load validation framework**: `{project_root}/{bmad_folder}/core/tasks/validate-workflow.xml`
|
||||
4. **Extract metadata**: epic_num, story_num, story_key, story_title from story file
|
||||
5. **Resolve all workflow variables**: story_dir, output_folder, epics_file, architecture_file, etc.
|
||||
6. **Understand current status**: What story implementation guidance is currently provided?
|
||||
|
||||
### 6. Dev Notes Quality Check
|
||||
**Note:** If running in fresh context, user should provide the story file path being reviewed. If running from create-story workflow, the validation framework will automatically discover the checklist and story file.
|
||||
|
||||
**Check required subsections exist:**
|
||||
- [ ] Architecture patterns and constraints
|
||||
- [ ] References (with citations)
|
||||
- [ ] Project Structure Notes (if unified-project-structure.md exists)
|
||||
- [ ] Learnings from Previous Story (if previous story has content)
|
||||
- [ ] Missing required subsections → **MAJOR ISSUE**
|
||||
### **Step 2: Exhaustive Source Document Analysis**
|
||||
|
||||
**Validate content quality:**
|
||||
- [ ] Architecture guidance is specific (not generic "follow architecture docs") → If generic → **MAJOR ISSUE**
|
||||
- [ ] Count citations in References subsection
|
||||
- [ ] No citations → **MAJOR ISSUE**
|
||||
- [ ] < 3 citations and multiple arch docs exist → **MINOR ISSUE**
|
||||
- [ ] Scan for suspicious specifics without citations:
|
||||
- API endpoints, schema details, business rules, tech choices
|
||||
- [ ] Likely invented details found → **MAJOR ISSUE**
|
||||
**🔥 CRITICAL: Treat this like YOU are creating the story from scratch to PREVENT DISASTERS!**
|
||||
**Discover everything the original LLM missed that could cause developer mistakes, omissions, or disasters!**
|
||||
|
||||
### 7. Story Structure Check
|
||||
#### **2.1 Epics and Stories Analysis**
|
||||
|
||||
- [ ] Status = "drafted" → If not → **MAJOR ISSUE**
|
||||
- [ ] Story section has "As a / I want / so that" format → If malformed → **MAJOR ISSUE**
|
||||
- [ ] Dev Agent Record has required sections:
|
||||
- Context Reference, Agent Model Used, Debug Log References, Completion Notes List, File List
|
||||
- [ ] Missing sections → **MAJOR ISSUE**
|
||||
- [ ] Change Log initialized → If missing → **MINOR ISSUE**
|
||||
- [ ] File in correct location: {story_dir}/{{story_key}}.md → If not → **MAJOR ISSUE**
|
||||
- Load `{epics_file}` (or sharded equivalents)
|
||||
- Extract **COMPLETE Epic {{epic_num}} context**:
|
||||
- Epic objectives and business value
|
||||
- ALL stories in this epic (for cross-story context)
|
||||
- Our specific story's requirements, acceptance criteria
|
||||
- Technical requirements and constraints
|
||||
- Cross-story dependencies and prerequisites
|
||||
|
||||
### 8. Unresolved Review Items Alert
|
||||
#### **2.2 Architecture Deep-Dive**
|
||||
|
||||
**CRITICAL CHECK for incomplete review items from previous story:**
|
||||
- Load `{architecture_file}` (single or sharded)
|
||||
- **Systematically scan for ANYTHING relevant to this story:**
|
||||
- Technical stack with versions (languages, frameworks, libraries)
|
||||
- Code structure and organization patterns
|
||||
- API design patterns and contracts
|
||||
- Database schemas and relationships
|
||||
- Security requirements and patterns
|
||||
- Performance requirements and optimization strategies
|
||||
- Testing standards and frameworks
|
||||
- Deployment and environment patterns
|
||||
- Integration patterns and external services
|
||||
|
||||
- [ ] If previous story has "Senior Developer Review (AI)" section:
|
||||
- [ ] Count unchecked [ ] items in "Action Items"
|
||||
- [ ] Count unchecked [ ] items in "Review Follow-ups (AI)"
|
||||
- [ ] If unchecked items > 0:
|
||||
- [ ] Check current story "Learnings from Previous Story" mentions these
|
||||
- [ ] If NOT mentioned → **CRITICAL ISSUE** with details:
|
||||
- List all unchecked items with severity
|
||||
- Note: "These may represent epic-wide concerns"
|
||||
- Required: Add to Learnings section with note about pending items
|
||||
#### **2.3 Previous Story Intelligence (if applicable)**
|
||||
|
||||
## Validation Report Generation
|
||||
- If `story_num > 1`, load the previous story file
|
||||
- Extract **actionable intelligence**:
|
||||
- Dev notes and learnings
|
||||
- Review feedback and corrections needed
|
||||
- Files created/modified and their patterns
|
||||
- Testing approaches that worked/didn't work
|
||||
- Problems encountered and solutions found
|
||||
- Code patterns and conventions established
|
||||
|
||||
**Calculate severity counts:**
|
||||
- Critical: {{critical_count}}
|
||||
- Major: {{major_count}}
|
||||
- Minor: {{minor_count}}
|
||||
#### **2.4 Git History Analysis (if available)**
|
||||
|
||||
**Determine outcome:**
|
||||
- Critical > 0 OR Major > 3 → **FAIL**
|
||||
- Major ≤ 3 and Critical = 0 → **PASS with issues**
|
||||
- All = 0 → **PASS**
|
||||
- Analyze recent commits for patterns:
|
||||
- Files created/modified in previous work
|
||||
- Code patterns and conventions used
|
||||
- Library dependencies added/changed
|
||||
- Architecture decisions implemented
|
||||
- Testing approaches used
|
||||
|
||||
**Generate report:**
|
||||
```
|
||||
#### **2.5 Latest Technical Research**
|
||||
|
||||
# Story Quality Validation Report
|
||||
- Identify any libraries/frameworks mentioned
|
||||
- Research latest versions and critical information:
|
||||
- Breaking changes or security updates
|
||||
- Performance improvements or deprecations
|
||||
- Best practices for current versions
|
||||
|
||||
Story: {{story_key}} - {{story_title}}
|
||||
Outcome: {{outcome}} (Critical: {{critical_count}}, Major: {{major_count}}, Minor: {{minor_count}})
|
||||
### **Step 3: Disaster Prevention Gap Analysis**
|
||||
|
||||
## Critical Issues (Blockers)
|
||||
**🚨 CRITICAL: Identify every mistake the original LLM missed that could cause DISASTERS!**
|
||||
|
||||
{{list_each_with_description_and_evidence}}
|
||||
#### **3.1 Reinvention Prevention Gaps**
|
||||
|
||||
## Major Issues (Should Fix)
|
||||
- **Wheel reinvention:** Areas where developer might create duplicate functionality
|
||||
- **Code reuse opportunities** not identified that could prevent redundant work
|
||||
- **Existing solutions** not mentioned that developer should extend instead of replace
|
||||
|
||||
{{list_each_with_description_and_evidence}}
|
||||
#### **3.2 Technical Specification DISASTERS**
|
||||
|
||||
## Minor Issues (Nice to Have)
|
||||
- **Wrong libraries/frameworks:** Missing version requirements that could cause compatibility issues
|
||||
- **API contract violations:** Missing endpoint specifications that could break integrations
|
||||
- **Database schema conflicts:** Missing requirements that could corrupt data
|
||||
- **Security vulnerabilities:** Missing security requirements that could expose the system
|
||||
- **Performance disasters:** Missing requirements that could cause system failures
|
||||
|
||||
{{list_each_with_description}}
|
||||
#### **3.3 File Structure DISASTERS**
|
||||
|
||||
## Successes
|
||||
- **Wrong file locations:** Missing organization requirements that could break build processes
|
||||
- **Coding standard violations:** Missing conventions that could create inconsistent codebase
|
||||
- **Integration pattern breaks:** Missing data flow requirements that could cause system failures
|
||||
- **Deployment failures:** Missing environment requirements that could prevent deployment
|
||||
|
||||
{{list_what_was_done_well}}
|
||||
#### **3.4 Regression DISASTERS**
|
||||
|
||||
- **Breaking changes:** Missing requirements that could break existing functionality
|
||||
- **Test failures:** Missing test requirements that could allow bugs to reach production
|
||||
- **UX violations:** Missing user experience requirements that could ruin the product
|
||||
- **Learning failures:** Missing previous story context that could repeat same mistakes
|
||||
|
||||
#### **3.5 Implementation DISASTERS**
|
||||
|
||||
- **Vague implementations:** Missing details that could lead to incorrect or incomplete work
|
||||
- **Completion lies:** Missing acceptance criteria that could allow fake implementations
|
||||
- **Scope creep:** Missing boundaries that could cause unnecessary work
|
||||
- **Quality failures:** Missing quality requirements that could deliver broken features
|
||||
|
||||
### **Step 4: LLM-Dev-Agent Optimization Analysis**
|
||||
|
||||
**CRITICAL STEP: Optimize story context for LLM developer agent consumption**
|
||||
|
||||
**Analyze current story for LLM optimization issues:**
|
||||
|
||||
- **Verbosity problems:** Excessive detail that wastes tokens without adding value
|
||||
- **Ambiguity issues:** Vague instructions that could lead to multiple interpretations
|
||||
- **Context overload:** Too much information not directly relevant to implementation
|
||||
- **Missing critical signals:** Key requirements buried in verbose text
|
||||
- **Poor structure:** Information not organized for efficient LLM processing
|
||||
|
||||
**Apply LLM Optimization Principles:**
|
||||
|
||||
- **Clarity over verbosity:** Be precise and direct, eliminate fluff
|
||||
- **Actionable instructions:** Every sentence should guide implementation
|
||||
- **Scannable structure:** Use clear headings, bullet points, and emphasis
|
||||
- **Token efficiency:** Pack maximum information into minimum text
|
||||
- **Unambiguous language:** Clear requirements with no room for interpretation
|
||||
|
||||
### **Step 5: Improvement Recommendations**
|
||||
|
||||
**For each gap identified, provide specific, actionable improvements:**
|
||||
|
||||
#### **5.1 Critical Misses (Must Fix)**
|
||||
|
||||
- Missing essential technical requirements
|
||||
- Missing previous story context that could cause errors
|
||||
- Missing anti-pattern prevention that could lead to duplicate code
|
||||
- Missing security or performance requirements
|
||||
|
||||
#### **5.2 Enhancement Opportunities (Should Add)**
|
||||
|
||||
- Additional architectural guidance that would help developer
|
||||
- More detailed technical specifications
|
||||
- Better code reuse opportunities
|
||||
- Enhanced testing guidance
|
||||
|
||||
#### **5.3 Optimization Suggestions (Nice to Have)**
|
||||
|
||||
- Performance optimization hints
|
||||
- Additional context for complex scenarios
|
||||
- Enhanced debugging or development tips
|
||||
|
||||
#### **5.4 LLM Optimization Improvements**
|
||||
|
||||
- Token-efficient phrasing of existing content
|
||||
- Clearer structure for LLM processing
|
||||
- More actionable and direct instructions
|
||||
- Reduced verbosity while maintaining completeness
|
||||
|
||||
---
|
||||
|
||||
## **🎯 COMPETITION SUCCESS METRICS**
|
||||
|
||||
**You WIN against the original LLM if you identify:**
|
||||
|
||||
### **Category 1: Critical Misses (Blockers)**
|
||||
|
||||
- Essential technical requirements the developer needs but aren't provided
|
||||
- Previous story learnings that would prevent errors if ignored
|
||||
- Anti-pattern prevention that would prevent code duplication
|
||||
- Security or performance requirements that must be followed
|
||||
|
||||
### **Category 2: Enhancement Opportunities**
|
||||
|
||||
- Architecture guidance that would significantly help implementation
|
||||
- Technical specifications that would prevent wrong approaches
|
||||
- Code reuse opportunities the developer should know about
|
||||
- Testing guidance that would improve quality
|
||||
|
||||
### **Category 3: Optimization Insights**
|
||||
|
||||
- Performance or efficiency improvements
|
||||
- Development workflow optimizations
|
||||
- Additional context for complex scenarios
|
||||
|
||||
---
|
||||
|
||||
## **📋 INTERACTIVE IMPROVEMENT PROCESS**
|
||||
|
||||
After completing your systematic analysis, present your findings to the user interactively:
|
||||
|
||||
### **Step 5: Present Improvement Suggestions**
|
||||
|
||||
```
|
||||
🎯 **STORY CONTEXT QUALITY REVIEW COMPLETE**
|
||||
|
||||
## User Alert and Remediation
|
||||
**Story:** {{story_key}} - {{story_title}}
|
||||
|
||||
**If FAIL:**
|
||||
- Show issues summary and top 3 issues
|
||||
- Offer options: (1) Auto-improve story, (2) Show detailed findings, (3) Fix manually, (4) Accept as-is
|
||||
- If option 1: Re-load source docs, regenerate affected sections, re-run validation
|
||||
I found {{critical_count}} critical issues, {{enhancement_count}} enhancements, and {{optimization_count}} optimizations.
|
||||
|
||||
**If PASS with issues:**
|
||||
- Show issues list
|
||||
- Ask: "Improve story? (y/n)"
|
||||
- If yes: Enhance story with missing items
|
||||
## **🚨 CRITICAL ISSUES (Must Fix)**
|
||||
|
||||
**If PASS:**
|
||||
- Confirm: All quality standards met
|
||||
- List successes
|
||||
- Ready for story-context generation
|
||||
{{list each critical issue with clear, actionable description}}
|
||||
|
||||
</validation-checklist>
|
||||
## **⚡ ENHANCEMENT OPPORTUNITIES (Should Add)**
|
||||
|
||||
{{list each enhancement with clear benefit description}}
|
||||
|
||||
## **✨ OPTIMIZATIONS (Nice to Have)**
|
||||
|
||||
{{list each optimization with benefit description}}
|
||||
|
||||
## **🤖 LLM OPTIMIZATION (Token Efficiency & Clarity)**
|
||||
|
||||
{{list each LLM optimization that will improve dev agent performance:
|
||||
- Reduce verbosity while maintaining completeness
|
||||
- Improve structure for better LLM processing
|
||||
- Make instructions more actionable and direct
|
||||
- Enhance clarity and reduce ambiguity}}
|
||||
```
|
||||
|
||||
## Quick Reference
|
||||
### **Step 6: Interactive User Selection**
|
||||
|
||||
**Validation runs in fresh context and checks:**
|
||||
After presenting the suggestions, ask the user:
|
||||
|
||||
1. ✅ Previous story continuity captured (files, notes, **unresolved review items**)
|
||||
2. ✅ All relevant source docs discovered and cited
|
||||
3. ✅ ACs match tech spec/epics exactly
|
||||
4. ✅ Tasks cover all ACs with testing
|
||||
5. ✅ Dev Notes have specific guidance with citations (not generic)
|
||||
6. ✅ Structure and metadata complete
|
||||
```
|
||||
**IMPROVEMENT OPTIONS:**
|
||||
|
||||
**Severity Levels:**
|
||||
Which improvements would you like me to apply to the story?
|
||||
|
||||
- **CRITICAL** = Missing previous story reference, missing tech spec cite, unresolved review items not called out, story not in epics
|
||||
- **MAJOR** = Missing arch docs, missing files from previous story, vague Dev Notes, ACs don't match source, no testing subtasks
|
||||
- **MINOR** = Vague citations, orphan tasks, missing Change Log
|
||||
**Select from the numbered list above, or choose:**
|
||||
- **all** - Apply all suggested improvements
|
||||
- **critical** - Apply only critical issues
|
||||
- **select** - I'll choose specific numbers
|
||||
- **none** - Keep story as-is
|
||||
- **details** - Show me more details about any suggestion
|
||||
|
||||
**Outcome Triggers:**
|
||||
Your choice:
|
||||
```
|
||||
|
||||
- **FAIL** = Any critical OR >3 major issues
|
||||
- **PASS with issues** = ≤3 major issues, no critical
|
||||
- **PASS** = All checks passed
|
||||
### **Step 7: Apply Selected Improvements**
|
||||
|
||||
When user accepts improvements:
|
||||
|
||||
- **Load the story file**
|
||||
- **Apply accepted changes** (make them look natural, as if they were always there)
|
||||
- **DO NOT reference** the review process, original LLM, or that changes were "added" or "enhanced"
|
||||
- **Ensure clean, coherent final story** that reads as if it was created perfectly the first time
|
||||
|
||||
### **Step 8: Confirmation**
|
||||
|
||||
After applying changes:
|
||||
|
||||
```
|
||||
✅ **STORY IMPROVEMENTS APPLIED**
|
||||
|
||||
Updated {{count}} sections in the story file.
|
||||
|
||||
The story now includes comprehensive developer guidance to prevent common implementation issues and ensure flawless execution.
|
||||
|
||||
**Next Steps:**
|
||||
1. Review the updated story
|
||||
2. Run `dev-story` for implementation
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## **💪 COMPETITIVE EXCELLENCE MINDSET**
|
||||
|
||||
**Your goal:** Improve the story file with dev agent needed context that makes flawless implementation inevitable while being optimized for LLM developer agent consumption. Remember the dev agent will ONLY have this file to use.
|
||||
|
||||
**Success Criteria:** The LLM developer agent that processes your improved story will have:
|
||||
|
||||
- ✅ Clear technical requirements they must follow
|
||||
- ✅ Previous work context they can build upon
|
||||
- ✅ Anti-pattern prevention to avoid common mistakes
|
||||
- ✅ Comprehensive guidance for efficient implementation
|
||||
- ✅ **Optimized content structure** for maximum clarity and minimum token waste
|
||||
- ✅ **Actionable instructions** with no ambiguity or verbosity
|
||||
- ✅ **Efficient information density** - maximum guidance in minimum text
|
||||
|
||||
**Every improvement should make it IMPOSSIBLE for the developer to:**
|
||||
|
||||
- Reinvent existing solutions
|
||||
- Use wrong approaches or libraries
|
||||
- Create duplicate functionality
|
||||
- Miss critical requirements
|
||||
- Make implementation errors
|
||||
|
||||
**LLM Optimization Should Make it IMPOSSIBLE for the developer agent to:**
|
||||
|
||||
- Misinterpret requirements due to ambiguity
|
||||
- Waste tokens on verbose, non-actionable content
|
||||
- Struggle to find critical information buried in text
|
||||
- Get confused by poor structure or organization
|
||||
- Miss key implementation signals due to inefficient communication
|
||||
|
||||
**Go create the ultimate developer implementation guide! 🚀**
|
||||
|
|
|
|||
|
|
@ -1,256 +0,0 @@
|
|||
# Create Story - Workflow Instructions (Spec-compliant, non-interactive by default)
|
||||
|
||||
````xml
|
||||
<critical>The workflow execution engine is governed by: {project_root}/{bmad_folder}/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
<critical>Generate all documents in {document_output_language}</critical>
|
||||
<critical>This workflow creates or updates the next user story from epics/PRD and architecture context, saving to the configured stories directory and optionally invoking Story Context.</critical>
|
||||
<critical>DOCUMENT OUTPUT: Concise, technical, actionable story specifications. Use tables/lists for acceptance criteria and tasks.</critical>
|
||||
|
||||
<workflow>
|
||||
|
||||
<step n="1" goal="Load config and initialize">
|
||||
<action>Resolve variables from config_source: story_dir (sprint_artifacts), output_folder, user_name, communication_language. If story_dir missing → ASK user to provide a stories directory and update variable.</action>
|
||||
<action>Create {{story_dir}} if it does not exist</action>
|
||||
<action>Resolve installed component paths from workflow.yaml: template, instructions, validation</action>
|
||||
<action>Load architecture/standards docs: For each file name in {{arch_docs_file_names}} within {{arch_docs_search_dirs}}, read if exists. Collect testing, coding standards, security, and architectural patterns.</action>
|
||||
</step>
|
||||
|
||||
<step n="1.5" goal="Discover and load project documents">
|
||||
<invoke-protocol name="discover_inputs" />
|
||||
<note>After discovery, these content variables are available: {prd_content}, {tech_spec_content}, {architecture_content}, {ux_design_content}, {epics_content}, {document_project_content}</note>
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Discover previous story context">
|
||||
<critical>PREVIOUS STORY CONTINUITY: Essential for maintaining context and learning from prior development</critical>
|
||||
|
||||
<action>Find the previous completed story to extract dev agent learnings and review findings:
|
||||
1. Load {{output_folder}}/sprint-status.yaml COMPLETELY
|
||||
2. Find current {{story_key}} in development_status section
|
||||
3. Identify the story entry IMMEDIATELY ABOVE current story (previous row in file order)
|
||||
4. If previous story exists:
|
||||
- Extract {{previous_story_key}}
|
||||
- Check previous story status (done, in-progress, review, etc.)
|
||||
- If status is "done", "review", or "in-progress" (has some completion):
|
||||
* Construct path: {{story_dir}}/{{previous_story_key}}.md
|
||||
* Load the COMPLETE previous story file
|
||||
* Parse ALL sections comprehensively:
|
||||
|
||||
A) Dev Agent Record → Completion Notes List:
|
||||
- New patterns/services created (to reuse, not recreate)
|
||||
- Architectural deviations or decisions made
|
||||
- Technical debt deferred to future stories
|
||||
- Warnings or recommendations for next story
|
||||
- Interfaces/methods created for reuse
|
||||
|
||||
B) Dev Agent Record → Debug Log References:
|
||||
- Issues encountered and solutions
|
||||
- Gotchas or unexpected challenges
|
||||
- Workarounds applied
|
||||
|
||||
C) Dev Agent Record → File List:
|
||||
- Files created (NEW) - understand new capabilities
|
||||
- Files modified (MODIFIED) - track evolving components
|
||||
- Files deleted (DELETED) - removed functionality
|
||||
|
||||
D) Dev Notes:
|
||||
- Any "future story" notes or TODOs
|
||||
- Patterns established
|
||||
- Constraints discovered
|
||||
|
||||
E) Senior Developer Review (AI) section (if present):
|
||||
- Review outcome (Approve/Changes Requested/Blocked)
|
||||
- Unresolved action items (unchecked [ ] items)
|
||||
- Key findings that might affect this story
|
||||
- Architectural concerns raised
|
||||
|
||||
F) Senior Developer Review → Action Items (if present):
|
||||
- Check for unchecked [ ] items still pending
|
||||
- Note any systemic issues that apply to multiple stories
|
||||
|
||||
G) Review Follow-ups (AI) tasks (if present):
|
||||
- Check for unchecked [ ] review tasks still pending
|
||||
- Determine if they're epic-wide concerns
|
||||
|
||||
H) Story Status:
|
||||
- If "review" or "in-progress" - incomplete, note what's pending
|
||||
- If "done" - confirmed complete
|
||||
* Store ALL findings as {{previous_story_learnings}} with structure:
|
||||
- new_files: [list]
|
||||
- modified_files: [list]
|
||||
- new_services: [list with descriptions]
|
||||
- architectural_decisions: [list]
|
||||
- technical_debt: [list]
|
||||
- warnings_for_next: [list]
|
||||
- review_findings: [list if review exists]
|
||||
- pending_items: [list of unchecked action items]
|
||||
- If status is "backlog" or "drafted":
|
||||
* Set {{previous_story_learnings}} = "Previous story not yet implemented"
|
||||
5. If no previous story exists (first story in epic):
|
||||
- Set {{previous_story_learnings}} = "First story in epic - no predecessor context"
|
||||
</action>
|
||||
|
||||
<action>If {{tech_spec_file}} empty: derive from {{tech_spec_glob_template}} with {{epic_num}} and search {{tech_spec_search_dir}} recursively. If multiple, pick most recent by modified time.</action>
|
||||
<action>Build a prioritized document set for this epic - search and load from {input_file_patterns} list of potential locations:
|
||||
1) tech_spec_file (epic-scoped)
|
||||
2) epics_file (acceptance criteria and breakdown) the specific epic the story will be part of
|
||||
3) prd_file (business requirements and constraints) whole or sharded
|
||||
4) architecture_file (architecture constraints) whole or sharded
|
||||
</action>
|
||||
<action>READ COMPLETE FILES for all items found in the prioritized set. Store content and paths for citation.</action>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Find next backlog story to draft" tag="sprint-status">
|
||||
<critical>MUST read COMPLETE {sprint_status} file from start to end to preserve order</critical>
|
||||
<action>Read ALL lines from beginning to end - do not skip any content</action>
|
||||
<action>Parse the development_status section completely to understand story order</action>
|
||||
|
||||
<action>Find the FIRST story (by reading in order from top to bottom) where:
|
||||
- Key matches pattern: number-number-name (e.g., "1-2-user-auth")
|
||||
- NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
|
||||
- Status value equals "backlog"
|
||||
</action>
|
||||
|
||||
<check if="no backlog story found">
|
||||
<output>📋 No backlog stories found in sprint-status.yaml
|
||||
|
||||
All stories are either already drafted or completed.
|
||||
|
||||
**Options:**
|
||||
1. Run sprint-planning to refresh story tracking
|
||||
2. Load PM agent and run correct-course to add more stories
|
||||
3. Check if current sprint is complete
|
||||
</output>
|
||||
<action>HALT</action>
|
||||
</check>
|
||||
|
||||
<action>Extract from found story key (e.g., "1-2-user-authentication"):
|
||||
- epic_num: first number before dash (e.g., "1")
|
||||
- story_num: second number after first dash (e.g., "2")
|
||||
- story_title: remainder after second dash (e.g., "user-authentication")
|
||||
</action>
|
||||
<action>Set {{story_id}} = "{{epic_num}}.{{story_num}}"</action>
|
||||
<action>Store story_key for later use (e.g., "1-2-user-authentication")</action>
|
||||
|
||||
<action>Verify story is enumerated in {{epics_file}}. If not found, HALT with message:</action>
|
||||
<action>"Story {{story_key}} not found in epics.md. Please load PM agent and run correct-course to sync epics, then rerun create-story."</action>
|
||||
|
||||
<action>Check if story file already exists at expected path in {{story_dir}}</action>
|
||||
<check if="story file exists">
|
||||
<output>ℹ️ Story file already exists: {{story_file_path}}
|
||||
Will update existing story file rather than creating new one.
|
||||
</output>
|
||||
<action>Set update_mode = true</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Extract requirements and derive story statement">
|
||||
<action>From tech_spec_file (preferred) or epics_file: extract epic {{epic_num}} title/summary, acceptance criteria for the next story, and any component references. If not present, fall back to PRD sections mapping to this epic/story.</action>
|
||||
<action>From architecture and architecture docs: extract constraints, patterns, component boundaries, and testing guidance relevant to the extracted ACs. ONLY capture information that directly informs implementation of this story.</action>
|
||||
<action>Derive a clear user story statement (role, action, benefit) grounded strictly in the above sources. If ambiguous and {{non_interactive}} == false → ASK user to clarify. If {{non_interactive}} == true → generate the best grounded statement WITHOUT inventing domain facts.</action>
|
||||
<template-output file="{default_output_file}">requirements_context_summary</template-output>
|
||||
</step>
|
||||
|
||||
<step n="5" goal="Project structure alignment and lessons learned">
|
||||
<action>Review {{previous_story_learnings}} and extract actionable intelligence:
|
||||
- New patterns/services created → Note for reuse (DO NOT recreate)
|
||||
- Architectural deviations → Understand and maintain consistency
|
||||
- Technical debt items → Assess if this story should address them
|
||||
- Files modified → Understand current state of evolving components
|
||||
- Warnings/recommendations → Apply to this story's approach
|
||||
- Review findings → Learn from issues found in previous story
|
||||
- Pending action items → Determine if epic-wide concerns affect this story
|
||||
</action>
|
||||
|
||||
<action>If unified-project-structure.md present: align expected file paths, module names, and component locations; note any potential conflicts.</action>
|
||||
|
||||
<action>Cross-reference {{previous_story_learnings}}.new_files with project structure to understand where new capabilities are located.</action>
|
||||
|
||||
<template-output file="{default_output_file}">structure_alignment_summary</template-output>
|
||||
</step>
|
||||
|
||||
<step n="6" goal="Assemble acceptance criteria and tasks">
|
||||
<action>Assemble acceptance criteria list from tech_spec or epics. If gaps exist, derive minimal, testable criteria from PRD verbatim phrasing (NO invention).</action>
|
||||
<action>Create tasks/subtasks directly mapped to ACs. Include explicit testing subtasks per testing-strategy and existing tests framework. Cite architecture/source documents for any technical mandates.</action>
|
||||
<template-output file="{default_output_file}">acceptance_criteria</template-output>
|
||||
<template-output file="{default_output_file}">tasks_subtasks</template-output>
|
||||
</step>
|
||||
|
||||
<step n="7" goal="Create or update story document">
|
||||
<action>Resolve output path: {default_output_file} using current {{epic_num}} and {{story_num}}. If targeting an existing story for update, use its path.</action>
|
||||
<action>Initialize from template.md if creating a new file; otherwise load existing file for edit.</action>
|
||||
<action>Compute a concise story_title from epic/story context; if missing, synthesize from PRD feature name and epic number.</action>
|
||||
<template-output file="{default_output_file}">story_header</template-output>
|
||||
<template-output file="{default_output_file}">story_body</template-output>
|
||||
<template-output file="{default_output_file}">dev_notes_with_citations</template-output>
|
||||
|
||||
<action>If {{previous_story_learnings}} contains actionable items (not "First story" or "not yet implemented"):
|
||||
- Add "Learnings from Previous Story" subsection to Dev Notes
|
||||
- Include relevant completion notes, new files/patterns, deviations
|
||||
- Cite previous story file as reference [Source: stories/{{previous_story_key}}.md]
|
||||
- Highlight interfaces/services to REUSE (not recreate)
|
||||
- Note any technical debt to address in this story
|
||||
- List pending review items that affect this story (if any)
|
||||
- Reference specific files created: "Use {{file_path}} for {{purpose}}"
|
||||
- Format example:
|
||||
```
|
||||
### Learnings from Previous Story
|
||||
|
||||
**From Story {{previous_story_key}} (Status: {{previous_status}})**
|
||||
|
||||
- **New Service Created**: `AuthService` base class available at `src/services/AuthService.js` - use `AuthService.register()` method
|
||||
- **Architectural Change**: Switched from session-based to JWT authentication
|
||||
- **Schema Changes**: User model now includes `passwordHash` field, migration applied
|
||||
- **Technical Debt**: Email verification skipped, should be included in this or subsequent story
|
||||
- **Testing Setup**: Auth test suite initialized at `tests/integration/auth.test.js` - follow patterns established there
|
||||
- **Pending Review Items**: Rate limiting mentioned in review - consider for this story
|
||||
|
||||
[Source: stories/{{previous_story_key}}.md#Dev-Agent-Record]
|
||||
```
|
||||
</action>
|
||||
|
||||
<template-output file="{default_output_file}">change_log</template-output>
|
||||
</step>
|
||||
|
||||
<step n="8" goal="Validate, save, and mark story drafted" tag="sprint-status">
|
||||
<invoke-task>Validate against checklist at {installed_path}/checklist.md using {bmad_folder}/core/tasks/validate-workflow.xml</invoke-task>
|
||||
<action>Save document unconditionally (non-interactive default). In interactive mode, allow user confirmation.</action>
|
||||
|
||||
<!-- Mark story as drafted in sprint status -->
|
||||
<action>Update {{output_folder}}/sprint-status.yaml</action>
|
||||
<action>Load the FULL file and read all development_status entries</action>
|
||||
<action>Find development_status key matching {{story_key}}</action>
|
||||
<action>Verify current status is "backlog" (expected previous state)</action>
|
||||
<action>Update development_status[{{story_key}}] = "drafted"</action>
|
||||
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>
|
||||
|
||||
<check if="story key not found in file">
|
||||
<output>⚠️ Could not update story status: {{story_key}} not found in sprint-status.yaml
|
||||
|
||||
Story file was created successfully, but sprint-status.yaml was not updated.
|
||||
You may need to run sprint-planning to refresh tracking, or manually set the story row status to `drafted`.
|
||||
</output>
|
||||
</check>
|
||||
|
||||
<action>Report created/updated story path</action>
|
||||
<output>**✅ Story Created Successfully, {user_name}!**
|
||||
|
||||
**Story Details:**
|
||||
|
||||
- Story ID: {{story_id}}
|
||||
- Story Key: {{story_key}}
|
||||
- File: {{story_file}}
|
||||
- Status: drafted (was backlog)
|
||||
|
||||
**⚠️ Important:** The following workflows are context-intensive. It's recommended to clear context and restart the SM agent before running the next command.
|
||||
|
||||
**Next Steps:**
|
||||
|
||||
1. Review the drafted story in {{story_file}}
|
||||
2. **[RECOMMENDED]** Run `story-context` to generate technical context XML and mark story ready for development (combines context + ready in one step)
|
||||
3. Or run `story-ready` to manually mark the story ready without generating technical context
|
||||
</output>
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
````
|
||||
|
|
@ -0,0 +1,324 @@
|
|||
<workflow>
|
||||
<critical>The workflow execution engine is governed by: {project-root}/{bmad_folder}/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
<critical>Communicate all responses in {communication_language} and generate all documents in {document_output_language}</critical>
|
||||
|
||||
<critical>🔥 CRITICAL MISSION: You are creating the ULTIMATE story context engine that prevents LLM developer mistakes, omissions or
|
||||
disasters! 🔥</critical>
|
||||
<critical>Your purpose is NOT to copy from epics - it's to create a comprehensive, optimized story file that gives the DEV agent
|
||||
EVERYTHING needed for flawless implementation</critical>
|
||||
<critical>COMMON LLM MISTAKES TO PREVENT: reinventing wheels, wrong libraries, wrong file locations, breaking regressions, ignoring UX,
|
||||
vague implementations, lying about completion, not learning from past work</critical>
|
||||
<critical>🚨 EXHAUSTIVE ANALYSIS REQUIRED: You must thoroughly analyze ALL artifacts to extract critical context - do NOT be lazy or skim!
|
||||
This is the most important function in the entire development process!</critical>
|
||||
<critical>🔬 UTILIZE SUBPROCESSES AND SUBAGENTS: Use research subagents, subprocesses or parallel processing if available to thoroughly
|
||||
analyze different artifacts simultaneously and thoroughly</critical>
|
||||
<critical>❓ SAVE QUESTIONS: If you think of questions or clarifications during analysis, save them for the end after the complete story is
|
||||
written</critical>
|
||||
<critical>🎯 ZERO USER INTERVENTION: Process should be fully automated except for initial epic/story selection or missing documents</critical>
|
||||
|
||||
<step n="1" goal="Determine target story">
|
||||
<check if="{{story_path}} is provided by user or user provided the epic and story number such as 2-4 or 1.6 or epic 1 story 5">
|
||||
<action>Parse user-provided story path: extract epic_num, story_num, story_title from format like "1-2-user-auth"</action>
|
||||
<action>Set {{epic_num}}, {{story_num}}, {{story_key}} from user input</action>
|
||||
<action>GOTO step 2a</action>
|
||||
</check>
|
||||
|
||||
<action>Check if {{sprint_status}} file exists for auto discover</action>
|
||||
<check if="sprint status file does NOT exist">
|
||||
<output>🚫 No sprint status file found and no story specified</output>
|
||||
<output>
|
||||
**Required Options:**
|
||||
1. Run `sprint-planning` to initialize sprint tracking (recommended)
|
||||
2. Provide specific epic-story number to draft (e.g., "1-2-user-auth")
|
||||
3. Provide path to story documents if sprint status doesn't exist yet
|
||||
</output>
|
||||
<ask>Choose option [1], provide epic-story number, path to story docs, or [q] to quit:</ask>
|
||||
|
||||
<check if="user chooses 'q'">
|
||||
<action>HALT - No work needed</action>
|
||||
</check>
|
||||
|
||||
<check if="user chooses '1'">
|
||||
<output>Run sprint-planning workflow first to create sprint-status.yaml</output>
|
||||
<action>HALT - User needs to run sprint-planning</action>
|
||||
</check>
|
||||
|
||||
<check if="user provides epic-story number">
|
||||
<action>Parse user input: extract epic_num, story_num, story_title</action>
|
||||
<action>Set {{epic_num}}, {{story_num}}, {{story_key}} from user input</action>
|
||||
<action>GOTO step 2a</action>
|
||||
</check>
|
||||
|
||||
<check if="user provides story docs path">
|
||||
<action>Use user-provided path for story documents</action>
|
||||
<action>GOTO step 2a</action>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<!-- Auto-discover from sprint status only if no user input -->
|
||||
<check if="no user input provided">
|
||||
<critical>MUST read COMPLETE {sprint_status} file from start to end to preserve order</critical>
|
||||
<action>Load the FULL file: {{sprint_status}}</action>
|
||||
<action>Read ALL lines from beginning to end - do not skip any content</action>
|
||||
<action>Parse the development_status section completely</action>
|
||||
|
||||
<action>Find the FIRST story (by reading in order from top to bottom) where:
|
||||
- Key matches pattern: number-number-name (e.g., "1-2-user-auth")
|
||||
- NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
|
||||
- Status value equals "backlog"
|
||||
</action>
|
||||
|
||||
<check if="no backlog story found">
|
||||
<output>📋 No backlog stories found in sprint-status.yaml
|
||||
|
||||
All stories are either already drafted, in progress, or done.
|
||||
|
||||
**Options:**
|
||||
1. Run sprint-planning to refresh story tracking
|
||||
2. Load PM agent and run correct-course to add more stories
|
||||
3. Check if current sprint is complete and run retrospective
|
||||
</output>
|
||||
<action>HALT</action>
|
||||
</check>
|
||||
|
||||
<action>Extract from found story key (e.g., "1-2-user-authentication"):
|
||||
- epic_num: first number before dash (e.g., "1")
|
||||
- story_num: second number after first dash (e.g., "2")
|
||||
- story_title: remainder after second dash (e.g., "user-authentication")
|
||||
</action>
|
||||
<action>Set {{story_id}} = "{{epic_num}}.{{story_num}}"</action>
|
||||
<action>Store story_key for later use (e.g., "1-2-user-authentication")</action>
|
||||
|
||||
<!-- Mark epic as in-progress if this is first story -->
|
||||
<action>Check if this is the first story in epic {{epic_num}} by looking for {{epic_num}}-1-* pattern</action>
|
||||
<check if="this is first story in epic {{epic_num}}">
|
||||
<action>Load {{sprint_status}} and check epic-{{epic_num}} status</action>
|
||||
<action>If epic status is "backlog" → update to "in-progress"</action>
|
||||
<action>If epic status is "contexted" → this means same as "in-progress", no change needed</action>
|
||||
<output>📊 Epic {{epic_num}} status updated to in-progress</output>
|
||||
</check>
|
||||
|
||||
<action>GOTO step 2a</action>
|
||||
</check>
|
||||
<action>Load the FULL file: {{sprint_status}}</action>
|
||||
<action>Read ALL lines from beginning to end - do not skip any content</action>
|
||||
<action>Parse the development_status section completely</action>
|
||||
|
||||
<action>Find the FIRST story (by reading in order from top to bottom) where:
|
||||
- Key matches pattern: number-number-name (e.g., "1-2-user-auth")
|
||||
- NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
|
||||
- Status value equals "backlog"
|
||||
</action>
|
||||
|
||||
<check if="no backlog story found">
|
||||
<output>📋 No backlog stories found in sprint-status.yaml
|
||||
|
||||
All stories are either already drafted, in progress, or done.
|
||||
|
||||
**Options:**
|
||||
1. Run sprint-planning to refresh story tracking
|
||||
2. Load PM agent and run correct-course to add more stories
|
||||
3. Check if current sprint is complete and run retrospective
|
||||
</output>
|
||||
<action>HALT</action>
|
||||
</check>
|
||||
|
||||
<action>Extract from found story key (e.g., "1-2-user-authentication"):
|
||||
- epic_num: first number before dash (e.g., "1")
|
||||
- story_num: second number after first dash (e.g., "2")
|
||||
- story_title: remainder after second dash (e.g., "user-authentication")
|
||||
</action>
|
||||
<action>Set {{story_id}} = "{{epic_num}}.{{story_num}}"</action>
|
||||
<action>Store story_key for later use (e.g., "1-2-user-authentication")</action>
|
||||
|
||||
<!-- Mark epic as in-progress if this is first story -->
|
||||
<action>Check if this is the first story in epic {{epic_num}} by looking for {{epic_num}}-1-* pattern</action>
|
||||
<check if="this is first story in epic {{epic_num}}">
|
||||
<action>Load {{sprint_status}} and check epic-{{epic_num}} status</action>
|
||||
<action>If epic status is "backlog" → update to "in-progress"</action>
|
||||
<action>If epic status is "contexted" → this means same as "in-progress", no change needed</action>
|
||||
<output>📊 Epic {{epic_num}} status updated to in-progress</output>
|
||||
</check>
|
||||
|
||||
<action>GOTO step 2a</action>
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Load and analyze core artifacts">
|
||||
<critical>🔬 EXHAUSTIVE ARTIFACT ANALYSIS - This is where you prevent future developer fuckups!</critical>
|
||||
|
||||
<!-- Load all available content through discovery protocol -->
|
||||
<invoke-protocol
|
||||
name="discover_inputs" />
|
||||
<note>Available content: {epics_content}, {prd_content}, {architecture_content}, {ux_content},
|
||||
{project_context}</note>
|
||||
|
||||
<!-- Analyze epics file for story foundation -->
|
||||
<action>From {epics_content}, extract Epic {{epic_num}} complete context:</action> **EPIC ANALYSIS:** - Epic
|
||||
objectives and business value - ALL stories in this epic for cross-story context - Our specific story's requirements, user story
|
||||
statement, acceptance criteria - Technical requirements and constraints - Dependencies on other stories/epics - Source hints pointing to
|
||||
original documents <!-- Extract specific story requirements -->
|
||||
<action>Extract our story ({{epic_num}}-{{story_num}}) details:</action> **STORY FOUNDATION:** - User story statement
|
||||
(As a, I want, so that) - Detailed acceptance criteria (already BDD formatted) - Technical requirements specific to this story -
|
||||
Business context and value - Success criteria <!-- Previous story analysis for context continuity -->
|
||||
<check if="story_num > 1">
|
||||
<action>Load previous story file: {{story_dir}}/{{epic_num}}-{{previous_story_num}}-*.md</action> **PREVIOUS STORY INTELLIGENCE:** -
|
||||
Dev notes and learnings from previous story - Review feedback and corrections needed - Files that were created/modified and their
|
||||
patterns - Testing approaches that worked/didn't work - Problems encountered and solutions found - Code patterns established <action>Extract
|
||||
all learnings that could impact current story implementation</action>
|
||||
</check>
|
||||
|
||||
<!-- Git intelligence for previous work patterns -->
|
||||
<check
|
||||
if="previous story exists AND git repository detected">
|
||||
<action>Get last 5 commit titles to understand recent work patterns</action>
|
||||
<action>Analyze 1-5 most recent commits for relevance to current story:
|
||||
- Files created/modified
|
||||
- Code patterns and conventions used
|
||||
- Library dependencies added/changed
|
||||
- Architecture decisions implemented
|
||||
- Testing approaches used
|
||||
</action>
|
||||
<action>Extract actionable insights for current story implementation</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Architecture analysis for developer guardrails">
|
||||
<critical>🏗️ ARCHITECTURE INTELLIGENCE - Extract everything the developer MUST follow!</critical> **ARCHITECTURE DOCUMENT ANALYSIS:** <action>Systematically
|
||||
analyze architecture content for story-relevant requirements:</action>
|
||||
|
||||
<!-- Load architecture - single file or sharded -->
|
||||
<check if="architecture file is single file">
|
||||
<action>Load complete {architecture_content}</action>
|
||||
</check>
|
||||
<check if="architecture is sharded to folder">
|
||||
<action>Load architecture index and scan all architecture files</action>
|
||||
</check> **CRITICAL ARCHITECTURE EXTRACTION:** <action>For
|
||||
each architecture section, determine if relevant to this story:</action> - **Technical Stack:** Languages, frameworks, libraries with
|
||||
versions - **Code Structure:** Folder organization, naming conventions, file patterns - **API Patterns:** Service structure, endpoint
|
||||
patterns, data contracts - **Database Schemas:** Tables, relationships, constraints relevant to story - **Security Requirements:**
|
||||
Authentication patterns, authorization rules - **Performance Requirements:** Caching strategies, optimization patterns - **Testing
|
||||
Standards:** Testing frameworks, coverage expectations, test patterns - **Deployment Patterns:** Environment configurations, build
|
||||
processes - **Integration Patterns:** External service integrations, data flows <action>Extract any story-specific requirements that the
|
||||
developer MUST follow</action>
|
||||
<action>Identify any architectural decisions that override previous patterns</action>
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Web research for latest technical specifics">
|
||||
<critical>🌐 ENSURE LATEST TECH KNOWLEDGE - Prevent outdated implementations!</critical> **WEB INTELLIGENCE:** <action>Identify specific
|
||||
technical areas that require latest version knowledge:</action>
|
||||
|
||||
<!-- Check for libraries/frameworks mentioned in architecture -->
|
||||
<action>From architecture analysis, identify specific libraries, APIs, or
|
||||
frameworks</action>
|
||||
<action>For each critical technology, research latest stable version and key changes:
|
||||
- Latest API documentation and breaking changes
|
||||
- Security vulnerabilities or updates
|
||||
- Performance improvements or deprecations
|
||||
- Best practices for current version
|
||||
</action>
|
||||
**EXTERNAL CONTEXT INCLUSION:** <action>Include in story any critical latest information the developer needs:
|
||||
- Specific library versions and why chosen
|
||||
- API endpoints with parameters and authentication
|
||||
- Recent security patches or considerations
|
||||
- Performance optimization techniques
|
||||
- Migration considerations if upgrading
|
||||
</action>
|
||||
</step>
|
||||
|
||||
<step n="5" goal="Create comprehensive story file">
|
||||
<critical>📝 CREATE ULTIMATE STORY FILE - The developer's master implementation guide!</critical>
|
||||
|
||||
<action>Initialize from template.md:
|
||||
{default_output_file}</action>
|
||||
<template-output file="{default_output_file}">story_header</template-output>
|
||||
|
||||
<!-- Story foundation from epics analysis -->
|
||||
<template-output
|
||||
file="{default_output_file}">story_requirements</template-output>
|
||||
|
||||
<!-- Developer context section - MOST IMPORTANT PART -->
|
||||
<template-output file="{default_output_file}">
|
||||
developer_context_section</template-output> **DEV AGENT GUARDRAILS:** <template-output file="{default_output_file}">
|
||||
technical_requirements</template-output>
|
||||
<template-output file="{default_output_file}">architecture_compliance</template-output>
|
||||
<template-output
|
||||
file="{default_output_file}">library_framework_requirements</template-output>
|
||||
<template-output file="{default_output_file}">
|
||||
file_structure_requirements</template-output>
|
||||
<template-output file="{default_output_file}">testing_requirements</template-output>
|
||||
|
||||
<!-- Previous story intelligence -->
|
||||
<check
|
||||
if="previous story learnings available">
|
||||
<template-output file="{default_output_file}">previous_story_intelligence</template-output>
|
||||
</check>
|
||||
|
||||
<!-- Git intelligence -->
|
||||
<check
|
||||
if="git analysis completed">
|
||||
<template-output file="{default_output_file}">git_intelligence_summary</template-output>
|
||||
</check>
|
||||
|
||||
<!-- Latest technical specifics -->
|
||||
<check if="web research completed">
|
||||
<template-output file="{default_output_file}">latest_tech_information</template-output>
|
||||
</check>
|
||||
|
||||
<!-- Project context reference -->
|
||||
<template-output
|
||||
file="{default_output_file}">project_context_reference</template-output>
|
||||
|
||||
<!-- Final status update -->
|
||||
<template-output file="{default_output_file}">
|
||||
story_completion_status</template-output>
|
||||
|
||||
<!-- CRITICAL: Set status to ready-for-dev -->
|
||||
<action>Set story Status to: "ready-for-dev"</action>
|
||||
<action>Add completion note: "Ultimate
|
||||
context engine analysis completed - comprehensive developer guide created"</action>
|
||||
</step>
|
||||
|
||||
<step n="6" goal="Update sprint status and finalize">
|
||||
<invoke-task>Validate against checklist at {installed_path}/checklist.md using {bmad_folder}/core/tasks/validate-workflow.xml</invoke-task>
|
||||
<action>Save story document unconditionally</action>
|
||||
|
||||
<!-- Update sprint status -->
|
||||
<check if="sprint status file exists">
|
||||
<action>Update {{sprint_status}}</action>
|
||||
<action>Load the FULL file and read all development_status entries</action>
|
||||
<action>Find development_status key matching {{story_key}}</action>
|
||||
<action>Verify current status is "backlog" (expected previous state)</action>
|
||||
<action>Update development_status[{{story_key}}] = "ready-for-dev"</action>
|
||||
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>
|
||||
</check>
|
||||
|
||||
<action>Report completion</action>
|
||||
<output>**🎯 ULTIMATE BMad Method STORY CONTEXT CREATED, {user_name}!**
|
||||
|
||||
**Story Details:**
|
||||
- Story ID: {{story_id}}
|
||||
- Story Key: {{story_key}}
|
||||
- File: {{story_file}}
|
||||
- Status: ready-for-dev
|
||||
|
||||
**Next Steps:**
|
||||
1. Review the comprehensive story in {{story_file}}
|
||||
2. **Optional Quality Competition:** Run the scrum masters `*validate-create-story` to have a fresh LLM systematically review and
|
||||
improve the story context
|
||||
3. Run dev agents `dev-story` for optimized implementation
|
||||
4. Run `code-review` when complete (auto-marks done)
|
||||
|
||||
**Quality Competition Option:** The `*validate-create-story` command runs the story context through an independent LLM in fresh
|
||||
context that will:
|
||||
- Systematically re-analyze all source documents
|
||||
- Identify any misses, omissions, or improvements
|
||||
- Compete to create a more comprehensive story context
|
||||
- Present findings interactively for your approval
|
||||
- Apply improvements to create the ultimate developer implementation guide
|
||||
|
||||
**The developer now has everything needed for flawless implementation!**
|
||||
</output>
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
|
|
@ -1,5 +1,5 @@
|
|||
name: create-story
|
||||
description: "Create the next user story markdown from epics/PRD and architecture, using a standard template and saving to the stories folder"
|
||||
description: "Create the next user story from epics+stories with enhanced context analysis and direct ready-for-dev marking"
|
||||
author: "BMad"
|
||||
|
||||
# Critical variables from config
|
||||
|
|
@ -14,59 +14,46 @@ story_dir: "{sprint_artifacts}"
|
|||
# Workflow components
|
||||
installed_path: "{project-root}/{bmad_folder}/bmm/workflows/4-implementation/create-story"
|
||||
template: "{installed_path}/template.md"
|
||||
instructions: "{installed_path}/instructions.md"
|
||||
instructions: "{installed_path}/instructions.xml"
|
||||
validation: "{installed_path}/checklist.md"
|
||||
|
||||
# Variables and inputs
|
||||
variables:
|
||||
sprint_status: "{sprint_artifacts}/sprint-status.yaml || {output_folder}/sprint-status.yaml" # Primary source for story tracking
|
||||
epics_file: "{output_folder}/epics.md" # Preferred source for epic/story breakdown
|
||||
prd_file: "{output_folder}/PRD.md" # Fallback for requirements
|
||||
architecture_file: "{output_folder}/architecture.md" # Optional architecture context
|
||||
tech_spec_file: "" # Will be auto-discovered from docs as tech-spec-epic-{{epic_num}}-*.md
|
||||
tech_spec_search_dir: "{project-root}/docs"
|
||||
tech_spec_glob_template: "tech-spec-epic-{{epic_num}}*.md"
|
||||
arch_docs_search_dirs: |
|
||||
- "{project-root}/docs"
|
||||
- "{output_folder}"
|
||||
arch_docs_file_names: |
|
||||
- *architecture*.md
|
||||
epics_file: "{output_folder}/epics.md" # Enhanced epics+stories with BDD and source hints
|
||||
prd_file: "{output_folder}/PRD.md" # Fallback for requirements (if not in epics file)
|
||||
architecture_file: "{output_folder}/architecture.md" # Fallback for constraints (if not in epics file)
|
||||
ux_file: "{output_folder}/ux.md" # Fallback for UX requirements (if not in epics file)
|
||||
story_title: "" # Will be elicited if not derivable
|
||||
|
||||
# Project context
|
||||
project_context: "**/project-context.md"
|
||||
|
||||
default_output_file: "{story_dir}/{{story_key}}.md"
|
||||
|
||||
# Smart input file references - handles both whole docs and sharded docs
|
||||
# Priority: Whole document first, then sharded version
|
||||
# Strategy: SELECTIVE LOAD - only load the specific epic needed for this story
|
||||
# Smart input file references - Simplified for enhanced approach
|
||||
# The epics+stories file should contain everything needed with source hints
|
||||
input_file_patterns:
|
||||
prd:
|
||||
description: "Product requirements (optional)"
|
||||
description: "PRD (fallback - epics file should have most content)"
|
||||
whole: "{output_folder}/*prd*.md"
|
||||
sharded: "{output_folder}/*prd*/*.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
tech_spec:
|
||||
description: "Technical specification (Quick Flow track)"
|
||||
whole: "{output_folder}/tech-spec.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
load_strategy: "SELECTIVE_LOAD" # Only load if needed
|
||||
architecture:
|
||||
description: "System architecture and decisions"
|
||||
description: "Architecture (fallback - epics file should have relevant sections)"
|
||||
whole: "{output_folder}/*architecture*.md"
|
||||
sharded: "{output_folder}/*architecture*/*.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
ux_design:
|
||||
description: "UX design specification (if UI)"
|
||||
load_strategy: "SELECTIVE_LOAD" # Only load if needed
|
||||
ux:
|
||||
description: "UX design (fallback - epics file should have relevant sections)"
|
||||
whole: "{output_folder}/*ux*.md"
|
||||
sharded: "{output_folder}/*ux*/*.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
load_strategy: "SELECTIVE_LOAD" # Only load if needed
|
||||
epics:
|
||||
description: "Epic containing this story"
|
||||
description: "Enhanced epics+stories file with BDD and source hints"
|
||||
whole: "{output_folder}/*epic*.md"
|
||||
sharded_index: "{output_folder}/*epic*/index.md"
|
||||
sharded_single: "{output_folder}/*epic*/epic-{{epic_num}}.md"
|
||||
load_strategy: "SELECTIVE_LOAD"
|
||||
document_project:
|
||||
sharded: "{output_folder}/index.md"
|
||||
load_strategy: "INDEX_GUIDED"
|
||||
sharded: "{output_folder}/*epic*/*.md"
|
||||
load_strategy: "SELECTIVE_LOAD" # Only load needed epic
|
||||
|
||||
standalone: true
|
||||
|
||||
|
|
|
|||
|
|
@ -1,38 +1,80 @@
|
|||
---
|
||||
title: 'Dev Story Completion Checklist'
|
||||
title: 'Enhanced Dev Story Definition of Done Checklist'
|
||||
validation-target: 'Story markdown ({{story_path}})'
|
||||
validation-criticality: 'HIGHEST'
|
||||
required-inputs:
|
||||
- 'Story markdown file with Tasks/Subtasks, Acceptance Criteria'
|
||||
- 'Story markdown file with enhanced Dev Notes containing comprehensive implementation context'
|
||||
- 'Completed Tasks/Subtasks section with all items marked [x]'
|
||||
- 'Updated File List section with all changed files'
|
||||
- 'Updated Dev Agent Record with implementation notes'
|
||||
optional-inputs:
|
||||
- 'Test results output (if saved)'
|
||||
- 'CI logs (if applicable)'
|
||||
- 'Test results output'
|
||||
- 'CI logs'
|
||||
- 'Linting reports'
|
||||
validation-rules:
|
||||
- 'Only permitted sections in story were modified: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List, Change Log, and Status'
|
||||
- 'Only permitted story sections modified: Tasks/Subtasks checkboxes, Dev Agent Record, File List, Change Log, Status'
|
||||
- 'All implementation requirements from story Dev Notes must be satisfied'
|
||||
- 'Definition of Done checklist must pass completely'
|
||||
- 'Enhanced story context must contain sufficient technical guidance'
|
||||
---
|
||||
|
||||
# Dev Story Completion Checklist
|
||||
# 🎯 Enhanced Definition of Done Checklist
|
||||
|
||||
## Tasks Completion
|
||||
**Critical validation:** Story is truly ready for review only when ALL items below are satisfied
|
||||
|
||||
- [ ] All tasks and subtasks for this story are marked complete with [x]
|
||||
- [ ] Implementation aligns with every Acceptance Criterion in the story
|
||||
## 📋 Context & Requirements Validation
|
||||
|
||||
## Tests and Quality
|
||||
- [ ] **Story Context Completeness:** Dev Notes contains ALL necessary technical requirements, architecture patterns, and implementation guidance
|
||||
- [ ] **Architecture Compliance:** Implementation follows all architectural requirements specified in Dev Notes
|
||||
- [ ] **Technical Specifications:** All technical specifications (libraries, frameworks, versions) from Dev Notes are implemented correctly
|
||||
- [ ] **Previous Story Learnings:** Previous story insights incorporated (if applicable) and build upon appropriately
|
||||
|
||||
- [ ] Unit tests added/updated for core functionality changed by this story
|
||||
- [ ] Integration tests added/updated when component interactions are affected
|
||||
- [ ] End-to-end tests created for critical user flows, if applicable
|
||||
- [ ] All tests pass locally (no regressions introduced)
|
||||
- [ ] Linting and static checks (if configured) pass
|
||||
## ✅ Implementation Completion
|
||||
|
||||
## Story File Updates
|
||||
- [ ] **All Tasks Complete:** Every task and subtask marked complete with [x]
|
||||
- [ ] **Acceptance Criteria Satisfaction:** Implementation satisfies EVERY Acceptance Criterion in the story
|
||||
- [ ] **No Ambiguous Implementation:** Clear, unambiguous implementation that meets story requirements
|
||||
- [ ] **Edge Cases Handled:** Error conditions and edge cases appropriately addressed
|
||||
- [ ] **Dependencies Within Scope:** Only uses dependencies specified in story or project_context.md
|
||||
|
||||
- [ ] File List section includes every new/modified/deleted file (paths relative to repo root)
|
||||
- [ ] Dev Agent Record contains relevant Debug Log and/or Completion Notes for this work
|
||||
- [ ] Change Log includes a brief summary of what changed
|
||||
- [ ] Only permitted sections of the story file were modified
|
||||
## 🧪 Testing & Quality Assurance
|
||||
|
||||
## Final Status
|
||||
- [ ] **Unit Tests:** Unit tests added/updated for ALL core functionality introduced/changed by this story
|
||||
- [ ] **Integration Tests:** Integration tests added/updated for component interactions when story requirements demand them
|
||||
- [ ] **End-to-End Tests:** End-to-end tests created for critical user flows when story requirements specify them
|
||||
- [ ] **Test Coverage:** Tests cover acceptance criteria and edge cases from story Dev Notes
|
||||
- [ ] **Regression Prevention:** ALL existing tests pass (no regressions introduced)
|
||||
- [ ] **Code Quality:** Linting and static checks pass when configured in project
|
||||
- [ ] **Test Framework Compliance:** Tests use project's testing frameworks and patterns from Dev Notes
|
||||
|
||||
- [ ] Regression suite executed successfully
|
||||
- [ ] Story Status is set to "Ready for Review"
|
||||
## 📝 Documentation & Tracking
|
||||
|
||||
- [ ] **File List Complete:** File List includes EVERY new, modified, or deleted file (paths relative to repo root)
|
||||
- [ ] **Dev Agent Record Updated:** Contains relevant Implementation Notes and/or Debug Log for this work
|
||||
- [ ] **Change Log Updated:** Change Log includes clear summary of what changed and why
|
||||
- [ ] **Review Follow-ups:** All review follow-up tasks (marked [AI-Review]) completed and corresponding review items marked resolved (if applicable)
|
||||
- [ ] **Story Structure Compliance:** Only permitted sections of story file were modified
|
||||
|
||||
## 🔚 Final Status Verification
|
||||
|
||||
- [ ] **Story Status Updated:** Story Status set to "Ready for Review"
|
||||
- [ ] **Sprint Status Updated:** Sprint status updated to "review" (when sprint tracking is used)
|
||||
- [ ] **Quality Gates Passed:** All quality checks and validations completed successfully
|
||||
- [ ] **No HALT Conditions:** No blocking issues or incomplete work remaining
|
||||
- [ ] **User Communication Ready:** Implementation summary prepared for user review
|
||||
|
||||
## 🎯 Final Validation Output
|
||||
|
||||
```
|
||||
Definition of Done: {{PASS/FAIL}}
|
||||
|
||||
✅ **Story Ready for Review:** {{story_key}}
|
||||
📊 **Completion Score:** {{completed_items}}/{{total_items}} items passed
|
||||
🔍 **Quality Gates:** {{quality_gates_status}}
|
||||
📋 **Test Results:** {{test_results_summary}}
|
||||
📝 **Documentation:** {{documentation_status}}
|
||||
```
|
||||
|
||||
**If FAIL:** List specific failures and required actions before story can be marked Ready for Review
|
||||
|
||||
**If PASS:** Story is fully ready for code review and production consideration
|
||||
|
|
|
|||
|
|
@ -1,267 +0,0 @@
|
|||
# Develop Story - Workflow Instructions
|
||||
|
||||
```xml
|
||||
<critical>The workflow execution engine is governed by: {project-root}/{bmad_folder}/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
<critical>Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}</critical>
|
||||
<critical>Generate all documents in {document_output_language}</critical>
|
||||
<critical>Only modify the story file in these areas: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List, Change Log, and Status</critical>
|
||||
<critical>Execute ALL steps in exact order; do NOT skip steps</critical>
|
||||
<critical>Absolutely DO NOT stop because of "milestones", "significant progress", or "session boundaries". Continue in a single execution until the story is COMPLETE (all ACs satisfied and all tasks/subtasks checked) UNLESS a HALT condition is triggered or the USER gives other instruction.</critical>
|
||||
<critical>Do NOT schedule a "next session" or request review pauses unless a HALT condition applies. Only Step 6 decides completion.</critical>
|
||||
|
||||
<critical>User skill level ({user_skill_level}) affects conversation style ONLY, not code updates.</critical>
|
||||
|
||||
<workflow>
|
||||
|
||||
<step n="1" goal="Find next ready story and load it" tag="sprint-status">
|
||||
<check if="{{story_path}} is provided">
|
||||
<action>Use {{story_path}} directly</action>
|
||||
<action>Read COMPLETE story file</action>
|
||||
<action>Extract story_key from filename or metadata</action>
|
||||
<goto>task_check</goto>
|
||||
</check>
|
||||
|
||||
<critical>MUST read COMPLETE sprint-status.yaml file from start to end to preserve order</critical>
|
||||
<action>Load the FULL file: {{output_folder}}/sprint-status.yaml</action>
|
||||
<action>Read ALL lines from beginning to end - do not skip any content</action>
|
||||
<action>Parse the development_status section completely to understand story order</action>
|
||||
|
||||
<action>Find the FIRST story (by reading in order from top to bottom) where:
|
||||
- Key matches pattern: number-number-name (e.g., "1-2-user-auth")
|
||||
- NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
|
||||
- Status value equals "ready-for-dev"
|
||||
</action>
|
||||
|
||||
<check if="no ready-for-dev or in-progress story found">
|
||||
<output>📋 No ready-for-dev stories found in sprint-status.yaml
|
||||
**Options:**
|
||||
1. Run `story-context` to generate context file and mark drafted stories as ready
|
||||
2. Run `story-ready` to quickly mark drafted stories as ready without generating context
|
||||
3. Run `create-story` if no incomplete stories are drafted yet
|
||||
4. Check {output_folder}/sprint-status.yaml to see current sprint status
|
||||
</output>
|
||||
<action>HALT</action>
|
||||
</check>
|
||||
|
||||
<action>Store the found story_key (e.g., "1-2-user-authentication") for later status updates</action>
|
||||
<action>Find matching story file in {{story_dir}} using story_key pattern: {{story_key}}.md</action>
|
||||
<action>Read COMPLETE story file from discovered path</action>
|
||||
|
||||
<anchor id="task_check" />
|
||||
|
||||
<action>Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Dev Agent Record, File List, Change Log, Status</action>
|
||||
|
||||
<action>Check if context file exists at: {{story_dir}}/{{story_key}}.context.xml</action>
|
||||
<check if="context file exists">
|
||||
<action>Read COMPLETE context file</action>
|
||||
<action>Parse all sections: story details, artifacts (docs, code, dependencies), interfaces, constraints, tests</action>
|
||||
<action>Use this context to inform implementation decisions and approaches</action>
|
||||
</check>
|
||||
<check if="context file does NOT exist">
|
||||
<output>ℹ️ No context file found for {{story_key}}
|
||||
|
||||
Proceeding with story file only. For better context, consider running `story-context` workflow first.
|
||||
</output>
|
||||
</check>
|
||||
|
||||
<action>Identify first incomplete task (unchecked [ ]) in Tasks/Subtasks</action>
|
||||
|
||||
<action if="no incomplete tasks"><goto step="6">Completion sequence</goto></action>
|
||||
<action if="story file inaccessible">HALT: "Cannot develop story without access to story file"</action>
|
||||
<action if="incomplete task or subtask requirements ambiguous">ASK user to clarify or HALT</action>
|
||||
</step>
|
||||
|
||||
<step n="0.5" goal="Discover and load project documents">
|
||||
<invoke-protocol name="discover_inputs" />
|
||||
<note>After discovery, these content variables are available: {architecture_content}, {tech_spec_content}, {ux_design_content}, {epics_content} (selective load), {document_project_content}</note>
|
||||
</step>
|
||||
|
||||
<step n="1.5" goal="Detect review continuation and extract review context">
|
||||
<critical>Determine if this is a fresh start or continuation after code review</critical>
|
||||
|
||||
<action>Check if "Senior Developer Review (AI)" section exists in the story file</action>
|
||||
<action>Check if "Review Follow-ups (AI)" subsection exists under Tasks/Subtasks</action>
|
||||
|
||||
<check if="Senior Developer Review section exists">
|
||||
<action>Set review_continuation = true</action>
|
||||
<action>Extract from "Senior Developer Review (AI)" section:
|
||||
- Review outcome (Approve/Changes Requested/Blocked)
|
||||
- Review date
|
||||
- Total action items with checkboxes (count checked vs unchecked)
|
||||
- Severity breakdown (High/Med/Low counts)
|
||||
</action>
|
||||
<action>Count unchecked [ ] review follow-up tasks in "Review Follow-ups (AI)" subsection</action>
|
||||
<action>Store list of unchecked review items as {{pending_review_items}}</action>
|
||||
|
||||
<output>⏯️ **Resuming Story After Code Review** ({{review_date}})
|
||||
|
||||
**Review Outcome:** {{review_outcome}}
|
||||
**Action Items:** {{unchecked_review_count}} remaining to address
|
||||
**Priorities:** {{high_count}} High, {{med_count}} Medium, {{low_count}} Low
|
||||
|
||||
**Strategy:** Will prioritize review follow-up tasks (marked [AI-Review]) before continuing with regular tasks.
|
||||
</output>
|
||||
</check>
|
||||
|
||||
<check if="Senior Developer Review section does NOT exist">
|
||||
<action>Set review_continuation = false</action>
|
||||
<action>Set {{pending_review_items}} = empty</action>
|
||||
|
||||
<output>🚀 **Starting Fresh Implementation**
|
||||
|
||||
Story: {{story_key}}
|
||||
Context file: {{context_available}}
|
||||
First incomplete task: {{first_task_description}}
|
||||
</output>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="1.6" goal="Mark story in-progress" tag="sprint-status">
|
||||
<action>Load the FULL file: {{output_folder}}/sprint-status.yaml</action>
|
||||
<action>Read all development_status entries to find {{story_key}}</action>
|
||||
<action>Get current status value for development_status[{{story_key}}]</action>
|
||||
|
||||
<check if="current status == 'ready-for-dev'">
|
||||
<action>Update the story in the sprint status report to = "in-progress"</action>
|
||||
<output>🚀 Starting work on story {{story_key}}
|
||||
Status updated: ready-for-dev → in-progress
|
||||
</output>
|
||||
</check>
|
||||
|
||||
<check if="current status == 'in-progress'">
|
||||
<output>⏯️ Resuming work on story {{story_key}}
|
||||
Story is already marked in-progress
|
||||
</output>
|
||||
</check>
|
||||
|
||||
<check if="current status is neither ready-for-dev nor in-progress">
|
||||
<output>⚠️ Unexpected story status: {{current_status}}
|
||||
Expected ready-for-dev or in-progress. Continuing anyway...
|
||||
</output>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Plan and implement task">
|
||||
<action>Review acceptance criteria and dev notes for the selected task</action>
|
||||
<action>Plan implementation steps and edge cases; write down a brief plan in Dev Agent Record → Debug Log</action>
|
||||
<action>Implement the task COMPLETELY including all subtasks, critically following best practices, coding patterns and coding standards in this repo you have learned about from the story and context file or your own critical agent instructions</action>
|
||||
<action>Handle error conditions and edge cases appropriately</action>
|
||||
<action if="new or different than what is documented dependencies are needed">ASK user for approval before adding</action>
|
||||
<action if="3 consecutive implementation failures occur">HALT and request guidance</action>
|
||||
<action if="required configuration is missing">HALT: "Cannot proceed without necessary configuration files"</action>
|
||||
<critical>Do not stop after partial progress; continue iterating tasks until all ACs are satisfied and tested or a HALT condition triggers</critical>
|
||||
<critical>Do NOT propose to pause for review, stand-ups, or validation until Step 6 gates are satisfied</critical>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Author comprehensive tests">
|
||||
<action>Create unit tests for business logic and core functionality introduced/changed by the task</action>
|
||||
<action>Add integration tests for component interactions where desired by test plan or story notes</action>
|
||||
<action>Include end-to-end tests for critical user flows where desired by test plan or story notes</action>
|
||||
<action>Cover edge cases and error handling scenarios noted in the test plan or story notes</action>
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Run validations and tests">
|
||||
<action>Determine how to run tests for this repo (infer or use {{run_tests_command}} if provided)</action>
|
||||
<action>Run all existing tests to ensure no regressions</action>
|
||||
<action>Run the new tests to verify implementation correctness</action>
|
||||
<action>Run linting and code quality checks if configured</action>
|
||||
<action>Validate implementation meets ALL story acceptance criteria; if ACs include quantitative thresholds (e.g., test pass rate), ensure they are met before marking complete</action>
|
||||
<action if="regression tests fail">STOP and fix before continuing, consider how current changes made broke regression</action>
|
||||
<action if="new tests fail">STOP and fix before continuing</action>
|
||||
</step>
|
||||
|
||||
<step n="5" goal="Mark task complete, track review resolutions, and update story">
|
||||
<critical>If task is a review follow-up, must mark BOTH the task checkbox AND the corresponding action item in the review section</critical>
|
||||
|
||||
<action>Check if completed task has [AI-Review] prefix (indicates review follow-up task)</action>
|
||||
|
||||
<check if="task is review follow-up">
|
||||
<action>Extract review item details (severity, description, related AC/file)</action>
|
||||
<action>Add to resolution tracking list: {{resolved_review_items}}</action>
|
||||
|
||||
<!-- Mark task in Review Follow-ups section -->
|
||||
<action>Mark task checkbox [x] in "Tasks/Subtasks → Review Follow-ups (AI)" section</action>
|
||||
|
||||
<!-- CRITICAL: Also mark corresponding action item in review section -->
|
||||
<action>Find matching action item in "Senior Developer Review (AI) → Action Items" section by matching description</action>
|
||||
<action>Mark that action item checkbox [x] as resolved</action>
|
||||
|
||||
<action>Add to Dev Agent Record → Completion Notes: "✅ Resolved review finding [{{severity}}]: {{description}}"</action>
|
||||
</check>
|
||||
|
||||
<action>ONLY mark the task (and subtasks) checkbox with [x] if ALL tests pass and validation succeeds</action>
|
||||
<action>Update File List section with any new, modified, or deleted files (paths relative to repo root)</action>
|
||||
<action>Add completion notes to Dev Agent Record if significant changes were made (summarize intent, approach, and any follow-ups)</action>
|
||||
|
||||
<check if="review_continuation == true and {{resolved_review_items}} is not empty">
|
||||
<action>Count total resolved review items in this session</action>
|
||||
<action>Add Change Log entry: "Addressed code review findings - {{resolved_count}} items resolved (Date: {{date}})"</action>
|
||||
</check>
|
||||
|
||||
<action>Save the story file</action>
|
||||
<action>Determine if more incomplete tasks remain</action>
|
||||
<action if="more tasks remain"><goto step="2">Next task</goto></action>
|
||||
<action if="no tasks remain"><goto step="6">Completion</goto></action>
|
||||
</step>
|
||||
|
||||
<step n="6" goal="Story completion and mark for review" tag="sprint-status">
|
||||
<action>Verify ALL tasks and subtasks are marked [x] (re-scan the story document now)</action>
|
||||
<action>Run the full regression suite (do not skip)</action>
|
||||
<action>Confirm File List includes every changed file</action>
|
||||
<action>Execute story definition-of-done checklist, if the story includes one</action>
|
||||
<action>Update the story Status to: review</action>
|
||||
|
||||
<!-- Mark story ready for review -->
|
||||
<action>Load the FULL file: {{output_folder}}/sprint-status.yaml</action>
|
||||
<action>Find development_status key matching {{story_key}}</action>
|
||||
<action>Verify current status is "in-progress" (expected previous state)</action>
|
||||
<action>Update development_status[{{story_key}}] = "review"</action>
|
||||
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>
|
||||
|
||||
<check if="story key not found in file">
|
||||
<output>⚠️ Story file updated, but sprint-status update failed: {{story_key}} not found
|
||||
|
||||
Story is marked Ready for Review in file, but sprint-status.yaml may be out of sync.
|
||||
</output>
|
||||
</check>
|
||||
|
||||
<action if="any task is incomplete">Return to step 1 to complete remaining work (Do NOT finish with partial progress)</action>
|
||||
<action if="regression failures exist">STOP and resolve before completing</action>
|
||||
<action if="File List is incomplete">Update it before completing</action>
|
||||
</step>
|
||||
|
||||
<step n="7" goal="Completion communication and user support">
|
||||
<action>Optionally run the workflow validation task against the story using {project-root}/{bmad_folder}/core/tasks/validate-workflow.xml</action>
|
||||
<action>Prepare a concise summary in Dev Agent Record → Completion Notes</action>
|
||||
|
||||
<action>Communicate to {user_name} that story implementation is complete and ready for review</action>
|
||||
<action>Summarize key accomplishments: story ID, story key, title, key changes made, tests added, files modified</action>
|
||||
<action>Provide the story file path and current status (now "review", was "in-progress")</action>
|
||||
|
||||
<action>Based on {user_skill_level}, ask if user needs any explanations about:
|
||||
- What was implemented and how it works
|
||||
- Why certain technical decisions were made
|
||||
- How to test or verify the changes
|
||||
- Any patterns, libraries, or approaches used
|
||||
- Anything else they'd like clarified
|
||||
</action>
|
||||
|
||||
<check if="user asks for explanations">
|
||||
<action>Provide clear, contextual explanations tailored to {user_skill_level}</action>
|
||||
<action>Use examples and references to specific code when helpful</action>
|
||||
</check>
|
||||
|
||||
<action>Once explanations are complete (or user indicates no questions), suggest logical next steps</action>
|
||||
<action>Common next steps to suggest (but allow user flexibility):
|
||||
- Review the implemented story yourself and test the changes
|
||||
- Verify all acceptance criteria are met
|
||||
- Ensure deployment readiness if applicable
|
||||
- Run `code-review` workflow for peer review
|
||||
- Check sprint-status.yaml to see project progress
|
||||
</action>
|
||||
<action>Remain flexible - allow user to choose their own path or ask for other assistance</action>
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
```
|
||||
|
|
@ -0,0 +1,404 @@
|
|||
<workflow>
|
||||
<critical>The workflow execution engine is governed by: {project-root}/{bmad_folder}/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
<critical>Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}</critical>
|
||||
<critical>Generate all documents in {document_output_language}</critical>
|
||||
<critical>Only modify the story file in these areas: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List,
|
||||
Change Log, and Status</critical>
|
||||
<critical>Execute ALL steps in exact order; do NOT skip steps</critical>
|
||||
<critical>Absolutely DO NOT stop because of "milestones", "significant progress", or "session boundaries". Continue in a single execution
|
||||
until the story is COMPLETE (all ACs satisfied and all tasks/subtasks checked) UNLESS a HALT condition is triggered or the USER gives
|
||||
other instruction.</critical>
|
||||
<critical>Do NOT schedule a "next session" or request review pauses unless a HALT condition applies. Only Step 6 decides completion.</critical>
|
||||
<critical>User skill level ({user_skill_level}) affects conversation style ONLY, not code updates.</critical>
|
||||
|
||||
<step n="1" goal="Find next ready story and load it" tag="sprint-status">
|
||||
<check if="{{story_path}} is provided">
|
||||
<action>Use {{story_path}} directly</action>
|
||||
<action>Read COMPLETE story file</action>
|
||||
<action>Extract story_key from filename or metadata</action>
|
||||
<goto> anchor with id task_check</goto>
|
||||
</check>
|
||||
|
||||
<!-- Sprint-based story discovery -->
|
||||
<check if="{{sprint_status}} file exists">
|
||||
<critical>MUST read COMPLETE sprint-status.yaml file from start to end to preserve order</critical>
|
||||
<action>Load the FULL file: {{sprint_status}}</action>
|
||||
<action>Read ALL lines from beginning to end - do not skip any content</action>
|
||||
<action>Parse the development_status section completely to understand story order</action>
|
||||
|
||||
<action>Find the FIRST story (by reading in order from top to bottom) where:
|
||||
- Key matches pattern: number-number-name (e.g., "1-2-user-auth")
|
||||
- NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
|
||||
- Status value equals "ready-for-dev"
|
||||
</action>
|
||||
|
||||
<check if="no ready-for-dev or in-progress story found">
|
||||
<output>📋 No ready-for-dev stories found in sprint-status.yaml
|
||||
|
||||
**Current Sprint Status:** {{sprint_status_summary}}
|
||||
|
||||
**What would you like to do?**
|
||||
1. Run `create-story` to create next story from epics with comprehensive context
|
||||
2. Run `*validate-create-story` to improve existing drafted stories before development
|
||||
3. Specify a particular story file to develop (provide full path)
|
||||
4. Check {{sprint_status}} file to see current sprint status
|
||||
</output>
|
||||
<ask>Choose option [1], [2], [3], or [4], or specify story file path:</ask>
|
||||
|
||||
<check if="user chooses '1'">
|
||||
<action>HALT - Run create-story to create next story</action>
|
||||
</check>
|
||||
|
||||
<check if="user chooses '2'">
|
||||
<action>HALT - Run validate-create-story to improve existing stories</action>
|
||||
</check>
|
||||
|
||||
<check if="user chooses '3'">
|
||||
<ask>Provide the story file path to develop:</ask>
|
||||
<action>Store user-provided story path as {{story_path}}</action>
|
||||
<goto anchor="task_check" />
|
||||
</check>
|
||||
|
||||
<check if="user chooses '4'">
|
||||
<output>Loading {{sprint_status}} for detailed status review...</output>
|
||||
<action>Display detailed sprint status analysis</action>
|
||||
<action>HALT - User can review sprint status and provide story path</action>
|
||||
</check>
|
||||
|
||||
<check if="user provides story file path">
|
||||
<action>Store user-provided story path as {{story_path}}</action>
|
||||
<goto anchor="task_check" />
|
||||
</check>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<!-- Non-sprint story discovery -->
|
||||
<check if="{{sprint_status}} file does NOT exist">
|
||||
<action>Search {story_dir} for stories directly</action>
|
||||
<action>Find stories with "ready-for-dev" status in files</action>
|
||||
<action>Look for story files matching pattern: *-*-*.md</action>
|
||||
<action>Read each candidate story file to check Status section</action>
|
||||
|
||||
<check if="no ready-for-dev stories found in story files">
|
||||
<output>📋 No ready-for-dev stories found
|
||||
|
||||
**Available Options:**
|
||||
1. Run `create-story` to create next story from epics with comprehensive context
|
||||
2. Run `*validate-create-story` to improve existing drafted stories
|
||||
3. Specify which story to develop
|
||||
</output>
|
||||
<ask>What would you like to do? Choose option [1], [2], or [3]:</ask>
|
||||
|
||||
<check if="user chooses '1'">
|
||||
<action>HALT - Run create-story to create next story</action>
|
||||
</check>
|
||||
|
||||
<check if="user chooses '2'">
|
||||
<action>HALT - Run validate-create-story to improve existing stories</action>
|
||||
</check>
|
||||
|
||||
<check if="user chooses '3'">
|
||||
<ask>It's unclear what story you want developed. Please provide the full path to the story file:</ask>
|
||||
<action>Store user-provided story path as {{story_path}}</action>
|
||||
<action>Continue with provided story file</action>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<check if="ready-for-dev story found in files">
|
||||
<action>Use discovered story file and extract story_key</action>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<action>Store the found story_key (e.g., "1-2-user-authentication") for later status updates</action>
|
||||
<action>Find matching story file in {story_dir} using story_key pattern: {{story_key}}.md</action>
|
||||
<action>Read COMPLETE story file from discovered path</action>
|
||||
|
||||
<anchor id="task_check" />
|
||||
|
||||
<action>Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Dev Agent Record, File List, Change Log, Status</action>
|
||||
|
||||
<action>Load comprehensive context from story file's Dev Notes section</action>
|
||||
<action>Extract developer guidance from Dev Notes: architecture requirements, previous learnings, technical specifications</action>
|
||||
<action>Use enhanced story context to inform implementation decisions and approaches</action>
|
||||
|
||||
<action>Identify first incomplete task (unchecked [ ]) in Tasks/Subtasks</action>
|
||||
|
||||
<action if="no incomplete tasks">
|
||||
<goto step="6">Completion sequence</goto>
|
||||
</action>
|
||||
<action if="story file inaccessible">HALT: "Cannot develop story without access to story file"</action>
|
||||
<action if="incomplete task or subtask requirements ambiguous">ASK user to clarify or HALT</action>
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Load project context and story information">
|
||||
<critical>Load all available context to inform implementation</critical>
|
||||
|
||||
<action>Load {project_context} for coding standards and project-wide patterns (if exists)</action>
|
||||
<action>Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Dev Agent Record, File List, Change Log, Status</action>
|
||||
<action>Load comprehensive context from story file's Dev Notes section</action>
|
||||
<action>Extract developer guidance from Dev Notes: architecture requirements, previous learnings, technical specifications</action>
|
||||
<action>Use enhanced story context to inform implementation decisions and approaches</action>
|
||||
<output>✅ **Context Loaded**
|
||||
Story and project context available for implementation
|
||||
</output>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Detect review continuation and extract review context">
|
||||
<critical>Determine if this is a fresh start or continuation after code review</critical>
|
||||
|
||||
<action>Check if "Senior Developer Review (AI)" section exists in the story file</action>
|
||||
<action>Check if "Review Follow-ups (AI)" subsection exists under Tasks/Subtasks</action>
|
||||
|
||||
<check if="Senior Developer Review section exists">
|
||||
<action>Set review_continuation = true</action>
|
||||
<action>Extract from "Senior Developer Review (AI)" section:
|
||||
- Review outcome (Approve/Changes Requested/Blocked)
|
||||
- Review date
|
||||
- Total action items with checkboxes (count checked vs unchecked)
|
||||
- Severity breakdown (High/Med/Low counts)
|
||||
</action>
|
||||
<action>Count unchecked [ ] review follow-up tasks in "Review Follow-ups (AI)" subsection</action>
|
||||
<action>Store list of unchecked review items as {{pending_review_items}}</action>
|
||||
|
||||
<output>⏯️ **Resuming Story After Code Review** ({{review_date}})
|
||||
|
||||
**Review Outcome:** {{review_outcome}}
|
||||
**Action Items:** {{unchecked_review_count}} remaining to address
|
||||
**Priorities:** {{high_count}} High, {{med_count}} Medium, {{low_count}} Low
|
||||
|
||||
**Strategy:** Will prioritize review follow-up tasks (marked [AI-Review]) before continuing with regular tasks.
|
||||
</output>
|
||||
</check>
|
||||
|
||||
<check if="Senior Developer Review section does NOT exist">
|
||||
<action>Set review_continuation = false</action>
|
||||
<action>Set {{pending_review_items}} = empty</action>
|
||||
|
||||
<output>🚀 **Starting Fresh Implementation**
|
||||
|
||||
Story: {{story_key}}
|
||||
Story Status: {{current_status}}
|
||||
First incomplete task: {{first_task_description}}
|
||||
</output>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Mark story in-progress" tag="sprint-status">
|
||||
<check if="{{sprint_status}} file exists">
|
||||
<action>Load the FULL file: {{sprint_status}}</action>
|
||||
<action>Read all development_status entries to find {{story_key}}</action>
|
||||
<action>Get current status value for development_status[{{story_key}}]</action>
|
||||
|
||||
<check if="current status == 'ready-for-dev' OR review_continuation == true">
|
||||
<action>Update the story in the sprint status report to = "in-progress"</action>
|
||||
<output>🚀 Starting work on story {{story_key}}
|
||||
Status updated: ready-for-dev → in-progress
|
||||
</output>
|
||||
</check>
|
||||
|
||||
<check if="current status == 'in-progress'">
|
||||
<output>⏯️ Resuming work on story {{story_key}}
|
||||
Story is already marked in-progress
|
||||
</output>
|
||||
</check>
|
||||
|
||||
<check if="current status is neither ready-for-dev nor in-progress">
|
||||
<output>⚠️ Unexpected story status: {{current_status}}
|
||||
Expected ready-for-dev or in-progress. Continuing anyway...
|
||||
</output>
|
||||
</check>
|
||||
|
||||
<action>Store {{current_sprint_status}} for later use</action>
|
||||
</check>
|
||||
|
||||
<check if="{{sprint_status}} file does NOT exist">
|
||||
<output>ℹ️ No sprint status file exists - story progress will be tracked in story file only</output>
|
||||
<action>Set {{current_sprint_status}} = "no-sprint-tracking"</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="5" goal="Implement task following red-green-refactor cycle">
|
||||
<critical>FOLLOW THE STORY FILE TASKS/SUBTASKS SEQUENCE EXACTLY AS WRITTEN - NO DEVIATION</critical>
|
||||
|
||||
<action>Review the current task/subtask from the story file - this is your authoritative implementation guide</action>
|
||||
<action>Plan implementation following red-green-refactor cycle</action>
|
||||
|
||||
<!-- RED PHASE -->
|
||||
<action>Write FAILING tests first for the task/subtask functionality</action>
|
||||
<action>Confirm tests fail before implementation - this validates test correctness</action>
|
||||
|
||||
<!-- GREEN PHASE -->
|
||||
<action>Implement MINIMAL code to make tests pass</action>
|
||||
<action>Run tests to confirm they now pass</action>
|
||||
<action>Handle error conditions and edge cases as specified in task/subtask</action>
|
||||
|
||||
<!-- REFACTOR PHASE -->
|
||||
<action>Improve code structure while keeping tests green</action>
|
||||
<action>Ensure code follows architecture patterns and coding standards from Dev Notes</action>
|
||||
|
||||
<action>Document technical approach and decisions in Dev Agent Record → Implementation Plan</action>
|
||||
|
||||
<action if="new dependencies required beyond story specifications">HALT: "Additional dependencies need user approval"</action>
|
||||
<action if="3 consecutive implementation failures occur">HALT and request guidance</action>
|
||||
<action if="required configuration is missing">HALT: "Cannot proceed without necessary configuration files"</action>
|
||||
|
||||
<critical>NEVER implement anything not mapped to a specific task/subtask in the story file</critical>
|
||||
<critical>NEVER proceed to next task until current task/subtask is complete AND tests pass</critical>
|
||||
<critical>Execute continuously without pausing until all tasks/subtasks are complete or explicit HALT condition</critical>
|
||||
<critical>Do NOT propose to pause for review until Step 9 completion gates are satisfied</critical>
|
||||
</step>
|
||||
|
||||
<step n="6" goal="Author comprehensive tests">
|
||||
<action>Create unit tests for business logic and core functionality introduced/changed by the task</action>
|
||||
<action>Add integration tests for component interactions specified in story requirements</action>
|
||||
<action>Include end-to-end tests for critical user flows when story requirements demand them</action>
|
||||
<action>Cover edge cases and error handling scenarios identified in story Dev Notes</action>
|
||||
</step>
|
||||
|
||||
<step n="7" goal="Run validations and tests">
|
||||
<action>Determine how to run tests for this repo (infer test framework from project structure)</action>
|
||||
<action>Run all existing tests to ensure no regressions</action>
|
||||
<action>Run the new tests to verify implementation correctness</action>
|
||||
<action>Run linting and code quality checks if configured in project</action>
|
||||
<action>Validate implementation meets ALL story acceptance criteria; enforce quantitative thresholds explicitly</action>
|
||||
<action if="regression tests fail">STOP and fix before continuing - identify breaking changes immediately</action>
|
||||
<action if="new tests fail">STOP and fix before continuing - ensure implementation correctness</action>
|
||||
</step>
|
||||
|
||||
<step n="8" goal="Validate and mark task complete ONLY when fully done">
|
||||
<critical>NEVER mark a task complete unless ALL conditions are met - NO LYING OR CHEATING</critical>
|
||||
|
||||
<!-- VALIDATION GATES -->
|
||||
<action>Verify ALL tests for this task/subtask ACTUALLY EXIST and PASS 100%</action>
|
||||
<action>Confirm implementation matches EXACTLY what the task/subtask specifies - no extra features</action>
|
||||
<action>Validate that ALL acceptance criteria related to this task are satisfied</action>
|
||||
<action>Run full test suite to ensure NO regressions introduced</action>
|
||||
|
||||
<!-- REVIEW FOLLOW-UP HANDLING -->
|
||||
<check if="task is review follow-up (has [AI-Review] prefix)">
|
||||
<action>Extract review item details (severity, description, related AC/file)</action>
|
||||
<action>Add to resolution tracking list: {{resolved_review_items}}</action>
|
||||
|
||||
<!-- Mark task in Review Follow-ups section -->
|
||||
<action>Mark task checkbox [x] in "Tasks/Subtasks → Review Follow-ups (AI)" section</action>
|
||||
|
||||
<!-- CRITICAL: Also mark corresponding action item in review section -->
|
||||
<action>Find matching action item in "Senior Developer Review (AI) → Action Items" section by matching description</action>
|
||||
<action>Mark that action item checkbox [x] as resolved</action>
|
||||
|
||||
<action>Add to Dev Agent Record → Completion Notes: "✅ Resolved review finding [{{severity}}]: {{description}}"</action>
|
||||
</check>
|
||||
|
||||
<!-- ONLY MARK COMPLETE IF ALL VALIDATION PASS -->
|
||||
<check if="ALL validation gates pass AND tests ACTUALLY exist and pass">
|
||||
<action>ONLY THEN mark the task (and subtasks) checkbox with [x]</action>
|
||||
<action>Update File List section with ALL new, modified, or deleted files (paths relative to repo root)</action>
|
||||
<action>Add completion notes to Dev Agent Record summarizing what was ACTUALLY implemented and tested</action>
|
||||
</check>
|
||||
|
||||
<check if="ANY validation fails">
|
||||
<action>DO NOT mark task complete - fix issues first</action>
|
||||
<action>HALT if unable to fix validation failures</action>
|
||||
</check>
|
||||
|
||||
<check if="review_continuation == true and {{resolved_review_items}} is not empty">
|
||||
<action>Count total resolved review items in this session</action>
|
||||
<action>Add Change Log entry: "Addressed code review findings - {{resolved_count}} items resolved (Date: {{date}})"</action>
|
||||
</check>
|
||||
|
||||
<action>Save the story file</action>
|
||||
<action>Determine if more incomplete tasks remain</action>
|
||||
<action if="more tasks remain">
|
||||
<goto step="5">Next task</goto>
|
||||
</action>
|
||||
<action if="no tasks remain">
|
||||
<goto step="9">Completion</goto>
|
||||
</action>
|
||||
</step>
|
||||
|
||||
<step n="9" goal="Story completion and mark for review" tag="sprint-status">
|
||||
<action>Verify ALL tasks and subtasks are marked [x] (re-scan the story document now)</action>
|
||||
<action>Run the full regression suite (do not skip)</action>
|
||||
<action>Confirm File List includes every changed file</action>
|
||||
<action>Execute enhanced definition-of-done validation</action>
|
||||
<action>Update the story Status to: "Ready for Review"</action>
|
||||
|
||||
<!-- Enhanced Definition of Done Validation -->
|
||||
<action>Validate definition-of-done checklist with essential requirements:
|
||||
- All tasks/subtasks marked complete with [x]
|
||||
- Implementation satisfies every Acceptance Criterion
|
||||
- Unit tests for core functionality added/updated
|
||||
- Integration tests for component interactions added when required
|
||||
- End-to-end tests for critical flows added when story demands them
|
||||
- All tests pass (no regressions, new tests successful)
|
||||
- Code quality checks pass (linting, static analysis if configured)
|
||||
- File List includes every new/modified/deleted file (relative paths)
|
||||
- Dev Agent Record contains implementation notes
|
||||
- Change Log includes summary of changes
|
||||
- Only permitted story sections were modified
|
||||
</action>
|
||||
|
||||
<!-- Mark story ready for review - sprint status conditional -->
|
||||
<check if="{sprint_status} file exists AND {{current_sprint_status}} != 'no-sprint-tracking'">
|
||||
<action>Load the FULL file: {sprint_status}</action>
|
||||
<action>Find development_status key matching {{story_key}}</action>
|
||||
<action>Verify current status is "in-progress" (expected previous state)</action>
|
||||
<action>Update development_status[{{story_key}}] = "review"</action>
|
||||
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>
|
||||
<output>✅ Story marked Ready for Review in sprint status</output>
|
||||
</check>
|
||||
|
||||
<check if="{sprint_status} file does NOT exist OR {{current_sprint_status}} == 'no-sprint-tracking'">
|
||||
<output>ℹ️ Story marked Ready for Review in story file (no sprint tracking configured)</output>
|
||||
</check>
|
||||
|
||||
<check if="story key not found in sprint status">
|
||||
<output>⚠️ Story file updated, but sprint-status update failed: {{story_key}} not found
|
||||
|
||||
Story is marked Ready for Review in file, but sprint-status.yaml may be out of sync.
|
||||
</output>
|
||||
</check>
|
||||
|
||||
<!-- Final validation gates -->
|
||||
<action if="any task is incomplete">HALT - Complete remaining tasks before marking ready for review</action>
|
||||
<action if="regression failures exist">HALT - Fix regression issues before completing</action>
|
||||
<action if="File List is incomplete">HALT - Update File List with all changed files</action>
|
||||
<action if="definition-of-done validation fails">HALT - Address DoD failures before completing</action>
|
||||
</step>
|
||||
|
||||
<step n="10" goal="Completion communication and user support">
|
||||
<action>Execute the enhanced definition-of-done checklist using the validation framework</action>
|
||||
<action>Prepare a concise summary in Dev Agent Record → Completion Notes</action>
|
||||
|
||||
<action>Communicate to {user_name} that story implementation is complete and ready for review</action>
|
||||
<action>Summarize key accomplishments: story ID, story key, title, key changes made, tests added, files modified</action>
|
||||
<action>Provide the story file path and current status (now "Ready for Review")</action>
|
||||
|
||||
<action>Based on {user_skill_level}, ask if user needs any explanations about:
|
||||
- What was implemented and how it works
|
||||
- Why certain technical decisions were made
|
||||
- How to test or verify the changes
|
||||
- Any patterns, libraries, or approaches used
|
||||
- Anything else they'd like clarified
|
||||
</action>
|
||||
|
||||
<check if="user asks for explanations">
|
||||
<action>Provide clear, contextual explanations tailored to {user_skill_level}</action>
|
||||
<action>Use examples and references to specific code when helpful</action>
|
||||
</check>
|
||||
|
||||
<action>Once explanations are complete (or user indicates no questions), suggest logical next steps</action>
|
||||
<action>Recommended next steps (flexible based on project setup):
|
||||
- Review the implemented story and test the changes
|
||||
- Verify all acceptance criteria are met
|
||||
- Ensure deployment readiness if applicable
|
||||
- Run `code-review` workflow for peer review
|
||||
</action>
|
||||
<check if="{sprint_status} file exists">
|
||||
<action>Suggest checking {sprint_status} to see project progress</action>
|
||||
</check>
|
||||
<action>Remain flexible - allow user to choose their own path or ask for other assistance</action>
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
|
|
@ -12,47 +12,16 @@ document_output_language: "{config_source}:document_output_language"
|
|||
story_dir: "{config_source}:sprint_artifacts"
|
||||
date: system-generated
|
||||
|
||||
story_file: "" # Explicit story path; auto-discovered if empty
|
||||
# Context file uses same story_key as story file (e.g., "1-2-user-authentication.context.xml")
|
||||
context_file: "{story_dir}/{{story_key}}.context.xml"
|
||||
sprint_artifacts: "{config_source}:sprint_artifacts"
|
||||
sprint_status: "{sprint_artifacts}/sprint-status.yaml || {output_folder}/sprint-status.yaml"
|
||||
|
||||
# Smart input file references - handles both whole docs and sharded docs
|
||||
# Priority: Whole document first, then sharded version
|
||||
# Strategy: Load necessary context for story implementation
|
||||
input_file_patterns:
|
||||
architecture:
|
||||
description: "System architecture and decisions"
|
||||
whole: "{output_folder}/*architecture*.md"
|
||||
sharded: "{output_folder}/*architecture*/*.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
tech_spec:
|
||||
description: "Technical specification for this epic"
|
||||
whole: "{output_folder}/tech-spec*.md"
|
||||
sharded: "{sprint_artifacts}/tech-spec-epic-*.md"
|
||||
load_strategy: "SELECTIVE_LOAD"
|
||||
ux_design:
|
||||
description: "UX design specification (if UI)"
|
||||
whole: "{output_folder}/*ux*.md"
|
||||
sharded: "{output_folder}/*ux*/*.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
epics:
|
||||
description: "Epic containing this story"
|
||||
whole: "{output_folder}/*epic*.md"
|
||||
sharded_index: "{output_folder}/*epic*/index.md"
|
||||
sharded_single: "{output_folder}/*epic*/epic-{{epic_num}}.md"
|
||||
load_strategy: "SELECTIVE_LOAD"
|
||||
document_project:
|
||||
description: "Brownfield project documentation (optional)"
|
||||
sharded: "{output_folder}/index.md"
|
||||
load_strategy: "INDEX_GUIDED"
|
||||
|
||||
# Workflow components
|
||||
installed_path: "{project-root}/{bmad_folder}/bmm/workflows/4-implementation/dev-story"
|
||||
instructions: "{installed_path}/instructions.md"
|
||||
instructions: "{installed_path}/instructions.xml"
|
||||
validation: "{installed_path}/checklist.md"
|
||||
|
||||
story_file: "" # Explicit story path; auto-discovered if empty
|
||||
sprint_artifacts: "{config_source}:sprint_artifacts"
|
||||
sprint_status: "{sprint_artifacts}/sprint-status.yaml"
|
||||
project_context: "**/project-context.md"
|
||||
|
||||
standalone: true
|
||||
|
||||
web_bundle: false
|
||||
|
|
|
|||
|
|
@ -1,17 +0,0 @@
|
|||
# Tech Spec Validation Checklist
|
||||
|
||||
```xml
|
||||
<checklist id="{bmad_folder}/bmm/workflows/4-implementation/epic-tech-context/checklist">
|
||||
<item>Overview clearly ties to PRD goals</item>
|
||||
<item>Scope explicitly lists in-scope and out-of-scope</item>
|
||||
<item>Design lists all services/modules with responsibilities</item>
|
||||
<item>Data models include entities, fields, and relationships</item>
|
||||
<item>APIs/interfaces are specified with methods and schemas</item>
|
||||
<item>NFRs: performance, security, reliability, observability addressed</item>
|
||||
<item>Dependencies/integrations enumerated with versions where known</item>
|
||||
<item>Acceptance criteria are atomic and testable</item>
|
||||
<item>Traceability maps AC → Spec → Components → Tests</item>
|
||||
<item>Risks/assumptions/questions listed with mitigation/next steps</item>
|
||||
<item>Test strategy covers all ACs and critical paths</item>
|
||||
</checklist>
|
||||
```
|
||||
|
|
@ -1,164 +0,0 @@
|
|||
<!-- BMAD BMM Tech Spec Workflow Instructions (v6) -->
|
||||
|
||||
```xml
|
||||
<critical>The workflow execution engine is governed by: {project_root}/{bmad_folder}/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
<critical>Communicate all responses in {communication_language}</critical>
|
||||
<critical>This workflow generates a comprehensive Technical Specification from PRD and Architecture, including detailed design, NFRs, acceptance criteria, and traceability mapping.</critical>
|
||||
<critical>If required inputs cannot be auto-discovered HALT with a clear message listing missing documents, allow user to provide them to proceed.</critical>
|
||||
|
||||
<workflow>
|
||||
<step n="1" goal="Collect inputs and discover next epic" tag="sprint-status">
|
||||
<action>Identify PRD and Architecture documents from recommended_inputs. Attempt to auto-discover at default paths.</action>
|
||||
<ask if="inputs are missing">ask the user for file paths. HALT and wait for docs to proceed</ask>
|
||||
|
||||
<!-- Intelligent Epic Discovery -->
|
||||
<critical>MUST read COMPLETE {sprint-status} file to discover next epic</critical>
|
||||
<action>Read ALL development_status entries</action>
|
||||
<action>Find all epics with status "backlog" (not yet contexted)</action>
|
||||
<action>Identify the FIRST backlog epic as the suggested default</action>
|
||||
|
||||
<check if="backlog epics found">
|
||||
<output>📋 **Next Epic Suggested:** Epic {{suggested_epic_id}}: {{suggested_epic_title}}</output>
|
||||
<ask>Use this epic?
|
||||
- [y] Yes, use {{suggested_epic_id}}
|
||||
- [n] No, let me specify a different epic_id
|
||||
</ask>
|
||||
|
||||
<check if="user selects 'n'">
|
||||
<ask>Enter the epic_id you want to context</ask>
|
||||
<action>Store user-provided epic_id as {{epic_id}}</action>
|
||||
</check>
|
||||
|
||||
<check if="user selects 'y'">
|
||||
<action>Use {{suggested_epic_id}} as {{epic_id}}</action>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<check if="no backlog epics found">
|
||||
<output>✅ All epics are already contexted!
|
||||
|
||||
No epics with status "backlog" found in sprint-status.yaml.
|
||||
</output>
|
||||
<ask>Do you want to re-context an existing epic? Enter epic_id or [q] to quit:</ask>
|
||||
|
||||
<check if="user enters epic_id">
|
||||
<action>Store as {{epic_id}}</action>
|
||||
</check>
|
||||
|
||||
<check if="user enters 'q'">
|
||||
<action>HALT - No work needed</action>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<action>Resolve output file path using workflow variables and initialize by writing the template.</action>
|
||||
</step>
|
||||
|
||||
<step n="1.5" goal="Discover and load project documents">
|
||||
<invoke-protocol name="discover_inputs" />
|
||||
<note>After discovery, these content variables are available: {prd_content}, {gdd_content}, {architecture_content}, {ux_design_content}, {epics_content} (will load only epic-{{epic_id}}.md if sharded), {document_project_content}</note>
|
||||
<action>Extract {{epic_title}} from {prd_content} or {epics_content} based on {{epic_id}}.</action>
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Validate epic exists in sprint status" tag="sprint-status">
|
||||
<action>Look for epic key "epic-{{epic_id}}" in development_status (already loaded from step 1)</action>
|
||||
<action>Get current status value if epic exists</action>
|
||||
|
||||
<check if="epic not found">
|
||||
<output>⚠️ Epic {{epic_id}} not found in sprint-status.yaml
|
||||
|
||||
This epic hasn't been registered in the sprint plan yet.
|
||||
Run sprint-planning workflow to initialize epic tracking.
|
||||
</output>
|
||||
<action>HALT</action>
|
||||
</check>
|
||||
|
||||
<check if="epic status == 'contexted'">
|
||||
<output>ℹ️ Epic {{epic_id}} already marked as contexted
|
||||
|
||||
Continuing to regenerate tech spec...
|
||||
</output>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Overview and scope">
|
||||
<action>Read COMPLETE found {recommended_inputs}.</action>
|
||||
<template-output file="{default_output_file}">
|
||||
Replace {{overview}} with a concise 1-2 paragraph summary referencing PRD context and goals
|
||||
Replace {{objectives_scope}} with explicit in-scope and out-of-scope bullets
|
||||
Replace {{system_arch_alignment}} with a short alignment summary to the architecture (components referenced, constraints)
|
||||
</template-output>
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Detailed design">
|
||||
<action>Derive concrete implementation specifics from all {recommended_inputs} (CRITICAL: NO invention). If a epic tech spec precedes this one and exists, maintain consistency where appropriate.</action>
|
||||
<template-output file="{default_output_file}">
|
||||
Replace {{services_modules}} with a table or bullets listing services/modules with responsibilities, inputs/outputs, and owners
|
||||
Replace {{data_models}} with normalized data model definitions (entities, fields, types, relationships); include schema snippets where available
|
||||
Replace {{apis_interfaces}} with API endpoint specs or interface signatures (method, path, request/response models, error codes)
|
||||
Replace {{workflows_sequencing}} with sequence notes or diagrams-as-text (steps, actors, data flow)
|
||||
</template-output>
|
||||
</step>
|
||||
|
||||
<step n="5" goal="Non-functional requirements">
|
||||
<template-output file="{default_output_file}">
|
||||
Replace {{nfr_performance}} with measurable targets (latency, throughput); link to any performance requirements in PRD/Architecture
|
||||
Replace {{nfr_security}} with authn/z requirements, data handling, threat notes; cite source sections
|
||||
Replace {{nfr_reliability}} with availability, recovery, and degradation behavior
|
||||
Replace {{nfr_observability}} with logging, metrics, tracing requirements; name required signals
|
||||
</template-output>
|
||||
</step>
|
||||
|
||||
<step n="6" goal="Dependencies and integrations">
|
||||
<action>Scan repository for dependency manifests (e.g., package.json, pyproject.toml, go.mod, Unity Packages/manifest.json).</action>
|
||||
<template-output file="{default_output_file}">
|
||||
Replace {{dependencies_integrations}} with a structured list of dependencies and integration points with version or commit constraints when known
|
||||
</template-output>
|
||||
</step>
|
||||
|
||||
<step n="7" goal="Acceptance criteria and traceability">
|
||||
<action>Extract acceptance criteria from PRD; normalize into atomic, testable statements.</action>
|
||||
<template-output file="{default_output_file}">
|
||||
Replace {{acceptance_criteria}} with a numbered list of testable acceptance criteria
|
||||
Replace {{traceability_mapping}} with a table mapping: AC → Spec Section(s) → Component(s)/API(s) → Test Idea
|
||||
</template-output>
|
||||
</step>
|
||||
|
||||
<step n="8" goal="Risks and test strategy">
|
||||
<template-output file="{default_output_file}">
|
||||
Replace {{risks_assumptions_questions}} with explicit list (each item labeled as Risk/Assumption/Question) with mitigation or next step
|
||||
Replace {{test_strategy}} with a brief plan (test levels, frameworks, coverage of ACs, edge cases)
|
||||
</template-output>
|
||||
</step>
|
||||
|
||||
<step n="9" goal="Validate and mark epic contexted" tag="sprint-status">
|
||||
<invoke-task>Validate against checklist at {installed_path}/checklist.md using {bmad_folder}/core/tasks/validate-workflow.xml</invoke-task>
|
||||
|
||||
<!-- Mark epic as contexted -->
|
||||
<action>Load the FULL file: {sprint_status}</action>
|
||||
<action>Find development_status key "epic-{{epic_id}}"</action>
|
||||
<action>Verify current status is "backlog" (expected previous state)</action>
|
||||
<action>Update development_status["epic-{{epic_id}}"] = "contexted"</action>
|
||||
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>
|
||||
|
||||
<check if="epic key not found in file">
|
||||
<output>⚠️ Could not update epic status: epic-{{epic_id}} not found</output>
|
||||
</check>
|
||||
|
||||
<output>**✅ Tech Spec Generated Successfully, {user_name}!**
|
||||
|
||||
**Epic Details:**
|
||||
- Epic ID: {{epic_id}}
|
||||
- Epic Title: {{epic_title}}
|
||||
- Tech Spec File: {{default_output_file}}
|
||||
- Epic Status: contexted (was backlog)
|
||||
|
||||
**Note:** This is a JIT (Just-In-Time) workflow - run again for other epics as needed.
|
||||
|
||||
**Next Steps:**
|
||||
1. Load SM agent and run `create-story` to begin implementing the first story under this epic.
|
||||
</output>
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
```
|
||||
|
|
@ -1,76 +0,0 @@
|
|||
# Epic Technical Specification: {{epic_title}}
|
||||
|
||||
Date: {{date}}
|
||||
Author: {{user_name}}
|
||||
Epic ID: {{epic_id}}
|
||||
Status: Draft
|
||||
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
{{overview}}
|
||||
|
||||
## Objectives and Scope
|
||||
|
||||
{{objectives_scope}}
|
||||
|
||||
## System Architecture Alignment
|
||||
|
||||
{{system_arch_alignment}}
|
||||
|
||||
## Detailed Design
|
||||
|
||||
### Services and Modules
|
||||
|
||||
{{services_modules}}
|
||||
|
||||
### Data Models and Contracts
|
||||
|
||||
{{data_models}}
|
||||
|
||||
### APIs and Interfaces
|
||||
|
||||
{{apis_interfaces}}
|
||||
|
||||
### Workflows and Sequencing
|
||||
|
||||
{{workflows_sequencing}}
|
||||
|
||||
## Non-Functional Requirements
|
||||
|
||||
### Performance
|
||||
|
||||
{{nfr_performance}}
|
||||
|
||||
### Security
|
||||
|
||||
{{nfr_security}}
|
||||
|
||||
### Reliability/Availability
|
||||
|
||||
{{nfr_reliability}}
|
||||
|
||||
### Observability
|
||||
|
||||
{{nfr_observability}}
|
||||
|
||||
## Dependencies and Integrations
|
||||
|
||||
{{dependencies_integrations}}
|
||||
|
||||
## Acceptance Criteria (Authoritative)
|
||||
|
||||
{{acceptance_criteria}}
|
||||
|
||||
## Traceability Mapping
|
||||
|
||||
{{traceability_mapping}}
|
||||
|
||||
## Risks, Assumptions, Open Questions
|
||||
|
||||
{{risks_assumptions_questions}}
|
||||
|
||||
## Test Strategy Summary
|
||||
|
||||
{{test_strategy}}
|
||||
|
|
@ -1,58 +0,0 @@
|
|||
name: epic-tech-context
|
||||
description: "Generate a comprehensive Technical Specification from PRD and Architecture with acceptance criteria and traceability mapping"
|
||||
author: "BMAD BMM"
|
||||
|
||||
# Critical variables
|
||||
config_source: "{project-root}/{bmad_folder}/bmm/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
user_name: "{config_source}:user_name"
|
||||
communication_language: "{config_source}:communication_language"
|
||||
date: system-generated
|
||||
sprint_artifacts: "{config_source}:sprint_artifacts"
|
||||
sprint_status: "{sprint_artifacts}/sprint-status.yaml || {output_folder}/sprint-status.yaml"
|
||||
|
||||
# Smart input file references - handles both whole docs and sharded docs
|
||||
# Priority: Whole document first, then sharded version
|
||||
# Strategy: SELECTIVE LOAD - only load the specific epic needed (epic_num from context)
|
||||
input_file_patterns:
|
||||
prd:
|
||||
description: "Product requirements (optional)"
|
||||
whole: "{output_folder}/*prd*.md"
|
||||
sharded: "{output_folder}/*prd*/*.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
gdd:
|
||||
description: "Game Design Document (for game projects)"
|
||||
whole: "{output_folder}/*gdd*.md"
|
||||
sharded: "{output_folder}/*gdd*/*.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
architecture:
|
||||
description: "System architecture and decisions"
|
||||
whole: "{output_folder}/*architecture*.md"
|
||||
sharded: "{output_folder}/*architecture*/*.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
ux_design:
|
||||
description: "UX design specification (if UI)"
|
||||
whole: "{output_folder}/*ux*.md"
|
||||
sharded: "{output_folder}/*ux*/*.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
epics:
|
||||
description: "Specific epic for tech spec generation"
|
||||
whole: "{output_folder}/*epic*.md"
|
||||
sharded_index: "{output_folder}/*epic*/index.md"
|
||||
sharded_single: "{output_folder}/*epic*/epic-{{epic_num}}.md"
|
||||
load_strategy: "SELECTIVE_LOAD"
|
||||
document_project:
|
||||
description: "Brownfield project documentation (optional)"
|
||||
sharded: "{output_folder}/index.md"
|
||||
load_strategy: "INDEX_GUIDED"
|
||||
|
||||
# Workflow components
|
||||
installed_path: "{project-root}/{bmad_folder}/bmm/workflows/4-implementation/epic-tech-context"
|
||||
template: "{installed_path}/template.md"
|
||||
instructions: "{installed_path}/instructions.md"
|
||||
validation: "{installed_path}/checklist.md"
|
||||
|
||||
# Output configuration
|
||||
default_output_file: "{sprint_artifacts}/tech-spec-epic-{{epic_id}}.md"
|
||||
standalone: true
|
||||
web_bundle: false
|
||||
|
|
@ -12,7 +12,8 @@
|
|||
# ==================
|
||||
# Epic Status:
|
||||
# - backlog: Epic exists in epic file but not contexted
|
||||
# - contexted: Next epic tech context created by *epic-tech-context (required)
|
||||
# - contexted or in-progress
|
||||
# - done: Epic completed
|
||||
#
|
||||
# Story Status:
|
||||
# - backlog: Story only exists in epic file
|
||||
|
|
@ -28,7 +29,7 @@
|
|||
#
|
||||
# WORKFLOW NOTES:
|
||||
# ===============
|
||||
# - Epics should be 'contexted' before stories can be 'drafted'
|
||||
# - Epics should be marked `in-progress` before stories can be marked beyond `backlog`
|
||||
# - SM typically drafts next story ONLY after previous one is 'done' to incorporate learnings
|
||||
# - Dev moves story to 'review', dev reviews, then Dev moves to 'done'
|
||||
|
||||
|
|
@ -41,7 +42,7 @@ tracking_system: file-system
|
|||
story_location: "{story_location}"
|
||||
|
||||
development_status:
|
||||
epic-1: contexted
|
||||
epic-1: backlog
|
||||
1-1-user-authentication: done
|
||||
1-2-account-management: drafted
|
||||
1-3-plant-data-model: backlog
|
||||
|
|
|
|||
|
|
@ -18,6 +18,8 @@ validation: "{installed_path}/checklist.md"
|
|||
|
||||
# Variables and inputs
|
||||
variables:
|
||||
# Project context
|
||||
project_context: "**/project-context.md"
|
||||
# Project identification
|
||||
project_name: "{config_source}:project_name"
|
||||
|
||||
|
|
|
|||
|
|
@ -1,16 +0,0 @@
|
|||
# Story Context Assembly Checklist
|
||||
|
||||
```xml
|
||||
<checklist id="{bmad_folder}/bmm/workflows/4-implementation/story-context/checklist">
|
||||
<item>Story fields (asA/iWant/soThat) captured</item>
|
||||
<item>Acceptance criteria list matches story draft exactly (no invention)</item>
|
||||
<item>Tasks/subtasks captured as task list</item>
|
||||
<item>Relevant docs (5-15) included with path and snippets</item>
|
||||
<item>Relevant code references included with reason and line hints</item>
|
||||
<item>Interfaces/API contracts extracted if applicable</item>
|
||||
<item>Constraints include applicable dev rules and patterns</item>
|
||||
<item>Dependencies detected from manifests and frameworks</item>
|
||||
<item>Testing standards and locations populated</item>
|
||||
<item>XML structure follows story-context template format</item>
|
||||
</checklist>
|
||||
```
|
||||
|
|
@ -1,34 +0,0 @@
|
|||
<story-context id="{bmad_folder}/bmm/workflows/4-implementation/story-context/template" v="1.0">
|
||||
<metadata>
|
||||
<epicId>{{epic_id}}</epicId>
|
||||
<storyId>{{story_id}}</storyId>
|
||||
<title>{{story_title}}</title>
|
||||
<status>{{story_status}}</status>
|
||||
<generatedAt>{{date}}</generatedAt>
|
||||
<generator>BMAD Story Context Workflow</generator>
|
||||
<sourceStoryPath>{{story_path}}</sourceStoryPath>
|
||||
</metadata>
|
||||
|
||||
<story>
|
||||
<asA>{{as_a}}</asA>
|
||||
<iWant>{{i_want}}</iWant>
|
||||
<soThat>{{so_that}}</soThat>
|
||||
<tasks>{{story_tasks}}</tasks>
|
||||
</story>
|
||||
|
||||
<acceptanceCriteria>{{acceptance_criteria}}</acceptanceCriteria>
|
||||
|
||||
<artifacts>
|
||||
<docs>{{docs_artifacts}}</docs>
|
||||
<code>{{code_artifacts}}</code>
|
||||
<dependencies>{{dependencies_artifacts}}</dependencies>
|
||||
</artifacts>
|
||||
|
||||
<constraints>{{constraints}}</constraints>
|
||||
<interfaces>{{interfaces}}</interfaces>
|
||||
<tests>
|
||||
<standards>{{test_standards}}</standards>
|
||||
<locations>{{test_locations}}</locations>
|
||||
<ideas>{{test_ideas}}</ideas>
|
||||
</tests>
|
||||
</story-context>
|
||||
|
|
@ -1,209 +0,0 @@
|
|||
<!-- BMAD BMM Story Context Assembly Instructions (v6) -->
|
||||
|
||||
```xml
|
||||
<critical>The workflow execution engine is governed by: {project-root}/{bmad_folder}/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
<critical>Communicate all responses in {communication_language}</critical>
|
||||
<critical>Generate all documents in {document_output_language}</critical>
|
||||
<critical>This workflow assembles a Story Context file for a single drafted story by extracting acceptance criteria, tasks, relevant docs/code, interfaces, constraints, and testing guidance.</critical>
|
||||
<critical>If {story_path} is provided, use it. Otherwise, find the first story with status "drafted" in sprint-status.yaml. If none found, HALT.</critical>
|
||||
<critical>Check if context file already exists. If it does, ask user if they want to replace it, verify it, or cancel.</critical>
|
||||
|
||||
<critical>DOCUMENT OUTPUT: Technical context file (.context.xml). Concise, structured, project-relative paths only.</critical>
|
||||
|
||||
<workflow>
|
||||
<step n="1" goal="Find drafted story and check for existing context" tag="sprint-status">
|
||||
<check if="{{story_path}} is provided">
|
||||
<action>Use {{story_path}} directly</action>
|
||||
<action>Read COMPLETE story file and parse sections</action>
|
||||
<action>Extract story_key from filename or story metadata</action>
|
||||
<action>Verify Status is "drafted" - if not, HALT with message: "Story status must be 'drafted' to generate context"</action>
|
||||
</check>
|
||||
|
||||
<check if="{{story_path}} is NOT provided">
|
||||
<critical>MUST read COMPLETE sprint-status.yaml file from start to end to preserve order</critical>
|
||||
<action>Load the FULL file: {{output_folder}}/sprint-status.yaml</action>
|
||||
<action>Read ALL lines from beginning to end - do not skip any content</action>
|
||||
<action>Parse the development_status section completely</action>
|
||||
|
||||
<action>Find FIRST story (reading in order from top to bottom) where:
|
||||
- Key matches pattern: number-number-name (e.g., "1-2-user-auth")
|
||||
- NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
|
||||
- Status value equals "drafted"
|
||||
</action>
|
||||
|
||||
<check if="no story with status 'drafted' found">
|
||||
<output>📋 No drafted stories found in sprint-status.yaml
|
||||
All stories are either still in backlog or already marked ready/in-progress/done.
|
||||
|
||||
**Next Steps:**
|
||||
1. Run `create-story` to draft more stories
|
||||
2. Run `sprint-planning` to refresh story tracking
|
||||
</output>
|
||||
<action>HALT</action>
|
||||
</check>
|
||||
|
||||
<action>Use the first drafted story found</action>
|
||||
<action>Find matching story file in {{story_path}} using story_key pattern</action>
|
||||
<action>Read the COMPLETE story file</action>
|
||||
</check>
|
||||
|
||||
<action>Extract {{epic_id}}, {{story_id}}, {{story_title}}, {{story_status}} from filename/content</action>
|
||||
<action>Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes</action>
|
||||
<action>Extract user story fields (asA, iWant, soThat)</action>
|
||||
<template-output file="{default_output_file}">story_tasks</template-output>
|
||||
<template-output file="{default_output_file}">acceptance_criteria</template-output>
|
||||
|
||||
<!-- Check if context file already exists -->
|
||||
<action>Check if file exists at {default_output_file}</action>
|
||||
|
||||
<check if="context file already exists">
|
||||
<output>⚠️ Context file already exists: {default_output_file}
|
||||
|
||||
**What would you like to do?**
|
||||
1. **Replace** - Generate new context file (overwrites existing)
|
||||
2. **Verify** - Validate existing context file
|
||||
3. **Cancel** - Exit without changes
|
||||
</output>
|
||||
<ask>Choose action (replace/verify/cancel):</ask>
|
||||
|
||||
<check if="user chooses verify">
|
||||
<action>GOTO validation_step</action>
|
||||
</check>
|
||||
|
||||
<check if="user chooses cancel">
|
||||
<action>HALT with message: "Context generation cancelled"</action>
|
||||
</check>
|
||||
|
||||
<check if="user chooses replace">
|
||||
<action>Continue to generate new context file</action>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<action>Store project root path for relative path conversion: extract from {project-root} variable</action>
|
||||
<action>Define path normalization function: convert any absolute path to project-relative by removing project root prefix</action>
|
||||
<action>Initialize output by writing template to {default_output_file}</action>
|
||||
<template-output file="{default_output_file}">as_a</template-output>
|
||||
<template-output file="{default_output_file}">i_want</template-output>
|
||||
<template-output file="{default_output_file}">so_that</template-output>
|
||||
</step>
|
||||
|
||||
<step n="1.5" goal="Discover and load project documents">
|
||||
<invoke-protocol name="discover_inputs" />
|
||||
<note>After discovery, these content variables are available: {prd_content}, {tech_spec_content}, {architecture_content}, {ux_design_content}, {epics_content} (loads only epic for this story if sharded), {document_project_content}</note>
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Collect relevant documentation">
|
||||
<action>Review loaded content from Step 1.5 for items relevant to this story's domain (use keywords from story title, ACs, and tasks).</action>
|
||||
<action>Extract relevant sections from: {prd_content}, {tech_spec_content}, {architecture_content}, {ux_design_content}, {document_project_content}</action>
|
||||
<action>Note: Tech-Spec ({tech_spec_content}) is used for Level 0-1 projects (instead of PRD). It contains comprehensive technical context, brownfield analysis, framework details, existing patterns, and implementation guidance.</action>
|
||||
<action>For each discovered document: convert absolute paths to project-relative format by removing {project-root} prefix. Store only relative paths (e.g., "docs/prd.md" not "/Users/.../docs/prd.md").</action>
|
||||
<template-output file="{default_output_file}">
|
||||
Add artifacts.docs entries with {path, title, section, snippet}:
|
||||
- path: PROJECT-RELATIVE path only (strip {project-root} prefix)
|
||||
- title: Document title
|
||||
- section: Relevant section name
|
||||
- snippet: Brief excerpt (2-3 sentences max, NO invention)
|
||||
</template-output>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Analyze existing code, interfaces, and constraints">
|
||||
<action>Search source tree for modules, files, and symbols matching story intent and AC keywords (controllers, services, components, tests).</action>
|
||||
<action>Identify existing interfaces/APIs the story should reuse rather than recreate.</action>
|
||||
<action>Extract development constraints from Dev Notes and architecture (patterns, layers, testing requirements).</action>
|
||||
<action>For all discovered code artifacts: convert absolute paths to project-relative format (strip {project-root} prefix).</action>
|
||||
<template-output file="{default_output_file}">
|
||||
Add artifacts.code entries with {path, kind, symbol, lines, reason}:
|
||||
- path: PROJECT-RELATIVE path only (e.g., "src/services/api.js" not full path)
|
||||
- kind: file type (controller, service, component, test, etc.)
|
||||
- symbol: function/class/interface name
|
||||
- lines: line range if specific (e.g., "45-67")
|
||||
- reason: brief explanation of relevance to this story
|
||||
|
||||
Populate interfaces with API/interface signatures:
|
||||
- name: Interface or API name
|
||||
- kind: REST endpoint, GraphQL, function signature, class interface
|
||||
- signature: Full signature or endpoint definition
|
||||
- path: PROJECT-RELATIVE path to definition
|
||||
|
||||
Populate constraints with development rules:
|
||||
- Extract from Dev Notes and architecture
|
||||
- Include: required patterns, layer restrictions, testing requirements, coding standards
|
||||
</template-output>
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Gather dependencies and frameworks">
|
||||
<action>Detect dependency manifests and frameworks in the repo:
|
||||
- Node: package.json (dependencies/devDependencies)
|
||||
- Python: pyproject.toml/requirements.txt
|
||||
- Go: go.mod
|
||||
- Unity: Packages/manifest.json, Assets/, ProjectSettings/
|
||||
- Other: list notable frameworks/configs found</action>
|
||||
<template-output file="{default_output_file}">
|
||||
Populate artifacts.dependencies with keys for detected ecosystems and their packages with version ranges where present
|
||||
</template-output>
|
||||
</step>
|
||||
|
||||
<step n="5" goal="Testing standards and ideas">
|
||||
<action>From Dev Notes, architecture docs, testing docs, and existing tests, extract testing standards (frameworks, patterns, locations).</action>
|
||||
<template-output file="{default_output_file}">
|
||||
Populate tests.standards with a concise paragraph
|
||||
Populate tests.locations with directories or glob patterns where tests live
|
||||
Populate tests.ideas with initial test ideas mapped to acceptance criteria IDs
|
||||
</template-output>
|
||||
</step>
|
||||
|
||||
<step n="6" goal="Validate and save">
|
||||
<anchor id="validation_step" />
|
||||
<action>Validate output context file structure and content</action>
|
||||
<invoke-task>Validate against checklist at {installed_path}/checklist.md using {bmad_folder}/core/tasks/validate-workflow.xml</invoke-task>
|
||||
</step>
|
||||
|
||||
<step n="7" goal="Update story file and mark ready for dev" tag="sprint-status">
|
||||
<action>Open {{story_path}}</action>
|
||||
<action>Find the "Status:" line (usually at the top)</action>
|
||||
<action>Update story file: Change Status to "ready-for-dev"</action>
|
||||
<action>Under 'Dev Agent Record' → 'Context Reference' (create if missing), add or update a list item for {default_output_file}.</action>
|
||||
<action>Save the story file.</action>
|
||||
|
||||
<!-- Update sprint status to mark ready-for-dev -->
|
||||
<action>Load the FULL file: {{output_folder}}/sprint-status.yaml</action>
|
||||
<action>Find development_status key matching {{story_key}}</action>
|
||||
<action>Verify current status is "drafted" (expected previous state)</action>
|
||||
<action>Update development_status[{{story_key}}] = "ready-for-dev"</action>
|
||||
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>
|
||||
|
||||
<check if="story key not found in file">
|
||||
<output>⚠️ Story file updated, but could not update sprint-status: {{story_key}} not found
|
||||
|
||||
You may need to run sprint-planning to refresh tracking.
|
||||
</output>
|
||||
</check>
|
||||
|
||||
<output>✅ Story context generated successfully, {user_name}!
|
||||
|
||||
**Story Details:**
|
||||
|
||||
- Story: {{epic_id}}.{{story_id}} - {{story_title}}
|
||||
- Story Key: {{story_key}}
|
||||
- Context File: {default_output_file}
|
||||
- Status: drafted → ready-for-dev
|
||||
|
||||
**Context Includes:**
|
||||
|
||||
- Documentation artifacts and references
|
||||
- Existing code and interfaces
|
||||
- Dependencies and frameworks
|
||||
- Testing standards and ideas
|
||||
- Development constraints
|
||||
|
||||
**Next Steps:**
|
||||
|
||||
1. Review the context file: {default_output_file}
|
||||
2. Run `dev-story` to implement the story
|
||||
3. Generate context for more drafted stories if needed
|
||||
</output>
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
```
|
||||
|
|
@ -1,63 +0,0 @@
|
|||
# Story Context Creation Workflow
|
||||
name: story-context
|
||||
description: "Assemble a dynamic Story Context XML by pulling latest documentation and existing code/library artifacts relevant to a drafted story"
|
||||
author: "BMad"
|
||||
|
||||
# Critical variables
|
||||
config_source: "{project-root}/{bmad_folder}/bmm/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
user_name: "{config_source}:user_name"
|
||||
communication_language: "{config_source}:communication_language"
|
||||
document_output_language: "{config_source}:document_output_language"
|
||||
story_path: "{config_source}:sprint_artifacts"
|
||||
date: system-generated
|
||||
sprint_artifacts: "{config_source}:sprint_artifacts"
|
||||
sprint_status: "{sprint_artifacts}/sprint-status.yaml || {output_folder}/sprint-status.yaml"
|
||||
|
||||
# Workflow components
|
||||
installed_path: "{project-root}/{bmad_folder}/bmm/workflows/4-implementation/story-context"
|
||||
template: "{installed_path}/context-template.xml"
|
||||
instructions: "{installed_path}/instructions.md"
|
||||
validation: "{installed_path}/checklist.md"
|
||||
|
||||
# Smart input file references - handles both whole docs and sharded docs
|
||||
# Priority: Whole document first, then sharded version
|
||||
# Strategy: SELECTIVE LOAD - only load the specific epic needed for this story
|
||||
input_file_patterns:
|
||||
prd:
|
||||
description: "Product requirements (optional)"
|
||||
whole: "{output_folder}/*prd*.md"
|
||||
sharded: "{output_folder}/*prd*/*.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
tech_spec:
|
||||
description: "Technical specification (Quick Flow track)"
|
||||
whole: "{output_folder}/tech-spec.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
architecture:
|
||||
description: "System architecture and decisions"
|
||||
whole: "{output_folder}/*architecture*.md"
|
||||
sharded: "{output_folder}/*architecture*/*.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
ux_design:
|
||||
description: "UX design specification (if UI)"
|
||||
whole: "{output_folder}/*ux*.md"
|
||||
sharded: "{output_folder}/*ux*/*.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
epics:
|
||||
description: "Epic containing this story"
|
||||
whole: "{output_folder}/*epic*.md"
|
||||
sharded_index: "{output_folder}/*epic*/index.md"
|
||||
sharded_single: "{output_folder}/*epic*/epic-{{epic_num}}.md"
|
||||
load_strategy: "SELECTIVE_LOAD"
|
||||
document_project:
|
||||
description: "Brownfield project documentation (optional)"
|
||||
sharded: "{output_folder}/index.md"
|
||||
load_strategy: "INDEX_GUIDED"
|
||||
|
||||
# Output configuration
|
||||
# Uses story_key from sprint-status.yaml (e.g., "1-2-user-authentication")
|
||||
default_output_file: "{story_path}/{{story_key}}.context.xml"
|
||||
|
||||
standalone: true
|
||||
|
||||
web_bundle: false
|
||||
Some files were not shown because too many files have changed in this diff Show More
Loading…
Reference in New Issue