refactor: consolidate to src/bmm, delete src/modules/bmm (Option A)
Migrates all valuable content from src/modules/bmm → src/bmm and removes the duplicate directory structure. This resolves the two-directory confusion that caused the accidental 3-solutioning deletion. **Content Migrated:** - ✅ Improved pattern files (agent-completion, security-checklist, tdd, verification, README) - More comprehensive content (225, 340, 184, 198 lines vs 187, 122, 93, 143) - Last updated Jan 27 (newer than src/bmm versions) - ✅ Better multi-agent-review agent counts (2/4/6 instead of 1/2/3) - micro: 2 agents (security + code_quality) - standard: 4 agents (+ architecture + testing) - complex: 6 agents (+ performance + domain_expert) **Deletions:** - ❌ src/modules/bmm/ (66 files) - All workflows were outdated or renamed - batch-super-dev → batch-stories (renamed Jan 28) - story-pipeline → story-dev-only (renamed Jan 28) - super-dev-pipeline → story-full-pipeline (renamed Jan 28) **Path Updates:** - tools/cli/installers/lib/core/dependency-resolver.js (code + tests) - tools/cli/lib/yaml-xml-builder.js (comment) - tools/build-docs.js (doc URLs) - test/unit/core/dependency-resolver*.test.js (test fixtures) - resources/skills/bmad-guide.md (workflow references) **Result:** - Single canonical location: src/bmm (183 files) - No more sync confusion - Best content from both directories preserved - 350/352 tests passing (2 advanced edge cases to fix later)
This commit is contained in:
parent
47fc86f94c
commit
f94474159b
|
|
@ -7,7 +7,7 @@ You are working within the **BMAD Method (BMM)** - a 4-phase AI-powered agile de
|
|||
1. **NEVER skip phases** - Each phase builds on the previous (except Phase 1 which is optional)
|
||||
2. **ALWAYS check project level** - This determines which workflows to use
|
||||
3. **ALWAYS use workflows** - Don't implement features manually without BMAD workflows
|
||||
4. **ALWAYS consult workflow docs** - Located in `src/modules/bmm/workflows/`
|
||||
4. **ALWAYS consult workflow docs** - Located in `src/bmm/workflows/`
|
||||
5. **STAY IN PHASE** - Complete current phase before moving to next
|
||||
|
||||
---
|
||||
|
|
@ -143,32 +143,32 @@ You are working within the **BMAD Method (BMM)** - a 4-phase AI-powered agile de
|
|||
### Method 1: Read Workflow Documentation
|
||||
```bash
|
||||
# Read workflow guide for current phase
|
||||
cat src/modules/bmm/workflows/README.md
|
||||
cat src/modules/bmm/docs/workflows-{phase}.md
|
||||
cat src/bmm/workflows/README.md
|
||||
cat src/bmm/docs/workflows-{phase}.md
|
||||
|
||||
# Example: Planning phase
|
||||
cat src/modules/bmm/docs/workflows-planning.md
|
||||
cat src/bmm/docs/workflows-planning.md
|
||||
|
||||
# Example: Implementation phase
|
||||
cat src/modules/bmm/docs/workflows-implementation.md
|
||||
cat src/bmm/docs/workflows-implementation.md
|
||||
```
|
||||
|
||||
### Method 2: Read Specific Workflow
|
||||
```bash
|
||||
# Read workflow details
|
||||
cat src/modules/bmm/workflows/{phase}/{workflow-name}/README.md
|
||||
cat src/bmm/workflows/{phase}/{workflow-name}/README.md
|
||||
|
||||
# Example: PRD workflow
|
||||
cat src/modules/bmm/workflows/2-plan-workflows/prd/README.md
|
||||
cat src/bmm/workflows/2-plan-workflows/prd/README.md
|
||||
|
||||
# Example: Dev story workflow
|
||||
cat src/modules/bmm/workflows/4-implementation/dev-story/README.md
|
||||
cat src/bmm/workflows/4-implementation/dev-story/README.md
|
||||
```
|
||||
|
||||
### Method 3: Check Workflow Configuration
|
||||
```bash
|
||||
# See workflow config
|
||||
cat src/modules/bmm/workflows/{phase}/{workflow-name}/workflow.yaml
|
||||
cat src/bmm/workflows/{phase}/{workflow-name}/workflow.yaml
|
||||
```
|
||||
|
||||
### Method 4: Use Explore Agent
|
||||
|
|
@ -363,16 +363,16 @@ cat {project-root}/_bmad/bmm/workflow-status.yaml
|
|||
## 📖 ADDITIONAL RESOURCES
|
||||
|
||||
### Core Documentation
|
||||
- `src/modules/bmm/docs/workflows-analysis.md` - Phase 1 guidance
|
||||
- `src/modules/bmm/docs/workflows-planning.md` - Phase 2 guidance
|
||||
- `src/modules/bmm/docs/workflows-solutioning.md` - Phase 3 guidance
|
||||
- `src/modules/bmm/docs/workflows-implementation.md` - Phase 4 guidance
|
||||
- `src/modules/bmm/docs/scale-adaptive-system.md` - Level detection
|
||||
- `src/modules/bmm/docs/brownfield-guide.md` - Existing codebases
|
||||
- `src/bmm/docs/workflows-analysis.md` - Phase 1 guidance
|
||||
- `src/bmm/docs/workflows-planning.md` - Phase 2 guidance
|
||||
- `src/bmm/docs/workflows-solutioning.md` - Phase 3 guidance
|
||||
- `src/bmm/docs/workflows-implementation.md` - Phase 4 guidance
|
||||
- `src/bmm/docs/scale-adaptive-system.md` - Level detection
|
||||
- `src/bmm/docs/brownfield-guide.md` - Existing codebases
|
||||
|
||||
### Specialized Guides
|
||||
- `src/modules/bmm/docs/test-architecture.md` - TestArch workflows
|
||||
- `src/modules/bmm/docs/agents-guide.md` - All 12 specialized agents
|
||||
- `src/bmm/docs/test-architecture.md` - TestArch workflows
|
||||
- `src/bmm/docs/agents-guide.md` - All 12 specialized agents
|
||||
|
||||
---
|
||||
|
||||
|
|
|
|||
|
|
@ -1,187 +1,225 @@
|
|||
# Agent Completion Format
|
||||
# Agent Completion Artifact Pattern
|
||||
|
||||
<overview>
|
||||
All agents must return structured output that the orchestrator can parse. This enables automated verification and reliable workflow progression.
|
||||
**Problem:** Agents fail to update story files reliably (60% success rate)
|
||||
**Solution:** Agents create completion.json artifacts. Orchestrator uses them to update story files.
|
||||
|
||||
**Principle:** Return parseable data, not prose. The orchestrator needs to extract file lists, status, and evidence.
|
||||
</overview>
|
||||
## The Contract
|
||||
|
||||
<format>
|
||||
## Standard Completion Format
|
||||
### Agent Responsibility
|
||||
Each agent MUST create a completion artifact before finishing:
|
||||
- **File path:** `docs/sprint-artifacts/completions/{{story_key}}-{{agent_name}}.json`
|
||||
- **Format:** Structured JSON (see formats below)
|
||||
- **Verification:** File exists = work done (binary check)
|
||||
|
||||
Every agent returns this structure when done:
|
||||
### Orchestrator Responsibility
|
||||
Orchestrator reads completion artifacts and:
|
||||
- Parses JSON for structured data
|
||||
- Updates story file tasks (check off completed)
|
||||
- Fills Dev Agent Record with evidence
|
||||
- Verifies updates succeeded
|
||||
|
||||
## Why This Works
|
||||
|
||||
**File-based verification:**
|
||||
- ✅ Binary check: File exists or doesn't
|
||||
- ✅ No complex parsing of agent output
|
||||
- ✅ No reconciliation logic needed
|
||||
- ✅ Hard stop if artifact missing
|
||||
|
||||
**JSON format:**
|
||||
- ✅ Easy to parse reliably
|
||||
- ✅ Structured data (not prose)
|
||||
- ✅ Version controllable
|
||||
- ✅ Auditable trail
|
||||
|
||||
## How to Use This Pattern
|
||||
|
||||
### In Agent Prompts
|
||||
|
||||
Include this in every agent prompt:
|
||||
|
||||
```markdown
|
||||
## AGENT COMPLETE
|
||||
## CRITICAL: Create Completion Artifact
|
||||
|
||||
**Agent:** [builder|inspector|reviewer|fixer]
|
||||
**Story:** {{story_key}}
|
||||
**Status:** [SUCCESS|PASS|FAIL|ISSUES_FOUND|PARTIAL]
|
||||
**MANDATORY:** Before returning, you MUST create a completion artifact JSON file.
|
||||
|
||||
### [Agent-Specific Section]
|
||||
[See below for each agent type]
|
||||
**File Path:** `docs/sprint-artifacts/completions/{{story_key}}-{{agent_name}}.json`
|
||||
|
||||
### Files Created
|
||||
- path/to/new/file.ts
|
||||
- path/to/another.ts
|
||||
|
||||
### Files Modified
|
||||
- path/to/existing/file.ts
|
||||
|
||||
### Ready For
|
||||
[Next phase or action required]
|
||||
**Format:**
|
||||
```json
|
||||
{
|
||||
"story_key": "{{story_key}}",
|
||||
"agent": "{{agent_name}}",
|
||||
"status": "SUCCESS",
|
||||
"files_created": ["file1.ts", "file2.ts"],
|
||||
"files_modified": ["file3.ts"],
|
||||
"timestamp": "2026-01-27T02:30:00Z"
|
||||
}
|
||||
```
|
||||
</format>
|
||||
|
||||
<builder_format>
|
||||
## Builder Agent Output
|
||||
|
||||
```markdown
|
||||
## AGENT COMPLETE
|
||||
|
||||
**Agent:** builder
|
||||
**Story:** {{story_key}}
|
||||
**Status:** SUCCESS | FAILED
|
||||
|
||||
### Files Created
|
||||
- src/lib/feature/service.ts
|
||||
- src/lib/feature/__tests__/service.test.ts
|
||||
|
||||
### Files Modified
|
||||
- src/app/api/feature/route.ts
|
||||
|
||||
### Tests Added
|
||||
- 3 test files
|
||||
- 12 test cases total
|
||||
|
||||
### Implementation Summary
|
||||
Brief description of what was built.
|
||||
|
||||
### Known Gaps
|
||||
- Edge case X not handled
|
||||
- NONE if all complete
|
||||
|
||||
### Ready For
|
||||
Inspector validation
|
||||
**Use Write tool to create this file. No exceptions.**
|
||||
```
|
||||
</builder_format>
|
||||
|
||||
<inspector_format>
|
||||
## Inspector Agent Output
|
||||
### In Orchestrator Verification
|
||||
|
||||
```markdown
|
||||
## AGENT COMPLETE
|
||||
|
||||
**Agent:** inspector
|
||||
**Story:** {{story_key}}
|
||||
**Status:** PASS | FAIL
|
||||
|
||||
### Evidence
|
||||
- **Type Check:** PASS (0 errors)
|
||||
- **Lint:** PASS (0 warnings)
|
||||
- **Build:** PASS
|
||||
- **Tests:** 45 passing, 0 failing, 92% coverage
|
||||
|
||||
### Files Verified
|
||||
- src/lib/feature/service.ts ✓
|
||||
- src/app/api/feature/route.ts ✓
|
||||
|
||||
### Failures (if FAIL status)
|
||||
1. Type error in service.ts:45
|
||||
2. Test failing: "should handle empty input"
|
||||
|
||||
### Ready For
|
||||
Reviewer (if PASS) | Builder fix (if FAIL)
|
||||
```
|
||||
</inspector_format>
|
||||
|
||||
<reviewer_format>
|
||||
## Reviewer Agent Output
|
||||
|
||||
```markdown
|
||||
## AGENT COMPLETE
|
||||
|
||||
**Agent:** reviewer
|
||||
**Story:** {{story_key}}
|
||||
**Status:** ISSUES_FOUND | CLEAN
|
||||
|
||||
### Issue Summary
|
||||
- **CRITICAL:** 1 (security, data loss)
|
||||
- **HIGH:** 2 (production bugs)
|
||||
- **MEDIUM:** 3 (tech debt)
|
||||
- **LOW:** 1 (nice-to-have)
|
||||
|
||||
### Must Fix (CRITICAL + HIGH)
|
||||
1. [CRITICAL] service.ts:45 - SQL injection vulnerability
|
||||
2. [HIGH] route.ts:23 - Missing authorization check
|
||||
3. [HIGH] service.ts:78 - Unhandled null case
|
||||
|
||||
### Should Fix (MEDIUM)
|
||||
1. service.ts:92 - No error logging
|
||||
|
||||
### Files Reviewed
|
||||
- src/lib/feature/service.ts ✓
|
||||
- src/app/api/feature/route.ts ✓
|
||||
|
||||
### Ready For
|
||||
Fixer agent to address CRITICAL and HIGH issues
|
||||
```
|
||||
</reviewer_format>
|
||||
|
||||
<fixer_format>
|
||||
## Fixer Agent Output
|
||||
|
||||
```markdown
|
||||
## AGENT COMPLETE
|
||||
|
||||
**Agent:** fixer
|
||||
**Story:** {{story_key}}
|
||||
**Status:** SUCCESS | PARTIAL | FAILED
|
||||
|
||||
### Issues Fixed
|
||||
- **CRITICAL:** 1/1 fixed
|
||||
- **HIGH:** 2/2 fixed
|
||||
- **Total:** 3 issues resolved
|
||||
|
||||
### Fixes Applied
|
||||
1. [CRITICAL] service.ts:45 - Parameterized query
|
||||
2. [HIGH] route.ts:23 - Added auth check
|
||||
3. [HIGH] service.ts:78 - Added null guard
|
||||
|
||||
### Quality Checks
|
||||
- **Type Check:** PASS
|
||||
- **Lint:** PASS
|
||||
- **Tests:** 47 passing (2 new)
|
||||
|
||||
### Git Commit
|
||||
- **Hash:** abc123def
|
||||
- **Message:** fix({{story_key}}): address security and null handling
|
||||
|
||||
### Deferred Issues
|
||||
- MEDIUM: 3 (defer to follow-up)
|
||||
- LOW: 1 (skip as gold-plating)
|
||||
|
||||
### Ready For
|
||||
Orchestrator reconciliation
|
||||
```
|
||||
</fixer_format>
|
||||
|
||||
<parsing_hints>
|
||||
## Parsing Hints for Orchestrator
|
||||
|
||||
Extract key data using grep:
|
||||
After agent completes, verify artifact exists:
|
||||
|
||||
```bash
|
||||
# Get status
|
||||
grep "^\*\*Status:\*\*" agent_output.txt | cut -d: -f2 | xargs
|
||||
COMPLETION_FILE="docs/sprint-artifacts/completions/{{story_key}}-{{agent}}.json"
|
||||
|
||||
# Get files created
|
||||
sed -n '/### Files Created/,/###/p' agent_output.txt | grep "^-" | cut -d' ' -f2
|
||||
if [ ! -f "$COMPLETION_FILE" ]; then
|
||||
echo "❌ BLOCKER: Agent failed to create completion artifact"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Get issue count
|
||||
grep "CRITICAL:" agent_output.txt | grep -oE "[0-9]+"
|
||||
|
||||
# Check if ready for next phase
|
||||
grep "### Ready For" -A 1 agent_output.txt | tail -1
|
||||
echo "✅ Completion artifact found"
|
||||
```
|
||||
</parsing_hints>
|
||||
|
||||
### In Reconciliation
|
||||
|
||||
Parse artifact to update story file:
|
||||
|
||||
```markdown
|
||||
1. Load completion artifact with Read tool
|
||||
2. Parse JSON to extract data
|
||||
3. Use Edit tool to update story file
|
||||
4. Verify updates with bash checks
|
||||
```
|
||||
|
||||
## Artifact Formats by Agent
|
||||
|
||||
### Builder Completion
|
||||
|
||||
```json
|
||||
{
|
||||
"story_key": "19-4",
|
||||
"agent": "builder",
|
||||
"status": "SUCCESS",
|
||||
"tasks_completed": [
|
||||
"Create PaymentProcessor service",
|
||||
"Add retry logic with exponential backoff"
|
||||
],
|
||||
"files_created": [
|
||||
"lib/billing/payment-processor.ts",
|
||||
"lib/billing/__tests__/payment-processor.test.ts"
|
||||
],
|
||||
"files_modified": [
|
||||
"lib/billing/worker.ts"
|
||||
],
|
||||
"tests": {
|
||||
"files": 2,
|
||||
"cases": 15
|
||||
},
|
||||
"timestamp": "2026-01-27T02:30:00Z"
|
||||
}
|
||||
```
|
||||
|
||||
### Inspector Completion
|
||||
|
||||
```json
|
||||
{
|
||||
"story_key": "19-4",
|
||||
"agent": "inspector",
|
||||
"status": "PASS",
|
||||
"quality_checks": {
|
||||
"type_check": "PASS",
|
||||
"lint": "PASS",
|
||||
"build": "PASS"
|
||||
},
|
||||
"tests": {
|
||||
"passing": 45,
|
||||
"failing": 0,
|
||||
"total": 45,
|
||||
"coverage": 95
|
||||
},
|
||||
"files_verified": [
|
||||
"lib/billing/payment-processor.ts"
|
||||
],
|
||||
"timestamp": "2026-01-27T02:35:00Z"
|
||||
}
|
||||
```
|
||||
|
||||
### Reviewer Completion
|
||||
|
||||
```json
|
||||
{
|
||||
"story_key": "19-4",
|
||||
"agent": "reviewer",
|
||||
"status": "ISSUES_FOUND",
|
||||
"issues": {
|
||||
"critical": 2,
|
||||
"high": 3,
|
||||
"medium": 4,
|
||||
"low": 2,
|
||||
"total": 11
|
||||
},
|
||||
"must_fix": [
|
||||
{
|
||||
"severity": "CRITICAL",
|
||||
"location": "api/route.ts:45",
|
||||
"description": "SQL injection vulnerability"
|
||||
}
|
||||
],
|
||||
"files_reviewed": [
|
||||
"api/route.ts"
|
||||
],
|
||||
"timestamp": "2026-01-27T02:40:00Z"
|
||||
}
|
||||
```
|
||||
|
||||
### Fixer Completion (FINAL)
|
||||
|
||||
```json
|
||||
{
|
||||
"story_key": "19-4",
|
||||
"agent": "fixer",
|
||||
"status": "SUCCESS",
|
||||
"issues_fixed": {
|
||||
"critical": 2,
|
||||
"high": 3,
|
||||
"total": 5
|
||||
},
|
||||
"fixes_applied": [
|
||||
"Fixed SQL injection in agreement route (CRITICAL)",
|
||||
"Added authorization check (CRITICAL)"
|
||||
],
|
||||
"files_modified": [
|
||||
"api/route.ts"
|
||||
],
|
||||
"quality_checks": {
|
||||
"type_check": "PASS",
|
||||
"lint": "PASS",
|
||||
"build": "PASS"
|
||||
},
|
||||
"tests": {
|
||||
"passing": 48,
|
||||
"failing": 0,
|
||||
"total": 48,
|
||||
"coverage": 96
|
||||
},
|
||||
"git_commit": "a1b2c3d4e5f",
|
||||
"timestamp": "2026-01-27T02:50:00Z"
|
||||
}
|
||||
```
|
||||
|
||||
## Benefits
|
||||
|
||||
- **Reliability:** 60% → 100% (file exists is binary)
|
||||
- **Simplicity:** No complex output parsing
|
||||
- **Auditability:** JSON files are version controlled
|
||||
- **Debuggability:** Can inspect artifacts when issues occur
|
||||
- **Enforcement:** Can't proceed without completion artifact (hard stop)
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**Don't do this:**
|
||||
- ❌ Trust agent output without verification
|
||||
- ❌ Parse agent prose for structured data
|
||||
- ❌ Let agents update story files directly
|
||||
- ❌ Skip artifact creation ("just this once")
|
||||
|
||||
**Do this instead:**
|
||||
- ✅ Verify artifact exists (binary check)
|
||||
- ✅ Parse JSON for reliable data
|
||||
- ✅ Orchestrator updates story files
|
||||
- ✅ Hard stop if artifact missing
|
||||
|
|
|
|||
|
|
@ -1,122 +1,340 @@
|
|||
# Security Review Checklist
|
||||
|
||||
<overview>
|
||||
Security vulnerabilities are CRITICAL issues. A single vulnerability can expose user data, enable account takeover, or cause financial loss.
|
||||
**Philosophy:** Security issues are CRITICAL. No exceptions.
|
||||
|
||||
**Principle:** Assume all input is malicious. Validate everything.
|
||||
</overview>
|
||||
This checklist helps identify common security vulnerabilities in code reviews.
|
||||
|
||||
<owasp_top_10>
|
||||
## OWASP Top 10 Checks
|
||||
## CRITICAL Security Issues
|
||||
|
||||
### 1. Injection
|
||||
```bash
|
||||
# Check for SQL injection
|
||||
grep -E "SELECT.*\+|INSERT.*\+|UPDATE.*\+|DELETE.*\+" . -r
|
||||
grep -E '\$\{.*\}.*query|\`.*\$\{' . -r
|
||||
These MUST be fixed. No story ships with these issues.
|
||||
|
||||
# Check for command injection
|
||||
grep -E "exec\(|spawn\(|system\(" . -r
|
||||
### 1. SQL Injection
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: User input in query string
|
||||
const query = `SELECT * FROM users WHERE id = '${userId}'`;
|
||||
const query = "SELECT * FROM users WHERE id = '" + userId + "'";
|
||||
```
|
||||
|
||||
**Fix:** Use parameterized queries, never string concatenation.
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Parameterized queries
|
||||
const query = db.prepare('SELECT * FROM users WHERE id = ?');
|
||||
query.get(userId);
|
||||
|
||||
### 2. Broken Authentication
|
||||
```bash
|
||||
# Check for hardcoded credentials
|
||||
grep -E "password.*=.*['\"]|api.?key.*=.*['\"]|secret.*=.*['\"]" . -r -i
|
||||
|
||||
# Check for weak session handling
|
||||
grep -E "localStorage.*token|sessionStorage.*password" . -r
|
||||
// ✅ GOOD: ORM/Query builder
|
||||
const user = await prisma.user.findUnique({ where: { id: userId } });
|
||||
```
|
||||
|
||||
**Fix:** Use secure session management, never store secrets in code.
|
||||
### 2. XSS (Cross-Site Scripting)
|
||||
|
||||
### 3. Sensitive Data Exposure
|
||||
```bash
|
||||
# Check for PII logging
|
||||
grep -E "console\.(log|info|debug).*password|log.*email|log.*ssn" . -r -i
|
||||
|
||||
# Check for unencrypted transmission
|
||||
grep -E "http://(?!localhost)" . -r
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: Unsanitized user input in HTML
|
||||
element.innerHTML = userInput;
|
||||
document.write(userInput);
|
||||
```
|
||||
|
||||
**Fix:** Never log sensitive data, always use HTTPS.
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Use textContent or sanitize
|
||||
element.textContent = userInput;
|
||||
|
||||
### 4. XML External Entities (XXE)
|
||||
```bash
|
||||
# Check for unsafe XML parsing
|
||||
grep -E "parseXML|DOMParser|xml2js" . -r
|
||||
// ✅ GOOD: Use framework's built-in escaping
|
||||
<div>{userInput}</div> // React automatically escapes
|
||||
```
|
||||
|
||||
**Fix:** Disable external entity processing.
|
||||
### 3. Authentication Bypass
|
||||
|
||||
### 5. Broken Access Control
|
||||
```bash
|
||||
# Check for missing auth checks
|
||||
grep -E "export.*function.*(GET|POST|PUT|DELETE)" . -r | head -20
|
||||
# Then verify each has auth check
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: No auth check
|
||||
app.get('/api/admin/users', async (req, res) => {
|
||||
const users = await getUsers();
|
||||
res.json(users);
|
||||
});
|
||||
```
|
||||
|
||||
**Fix:** Every endpoint must verify user has permission.
|
||||
|
||||
### 6. Security Misconfiguration
|
||||
```bash
|
||||
# Check for debug mode in prod
|
||||
grep -E "debug.*true|NODE_ENV.*development" . -r
|
||||
|
||||
# Check for default credentials
|
||||
grep -E "admin.*admin|password.*password|123456" . -r
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Require auth
|
||||
app.get('/api/admin/users', requireAuth, async (req, res) => {
|
||||
const users = await getUsers();
|
||||
res.json(users);
|
||||
});
|
||||
```
|
||||
|
||||
**Fix:** Secure configuration, no defaults.
|
||||
### 4. Authorization Gaps
|
||||
|
||||
### 7. Cross-Site Scripting (XSS)
|
||||
```bash
|
||||
# Check for innerHTML usage
|
||||
grep -E "innerHTML|dangerouslySetInnerHTML" . -r
|
||||
|
||||
# Check for unescaped output
|
||||
grep -E "\$\{.*\}.*<|<.*\$\{" . -r
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: No ownership check
|
||||
app.delete('/api/orders/:id', async (req, res) => {
|
||||
await deleteOrder(req.params.id);
|
||||
res.json({ success: true });
|
||||
});
|
||||
```
|
||||
|
||||
**Fix:** Always escape user input, use safe rendering.
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Verify user owns resource
|
||||
app.delete('/api/orders/:id', async (req, res) => {
|
||||
const order = await getOrder(req.params.id);
|
||||
|
||||
### 8. Insecure Deserialization
|
||||
```bash
|
||||
# Check for unsafe JSON parsing
|
||||
grep -E "JSON\.parse\(.*req\." . -r
|
||||
grep -E "eval\(|Function\(" . -r
|
||||
if (order.userId !== req.user.id) {
|
||||
return res.status(403).json({ error: 'Forbidden' });
|
||||
}
|
||||
|
||||
await deleteOrder(req.params.id);
|
||||
res.json({ success: true });
|
||||
});
|
||||
```
|
||||
|
||||
**Fix:** Validate structure before parsing.
|
||||
### 5. Hardcoded Secrets
|
||||
|
||||
### 9. Using Components with Known Vulnerabilities
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: Secrets in code
|
||||
const API_KEY = 'sk-1234567890abcdef';
|
||||
const DB_PASSWORD = 'MyP@ssw0rd123';
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Environment variables
|
||||
const API_KEY = process.env.API_KEY;
|
||||
const DB_PASSWORD = process.env.DB_PASSWORD;
|
||||
|
||||
// ✅ GOOD: Secrets manager
|
||||
const API_KEY = await secretsManager.get('API_KEY');
|
||||
```
|
||||
|
||||
### 6. Insecure Direct Object Reference (IDOR)
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: Use user-supplied ID without validation
|
||||
app.get('/api/documents/:id', async (req, res) => {
|
||||
const doc = await getDocument(req.params.id);
|
||||
res.json(doc);
|
||||
});
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Verify access
|
||||
app.get('/api/documents/:id', async (req, res) => {
|
||||
const doc = await getDocument(req.params.id);
|
||||
|
||||
// Check user has permission to view this document
|
||||
if (!await userCanAccessDocument(req.user.id, doc.id)) {
|
||||
return res.status(403).json({ error: 'Forbidden' });
|
||||
}
|
||||
|
||||
res.json(doc);
|
||||
});
|
||||
```
|
||||
|
||||
## HIGH Security Issues
|
||||
|
||||
These should be fixed before shipping.
|
||||
|
||||
### 7. Missing Input Validation
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: No validation
|
||||
app.post('/api/users', async (req, res) => {
|
||||
await createUser(req.body);
|
||||
res.json({ success: true });
|
||||
});
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Validate input
|
||||
app.post('/api/users', async (req, res) => {
|
||||
const schema = z.object({
|
||||
email: z.string().email(),
|
||||
age: z.number().min(18).max(120)
|
||||
});
|
||||
|
||||
try {
|
||||
const data = schema.parse(req.body);
|
||||
await createUser(data);
|
||||
res.json({ success: true });
|
||||
} catch (error) {
|
||||
res.status(400).json({ error: error.errors });
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
### 8. Sensitive Data Exposure
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: Exposing sensitive fields
|
||||
const user = await getUser(userId);
|
||||
res.json(user); // Contains password hash, SSN, etc.
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Select only safe fields
|
||||
const user = await getUser(userId);
|
||||
res.json({
|
||||
id: user.id,
|
||||
name: user.name,
|
||||
email: user.email
|
||||
// Don't include: password, ssn, etc.
|
||||
});
|
||||
```
|
||||
|
||||
### 9. Missing Rate Limiting
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: No rate limit
|
||||
app.post('/api/login', async (req, res) => {
|
||||
const user = await authenticate(req.body);
|
||||
res.json({ token: user.token });
|
||||
});
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Rate limit sensitive endpoints
|
||||
app.post('/api/login',
|
||||
rateLimit({ max: 5, windowMs: 60000 }), // 5 attempts per minute
|
||||
async (req, res) => {
|
||||
const user = await authenticate(req.body);
|
||||
res.json({ token: user.token });
|
||||
}
|
||||
);
|
||||
```
|
||||
|
||||
### 10. Insecure Randomness
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: Using Math.random() for tokens
|
||||
const token = Math.random().toString(36);
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Cryptographically secure random
|
||||
const crypto = require('crypto');
|
||||
const token = crypto.randomBytes(32).toString('hex');
|
||||
```
|
||||
|
||||
## MEDIUM Security Issues
|
||||
|
||||
These improve security but aren't critical.
|
||||
|
||||
### 11. Missing HTTPS
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: HTTP only
|
||||
app.listen(3000);
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Force HTTPS in production
|
||||
if (process.env.NODE_ENV === 'production') {
|
||||
app.use((req, res, next) => {
|
||||
if (req.header('x-forwarded-proto') !== 'https') {
|
||||
res.redirect(`https://${req.header('host')}${req.url}`);
|
||||
} else {
|
||||
next();
|
||||
}
|
||||
});
|
||||
}
|
||||
```
|
||||
|
||||
### 12. Missing Security Headers
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: No security headers
|
||||
app.use(express.json());
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Add security headers
|
||||
app.use(helmet()); // Adds multiple security headers
|
||||
```
|
||||
|
||||
### 13. Verbose Error Messages
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: Exposing stack traces
|
||||
app.use((error, req, res, next) => {
|
||||
res.status(500).json({ error: error.stack });
|
||||
});
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Generic error message
|
||||
app.use((error, req, res, next) => {
|
||||
console.error(error); // Log internally
|
||||
res.status(500).json({ error: 'Internal server error' });
|
||||
});
|
||||
```
|
||||
|
||||
## Review Process
|
||||
|
||||
### Step 1: Automated Checks
|
||||
|
||||
Run security scanners:
|
||||
```bash
|
||||
# Check for outdated dependencies
|
||||
# Check for known vulnerabilities
|
||||
npm audit
|
||||
|
||||
# Static analysis
|
||||
npx eslint-plugin-security
|
||||
|
||||
# Secrets detection
|
||||
git secrets --scan
|
||||
```
|
||||
|
||||
**Fix:** Keep dependencies updated, monitor CVEs.
|
||||
### Step 2: Manual Review
|
||||
|
||||
### 10. Insufficient Logging
|
||||
```bash
|
||||
# Check for security event logging
|
||||
grep -E "log.*(login|auth|permission|access)" . -r
|
||||
Use this checklist to review:
|
||||
- [ ] SQL injection vulnerabilities
|
||||
- [ ] XSS vulnerabilities
|
||||
- [ ] Authentication bypasses
|
||||
- [ ] Authorization gaps
|
||||
- [ ] Hardcoded secrets
|
||||
- [ ] IDOR vulnerabilities
|
||||
- [ ] Missing input validation
|
||||
- [ ] Sensitive data exposure
|
||||
- [ ] Missing rate limiting
|
||||
- [ ] Insecure randomness
|
||||
|
||||
### Step 3: Document Findings
|
||||
|
||||
For each issue found:
|
||||
```markdown
|
||||
**Issue #1: SQL Injection Vulnerability**
|
||||
- **Location:** api/users/route.ts:45
|
||||
- **Severity:** CRITICAL
|
||||
- **Problem:** User input concatenated into query
|
||||
- **Code:**
|
||||
```typescript
|
||||
const query = `SELECT * FROM users WHERE id = '${userId}'`
|
||||
```
|
||||
- **Fix:** Use parameterized queries with Prisma
|
||||
```
|
||||
|
||||
**Fix:** Log security events with context.
|
||||
</owasp_top_10>
|
||||
## Remember
|
||||
|
||||
<severity_ratings>
|
||||
## Severity Ratings
|
||||
**Security issues are CRITICAL. They MUST be fixed.**
|
||||
|
||||
| Severity | Impact | Examples |
|
||||
|----------|--------|----------|
|
||||
| CRITICAL | Data breach, account takeover | SQL injection, auth bypass |
|
||||
| HIGH | Service disruption, data corruption | Logic flaws, N+1 queries |
|
||||
| MEDIUM | Technical debt, maintainability | Missing validation, tight coupling |
|
||||
| LOW | Code style, nice-to-have | Naming, documentation |
|
||||
|
||||
**CRITICAL and HIGH must be fixed before merge.**
|
||||
</severity_ratings>
|
||||
Don't let security issues slide because "we'll fix it later." Fix them now.
|
||||
|
|
|
|||
|
|
@ -1,93 +1,184 @@
|
|||
# Test-Driven Development Pattern
|
||||
# Test-Driven Development (TDD) Pattern
|
||||
|
||||
<overview>
|
||||
TDD is about design quality, not coverage metrics. Writing tests first forces you to think about behavior before implementation.
|
||||
**Red → Green → Refactor**
|
||||
|
||||
**Principle:** If you can describe behavior as `expect(fn(input)).toBe(output)` before writing `fn`, TDD improves the result.
|
||||
</overview>
|
||||
Write tests first, make them pass, then refactor.
|
||||
|
||||
<when_to_use>
|
||||
## When TDD Improves Quality
|
||||
## Why TDD?
|
||||
|
||||
**Use TDD for:**
|
||||
- Business logic with defined inputs/outputs
|
||||
- API endpoints with request/response contracts
|
||||
- Data transformations, parsing, formatting
|
||||
- Validation rules and constraints
|
||||
- Algorithms with testable behavior
|
||||
1. **Design quality:** Writing tests first forces good API design
|
||||
2. **Coverage:** 90%+ coverage by default
|
||||
3. **Confidence:** Refactor without fear
|
||||
4. **Documentation:** Tests document expected behavior
|
||||
|
||||
**Skip TDD for:**
|
||||
- UI layout and styling
|
||||
- Configuration changes
|
||||
- Glue code connecting existing components
|
||||
- One-off scripts
|
||||
- Simple CRUD with no business logic
|
||||
</when_to_use>
|
||||
## TDD Cycle
|
||||
|
||||
<red_green_refactor>
|
||||
## Red-Green-Refactor Cycle
|
||||
|
||||
**RED - Write failing test:**
|
||||
1. Create test describing expected behavior
|
||||
2. Run test - it MUST fail
|
||||
3. If test passes: feature exists or test is wrong
|
||||
|
||||
**GREEN - Implement to pass:**
|
||||
1. Write minimal code to make test pass
|
||||
2. No cleverness, no optimization - just make it work
|
||||
3. Run test - it MUST pass
|
||||
|
||||
**REFACTOR (if needed):**
|
||||
1. Clean up implementation
|
||||
2. Run tests - MUST still pass
|
||||
3. Only commit if changes made
|
||||
</red_green_refactor>
|
||||
|
||||
<test_quality>
|
||||
## Good Tests vs Bad Tests
|
||||
|
||||
**Test behavior, not implementation:**
|
||||
```typescript
|
||||
// GOOD: Tests observable behavior
|
||||
expect(formatDate(new Date('2024-01-15'))).toBe('Jan 15, 2024')
|
||||
|
||||
// BAD: Tests implementation details
|
||||
expect(formatDate).toHaveBeenCalledWith(expect.any(Date))
|
||||
```
|
||||
┌─────────────────────────────────────────────┐
|
||||
│ 1. RED: Write a failing test │
|
||||
│ - Test what the code SHOULD do │
|
||||
│ - Test fails (code doesn't exist yet) │
|
||||
└─────────────────────────────────────────────┘
|
||||
↓
|
||||
┌─────────────────────────────────────────────┐
|
||||
│ 2. GREEN: Write minimal code to pass │
|
||||
│ - Simplest implementation that works │
|
||||
│ - Test passes │
|
||||
└─────────────────────────────────────────────┘
|
||||
↓
|
||||
┌─────────────────────────────────────────────┐
|
||||
│ 3. REFACTOR: Clean up code │
|
||||
│ - Improve design │
|
||||
│ - Remove duplication │
|
||||
│ - Tests still pass │
|
||||
└─────────────────────────────────────────────┘
|
||||
↓
|
||||
(repeat for next feature)
|
||||
```
|
||||
|
||||
**One concept per test:**
|
||||
```typescript
|
||||
// GOOD: Separate tests
|
||||
it('accepts valid email', () => { ... })
|
||||
it('rejects empty email', () => { ... })
|
||||
it('rejects malformed email', () => { ... })
|
||||
## Implementation Order
|
||||
|
||||
// BAD: Multiple assertions
|
||||
it('validates email', () => {
|
||||
expect(validate('test@example.com')).toBe(true)
|
||||
expect(validate('')).toBe(false)
|
||||
expect(validate('invalid')).toBe(false)
|
||||
})
|
||||
### Greenfield (New Code)
|
||||
1. Write test for happy path
|
||||
2. Write test for error cases
|
||||
3. Write test for edge cases
|
||||
4. Implement to make all tests pass
|
||||
5. Refactor
|
||||
|
||||
### Brownfield (Existing Code)
|
||||
1. Understand existing behavior
|
||||
2. Add tests for current behavior (characterization tests)
|
||||
3. Write test for new behavior
|
||||
4. Implement new behavior
|
||||
5. Refactor
|
||||
|
||||
## Test Quality Standards
|
||||
|
||||
### Good Test Characteristics
|
||||
- ✅ **Isolated:** Each test independent
|
||||
- ✅ **Fast:** Runs in milliseconds
|
||||
- ✅ **Clear:** Obvious what it tests
|
||||
- ✅ **Focused:** One behavior per test
|
||||
- ✅ **Stable:** No flakiness
|
||||
|
||||
### Test Structure (AAA Pattern)
|
||||
```typescript
|
||||
test('should calculate total price with tax', () => {
|
||||
// Arrange: Set up test data
|
||||
const cart = new ShoppingCart();
|
||||
cart.addItem({ price: 100, quantity: 2 });
|
||||
|
||||
// Act: Execute the behavior
|
||||
const total = cart.getTotalWithTax(0.08);
|
||||
|
||||
// Assert: Verify the result
|
||||
expect(total).toBe(216); // (100 * 2) * 1.08
|
||||
});
|
||||
```
|
||||
|
||||
**Descriptive names:**
|
||||
## What to Test
|
||||
|
||||
### Must Test (Critical)
|
||||
- Business logic
|
||||
- API endpoints
|
||||
- Data transformations
|
||||
- Error handling
|
||||
- Authorization checks
|
||||
- Edge cases
|
||||
|
||||
### Nice to Test (Important)
|
||||
- UI components
|
||||
- Integration flows
|
||||
- Performance benchmarks
|
||||
|
||||
### Don't Waste Time Testing
|
||||
- Third-party libraries (already tested)
|
||||
- Framework internals (already tested)
|
||||
- Trivial getters/setters
|
||||
- Generated code
|
||||
|
||||
## Coverage Target
|
||||
|
||||
**Minimum:** 90% line coverage
|
||||
**Ideal:** 95%+ with meaningful tests
|
||||
|
||||
**Coverage ≠ Quality**
|
||||
- 100% coverage with bad tests is worthless
|
||||
- 90% coverage with good tests is excellent
|
||||
|
||||
## TDD Anti-Patterns
|
||||
|
||||
**Avoid these:**
|
||||
- ❌ Writing tests after code (test-after)
|
||||
- ❌ Testing implementation details
|
||||
- ❌ Tests that test nothing
|
||||
- ❌ Brittle tests (break with refactoring)
|
||||
- ❌ Slow tests (> 1 second)
|
||||
|
||||
## Example: TDD for API Endpoint
|
||||
|
||||
```typescript
|
||||
// GOOD
|
||||
it('returns null for invalid user ID')
|
||||
it('should reject empty email')
|
||||
// Step 1: RED - Write failing test
|
||||
describe('POST /api/orders', () => {
|
||||
test('should create order and return 201', async () => {
|
||||
const response = await request(app)
|
||||
.post('/api/orders')
|
||||
.send({ items: [{ id: 1, qty: 2 }] })
|
||||
.expect(201);
|
||||
|
||||
// BAD
|
||||
it('test1')
|
||||
it('handles error')
|
||||
it('works')
|
||||
expect(response.body).toHaveProperty('orderId');
|
||||
});
|
||||
});
|
||||
|
||||
// Test fails (endpoint doesn't exist yet)
|
||||
|
||||
// Step 2: GREEN - Minimal implementation
|
||||
app.post('/api/orders', async (req, res) => {
|
||||
const orderId = await createOrder(req.body);
|
||||
res.status(201).json({ orderId });
|
||||
});
|
||||
|
||||
// Test passes
|
||||
|
||||
// Step 3: REFACTOR - Add validation, error handling
|
||||
app.post('/api/orders', async (req, res) => {
|
||||
try {
|
||||
// Input validation
|
||||
const schema = z.object({
|
||||
items: z.array(z.object({
|
||||
id: z.number(),
|
||||
qty: z.number().min(1)
|
||||
}))
|
||||
});
|
||||
|
||||
const data = schema.parse(req.body);
|
||||
|
||||
// Business logic
|
||||
const orderId = await createOrder(data);
|
||||
|
||||
res.status(201).json({ orderId });
|
||||
} catch (error) {
|
||||
if (error instanceof z.ZodError) {
|
||||
res.status(400).json({ error: error.errors });
|
||||
} else {
|
||||
res.status(500).json({ error: 'Internal error' });
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
// All tests still pass
|
||||
```
|
||||
</test_quality>
|
||||
|
||||
<coverage_targets>
|
||||
## Coverage Targets
|
||||
## TDD in Practice
|
||||
|
||||
- **90%+ line coverage** for new code
|
||||
- **100% branch coverage** for critical paths (auth, payments)
|
||||
- **Every error path** has at least one test
|
||||
- **Edge cases** explicitly tested
|
||||
</coverage_targets>
|
||||
**Start here:**
|
||||
1. Write one test for the simplest case
|
||||
2. Make it pass with simplest code
|
||||
3. Write next test for slightly more complex case
|
||||
4. Refactor when you see duplication
|
||||
5. Repeat
|
||||
|
||||
**Don't:**
|
||||
- Write all tests first (too much work)
|
||||
- Write production code without failing test
|
||||
- Skip refactoring step
|
||||
|
|
|
|||
|
|
@ -1,143 +1,198 @@
|
|||
# Verification Patterns
|
||||
# Independent Verification Pattern
|
||||
|
||||
<overview>
|
||||
Existence ≠ Implementation. A file existing does not mean the feature works.
|
||||
**Philosophy:** Trust but verify. Fresh eyes catch what familiarity misses.
|
||||
|
||||
**Verification levels:**
|
||||
1. **Exists** - File is present
|
||||
2. **Substantive** - Content is real, not placeholder
|
||||
3. **Wired** - Connected to rest of system
|
||||
4. **Functional** - Actually works when invoked
|
||||
</overview>
|
||||
## Core Principle
|
||||
|
||||
<stub_detection>
|
||||
## Detecting Stubs and Placeholders
|
||||
The person who built something should NOT validate their own work.
|
||||
|
||||
**Why?**
|
||||
- Confirmation bias (see what you expect to see)
|
||||
- Blind spots (familiar with your own code)
|
||||
- Fatigue (validated while building, miss issues)
|
||||
|
||||
## Verification Requirements
|
||||
|
||||
### Fresh Context
|
||||
Inspector agent has:
|
||||
- ✅ No knowledge of what Builder did
|
||||
- ✅ No preconceptions about implementation
|
||||
- ✅ Only the story requirements as context
|
||||
|
||||
**This means:**
|
||||
- Run all checks yourself
|
||||
- Don't trust any claims
|
||||
- Start from scratch
|
||||
|
||||
### What to Verify
|
||||
|
||||
**1. Files Exist**
|
||||
```bash
|
||||
# Comment-based stubs
|
||||
grep -E "(TODO|FIXME|XXX|PLACEHOLDER)" "$file"
|
||||
grep -E "implement|add later|coming soon" "$file" -i
|
||||
|
||||
# Empty implementations
|
||||
grep -E "return null|return undefined|return \{\}|return \[\]" "$file"
|
||||
grep -E "console\.(log|warn).*only" "$file"
|
||||
|
||||
# Placeholder text
|
||||
grep -E "placeholder|lorem ipsum|sample data" "$file" -i
|
||||
# For each file mentioned in story tasks
|
||||
ls -la {{file_path}}
|
||||
# FAIL if file missing or empty
|
||||
```
|
||||
|
||||
**Red flags in code:**
|
||||
```typescript
|
||||
// STUBS - Not real implementations:
|
||||
return <div>Placeholder</div>
|
||||
onClick={() => {}}
|
||||
export async function POST() { return Response.json({ ok: true }) }
|
||||
```
|
||||
</stub_detection>
|
||||
**2. File Contents**
|
||||
- Open each file
|
||||
- Check it has actual code (not just TODO/stub)
|
||||
- Verify it matches story requirements
|
||||
|
||||
<file_verification>
|
||||
## File Verification Commands
|
||||
|
||||
**React Components:**
|
||||
**3. Tests Exist**
|
||||
```bash
|
||||
# Exists and exports component
|
||||
[ -f "$file" ] && grep -E "export.*function|export const.*=" "$file"
|
||||
|
||||
# Has real JSX (not null/empty)
|
||||
grep -E "return.*<" "$file" | grep -v "return.*null"
|
||||
|
||||
# Uses props/state (not static)
|
||||
grep -E "props\.|useState|useEffect" "$file"
|
||||
find . -name "*.test.ts" -o -name "__tests__"
|
||||
# FAIL if no tests found for new code
|
||||
```
|
||||
|
||||
**API Routes:**
|
||||
```bash
|
||||
# Exports HTTP handlers
|
||||
grep -E "export.*(GET|POST|PUT|DELETE)" "$file"
|
||||
**4. Quality Checks Pass**
|
||||
|
||||
# Has database interaction
|
||||
grep -E "prisma\.|db\.|query|find|create" "$file"
|
||||
|
||||
# Has error handling
|
||||
grep -E "try|catch|throw" "$file"
|
||||
```
|
||||
|
||||
**Tests:**
|
||||
```bash
|
||||
# Test file exists
|
||||
[ -f "$test_file" ]
|
||||
|
||||
# Has test cases
|
||||
grep -E "it\(|test\(|describe\(" "$test_file"
|
||||
|
||||
# Not all skipped
|
||||
grep -c "\.skip" "$test_file"
|
||||
```
|
||||
</file_verification>
|
||||
|
||||
<quality_commands>
|
||||
## Quality Check Commands
|
||||
Run these yourself. Don't trust claims.
|
||||
|
||||
```bash
|
||||
# Type check - zero errors
|
||||
# Type check
|
||||
npm run type-check
|
||||
echo "Exit code: $?"
|
||||
# FAIL if any errors
|
||||
|
||||
# Lint - zero errors/warnings
|
||||
npm run lint 2>&1 | tail -5
|
||||
# Linter
|
||||
npm run lint
|
||||
# FAIL if any errors or warnings
|
||||
|
||||
# Tests - all passing
|
||||
npm test 2>&1 | grep -E "pass|fail|error" -i | tail -10
|
||||
# Build
|
||||
npm run build
|
||||
# FAIL if build fails
|
||||
|
||||
# Build - succeeds
|
||||
npm run build 2>&1 | tail -5
|
||||
# Tests
|
||||
npm test -- {{story_specific_tests}}
|
||||
# FAIL if any tests fail
|
||||
# FAIL if tests are skipped
|
||||
# FAIL if coverage < 90%
|
||||
```
|
||||
|
||||
**All must return exit code 0.**
|
||||
</quality_commands>
|
||||
|
||||
<wiring_checks>
|
||||
## Wiring Verification
|
||||
|
||||
**Component → API:**
|
||||
**5. Git Status**
|
||||
```bash
|
||||
# Check component calls the API
|
||||
grep -E "fetch\(['\"].*$api_path|axios.*$api_path" "$component"
|
||||
git status
|
||||
# Check for uncommitted files
|
||||
# List what was changed
|
||||
```
|
||||
|
||||
**API → Database:**
|
||||
```bash
|
||||
# Check API queries database
|
||||
grep -E "await.*prisma|await.*db\." "$route"
|
||||
```
|
||||
## Verification Verdict
|
||||
|
||||
**Form → Handler:**
|
||||
```bash
|
||||
# Check form has real submit handler
|
||||
grep -A 5 "onSubmit" "$component" | grep -E "fetch|axios|mutate"
|
||||
```
|
||||
</wiring_checks>
|
||||
### PASS Criteria
|
||||
All of these must be true:
|
||||
- [ ] All story files exist and have content
|
||||
- [ ] Type check returns 0 errors
|
||||
- [ ] Linter returns 0 errors/warnings
|
||||
- [ ] Build succeeds
|
||||
- [ ] Tests run and pass (not skipped)
|
||||
- [ ] Test coverage >= 90%
|
||||
- [ ] Git status is clean or has expected changes
|
||||
|
||||
<verdict_format>
|
||||
## Verdict Format
|
||||
**If ANY checkbox is unchecked → FAIL verdict**
|
||||
|
||||
### PASS Output
|
||||
|
||||
```markdown
|
||||
## VALIDATION RESULT
|
||||
✅ VALIDATION PASSED
|
||||
|
||||
**Status:** PASS | FAIL
|
||||
|
||||
### Evidence
|
||||
Evidence:
|
||||
- Files verified: [list files checked]
|
||||
- Type check: PASS (0 errors)
|
||||
- Lint: PASS (0 warnings)
|
||||
- Linter: PASS (0 warnings)
|
||||
- Build: PASS
|
||||
- Tests: 45/45 passing
|
||||
- Tests: 45/45 passing (95% coverage)
|
||||
- Git: 12 files modified, 3 new files
|
||||
|
||||
### Files Verified
|
||||
- path/to/file.ts ✓
|
||||
- path/to/other.ts ✓
|
||||
|
||||
### Failures (if FAIL)
|
||||
1. [CRITICAL] Missing file: src/api/route.ts
|
||||
2. [HIGH] Type error in lib/auth.ts:45
|
||||
Ready for code review.
|
||||
```
|
||||
</verdict_format>
|
||||
|
||||
### FAIL Output
|
||||
|
||||
```markdown
|
||||
❌ VALIDATION FAILED
|
||||
|
||||
Failures:
|
||||
1. File missing: app/api/occupant/agreement/route.ts
|
||||
2. Type check: 3 errors in lib/api/auth.ts
|
||||
3. Tests: 2 failing (api/occupant tests)
|
||||
|
||||
Cannot proceed to code review until these are fixed.
|
||||
```
|
||||
|
||||
## Why This Works
|
||||
|
||||
**Verification is NOT rubber-stamping.**
|
||||
|
||||
Inspector's job is to find the truth:
|
||||
- Did the work actually get done?
|
||||
- Do the quality checks actually pass?
|
||||
- Are the files actually there?
|
||||
|
||||
If something is wrong, say so with evidence.
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**Don't do this:**
|
||||
- ❌ Take Builder's word for anything
|
||||
- ❌ Skip verification steps
|
||||
- ❌ Assume tests pass without running them
|
||||
- ❌ Give PASS verdict if ANY check fails
|
||||
|
||||
**Do this instead:**
|
||||
- ✅ Run all checks yourself
|
||||
- ✅ Provide specific evidence
|
||||
- ✅ Give honest verdict
|
||||
- ✅ FAIL fast if issues found
|
||||
|
||||
## Example: Good Verification
|
||||
|
||||
```markdown
|
||||
## Verification Results
|
||||
|
||||
**File Checks:**
|
||||
✅ lib/billing/payment-processor.ts (1,234 lines)
|
||||
✅ lib/billing/__tests__/payment-processor.test.ts (456 lines)
|
||||
✅ lib/billing/worker.ts (modified)
|
||||
|
||||
**Quality Checks:**
|
||||
✅ Type check: PASS (0 errors)
|
||||
✅ Linter: PASS (0 warnings)
|
||||
✅ Build: PASS (2.3s)
|
||||
|
||||
**Tests:**
|
||||
✅ 48/48 passing
|
||||
✅ 96% coverage
|
||||
✅ 0 skipped
|
||||
|
||||
**Git Status:**
|
||||
- Modified: 1 file
|
||||
- Created: 2 files
|
||||
- Total: 3 files changed
|
||||
|
||||
**Verdict:** PASS
|
||||
|
||||
Ready for code review.
|
||||
```
|
||||
|
||||
## Example: Bad Verification (Don't Do This)
|
||||
|
||||
```markdown
|
||||
## Verification Results
|
||||
|
||||
Everything looks good! ✅
|
||||
|
||||
Builder said tests pass and I believe them.
|
||||
|
||||
**Verdict:** PASS
|
||||
```
|
||||
|
||||
**What's wrong:**
|
||||
- ❌ No evidence
|
||||
- ❌ Trusted claims without verification
|
||||
- ❌ Didn't run checks
|
||||
- ❌ Rubber-stamped
|
||||
|
||||
## Remember
|
||||
|
||||
**You are the INSPECTOR. Your job is to find the truth.**
|
||||
|
||||
If you give a PASS verdict and later find issues, that's on you.
|
||||
|
|
|
|||
|
|
@ -19,9 +19,9 @@ name: multi-agent-review
|
|||
version: 3.0.0
|
||||
|
||||
agent_selection:
|
||||
micro: {count: 1, agents: [security]}
|
||||
standard: {count: 2, agents: [security, code_quality]}
|
||||
complex: {count: 3, agents: [security, code_quality, architecture]}
|
||||
micro: {count: 2, agents: [security, code_quality]}
|
||||
standard: {count: 4, agents: [security, code_quality, architecture, testing]}
|
||||
complex: {count: 6, agents: [security, code_quality, architecture, testing, performance, domain_expert]}
|
||||
|
||||
available_agents:
|
||||
security: "Identifies vulnerabilities and security risks"
|
||||
|
|
@ -41,40 +41,21 @@ available_agents:
|
|||
<process>
|
||||
|
||||
<step name="determine_agent_count" priority="first">
|
||||
**Select agents based on override or complexity**
|
||||
**Select agents based on complexity**
|
||||
|
||||
```
|
||||
# Priority 1: Check for explicit override
|
||||
If override_agent_count is provided (not null):
|
||||
agent_count = min(override_agent_count, 6) # Cap at 6 max
|
||||
Select top N agents based on changed code patterns
|
||||
Display: 🔧 CUSTOM Review ({{agent_count}} agents)
|
||||
|
||||
# Priority 2: Use complexity-based default
|
||||
Else if complexity_level == "micro":
|
||||
agent_count = 1
|
||||
agents = ["security"]
|
||||
Display: 🔍 MICRO Review (1 agent)
|
||||
If complexity_level == "micro":
|
||||
agents = ["security", "code_quality"]
|
||||
Display: 🔍 MICRO Review (2 agents)
|
||||
|
||||
Else if complexity_level == "standard":
|
||||
agent_count = 2
|
||||
agents = ["security", "code_quality"]
|
||||
Display: 📋 STANDARD Review (2 agents)
|
||||
agents = ["security", "code_quality", "architecture", "testing"]
|
||||
Display: 📋 STANDARD Review (4 agents)
|
||||
|
||||
Else if complexity_level == "complex":
|
||||
agent_count = 3
|
||||
agents = ["security", "code_quality", "architecture"]
|
||||
Display: 🔬 COMPLEX Review (3 agents)
|
||||
agents = ALL 6 agents
|
||||
Display: 🔬 COMPLEX Review (6 agents)
|
||||
```
|
||||
|
||||
**Agent Selection Priority:**
|
||||
1. Security (always first)
|
||||
2. Code Quality (always second)
|
||||
3-6. Selected based on code patterns:
|
||||
- Architecture (for structural changes)
|
||||
- Testing (for test coverage)
|
||||
- Performance (for optimization)
|
||||
- Domain Expert (for business logic)
|
||||
</step>
|
||||
|
||||
<step name="load_story_context">
|
||||
|
|
|
|||
|
|
@ -18,29 +18,28 @@ story_id: "{story_id}" # Required
|
|||
story_file: "{sprint_artifacts}/story-{story_id}.md"
|
||||
base_branch: "main" # Optional: branch to compare against
|
||||
complexity_level: "standard" # micro | standard | complex (passed from story-full-pipeline)
|
||||
override_agent_count: null # Optional: override complexity-based count (1-6, null = use complexity_level)
|
||||
|
||||
# Complexity-based agent selection (NEW v1.0.0)
|
||||
# Cost-effective review depth based on story RISK and technical complexity
|
||||
# Complexity determined by batch-stories based on: risk keywords, architectural impact, security concerns
|
||||
complexity_routing:
|
||||
micro:
|
||||
agent_count: 1
|
||||
agents: ["security"]
|
||||
agent_count: 2
|
||||
agents: ["security", "code_quality"]
|
||||
description: "Quick sanity check for low-risk stories"
|
||||
examples: ["UI tweaks", "text changes", "simple CRUD", "documentation"]
|
||||
cost_multiplier: 1x
|
||||
|
||||
standard:
|
||||
agent_count: 2
|
||||
agents: ["security", "code_quality"]
|
||||
agent_count: 4
|
||||
agents: ["security", "code_quality", "architecture", "testing"]
|
||||
description: "Balanced multi-perspective review for medium-risk changes"
|
||||
examples: ["API endpoints", "business logic", "data validation", "component refactors"]
|
||||
cost_multiplier: 2x
|
||||
|
||||
complex:
|
||||
agent_count: 3
|
||||
agents: ["security", "code_quality", "architecture"]
|
||||
agent_count: 6
|
||||
agents: ["security", "code_quality", "architecture", "testing", "performance", "domain_expert"]
|
||||
description: "Comprehensive review for high-risk/high-complexity changes"
|
||||
examples: ["auth/security", "payments", "data migration", "architecture changes", "performance-critical", "complex algorithms"]
|
||||
cost_multiplier: 3x
|
||||
|
|
|
|||
|
|
@ -1,225 +0,0 @@
|
|||
# Agent Completion Artifact Pattern
|
||||
|
||||
**Problem:** Agents fail to update story files reliably (60% success rate)
|
||||
**Solution:** Agents create completion.json artifacts. Orchestrator uses them to update story files.
|
||||
|
||||
## The Contract
|
||||
|
||||
### Agent Responsibility
|
||||
Each agent MUST create a completion artifact before finishing:
|
||||
- **File path:** `docs/sprint-artifacts/completions/{{story_key}}-{{agent_name}}.json`
|
||||
- **Format:** Structured JSON (see formats below)
|
||||
- **Verification:** File exists = work done (binary check)
|
||||
|
||||
### Orchestrator Responsibility
|
||||
Orchestrator reads completion artifacts and:
|
||||
- Parses JSON for structured data
|
||||
- Updates story file tasks (check off completed)
|
||||
- Fills Dev Agent Record with evidence
|
||||
- Verifies updates succeeded
|
||||
|
||||
## Why This Works
|
||||
|
||||
**File-based verification:**
|
||||
- ✅ Binary check: File exists or doesn't
|
||||
- ✅ No complex parsing of agent output
|
||||
- ✅ No reconciliation logic needed
|
||||
- ✅ Hard stop if artifact missing
|
||||
|
||||
**JSON format:**
|
||||
- ✅ Easy to parse reliably
|
||||
- ✅ Structured data (not prose)
|
||||
- ✅ Version controllable
|
||||
- ✅ Auditable trail
|
||||
|
||||
## How to Use This Pattern
|
||||
|
||||
### In Agent Prompts
|
||||
|
||||
Include this in every agent prompt:
|
||||
|
||||
```markdown
|
||||
## CRITICAL: Create Completion Artifact
|
||||
|
||||
**MANDATORY:** Before returning, you MUST create a completion artifact JSON file.
|
||||
|
||||
**File Path:** `docs/sprint-artifacts/completions/{{story_key}}-{{agent_name}}.json`
|
||||
|
||||
**Format:**
|
||||
```json
|
||||
{
|
||||
"story_key": "{{story_key}}",
|
||||
"agent": "{{agent_name}}",
|
||||
"status": "SUCCESS",
|
||||
"files_created": ["file1.ts", "file2.ts"],
|
||||
"files_modified": ["file3.ts"],
|
||||
"timestamp": "2026-01-27T02:30:00Z"
|
||||
}
|
||||
```
|
||||
|
||||
**Use Write tool to create this file. No exceptions.**
|
||||
```
|
||||
|
||||
### In Orchestrator Verification
|
||||
|
||||
After agent completes, verify artifact exists:
|
||||
|
||||
```bash
|
||||
COMPLETION_FILE="docs/sprint-artifacts/completions/{{story_key}}-{{agent}}.json"
|
||||
|
||||
if [ ! -f "$COMPLETION_FILE" ]; then
|
||||
echo "❌ BLOCKER: Agent failed to create completion artifact"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "✅ Completion artifact found"
|
||||
```
|
||||
|
||||
### In Reconciliation
|
||||
|
||||
Parse artifact to update story file:
|
||||
|
||||
```markdown
|
||||
1. Load completion artifact with Read tool
|
||||
2. Parse JSON to extract data
|
||||
3. Use Edit tool to update story file
|
||||
4. Verify updates with bash checks
|
||||
```
|
||||
|
||||
## Artifact Formats by Agent
|
||||
|
||||
### Builder Completion
|
||||
|
||||
```json
|
||||
{
|
||||
"story_key": "19-4",
|
||||
"agent": "builder",
|
||||
"status": "SUCCESS",
|
||||
"tasks_completed": [
|
||||
"Create PaymentProcessor service",
|
||||
"Add retry logic with exponential backoff"
|
||||
],
|
||||
"files_created": [
|
||||
"lib/billing/payment-processor.ts",
|
||||
"lib/billing/__tests__/payment-processor.test.ts"
|
||||
],
|
||||
"files_modified": [
|
||||
"lib/billing/worker.ts"
|
||||
],
|
||||
"tests": {
|
||||
"files": 2,
|
||||
"cases": 15
|
||||
},
|
||||
"timestamp": "2026-01-27T02:30:00Z"
|
||||
}
|
||||
```
|
||||
|
||||
### Inspector Completion
|
||||
|
||||
```json
|
||||
{
|
||||
"story_key": "19-4",
|
||||
"agent": "inspector",
|
||||
"status": "PASS",
|
||||
"quality_checks": {
|
||||
"type_check": "PASS",
|
||||
"lint": "PASS",
|
||||
"build": "PASS"
|
||||
},
|
||||
"tests": {
|
||||
"passing": 45,
|
||||
"failing": 0,
|
||||
"total": 45,
|
||||
"coverage": 95
|
||||
},
|
||||
"files_verified": [
|
||||
"lib/billing/payment-processor.ts"
|
||||
],
|
||||
"timestamp": "2026-01-27T02:35:00Z"
|
||||
}
|
||||
```
|
||||
|
||||
### Reviewer Completion
|
||||
|
||||
```json
|
||||
{
|
||||
"story_key": "19-4",
|
||||
"agent": "reviewer",
|
||||
"status": "ISSUES_FOUND",
|
||||
"issues": {
|
||||
"critical": 2,
|
||||
"high": 3,
|
||||
"medium": 4,
|
||||
"low": 2,
|
||||
"total": 11
|
||||
},
|
||||
"must_fix": [
|
||||
{
|
||||
"severity": "CRITICAL",
|
||||
"location": "api/route.ts:45",
|
||||
"description": "SQL injection vulnerability"
|
||||
}
|
||||
],
|
||||
"files_reviewed": [
|
||||
"api/route.ts"
|
||||
],
|
||||
"timestamp": "2026-01-27T02:40:00Z"
|
||||
}
|
||||
```
|
||||
|
||||
### Fixer Completion (FINAL)
|
||||
|
||||
```json
|
||||
{
|
||||
"story_key": "19-4",
|
||||
"agent": "fixer",
|
||||
"status": "SUCCESS",
|
||||
"issues_fixed": {
|
||||
"critical": 2,
|
||||
"high": 3,
|
||||
"total": 5
|
||||
},
|
||||
"fixes_applied": [
|
||||
"Fixed SQL injection in agreement route (CRITICAL)",
|
||||
"Added authorization check (CRITICAL)"
|
||||
],
|
||||
"files_modified": [
|
||||
"api/route.ts"
|
||||
],
|
||||
"quality_checks": {
|
||||
"type_check": "PASS",
|
||||
"lint": "PASS",
|
||||
"build": "PASS"
|
||||
},
|
||||
"tests": {
|
||||
"passing": 48,
|
||||
"failing": 0,
|
||||
"total": 48,
|
||||
"coverage": 96
|
||||
},
|
||||
"git_commit": "a1b2c3d4e5f",
|
||||
"timestamp": "2026-01-27T02:50:00Z"
|
||||
}
|
||||
```
|
||||
|
||||
## Benefits
|
||||
|
||||
- **Reliability:** 60% → 100% (file exists is binary)
|
||||
- **Simplicity:** No complex output parsing
|
||||
- **Auditability:** JSON files are version controlled
|
||||
- **Debuggability:** Can inspect artifacts when issues occur
|
||||
- **Enforcement:** Can't proceed without completion artifact (hard stop)
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**Don't do this:**
|
||||
- ❌ Trust agent output without verification
|
||||
- ❌ Parse agent prose for structured data
|
||||
- ❌ Let agents update story files directly
|
||||
- ❌ Skip artifact creation ("just this once")
|
||||
|
||||
**Do this instead:**
|
||||
- ✅ Verify artifact exists (binary check)
|
||||
- ✅ Parse JSON for reliable data
|
||||
- ✅ Orchestrator updates story files
|
||||
- ✅ Hard stop if artifact missing
|
||||
|
|
@ -1,75 +0,0 @@
|
|||
# Hospital-Grade Quality Standards
|
||||
|
||||
**Philosophy:** Quality >> Speed
|
||||
|
||||
This pattern ensures code meets production-grade standards regardless of story complexity.
|
||||
|
||||
## Core Principles
|
||||
|
||||
1. **Take time to do it right**
|
||||
- Don't rush implementations
|
||||
- Consider edge cases
|
||||
- Handle errors properly
|
||||
|
||||
2. **No shortcuts**
|
||||
- Don't skip error handling
|
||||
- Don't leave TODO comments
|
||||
- Don't use `any` types
|
||||
- Don't hardcode values
|
||||
|
||||
3. **Production-ready from day one**
|
||||
- All code deployable immediately
|
||||
- No "we'll fix it later"
|
||||
- No technical debt by design
|
||||
|
||||
## Quality Checklist
|
||||
|
||||
### Code Quality
|
||||
- [ ] All functions have clear, single responsibility
|
||||
- [ ] Error handling for all failure paths
|
||||
- [ ] Input validation at system boundaries
|
||||
- [ ] No magic numbers or hardcoded strings
|
||||
- [ ] Type safety (no `any`, proper generics)
|
||||
|
||||
### Testing
|
||||
- [ ] Unit tests for business logic
|
||||
- [ ] Integration tests for API endpoints
|
||||
- [ ] Edge cases covered
|
||||
- [ ] Error cases covered
|
||||
- [ ] 90%+ coverage target
|
||||
|
||||
### Security
|
||||
- [ ] No SQL injection vulnerabilities
|
||||
- [ ] No XSS vulnerabilities
|
||||
- [ ] Authentication/authorization checks
|
||||
- [ ] Input sanitization
|
||||
- [ ] No secrets in code
|
||||
|
||||
### Performance
|
||||
- [ ] No N+1 query patterns
|
||||
- [ ] Appropriate database indexes
|
||||
- [ ] Efficient algorithms (avoid O(n²) where possible)
|
||||
- [ ] Resource cleanup (connections, files)
|
||||
|
||||
### Maintainability
|
||||
- [ ] Code follows project patterns
|
||||
- [ ] Self-documenting code (clear names)
|
||||
- [ ] Comments only where logic isn't obvious
|
||||
- [ ] Consistent formatting
|
||||
- [ ] DRY (Don't Repeat Yourself)
|
||||
|
||||
## Red Flags
|
||||
|
||||
**Immediate rejection criteria:**
|
||||
- ❌ Security vulnerabilities
|
||||
- ❌ Data loss scenarios
|
||||
- ❌ Production bugs
|
||||
- ❌ Missing error handling
|
||||
- ❌ Skipped tests
|
||||
- ❌ Hardcoded secrets
|
||||
|
||||
## Hospital-Grade Mindset
|
||||
|
||||
> "If this code ran a medical device, would I trust it with my family's life?"
|
||||
|
||||
If the answer is no, it's not hospital-grade. Fix it.
|
||||
|
|
@ -1,340 +0,0 @@
|
|||
# Security Review Checklist
|
||||
|
||||
**Philosophy:** Security issues are CRITICAL. No exceptions.
|
||||
|
||||
This checklist helps identify common security vulnerabilities in code reviews.
|
||||
|
||||
## CRITICAL Security Issues
|
||||
|
||||
These MUST be fixed. No story ships with these issues.
|
||||
|
||||
### 1. SQL Injection
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: User input in query string
|
||||
const query = `SELECT * FROM users WHERE id = '${userId}'`;
|
||||
const query = "SELECT * FROM users WHERE id = '" + userId + "'";
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Parameterized queries
|
||||
const query = db.prepare('SELECT * FROM users WHERE id = ?');
|
||||
query.get(userId);
|
||||
|
||||
// ✅ GOOD: ORM/Query builder
|
||||
const user = await prisma.user.findUnique({ where: { id: userId } });
|
||||
```
|
||||
|
||||
### 2. XSS (Cross-Site Scripting)
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: Unsanitized user input in HTML
|
||||
element.innerHTML = userInput;
|
||||
document.write(userInput);
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Use textContent or sanitize
|
||||
element.textContent = userInput;
|
||||
|
||||
// ✅ GOOD: Use framework's built-in escaping
|
||||
<div>{userInput}</div> // React automatically escapes
|
||||
```
|
||||
|
||||
### 3. Authentication Bypass
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: No auth check
|
||||
app.get('/api/admin/users', async (req, res) => {
|
||||
const users = await getUsers();
|
||||
res.json(users);
|
||||
});
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Require auth
|
||||
app.get('/api/admin/users', requireAuth, async (req, res) => {
|
||||
const users = await getUsers();
|
||||
res.json(users);
|
||||
});
|
||||
```
|
||||
|
||||
### 4. Authorization Gaps
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: No ownership check
|
||||
app.delete('/api/orders/:id', async (req, res) => {
|
||||
await deleteOrder(req.params.id);
|
||||
res.json({ success: true });
|
||||
});
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Verify user owns resource
|
||||
app.delete('/api/orders/:id', async (req, res) => {
|
||||
const order = await getOrder(req.params.id);
|
||||
|
||||
if (order.userId !== req.user.id) {
|
||||
return res.status(403).json({ error: 'Forbidden' });
|
||||
}
|
||||
|
||||
await deleteOrder(req.params.id);
|
||||
res.json({ success: true });
|
||||
});
|
||||
```
|
||||
|
||||
### 5. Hardcoded Secrets
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: Secrets in code
|
||||
const API_KEY = 'sk-1234567890abcdef';
|
||||
const DB_PASSWORD = 'MyP@ssw0rd123';
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Environment variables
|
||||
const API_KEY = process.env.API_KEY;
|
||||
const DB_PASSWORD = process.env.DB_PASSWORD;
|
||||
|
||||
// ✅ GOOD: Secrets manager
|
||||
const API_KEY = await secretsManager.get('API_KEY');
|
||||
```
|
||||
|
||||
### 6. Insecure Direct Object Reference (IDOR)
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: Use user-supplied ID without validation
|
||||
app.get('/api/documents/:id', async (req, res) => {
|
||||
const doc = await getDocument(req.params.id);
|
||||
res.json(doc);
|
||||
});
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Verify access
|
||||
app.get('/api/documents/:id', async (req, res) => {
|
||||
const doc = await getDocument(req.params.id);
|
||||
|
||||
// Check user has permission to view this document
|
||||
if (!await userCanAccessDocument(req.user.id, doc.id)) {
|
||||
return res.status(403).json({ error: 'Forbidden' });
|
||||
}
|
||||
|
||||
res.json(doc);
|
||||
});
|
||||
```
|
||||
|
||||
## HIGH Security Issues
|
||||
|
||||
These should be fixed before shipping.
|
||||
|
||||
### 7. Missing Input Validation
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: No validation
|
||||
app.post('/api/users', async (req, res) => {
|
||||
await createUser(req.body);
|
||||
res.json({ success: true });
|
||||
});
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Validate input
|
||||
app.post('/api/users', async (req, res) => {
|
||||
const schema = z.object({
|
||||
email: z.string().email(),
|
||||
age: z.number().min(18).max(120)
|
||||
});
|
||||
|
||||
try {
|
||||
const data = schema.parse(req.body);
|
||||
await createUser(data);
|
||||
res.json({ success: true });
|
||||
} catch (error) {
|
||||
res.status(400).json({ error: error.errors });
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
### 8. Sensitive Data Exposure
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: Exposing sensitive fields
|
||||
const user = await getUser(userId);
|
||||
res.json(user); // Contains password hash, SSN, etc.
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Select only safe fields
|
||||
const user = await getUser(userId);
|
||||
res.json({
|
||||
id: user.id,
|
||||
name: user.name,
|
||||
email: user.email
|
||||
// Don't include: password, ssn, etc.
|
||||
});
|
||||
```
|
||||
|
||||
### 9. Missing Rate Limiting
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: No rate limit
|
||||
app.post('/api/login', async (req, res) => {
|
||||
const user = await authenticate(req.body);
|
||||
res.json({ token: user.token });
|
||||
});
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Rate limit sensitive endpoints
|
||||
app.post('/api/login',
|
||||
rateLimit({ max: 5, windowMs: 60000 }), // 5 attempts per minute
|
||||
async (req, res) => {
|
||||
const user = await authenticate(req.body);
|
||||
res.json({ token: user.token });
|
||||
}
|
||||
);
|
||||
```
|
||||
|
||||
### 10. Insecure Randomness
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: Using Math.random() for tokens
|
||||
const token = Math.random().toString(36);
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Cryptographically secure random
|
||||
const crypto = require('crypto');
|
||||
const token = crypto.randomBytes(32).toString('hex');
|
||||
```
|
||||
|
||||
## MEDIUM Security Issues
|
||||
|
||||
These improve security but aren't critical.
|
||||
|
||||
### 11. Missing HTTPS
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: HTTP only
|
||||
app.listen(3000);
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Force HTTPS in production
|
||||
if (process.env.NODE_ENV === 'production') {
|
||||
app.use((req, res, next) => {
|
||||
if (req.header('x-forwarded-proto') !== 'https') {
|
||||
res.redirect(`https://${req.header('host')}${req.url}`);
|
||||
} else {
|
||||
next();
|
||||
}
|
||||
});
|
||||
}
|
||||
```
|
||||
|
||||
### 12. Missing Security Headers
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: No security headers
|
||||
app.use(express.json());
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Add security headers
|
||||
app.use(helmet()); // Adds multiple security headers
|
||||
```
|
||||
|
||||
### 13. Verbose Error Messages
|
||||
|
||||
**Look for:**
|
||||
```javascript
|
||||
// ❌ BAD: Exposing stack traces
|
||||
app.use((error, req, res, next) => {
|
||||
res.status(500).json({ error: error.stack });
|
||||
});
|
||||
```
|
||||
|
||||
**Fix with:**
|
||||
```javascript
|
||||
// ✅ GOOD: Generic error message
|
||||
app.use((error, req, res, next) => {
|
||||
console.error(error); // Log internally
|
||||
res.status(500).json({ error: 'Internal server error' });
|
||||
});
|
||||
```
|
||||
|
||||
## Review Process
|
||||
|
||||
### Step 1: Automated Checks
|
||||
|
||||
Run security scanners:
|
||||
```bash
|
||||
# Check for known vulnerabilities
|
||||
npm audit
|
||||
|
||||
# Static analysis
|
||||
npx eslint-plugin-security
|
||||
|
||||
# Secrets detection
|
||||
git secrets --scan
|
||||
```
|
||||
|
||||
### Step 2: Manual Review
|
||||
|
||||
Use this checklist to review:
|
||||
- [ ] SQL injection vulnerabilities
|
||||
- [ ] XSS vulnerabilities
|
||||
- [ ] Authentication bypasses
|
||||
- [ ] Authorization gaps
|
||||
- [ ] Hardcoded secrets
|
||||
- [ ] IDOR vulnerabilities
|
||||
- [ ] Missing input validation
|
||||
- [ ] Sensitive data exposure
|
||||
- [ ] Missing rate limiting
|
||||
- [ ] Insecure randomness
|
||||
|
||||
### Step 3: Document Findings
|
||||
|
||||
For each issue found:
|
||||
```markdown
|
||||
**Issue #1: SQL Injection Vulnerability**
|
||||
- **Location:** api/users/route.ts:45
|
||||
- **Severity:** CRITICAL
|
||||
- **Problem:** User input concatenated into query
|
||||
- **Code:**
|
||||
```typescript
|
||||
const query = `SELECT * FROM users WHERE id = '${userId}'`
|
||||
```
|
||||
- **Fix:** Use parameterized queries with Prisma
|
||||
```
|
||||
|
||||
## Remember
|
||||
|
||||
**Security issues are CRITICAL. They MUST be fixed.**
|
||||
|
||||
Don't let security issues slide because "we'll fix it later." Fix them now.
|
||||
|
|
@ -1,184 +0,0 @@
|
|||
# Test-Driven Development (TDD) Pattern
|
||||
|
||||
**Red → Green → Refactor**
|
||||
|
||||
Write tests first, make them pass, then refactor.
|
||||
|
||||
## Why TDD?
|
||||
|
||||
1. **Design quality:** Writing tests first forces good API design
|
||||
2. **Coverage:** 90%+ coverage by default
|
||||
3. **Confidence:** Refactor without fear
|
||||
4. **Documentation:** Tests document expected behavior
|
||||
|
||||
## TDD Cycle
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────┐
|
||||
│ 1. RED: Write a failing test │
|
||||
│ - Test what the code SHOULD do │
|
||||
│ - Test fails (code doesn't exist yet) │
|
||||
└─────────────────────────────────────────────┘
|
||||
↓
|
||||
┌─────────────────────────────────────────────┐
|
||||
│ 2. GREEN: Write minimal code to pass │
|
||||
│ - Simplest implementation that works │
|
||||
│ - Test passes │
|
||||
└─────────────────────────────────────────────┘
|
||||
↓
|
||||
┌─────────────────────────────────────────────┐
|
||||
│ 3. REFACTOR: Clean up code │
|
||||
│ - Improve design │
|
||||
│ - Remove duplication │
|
||||
│ - Tests still pass │
|
||||
└─────────────────────────────────────────────┘
|
||||
↓
|
||||
(repeat for next feature)
|
||||
```
|
||||
|
||||
## Implementation Order
|
||||
|
||||
### Greenfield (New Code)
|
||||
1. Write test for happy path
|
||||
2. Write test for error cases
|
||||
3. Write test for edge cases
|
||||
4. Implement to make all tests pass
|
||||
5. Refactor
|
||||
|
||||
### Brownfield (Existing Code)
|
||||
1. Understand existing behavior
|
||||
2. Add tests for current behavior (characterization tests)
|
||||
3. Write test for new behavior
|
||||
4. Implement new behavior
|
||||
5. Refactor
|
||||
|
||||
## Test Quality Standards
|
||||
|
||||
### Good Test Characteristics
|
||||
- ✅ **Isolated:** Each test independent
|
||||
- ✅ **Fast:** Runs in milliseconds
|
||||
- ✅ **Clear:** Obvious what it tests
|
||||
- ✅ **Focused:** One behavior per test
|
||||
- ✅ **Stable:** No flakiness
|
||||
|
||||
### Test Structure (AAA Pattern)
|
||||
```typescript
|
||||
test('should calculate total price with tax', () => {
|
||||
// Arrange: Set up test data
|
||||
const cart = new ShoppingCart();
|
||||
cart.addItem({ price: 100, quantity: 2 });
|
||||
|
||||
// Act: Execute the behavior
|
||||
const total = cart.getTotalWithTax(0.08);
|
||||
|
||||
// Assert: Verify the result
|
||||
expect(total).toBe(216); // (100 * 2) * 1.08
|
||||
});
|
||||
```
|
||||
|
||||
## What to Test
|
||||
|
||||
### Must Test (Critical)
|
||||
- Business logic
|
||||
- API endpoints
|
||||
- Data transformations
|
||||
- Error handling
|
||||
- Authorization checks
|
||||
- Edge cases
|
||||
|
||||
### Nice to Test (Important)
|
||||
- UI components
|
||||
- Integration flows
|
||||
- Performance benchmarks
|
||||
|
||||
### Don't Waste Time Testing
|
||||
- Third-party libraries (already tested)
|
||||
- Framework internals (already tested)
|
||||
- Trivial getters/setters
|
||||
- Generated code
|
||||
|
||||
## Coverage Target
|
||||
|
||||
**Minimum:** 90% line coverage
|
||||
**Ideal:** 95%+ with meaningful tests
|
||||
|
||||
**Coverage ≠ Quality**
|
||||
- 100% coverage with bad tests is worthless
|
||||
- 90% coverage with good tests is excellent
|
||||
|
||||
## TDD Anti-Patterns
|
||||
|
||||
**Avoid these:**
|
||||
- ❌ Writing tests after code (test-after)
|
||||
- ❌ Testing implementation details
|
||||
- ❌ Tests that test nothing
|
||||
- ❌ Brittle tests (break with refactoring)
|
||||
- ❌ Slow tests (> 1 second)
|
||||
|
||||
## Example: TDD for API Endpoint
|
||||
|
||||
```typescript
|
||||
// Step 1: RED - Write failing test
|
||||
describe('POST /api/orders', () => {
|
||||
test('should create order and return 201', async () => {
|
||||
const response = await request(app)
|
||||
.post('/api/orders')
|
||||
.send({ items: [{ id: 1, qty: 2 }] })
|
||||
.expect(201);
|
||||
|
||||
expect(response.body).toHaveProperty('orderId');
|
||||
});
|
||||
});
|
||||
|
||||
// Test fails (endpoint doesn't exist yet)
|
||||
|
||||
// Step 2: GREEN - Minimal implementation
|
||||
app.post('/api/orders', async (req, res) => {
|
||||
const orderId = await createOrder(req.body);
|
||||
res.status(201).json({ orderId });
|
||||
});
|
||||
|
||||
// Test passes
|
||||
|
||||
// Step 3: REFACTOR - Add validation, error handling
|
||||
app.post('/api/orders', async (req, res) => {
|
||||
try {
|
||||
// Input validation
|
||||
const schema = z.object({
|
||||
items: z.array(z.object({
|
||||
id: z.number(),
|
||||
qty: z.number().min(1)
|
||||
}))
|
||||
});
|
||||
|
||||
const data = schema.parse(req.body);
|
||||
|
||||
// Business logic
|
||||
const orderId = await createOrder(data);
|
||||
|
||||
res.status(201).json({ orderId });
|
||||
} catch (error) {
|
||||
if (error instanceof z.ZodError) {
|
||||
res.status(400).json({ error: error.errors });
|
||||
} else {
|
||||
res.status(500).json({ error: 'Internal error' });
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
// All tests still pass
|
||||
```
|
||||
|
||||
## TDD in Practice
|
||||
|
||||
**Start here:**
|
||||
1. Write one test for the simplest case
|
||||
2. Make it pass with simplest code
|
||||
3. Write next test for slightly more complex case
|
||||
4. Refactor when you see duplication
|
||||
5. Repeat
|
||||
|
||||
**Don't:**
|
||||
- Write all tests first (too much work)
|
||||
- Write production code without failing test
|
||||
- Skip refactoring step
|
||||
|
|
@ -1,198 +0,0 @@
|
|||
# Independent Verification Pattern
|
||||
|
||||
**Philosophy:** Trust but verify. Fresh eyes catch what familiarity misses.
|
||||
|
||||
## Core Principle
|
||||
|
||||
The person who built something should NOT validate their own work.
|
||||
|
||||
**Why?**
|
||||
- Confirmation bias (see what you expect to see)
|
||||
- Blind spots (familiar with your own code)
|
||||
- Fatigue (validated while building, miss issues)
|
||||
|
||||
## Verification Requirements
|
||||
|
||||
### Fresh Context
|
||||
Inspector agent has:
|
||||
- ✅ No knowledge of what Builder did
|
||||
- ✅ No preconceptions about implementation
|
||||
- ✅ Only the story requirements as context
|
||||
|
||||
**This means:**
|
||||
- Run all checks yourself
|
||||
- Don't trust any claims
|
||||
- Start from scratch
|
||||
|
||||
### What to Verify
|
||||
|
||||
**1. Files Exist**
|
||||
```bash
|
||||
# For each file mentioned in story tasks
|
||||
ls -la {{file_path}}
|
||||
# FAIL if file missing or empty
|
||||
```
|
||||
|
||||
**2. File Contents**
|
||||
- Open each file
|
||||
- Check it has actual code (not just TODO/stub)
|
||||
- Verify it matches story requirements
|
||||
|
||||
**3. Tests Exist**
|
||||
```bash
|
||||
find . -name "*.test.ts" -o -name "__tests__"
|
||||
# FAIL if no tests found for new code
|
||||
```
|
||||
|
||||
**4. Quality Checks Pass**
|
||||
|
||||
Run these yourself. Don't trust claims.
|
||||
|
||||
```bash
|
||||
# Type check
|
||||
npm run type-check
|
||||
# FAIL if any errors
|
||||
|
||||
# Linter
|
||||
npm run lint
|
||||
# FAIL if any errors or warnings
|
||||
|
||||
# Build
|
||||
npm run build
|
||||
# FAIL if build fails
|
||||
|
||||
# Tests
|
||||
npm test -- {{story_specific_tests}}
|
||||
# FAIL if any tests fail
|
||||
# FAIL if tests are skipped
|
||||
# FAIL if coverage < 90%
|
||||
```
|
||||
|
||||
**5. Git Status**
|
||||
```bash
|
||||
git status
|
||||
# Check for uncommitted files
|
||||
# List what was changed
|
||||
```
|
||||
|
||||
## Verification Verdict
|
||||
|
||||
### PASS Criteria
|
||||
All of these must be true:
|
||||
- [ ] All story files exist and have content
|
||||
- [ ] Type check returns 0 errors
|
||||
- [ ] Linter returns 0 errors/warnings
|
||||
- [ ] Build succeeds
|
||||
- [ ] Tests run and pass (not skipped)
|
||||
- [ ] Test coverage >= 90%
|
||||
- [ ] Git status is clean or has expected changes
|
||||
|
||||
**If ANY checkbox is unchecked → FAIL verdict**
|
||||
|
||||
### PASS Output
|
||||
|
||||
```markdown
|
||||
✅ VALIDATION PASSED
|
||||
|
||||
Evidence:
|
||||
- Files verified: [list files checked]
|
||||
- Type check: PASS (0 errors)
|
||||
- Linter: PASS (0 warnings)
|
||||
- Build: PASS
|
||||
- Tests: 45/45 passing (95% coverage)
|
||||
- Git: 12 files modified, 3 new files
|
||||
|
||||
Ready for code review.
|
||||
```
|
||||
|
||||
### FAIL Output
|
||||
|
||||
```markdown
|
||||
❌ VALIDATION FAILED
|
||||
|
||||
Failures:
|
||||
1. File missing: app/api/occupant/agreement/route.ts
|
||||
2. Type check: 3 errors in lib/api/auth.ts
|
||||
3. Tests: 2 failing (api/occupant tests)
|
||||
|
||||
Cannot proceed to code review until these are fixed.
|
||||
```
|
||||
|
||||
## Why This Works
|
||||
|
||||
**Verification is NOT rubber-stamping.**
|
||||
|
||||
Inspector's job is to find the truth:
|
||||
- Did the work actually get done?
|
||||
- Do the quality checks actually pass?
|
||||
- Are the files actually there?
|
||||
|
||||
If something is wrong, say so with evidence.
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**Don't do this:**
|
||||
- ❌ Take Builder's word for anything
|
||||
- ❌ Skip verification steps
|
||||
- ❌ Assume tests pass without running them
|
||||
- ❌ Give PASS verdict if ANY check fails
|
||||
|
||||
**Do this instead:**
|
||||
- ✅ Run all checks yourself
|
||||
- ✅ Provide specific evidence
|
||||
- ✅ Give honest verdict
|
||||
- ✅ FAIL fast if issues found
|
||||
|
||||
## Example: Good Verification
|
||||
|
||||
```markdown
|
||||
## Verification Results
|
||||
|
||||
**File Checks:**
|
||||
✅ lib/billing/payment-processor.ts (1,234 lines)
|
||||
✅ lib/billing/__tests__/payment-processor.test.ts (456 lines)
|
||||
✅ lib/billing/worker.ts (modified)
|
||||
|
||||
**Quality Checks:**
|
||||
✅ Type check: PASS (0 errors)
|
||||
✅ Linter: PASS (0 warnings)
|
||||
✅ Build: PASS (2.3s)
|
||||
|
||||
**Tests:**
|
||||
✅ 48/48 passing
|
||||
✅ 96% coverage
|
||||
✅ 0 skipped
|
||||
|
||||
**Git Status:**
|
||||
- Modified: 1 file
|
||||
- Created: 2 files
|
||||
- Total: 3 files changed
|
||||
|
||||
**Verdict:** PASS
|
||||
|
||||
Ready for code review.
|
||||
```
|
||||
|
||||
## Example: Bad Verification (Don't Do This)
|
||||
|
||||
```markdown
|
||||
## Verification Results
|
||||
|
||||
Everything looks good! ✅
|
||||
|
||||
Builder said tests pass and I believe them.
|
||||
|
||||
**Verdict:** PASS
|
||||
```
|
||||
|
||||
**What's wrong:**
|
||||
- ❌ No evidence
|
||||
- ❌ Trusted claims without verification
|
||||
- ❌ Didn't run checks
|
||||
- ❌ Rubber-stamped
|
||||
|
||||
## Remember
|
||||
|
||||
**You are the INSPECTOR. Your job is to find the truth.**
|
||||
|
||||
If you give a PASS verdict and later find issues, that's on you.
|
||||
|
|
@ -1,237 +0,0 @@
|
|||
# Agent Limitations in Batch Mode
|
||||
|
||||
**CRITICAL:** Agents running in batch-stories have specific limitations. Understanding these prevents wasted time and sets correct expectations.
|
||||
|
||||
---
|
||||
|
||||
## Core Limitations
|
||||
|
||||
### ❌ Agents CANNOT Invoke Other Workflows
|
||||
|
||||
**What this means:**
|
||||
- Agents cannot run `/create-story-with-gap-analysis`
|
||||
- Agents cannot execute `/` slash commands (those are for user CLI)
|
||||
- Agents cannot call other BMAD workflows mid-execution
|
||||
|
||||
**Why:**
|
||||
- Slash commands require user terminal context
|
||||
- Workflow invocation requires special tool access
|
||||
- Batch agents are isolated execution contexts
|
||||
|
||||
**Implication:**
|
||||
- Story creation MUST happen before batch execution
|
||||
- If stories are incomplete, batch will skip them
|
||||
- No way to "fix" stories during batch
|
||||
|
||||
---
|
||||
|
||||
### ❌ Agents CANNOT Prompt User Interactively
|
||||
|
||||
**What this means:**
|
||||
- Batch runs autonomously, no user interaction
|
||||
- `<ask>` tags are auto-answered with defaults
|
||||
- No way to clarify ambiguous requirements mid-batch
|
||||
|
||||
**Why:**
|
||||
- Batch is designed for unattended execution
|
||||
- User may not be present during execution
|
||||
- Prompts would break parallel execution
|
||||
|
||||
**Implication:**
|
||||
- All requirements must be clear in story file
|
||||
- Optional steps are skipped
|
||||
- Ambiguous stories will halt or skip
|
||||
|
||||
---
|
||||
|
||||
### ❌ Agents CANNOT Generate Missing BMAD Sections
|
||||
|
||||
**What this means:**
|
||||
- If story has <12 sections, agent halts
|
||||
- If story has 0 tasks, agent halts
|
||||
- Agent will NOT try to "fix" the story format
|
||||
|
||||
**Why:**
|
||||
- Story format is structural, not implementation
|
||||
- Generating sections requires context agent doesn't have
|
||||
- Gap analysis requires codebase scanning beyond agent scope
|
||||
|
||||
**Implication:**
|
||||
- All stories must be properly formatted BEFORE batch
|
||||
- Run validation: `./scripts/validate-bmad-format.sh`
|
||||
- Regenerate incomplete stories manually
|
||||
|
||||
---
|
||||
|
||||
## What Agents CAN Do
|
||||
|
||||
### ✅ Execute Clear, Well-Defined Tasks
|
||||
|
||||
**Works well:**
|
||||
- Stories with 10-30 specific tasks
|
||||
- Clear acceptance criteria
|
||||
- Existing code to modify
|
||||
- Well-defined scope
|
||||
|
||||
### ✅ Make Implementation Decisions
|
||||
|
||||
**Works well:**
|
||||
- Choose between valid approaches
|
||||
- Apply patterns from codebase
|
||||
- Fix bugs based on error messages
|
||||
- Optimize existing code
|
||||
|
||||
### ✅ Run Tests and Verify
|
||||
|
||||
**Works well:**
|
||||
- Execute test suites
|
||||
- Measure coverage
|
||||
- Fix failing tests
|
||||
- Validate implementations
|
||||
|
||||
---
|
||||
|
||||
## Pre-Batch Validation Checklist
|
||||
|
||||
**Before running /batch-stories, verify ALL selected stories:**
|
||||
|
||||
```bash
|
||||
# 1. Check story files exist
|
||||
for story in $(grep "ready-for-dev" docs/sprint-artifacts/sprint-status.yaml | awk '{print $1}' | sed 's/://'); do
|
||||
[ -f "docs/sprint-artifacts/story-$story.md" ] || echo "❌ Missing: $story"
|
||||
done
|
||||
|
||||
# 2. Check all have 12 BMAD sections
|
||||
for file in docs/sprint-artifacts/story-*.md; do
|
||||
sections=$(grep -c "^## " "$file")
|
||||
if [ "$sections" -lt 12 ]; then
|
||||
echo "❌ Incomplete: $file ($sections/12 sections)"
|
||||
fi
|
||||
done
|
||||
|
||||
# 3. Check all have tasks
|
||||
for file in docs/sprint-artifacts/story-*.md; do
|
||||
tasks=$(grep -c "^- \[ \]" "$file")
|
||||
if [ "$tasks" -eq 0 ]; then
|
||||
echo "❌ No tasks: $file"
|
||||
fi
|
||||
done
|
||||
```
|
||||
|
||||
**If any checks fail:**
|
||||
1. Regenerate those stories: `/create-story-with-gap-analysis`
|
||||
2. Validate again
|
||||
3. THEN run batch-stories
|
||||
|
||||
---
|
||||
|
||||
## Error Messages Explained
|
||||
|
||||
### "EARLY BAILOUT: No Tasks Found"
|
||||
|
||||
**What it means:** Story file has 0 unchecked tasks
|
||||
**Is this a bug?** ❌ NO - This is correct validation
|
||||
**What to do:**
|
||||
- If story is skeleton: Regenerate with /create-story-with-gap-analysis
|
||||
- If story is complete: Mark as "done" in sprint-status.yaml
|
||||
- If story needs work: Add tasks to story file
|
||||
|
||||
### "EARLY BAILOUT: Invalid Story Format"
|
||||
|
||||
**What it means:** Story missing required sections (Tasks, AC, etc.)
|
||||
**Is this a bug?** ❌ NO - This is correct validation
|
||||
**What to do:**
|
||||
- Regenerate with /create-story-with-gap-analysis
|
||||
- Do NOT try to manually add sections (skip gap analysis)
|
||||
- Do NOT launch batch with incomplete stories
|
||||
|
||||
### "Story Creation Failed" or "Skipped"
|
||||
|
||||
**What it means:** Agent tried to create story but couldn't
|
||||
**Is this a bug?** ❌ NO - Agents can't create stories
|
||||
**What to do:**
|
||||
- Exit batch-stories
|
||||
- Manually run /create-story-with-gap-analysis
|
||||
- Re-run batch after story created
|
||||
|
||||
---
|
||||
|
||||
## Best Practices
|
||||
|
||||
### ✅ DO: Generate All Stories Before Batch
|
||||
|
||||
**Workflow:**
|
||||
```
|
||||
1. Plan epic → Identify stories → Create list
|
||||
2. Generate stories: /create-story-with-gap-analysis (1-2 days)
|
||||
3. Validate stories: ./scripts/validate-all-stories.sh
|
||||
4. Execute stories: /batch-stories (parallel, fast)
|
||||
```
|
||||
|
||||
### ✅ DO: Use Small Batches for Mixed Complexity
|
||||
|
||||
**Workflow:**
|
||||
```
|
||||
1. Group by complexity (micro, standard, complex)
|
||||
2. Batch micro stories (quick wins)
|
||||
3. Batch standard stories
|
||||
4. Execute complex stories individually
|
||||
```
|
||||
|
||||
### ❌ DON'T: Try to Batch Regenerate
|
||||
|
||||
**Why it fails:**
|
||||
```
|
||||
1. Create 20 skeleton files with just widget lists
|
||||
2. Run /batch-stories
|
||||
3. Expect agents to regenerate them
|
||||
→ FAILS: Agents can't invoke /create-story workflow
|
||||
```
|
||||
|
||||
### ❌ DON'T: Mix Skeletons with Proper Stories
|
||||
|
||||
**Why it fails:**
|
||||
```
|
||||
1. 10 proper BMAD stories + 10 skeletons
|
||||
2. Run /batch-stories
|
||||
3. Expect batch to handle both
|
||||
→ RESULT: 10 execute, 10 skipped (confusing)
|
||||
```
|
||||
|
||||
### ❌ DON'T: Assume Agents Will "Figure It Out"
|
||||
|
||||
**Why it fails:**
|
||||
```
|
||||
1. Launch batch with unclear stories
|
||||
2. Hope agents will regenerate/fix/create
|
||||
→ RESULT: Agents halt correctly, nothing happens
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Summary
|
||||
|
||||
**The Golden Rule:**
|
||||
> **Batch-super-dev is for EXECUTION, not CREATION.**
|
||||
>
|
||||
> Story creation is interactive and requires user input.
|
||||
> Always create/regenerate stories BEFORE batch execution.
|
||||
|
||||
**Remember:**
|
||||
- Agents have limitations (documented above)
|
||||
- These are features, not bugs
|
||||
- Workflows correctly validate and halt
|
||||
- User must prepare stories properly first
|
||||
|
||||
**Success Formula:**
|
||||
```
|
||||
Proper Story Generation (1-2 days manual work)
|
||||
↓
|
||||
Validation (5 minutes automated)
|
||||
↓
|
||||
Batch Execution (4-8 hours parallel autonomous)
|
||||
↓
|
||||
Review & Merge (1-2 hours)
|
||||
```
|
||||
|
||||
Don't skip the preparation steps!
|
||||
|
|
@ -1,742 +0,0 @@
|
|||
# Batch Super-Dev Workflow
|
||||
|
||||
**Version:** 1.3.1 (Agent Limitations Documentation)
|
||||
**Created:** 2026-01-06
|
||||
**Updated:** 2026-01-08
|
||||
**Author:** BMad
|
||||
|
||||
---
|
||||
|
||||
## Critical Prerequisites
|
||||
|
||||
> **⚠️ IMPORTANT: Read before running batch-stories!**
|
||||
|
||||
**BEFORE running batch-stories:**
|
||||
|
||||
### ✅ 1. All stories must be properly generated
|
||||
|
||||
- Run: `/create-story-with-gap-analysis` for each story
|
||||
- Do NOT create skeleton/template files manually
|
||||
- Validation: `./scripts/validate-all-stories.sh`
|
||||
|
||||
**Why:** Agents CANNOT invoke `/create-story-with-gap-analysis` workflow. Story generation requires user interaction and context-heavy codebase scanning.
|
||||
|
||||
### ✅ 2. All stories must have 12 BMAD sections
|
||||
|
||||
Required sections:
|
||||
1. Business Context
|
||||
2. Current State
|
||||
3. Acceptance Criteria
|
||||
4. Tasks/Subtasks
|
||||
5. Technical Requirements
|
||||
6. Architecture Compliance
|
||||
7. Testing Requirements
|
||||
8. Dev Agent Guardrails
|
||||
9. Definition of Done
|
||||
10. References
|
||||
11. Dev Agent Record
|
||||
12. Change Log
|
||||
|
||||
### ✅ 3. All stories must have tasks
|
||||
|
||||
- At least 3 unchecked tasks (minimum for valid story)
|
||||
- Zero-task stories will be skipped
|
||||
- Validation: `grep -c "^- \[ \]" story-file.md`
|
||||
|
||||
### Common Failure Mode: Batch Regeneration
|
||||
|
||||
**What you might try:**
|
||||
```
|
||||
1. Create 20 skeleton story files (just headers + widget lists)
|
||||
2. Run /batch-stories
|
||||
3. Expect agents to regenerate them
|
||||
```
|
||||
|
||||
**What happens:**
|
||||
- Agents identify stories are incomplete
|
||||
- Agents correctly halt per story-full-pipeline validation
|
||||
- Stories get skipped (not regenerated)
|
||||
- You waste time
|
||||
|
||||
**Solution:**
|
||||
```bash
|
||||
# 1. Generate all stories (1-2 days, manual)
|
||||
/create-story-with-gap-analysis # For each story
|
||||
|
||||
# 2. Validate (30 seconds, automated)
|
||||
./scripts/validate-all-stories.sh
|
||||
|
||||
# 3. Execute (4-8 hours, parallel autonomous)
|
||||
/batch-stories
|
||||
```
|
||||
|
||||
See: `AGENT-LIMITATIONS.md` for full documentation on what agents can and cannot do.
|
||||
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
Interactive batch workflow for processing multiple `ready-for-dev` stories sequentially or in parallel using the story-full-pipeline with full quality gates.
|
||||
|
||||
**New in v1.2.0:** Smart Story Validation & Auto-Creation - validates story files, creates missing stories, regenerates invalid ones automatically.
|
||||
**New in v1.1.0:** Smart Story Reconciliation - automatically verifies story accuracy after each implementation.
|
||||
|
||||
---
|
||||
|
||||
## Features
|
||||
|
||||
### Core Capabilities
|
||||
|
||||
1. **🆕 Smart Story Validation & Auto-Creation** (NEW v1.2.0)
|
||||
- Validates all selected stories before processing
|
||||
- Checks for 12 required BMAD sections
|
||||
- Validates content quality (Current State ≥100 words, gap analysis present)
|
||||
- **Auto-creates missing story files** with codebase gap analysis
|
||||
- **Auto-regenerates invalid stories** (incomplete or stub files)
|
||||
- Interactive prompts (or fully automated with settings)
|
||||
- Backups existing files before regeneration
|
||||
|
||||
2. **Interactive Story Selection**
|
||||
- Lists all `ready-for-dev` stories from sprint-status.yaml
|
||||
- Shows story status icons (✅ file exists, ❌ missing, 🔄 needs status update)
|
||||
- Supports flexible selection syntax: single, ranges, comma-separated, "all"
|
||||
- Optional epic filtering (process only Epic 3 stories, etc.)
|
||||
|
||||
3. **Execution Modes**
|
||||
- **Sequential:** Process stories one-by-one in current session (easier monitoring)
|
||||
- **Parallel:** Spawn Task agents to process stories concurrently (faster, autonomous)
|
||||
- Configurable parallelism: 2, 4, or all stories at once
|
||||
|
||||
4. **Full Quality Gates** (from story-full-pipeline)
|
||||
- Pre-gap analysis (validate story completeness)
|
||||
- Test-driven implementation
|
||||
- Post-validation (verify requirements met)
|
||||
- Multi-agent code review (4 specialized agents)
|
||||
- Targeted git commits
|
||||
- Definition of done verification
|
||||
|
||||
5. **Smart Story Reconciliation** (v1.1.0)
|
||||
- Automatically checks story accuracy after implementation
|
||||
- Verifies Acceptance Criteria checkboxes match Dev Agent Record
|
||||
- Verifies Tasks/Subtasks checkboxes match implementation
|
||||
- Verifies Definition of Done completion
|
||||
- Updates story status (done/review/in-progress) based on actual completion
|
||||
- Synchronizes sprint-status.yaml with story file status
|
||||
- **Prevents "done" stories with unchecked items** ✅
|
||||
|
||||
---
|
||||
|
||||
## Smart Story Validation & Auto-Creation (NEW v1.2.0)
|
||||
|
||||
### What It Does
|
||||
|
||||
Before processing any selected stories, the workflow automatically validates each story file:
|
||||
|
||||
1. **File Existence Check** - Verifies story file exists (tries multiple naming patterns)
|
||||
2. **Section Validation** - Ensures all 12 BMAD sections are present
|
||||
3. **Content Quality Check** - Validates sufficient content (not stubs):
|
||||
- Current State: ≥100 words
|
||||
- Gap analysis markers: ✅/❌ present
|
||||
- Acceptance Criteria: ≥3 items
|
||||
- Tasks: ≥5 items
|
||||
4. **Auto-Creation** - Creates missing stories with codebase gap analysis
|
||||
5. **Auto-Regeneration** - Regenerates invalid/incomplete story files
|
||||
|
||||
### Why This Matters
|
||||
|
||||
**Problem this solves:**
|
||||
|
||||
Before v1.2.0:
|
||||
```
|
||||
User: "Process stories 3.1, 3.2, 3.3, 3.4"
|
||||
Workflow: "Story 3.3 file missing - please create it first"
|
||||
User: Ctrl+C → /create-story → /batch-stories again
|
||||
```
|
||||
|
||||
After v1.2.0:
|
||||
```
|
||||
User: "Process stories 3.1, 3.2, 3.3, 3.4"
|
||||
Workflow: "Story 3.3 missing - create it? (yes)"
|
||||
User: "yes"
|
||||
Workflow: Creates story 3.3 with gap analysis → Processes all 4 stories
|
||||
```
|
||||
|
||||
**Prevents:**
|
||||
- Incomplete story files being processed
|
||||
- Missing gap analysis
|
||||
- Stub files (< 100 words)
|
||||
- Manual back-and-forth workflow interruptions
|
||||
|
||||
### Validation Process
|
||||
|
||||
```
|
||||
Load Sprint Status
|
||||
↓
|
||||
Display Available Stories
|
||||
↓
|
||||
🆕 VALIDATE EACH STORY ← NEW STEP 2.5
|
||||
↓
|
||||
For each story:
|
||||
┌─ File missing? → Prompt: "Create story with gap analysis?"
|
||||
│ └─ yes → /create-story-with-gap-analysis → ✅ Created
|
||||
│ └─ no → ⏭️ Skip story
|
||||
│
|
||||
┌─ File exists but invalid?
|
||||
│ (< 12 sections OR < 100 words OR no gap analysis)
|
||||
│ → Prompt: "Regenerate story with codebase scan?"
|
||||
│ └─ yes → Backup original → /create-story-with-gap-analysis → ✅ Regenerated
|
||||
│ └─ no → ⏭️ Skip story
|
||||
│
|
||||
└─ File valid? → ✅ Ready to process
|
||||
↓
|
||||
Remove skipped stories
|
||||
↓
|
||||
Display Validated Stories
|
||||
↓
|
||||
User Selection (only validated stories)
|
||||
↓
|
||||
Process Stories
|
||||
```
|
||||
|
||||
### Configuration Options
|
||||
|
||||
**In workflow.yaml:**
|
||||
|
||||
```yaml
|
||||
# Story validation settings (NEW in v1.2.0)
|
||||
validation:
|
||||
enabled: true # Enable/disable validation
|
||||
auto_create_missing: false # Auto-create without prompting (use cautiously)
|
||||
auto_regenerate_invalid: false # Auto-regenerate without prompting (use cautiously)
|
||||
min_sections: 12 # BMAD format requires all 12
|
||||
min_current_state_words: 100 # Minimum content length
|
||||
require_gap_analysis: true # Must have ✅/❌ markers
|
||||
backup_before_regenerate: true # Create .backup before regenerating
|
||||
```
|
||||
|
||||
**Interactive Mode (default):**
|
||||
- Prompts before creating/regenerating each story
|
||||
- Safe, user retains control
|
||||
- Recommended for most workflows
|
||||
|
||||
**Fully Automated Mode:**
|
||||
```yaml
|
||||
validation:
|
||||
auto_create_missing: true
|
||||
auto_regenerate_invalid: true
|
||||
```
|
||||
- Creates/regenerates without prompting
|
||||
- Faster for large batches
|
||||
- Use with caution (may overwrite valid stories)
|
||||
|
||||
### Example Session (v1.2.0)
|
||||
|
||||
```
|
||||
🤖 /batch-stories
|
||||
|
||||
📊 Ready-for-Dev Stories (5)
|
||||
|
||||
1. **3-1-vehicle-card** ✅
|
||||
→ Story file exists
|
||||
2. **3-2-vehicle-search** ✅
|
||||
→ Story file exists
|
||||
3. **3-3-vehicle-compare** ❌
|
||||
→ Story file missing
|
||||
4. **3-4-vehicle-details** ⚠️
|
||||
→ File exists (7/12 sections, stub content)
|
||||
5. **3-5-vehicle-history** ✅
|
||||
→ Story file exists
|
||||
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔍 VALIDATING STORY FILES
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
Story 3-1-vehicle-card: ✅ Valid (12/12 sections, gap analysis present)
|
||||
|
||||
Story 3-2-vehicle-search: ✅ Valid (12/12 sections, gap analysis present)
|
||||
|
||||
📝 Story 3-3-vehicle-compare: File missing
|
||||
|
||||
Create story file with gap analysis? (yes/no): yes
|
||||
|
||||
Creating story 3-3-vehicle-compare with codebase gap analysis...
|
||||
→ Scanning apps/frontend/web for existing components...
|
||||
→ Scanning packages/widgets for related widgets...
|
||||
→ Analyzing gap: 3 files exist, 5 need creation
|
||||
|
||||
✅ Story 3-3-vehicle-compare created successfully (12/12 sections)
|
||||
|
||||
⚠️ Story 3-4-vehicle-details: File incomplete or invalid
|
||||
- Sections: 7/12
|
||||
- Current State: stub (32 words, expected ≥100)
|
||||
- Gap analysis: missing
|
||||
|
||||
Regenerate story with codebase scan? (yes/no): yes
|
||||
|
||||
Regenerating story 3-4-vehicle-details with gap analysis...
|
||||
→ Backing up to docs/sprint-artifacts/3-4-vehicle-details.md.backup
|
||||
→ Scanning codebase for VehicleDetails implementation...
|
||||
→ Found: packages/widgets/vehicle-details-v2 (partial)
|
||||
→ Analyzing gap: 8 files exist, 3 need creation
|
||||
|
||||
✅ Story 3-4-vehicle-details regenerated successfully (12/12 sections)
|
||||
|
||||
Story 3-5-vehicle-history: ✅ Valid (12/12 sections, gap analysis present)
|
||||
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ Story Validation Complete
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
**Validated:** 5 stories ready to process
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
Select stories to process: all
|
||||
|
||||
[Proceeds to process all 5 validated stories...]
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Smart Story Reconciliation (v1.1.0)
|
||||
|
||||
### What It Does
|
||||
|
||||
After each story completes, the workflow automatically:
|
||||
|
||||
1. **Loads Dev Agent Record** - Reads implementation summary, file list, test results
|
||||
2. **Analyzes Acceptance Criteria** - Checks which ACs have evidence of completion
|
||||
3. **Analyzes Tasks** - Verifies which tasks have been implemented
|
||||
4. **Analyzes Definition of Done** - Confirms quality gates passed
|
||||
5. **Calculates Completion %** - AC%, Tasks%, DoD% percentages
|
||||
6. **Determines Correct Status:**
|
||||
- `done`: AC≥95% AND Tasks≥95% AND DoD≥95%
|
||||
- `review`: AC≥80% AND Tasks≥80% AND DoD≥80%
|
||||
- `in-progress`: Below 80% on any category
|
||||
7. **Updates Story File** - Checks/unchecks boxes to match reality
|
||||
8. **Updates sprint-status.yaml** - Synchronizes status entry
|
||||
|
||||
### Why This Matters
|
||||
|
||||
**Problem this solves:**
|
||||
|
||||
Story 20.8 (before reconciliation):
|
||||
- Dev Agent Record: "COMPLETE - 10 files created, 37 tests passing"
|
||||
- Acceptance Criteria: All unchecked ❌
|
||||
- Tasks: All unchecked ❌
|
||||
- Definition of Done: All unchecked ❌
|
||||
- sprint-status.yaml: `ready-for-dev` ❌
|
||||
- **Reality:** Story was 100% complete but looked 0% complete!
|
||||
|
||||
**After reconciliation:**
|
||||
- Acceptance Criteria: 17/18 checked ✅
|
||||
- Tasks: 24/24 checked ✅
|
||||
- Definition of Done: 24/25 checked ✅
|
||||
- sprint-status.yaml: `done` ✅
|
||||
- **Accurate representation of actual completion** ✅
|
||||
|
||||
### Reconciliation Process
|
||||
|
||||
```
|
||||
Implementation Complete
|
||||
↓
|
||||
Load Dev Agent Record
|
||||
↓
|
||||
Parse: Implementation Summary, File List, Test Results, Completion Notes
|
||||
↓
|
||||
For each checkbox in ACs/Tasks/DoD:
|
||||
- Search Dev Agent Record for evidence
|
||||
- Determine expected status (checked/unchecked/partial)
|
||||
- Compare actual vs expected
|
||||
- Record discrepancies
|
||||
↓
|
||||
Calculate completion percentages:
|
||||
- AC: X/Y checked (Z%)
|
||||
- Tasks: X/Y checked (Z%)
|
||||
- DoD: X/Y checked (Z%)
|
||||
↓
|
||||
Determine correct story status (done/review/in-progress)
|
||||
↓
|
||||
Apply changes (with user confirmation):
|
||||
- Update checkboxes in story file
|
||||
- Update story status header
|
||||
- Update sprint-status.yaml entry
|
||||
↓
|
||||
Report final completion statistics
|
||||
```
|
||||
|
||||
### Reconciliation Output
|
||||
|
||||
```
|
||||
🔧 Story 20.8: Reconciling 42 issues
|
||||
|
||||
Changes to apply:
|
||||
1. AC1: FlexibleGridSection component - CHECK (File created: FlexibleGridSection.tsx)
|
||||
2. AC2: Screenshot automation - CHECK (File created: screenshot-pages.ts)
|
||||
3. Task 1.3: Create page corpus generator - CHECK (File created: generate-page-corpus.ts)
|
||||
... (39 more)
|
||||
|
||||
Apply these reconciliation changes? (yes/no): yes
|
||||
|
||||
✅ Story 20.8: Reconciliation complete (42 changes applied)
|
||||
|
||||
📊 Story 20.8 - Final Status
|
||||
|
||||
Acceptance Criteria: 17/18 (94%)
|
||||
Tasks/Subtasks: 24/24 (100%)
|
||||
Definition of Done: 24/25 (96%)
|
||||
|
||||
Story Status: done
|
||||
sprint-status.yaml: done
|
||||
|
||||
✅ Story is COMPLETE and accurately reflects implementation
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Usage
|
||||
|
||||
### Basic Usage
|
||||
|
||||
```bash
|
||||
# Process all ready-for-dev stories
|
||||
/batch-stories
|
||||
|
||||
# Follow prompts:
|
||||
# 1. See list of ready stories
|
||||
# 2. Select stories to process (1,3-5,8 or "all")
|
||||
# 3. Choose execution mode (sequential/parallel)
|
||||
# 4. Confirm execution plan
|
||||
# 5. Stories process automatically with reconciliation
|
||||
# 6. Review batch summary
|
||||
```
|
||||
|
||||
### Epic Filtering
|
||||
|
||||
```bash
|
||||
# Only process Epic 3 stories
|
||||
/batch-stories filter_by_epic=3
|
||||
```
|
||||
|
||||
### Selection Syntax
|
||||
|
||||
```
|
||||
Single: 1
|
||||
Multiple: 1,3,5
|
||||
Range: 1-5 (processes 1,2,3,4,5)
|
||||
Mixed: 1,3-5,8 (processes 1,3,4,5,8)
|
||||
All: all (processes all ready-for-dev stories)
|
||||
```
|
||||
|
||||
### Execution Modes
|
||||
|
||||
**Sequential (Recommended for ≤5 stories):**
|
||||
- Processes one story at a time in current session
|
||||
- Easier to monitor progress
|
||||
- Lower resource usage
|
||||
- Can pause/cancel between stories
|
||||
|
||||
**Parallel (Recommended for >5 stories):**
|
||||
- Spawns autonomous Task agents
|
||||
- Much faster (2-4x speedup)
|
||||
- Choose parallelism: 2 (conservative), 4 (moderate), all (aggressive)
|
||||
- Requires more system resources
|
||||
|
||||
---
|
||||
|
||||
## Workflow Configuration
|
||||
|
||||
**File:** `_bmad/bmm/workflows/4-implementation/batch-stories/workflow.yaml`
|
||||
|
||||
### Key Settings
|
||||
|
||||
```yaml
|
||||
# Safety limits
|
||||
max_stories: 20 # Won't process more than 20 in one batch
|
||||
|
||||
# Pacing
|
||||
pause_between_stories: 5 # Seconds between stories (sequential mode)
|
||||
|
||||
# Error handling
|
||||
continue_on_failure: true # Keep processing if one story fails
|
||||
|
||||
# Reconciliation (NEW v1.1.0)
|
||||
reconciliation:
|
||||
enabled: true # Auto-reconcile after each story
|
||||
require_confirmation: true # Ask before applying changes
|
||||
update_sprint_status: true # Sync sprint-status.yaml
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Workflow Steps
|
||||
|
||||
### 1. Load Sprint Status
|
||||
- Parses sprint-status.yaml
|
||||
- Filters stories with status="ready-for-dev"
|
||||
- Excludes epics and retrospectives
|
||||
- Optionally filters by epic number
|
||||
|
||||
### 2. Display Available Stories
|
||||
- Shows all ready-for-dev stories
|
||||
- Verifies story files exist
|
||||
- Displays status icons and comments
|
||||
|
||||
### 2.5. 🆕 Validate and Create/Regenerate Stories (NEW v1.2.0)
|
||||
**For each story:**
|
||||
- Check file existence (multiple naming patterns)
|
||||
- Validate 12 BMAD sections present
|
||||
- Check content quality (Current State ≥100 words, gap analysis)
|
||||
- **If missing:** Prompt to create with gap analysis
|
||||
- **If invalid:** Prompt to regenerate with codebase scan
|
||||
- **If valid:** Mark ready to process
|
||||
- Remove skipped stories from selection
|
||||
|
||||
### 3. Get User Selection
|
||||
- Interactive story picker
|
||||
- Supports flexible selection syntax
|
||||
- Validates selection and confirms
|
||||
|
||||
### 3.5. Choose Execution Strategy
|
||||
- Sequential vs Parallel
|
||||
- If parallel: choose concurrency level
|
||||
- Confirm execution plan
|
||||
|
||||
### 4. Process Stories
|
||||
**Sequential Mode:**
|
||||
- For each selected story:
|
||||
- Invoke story-full-pipeline
|
||||
- Execute reconciliation (Step 4.5)
|
||||
- Report results
|
||||
- Pause between stories
|
||||
|
||||
**Parallel Mode (Semaphore Pattern - NEW v1.3.0):**
|
||||
- Initialize worker pool with N slots (user-selected concurrency)
|
||||
- Fill initial N slots with first N stories
|
||||
- Poll workers continuously (non-blocking)
|
||||
- As soon as worker completes → immediately refill slot with next story
|
||||
- Maintain constant N concurrent agents until queue empty
|
||||
- Execute reconciliation after each story completes
|
||||
- **Commit Queue:** File-based locking prevents git lock conflicts
|
||||
- Workers acquire `.git/bmad-commit.lock` before committing
|
||||
- Automatic retry with exponential backoff (1s → 30s)
|
||||
- Stale lock cleanup (>5 min)
|
||||
- Serialized commits, parallel implementation
|
||||
- No idle time waiting for batch synchronization
|
||||
- **20-40% faster** than old batch-and-wait pattern
|
||||
|
||||
### 4.5. Smart Story Reconciliation (NEW)
|
||||
**Executed after each story completes:**
|
||||
- Load Dev Agent Record
|
||||
- Analyze ACs/Tasks/DoD vs implementation
|
||||
- Calculate completion percentages
|
||||
- Determine correct story status
|
||||
- Update checkboxes and status
|
||||
- Sync sprint-status.yaml
|
||||
|
||||
See: `step-4.5-reconcile-story-status.md` for detailed algorithm
|
||||
|
||||
### 5. Display Batch Summary
|
||||
- Shows completion statistics
|
||||
- Lists failed stories (if any)
|
||||
- Lists reconciliation warnings (if any)
|
||||
- Provides next steps
|
||||
- Saves batch log
|
||||
|
||||
---
|
||||
|
||||
## Output Files
|
||||
|
||||
### Batch Log
|
||||
|
||||
**Location:** `docs/sprint-artifacts/batch-stories-{date}.log`
|
||||
|
||||
**Contains:**
|
||||
- Start/end timestamps
|
||||
- Selected stories
|
||||
- Completed stories
|
||||
- Failed stories
|
||||
- Reconciliation warnings
|
||||
- Success rate
|
||||
- Total duration
|
||||
|
||||
### Reconciliation Results (per story)
|
||||
|
||||
**Embedded in Dev Agent Record:**
|
||||
- Reconciliation summary
|
||||
- Changes applied
|
||||
- Final completion percentages
|
||||
- Status determination reasoning
|
||||
|
||||
---
|
||||
|
||||
## Error Handling
|
||||
|
||||
### Story Implementation Fails
|
||||
- Increments failed counter
|
||||
- Adds to failed_stories list
|
||||
- If `continue_on_failure=true`, continues with remaining stories
|
||||
- If `continue_on_failure=false`, stops batch
|
||||
|
||||
### Reconciliation Fails
|
||||
- Story still marked as completed (implementation succeeded)
|
||||
- Adds to reconciliation_warnings list
|
||||
- User warned to manually verify story accuracy
|
||||
- Does NOT fail the batch
|
||||
|
||||
### Task Agent Fails (Parallel Mode)
|
||||
- Collects error from TaskOutput
|
||||
- Marks story as failed
|
||||
- Continues with remaining stories in batch
|
||||
|
||||
---
|
||||
|
||||
## Best Practices
|
||||
|
||||
### Story Selection
|
||||
- ✅ Start small: Process 2-3 stories first to verify workflow
|
||||
- ✅ Group by epic: Related stories often share context
|
||||
- ✅ Check file status: ✅ stories are ready, ❌ need creation first
|
||||
- ❌ Don't process 20 stories at once on first run
|
||||
|
||||
### Execution Mode
|
||||
- Sequential for ≤5 stories (easier monitoring)
|
||||
- Parallel for >5 stories (faster completion)
|
||||
- Use parallelism=2 first, then increase if stable
|
||||
|
||||
### During Execution
|
||||
- Monitor progress output
|
||||
- Check reconciliation reports
|
||||
- Verify changes look correct
|
||||
- Spot-check 1-2 completed stories
|
||||
|
||||
### After Completion
|
||||
1. Review batch summary
|
||||
2. Check reconciliation warnings
|
||||
3. Verify sprint-status.yaml updated
|
||||
4. Run tests: `pnpm test`
|
||||
5. Check coverage: `pnpm test --coverage`
|
||||
6. Review commits: `git log -<count>`
|
||||
7. Spot-check 2-3 stories for quality
|
||||
|
||||
---
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Reconciliation Reports Many Warnings
|
||||
|
||||
**Cause:** Dev Agent Record may be incomplete or stories weren't fully implemented
|
||||
|
||||
**Fix:**
|
||||
1. Review listed stories manually
|
||||
2. Check Dev Agent Record has all required sections
|
||||
3. Re-run story-full-pipeline for problematic stories
|
||||
4. Manually reconcile checkboxes if needed
|
||||
|
||||
### Parallel Mode Hangs
|
||||
|
||||
**Cause:** Too many agents running concurrently, system resources exhausted
|
||||
|
||||
**Fix:**
|
||||
1. Kill hung agents: `/tasks` then `kill <task-id>`
|
||||
2. Reduce parallelism: Use 2 instead of 4
|
||||
3. Process remaining stories sequentially
|
||||
|
||||
### Story Marked "done" but has Unchecked Items
|
||||
|
||||
**Cause:** Reconciliation may have missed some checkboxes
|
||||
|
||||
**Fix:**
|
||||
1. Review Dev Agent Record
|
||||
2. Check which checkboxes should be checked
|
||||
3. Manually check them or re-run reconciliation:
|
||||
- Load story file
|
||||
- Compare ACs/Tasks/DoD to Dev Agent Record
|
||||
- Update checkboxes to match reality
|
||||
|
||||
---
|
||||
|
||||
## Version History
|
||||
|
||||
### v1.3.0 (2026-01-07)
|
||||
- **NEW:** Complexity-Based Routing (Step 2.6)
|
||||
- Automatic story complexity scoring (micro/standard/complex)
|
||||
- Risk keyword detection with configurable weights
|
||||
- Smart pipeline selection: micro → lightweight, complex → enhanced
|
||||
- 50-70% token savings for micro stories
|
||||
- Deterministic classification with mutually exclusive thresholds
|
||||
- **CRITICAL:** Rejects stories with <3 tasks as INVALID (prevents 0-task stories from being processed)
|
||||
- **NEW:** Semaphore Pattern for Parallel Execution
|
||||
- Worker pool maintains constant N concurrent agents
|
||||
- As soon as worker completes → immediately start next story
|
||||
- No idle time waiting for batch synchronization
|
||||
- 20-40% faster than old batch-and-wait pattern
|
||||
- Non-blocking task polling with live progress dashboard
|
||||
- **NEW:** Git Commit Queue (Parallel-Safe)
|
||||
- File-based locking prevents concurrent commit conflicts
|
||||
- Workers acquire `.git/bmad-commit.lock` before committing
|
||||
- Automatic retry with exponential backoff (1s → 30s max)
|
||||
- Stale lock cleanup (>5 min old locks auto-removed)
|
||||
- Eliminates "Another git process is running" errors
|
||||
- Serializes commits while keeping implementations parallel
|
||||
- **NEW:** Continuous Sprint-Status Tracking
|
||||
- sprint-status.yaml updated after EVERY task completion
|
||||
- Real-time progress: "# 7/10 tasks (70%)"
|
||||
- CRITICAL enforcement with HALT on update failure
|
||||
- Immediate visibility into story progress
|
||||
- **NEW:** Stricter Story Validation
|
||||
- Step 2.5 now rejects stories with <3 tasks
|
||||
- Step 2.6 marks stories with <3 tasks as INVALID
|
||||
- Prevents incomplete/stub stories from being processed
|
||||
- Requires /validate-create-story to fix before processing
|
||||
|
||||
### v1.2.0 (2026-01-06)
|
||||
- **NEW:** Smart Story Validation & Auto-Creation (Step 2.5)
|
||||
- Validates story files before processing
|
||||
- Auto-creates missing stories with gap analysis
|
||||
- Auto-regenerates invalid/incomplete stories
|
||||
- Checks 12 BMAD sections, content quality
|
||||
- Interactive or fully automated modes
|
||||
- Backups before regeneration
|
||||
- **Removes friction:** No more "story file missing" interruptions
|
||||
- **Ensures quality:** Only valid stories with gap analysis proceed
|
||||
- **Configuration:** New `validation` settings in workflow.yaml
|
||||
|
||||
### v1.1.0 (2026-01-06)
|
||||
- **NEW:** Smart Story Reconciliation (Step 4.5)
|
||||
- Auto-verifies story accuracy after implementation
|
||||
- Updates checkboxes based on Dev Agent Record
|
||||
- Synchronizes sprint-status.yaml
|
||||
- Prevents "done" stories with unchecked items
|
||||
- Added reconciliation warnings to batch summary
|
||||
- Added reconciliation statistics to output
|
||||
|
||||
### v1.0.0 (2026-01-05)
|
||||
- Initial release
|
||||
- Interactive story selector
|
||||
- Sequential and parallel execution modes
|
||||
- Integration with story-full-pipeline
|
||||
- Batch summary and logging
|
||||
|
||||
---
|
||||
|
||||
## Related Workflows
|
||||
|
||||
- **story-full-pipeline:** Individual story implementation (invoked by batch-stories)
|
||||
- **create-story-with-gap-analysis:** Create new stories with codebase scan
|
||||
- **sprint-status:** View/update sprint status
|
||||
- **multi-agent-review:** Standalone code review (part of story-full-pipeline)
|
||||
|
||||
---
|
||||
|
||||
## Support
|
||||
|
||||
**Questions or Issues:**
|
||||
- Check workflow logs: `docs/sprint-artifacts/batch-stories-*.log`
|
||||
- Review reconciliation step: `step-4.5-reconcile-story-status.md`
|
||||
- Check story file format: Ensure 12-section BMAD format
|
||||
- Verify Dev Agent Record populated: Required for reconciliation
|
||||
|
||||
---
|
||||
|
||||
**Last Updated:** 2026-01-07
|
||||
**Status:** Active - Production-ready with semaphore pattern and continuous tracking
|
||||
**Maintained By:** BMad
|
||||
|
|
@ -1,256 +0,0 @@
|
|||
# Batch-Super-Dev Resilience Fix
|
||||
|
||||
**Problem:** Agents crash mid-execution, resume fails, no intermediate state saved
|
||||
|
||||
---
|
||||
|
||||
## Issues Observed
|
||||
|
||||
**Story 18-4 → 18-5 Transition:**
|
||||
```
|
||||
✅ Story 18-4: Builder → Inspector → Fixer → Reviewer all complete
|
||||
❌ Story 18-5: Workflow crashed on "Error reading file"
|
||||
```
|
||||
|
||||
**Evidence:**
|
||||
- Task output files empty (0 bytes)
|
||||
- Resume attempts failed (0 tools used, 0 tokens)
|
||||
- No state saved between stories
|
||||
- When agent crashes, all progress lost
|
||||
|
||||
---
|
||||
|
||||
## Root Cause
|
||||
|
||||
**Sequential processing in main context has no resilience:**
|
||||
|
||||
```
|
||||
Story 18-4:
|
||||
├─ Builder agent completes → outputs to temp file
|
||||
├─ Main reads output file → starts Inspector
|
||||
├─ Inspector completes → outputs to temp file
|
||||
├─ Main reads output → starts Fixer
|
||||
└─ Fixer completes → Story 18-4 done
|
||||
|
||||
Story 18-5:
|
||||
├─ Main tries to read Story 18-5 file
|
||||
├─ ❌ "Error reading file" (crash)
|
||||
└─ All progress lost, no state saved
|
||||
```
|
||||
|
||||
**Problem:** Main context doesn't save state between stories. If it crashes, batch starts over.
|
||||
|
||||
---
|
||||
|
||||
## Solution: Save State After Each Story
|
||||
|
||||
### Add state file tracking:
|
||||
|
||||
```yaml
|
||||
# In batch-stories/workflow.yaml
|
||||
state_tracking:
|
||||
enabled: true
|
||||
state_file: "{sprint_artifacts}/batch-execution-state-{batch_id}.yaml"
|
||||
save_after_each_story: true
|
||||
```
|
||||
|
||||
### State file format:
|
||||
|
||||
```yaml
|
||||
batch_id: "epic-18-2026-01-26"
|
||||
started: "2026-01-26T18:45:00Z"
|
||||
execution_mode: "fully_autonomous"
|
||||
strategy: "sequential"
|
||||
total_stories: 2
|
||||
|
||||
stories:
|
||||
- story_key: "18-4-billing-worker-retry-logic"
|
||||
status: "completed"
|
||||
started: "2026-01-26T18:46:00Z"
|
||||
completed: "2026-01-26T19:05:00Z"
|
||||
agents:
|
||||
- phase: "builder"
|
||||
agent_id: "ae3bd2b"
|
||||
status: "completed"
|
||||
- phase: "inspector"
|
||||
agent_id: "a9f0d11"
|
||||
status: "completed"
|
||||
- phase: "fixer"
|
||||
agent_id: "abc123"
|
||||
status: "completed"
|
||||
- phase: "reviewer"
|
||||
agent_id: "def456"
|
||||
status: "completed"
|
||||
|
||||
- story_key: "18-5-precharge-payment-validation"
|
||||
status: "in_progress"
|
||||
started: "2026-01-26T19:05:30Z"
|
||||
last_checkpoint: "attempting_to_read_story_file"
|
||||
error: "Error reading file"
|
||||
```
|
||||
|
||||
### Resume logic:
|
||||
|
||||
```bash
|
||||
# At batch-stories start, check for existing state file
|
||||
state_file="{sprint_artifacts}/batch-execution-state-*.yaml"
|
||||
|
||||
if ls $state_file 2>/dev/null; then
|
||||
echo "🔄 Found interrupted batch execution"
|
||||
echo "Resume from where it left off? (yes/no)"
|
||||
|
||||
if yes:
|
||||
# Load state file
|
||||
# Skip completed stories
|
||||
# Start from next story
|
||||
# Reuse agent IDs if resumable
|
||||
fi
|
||||
```
|
||||
|
||||
### After each story completes:
|
||||
|
||||
```bash
|
||||
# Update state file
|
||||
update_state_file() {
|
||||
story_key="$1"
|
||||
status="$2" # completed | failed
|
||||
|
||||
# Update YAML
|
||||
# Mark story as completed
|
||||
# Save timestamp
|
||||
# Record agent IDs
|
||||
}
|
||||
|
||||
# After Builder completes
|
||||
update_state_file "$story_key" "builder_complete"
|
||||
|
||||
# After Inspector completes
|
||||
update_state_file "$story_key" "inspector_complete"
|
||||
|
||||
# After Fixer completes
|
||||
update_state_file "$story_key" "fixer_complete"
|
||||
|
||||
# After Reviewer completes
|
||||
update_state_file "$story_key" "reviewer_complete"
|
||||
|
||||
# When entire story done
|
||||
update_state_file "$story_key" "completed"
|
||||
```
|
||||
|
||||
### Error handling:
|
||||
|
||||
```bash
|
||||
# Wrap file reads in try-catch
|
||||
read_with_retry() {
|
||||
file_path="$1"
|
||||
max_attempts=3
|
||||
|
||||
for attempt in {1..$max_attempts}; do
|
||||
if content=$(cat "$file_path" 2>&1); then
|
||||
echo "$content"
|
||||
return 0
|
||||
else
|
||||
echo "⚠️ Failed to read $file_path (attempt $attempt/$max_attempts)" >&2
|
||||
sleep 2
|
||||
fi
|
||||
done
|
||||
|
||||
echo "❌ Cannot read file after $max_attempts attempts: $file_path" >&2
|
||||
return 1
|
||||
}
|
||||
|
||||
# Use in workflow
|
||||
story_content=$(read_with_retry "$story_file") || {
|
||||
echo "❌ Cannot proceed with Story $story_key - file read failed"
|
||||
# Save state
|
||||
# Skip this story
|
||||
# Continue to next story (if continue_on_failure=true)
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Implementation
|
||||
|
||||
Add to batch-stories Step 4-Sequential:
|
||||
|
||||
```xml
|
||||
<substep n="4s-0" title="Check for previous execution state">
|
||||
<action>Check for state file: batch-execution-state-*.yaml</action>
|
||||
|
||||
<check if="state file exists">
|
||||
<output>🔄 Found interrupted batch from {state.started}</output>
|
||||
<output>Completed: {state.completed_count} stories</output>
|
||||
<output>Failed: {state.failed_count} stories</output>
|
||||
<output>In progress: {state.current_story}</output>
|
||||
|
||||
<ask>Resume from where it left off? (yes/no)</ask>
|
||||
|
||||
<check if="response == yes">
|
||||
<action>Load state</action>
|
||||
<action>Skip completed stories</action>
|
||||
<action>Start from next story</action>
|
||||
</check>
|
||||
|
||||
<check if="response == no">
|
||||
<action>Archive old state file</action>
|
||||
<action>Start fresh batch</action>
|
||||
</check>
|
||||
</check>
|
||||
</substep>
|
||||
|
||||
<substep n="4s-a" title="Process individual story">
|
||||
<action>Save state: story started</action>
|
||||
|
||||
<try>
|
||||
<action>Read story file with retry</action>
|
||||
<action>Execute story-full-pipeline</action>
|
||||
<action>Save state: story completed</action>
|
||||
</try>
|
||||
|
||||
<catch error="file_read_error">
|
||||
<output>⚠️ Cannot read story file for {story_key}</output>
|
||||
<action>Save state: story failed (file read error)</action>
|
||||
<action>Add to failed_stories list</action>
|
||||
<action>Continue to next story if continue_on_failure=true</action>
|
||||
</catch>
|
||||
|
||||
<catch error="agent_crash">
|
||||
<output>⚠️ Agent crashed for {story_key}</output>
|
||||
<action>Save state: story failed (agent crash)</action>
|
||||
<action>Record partial progress in state file</action>
|
||||
<action>Continue to next story if continue_on_failure=true</action>
|
||||
</catch>
|
||||
</substep>
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Expected Behavior After Fix
|
||||
|
||||
**If crash happens:**
|
||||
|
||||
```
|
||||
Story 18-4: ✅ Complete (state saved)
|
||||
Story 18-5: ❌ Crashed (state saved with error)
|
||||
|
||||
State file created: batch-execution-state-epic-18.yaml
|
||||
|
||||
User re-runs: /batch-stories
|
||||
|
||||
Workflow: "🔄 Found interrupted batch. Resume? (yes/no)"
|
||||
User: "yes"
|
||||
Workflow: "✅ Skipping 18-4 (already complete)"
|
||||
Workflow: "🔄 Retrying 18-5 (was in_progress)"
|
||||
Workflow: Starts 18-5 from beginning
|
||||
```
|
||||
|
||||
**Benefits:**
|
||||
- No lost progress
|
||||
- Can resume after crashes
|
||||
- Intermediate state preserved
|
||||
- Failures don't block batch
|
||||
|
||||
---
|
||||
|
||||
Should I implement this resilience fix now?
|
||||
|
|
@ -1,347 +0,0 @@
|
|||
# Batch-Super-Dev Step 2.5 Patch
|
||||
|
||||
**Issue:** Step 2.5 tries to invoke `/create-story-with-gap-analysis` which agents cannot do
|
||||
**Impact:** Skeleton stories get skipped instead of regenerated
|
||||
**Fix:** Explicitly halt batch and tell user to regenerate manually
|
||||
|
||||
---
|
||||
|
||||
## Current Code (BROKEN)
|
||||
|
||||
**File:** `instructions.md` lines 82-99
|
||||
|
||||
```xml
|
||||
<ask>Create story file with gap analysis? (yes/no):</ask>
|
||||
|
||||
<check if="response == 'yes'">
|
||||
<output>Creating story {{story_key}} with codebase gap analysis...</output>
|
||||
<action>Invoke workflow: /bmad:bmm:workflows:create-story-with-gap-analysis</action>
|
||||
<action>Parameters: story_key={{story_key}}</action>
|
||||
|
||||
<check if="story creation succeeded">
|
||||
<output>✅ Story {{story_key}} created successfully (12/12 sections)</output>
|
||||
<action>Update file_status_icon to ✅</action>
|
||||
<action>Mark story as validated</action>
|
||||
</check>
|
||||
|
||||
<check if="story creation failed">
|
||||
<output>❌ Story creation failed: {{story_key}}</output>
|
||||
<action>Mark story for removal from selection</action>
|
||||
<action>Add to skipped_stories list with reason: "Creation failed"</action>
|
||||
</check>
|
||||
</check>
|
||||
```
|
||||
|
||||
**Problem:**
|
||||
- Line 86: "Invoke workflow: /" doesn't work for agents
|
||||
- Agents can't execute slash commands
|
||||
- This always fails in batch mode
|
||||
|
||||
---
|
||||
|
||||
## Recommended Fix (WORKING)
|
||||
|
||||
**Replace lines 82-99 with:**
|
||||
|
||||
```xml
|
||||
<ask>Create story file with gap analysis? (yes/no):</ask>
|
||||
|
||||
<check if="response == 'yes'">
|
||||
<output>
|
||||
⚠️ STORY CREATION REQUIRES MANUAL WORKFLOW EXECUTION
|
||||
|
||||
**Story:** {{story_key}}
|
||||
**Status:** File missing or incomplete
|
||||
|
||||
**Problem:**
|
||||
Agents cannot invoke /create-story-with-gap-analysis workflow autonomously.
|
||||
This workflow requires:
|
||||
- Interactive user prompts
|
||||
- Context-heavy codebase scanning
|
||||
- Gap analysis decision-making
|
||||
|
||||
**Required Action:**
|
||||
|
||||
1. **Exit this batch execution:**
|
||||
- Remaining stories will be skipped
|
||||
- Batch will continue with valid stories only
|
||||
|
||||
2. **Regenerate story manually:**
|
||||
```
|
||||
/create-story-with-gap-analysis
|
||||
```
|
||||
When prompted, provide:
|
||||
- Story key: {{story_key}}
|
||||
- Epic: {epic from parent story}
|
||||
- Scope: {widget list or feature description}
|
||||
|
||||
3. **Validate story format:**
|
||||
```
|
||||
./scripts/validate-bmad-format.sh docs/sprint-artifacts/story-{{story_key}}.md
|
||||
```
|
||||
Must show: "✅ All 12 sections present"
|
||||
|
||||
4. **Re-run batch-stories:**
|
||||
- Story will now be properly formatted
|
||||
- Can be executed in next batch run
|
||||
|
||||
**Skipping story {{story_key}} from current batch execution.**
|
||||
</output>
|
||||
|
||||
<action>Mark story for removal from selection</action>
|
||||
<action>Add to skipped_stories list with reason: "Story creation requires manual workflow (agents cannot invoke /create-story)"</action>
|
||||
<action>Add to manual_actions_required list: "Regenerate {{story_key}} with /create-story-with-gap-analysis"</action>
|
||||
</check>
|
||||
|
||||
<check if="response == 'no'">
|
||||
<output>⏭️ Skipping story {{story_key}} (file missing, user declined creation)</output>
|
||||
<action>Mark story for removal from selection</action>
|
||||
<action>Add to skipped_stories list with reason: "User declined story creation"</action>
|
||||
</check>
|
||||
```
|
||||
|
||||
**Why This Works:**
|
||||
- ✅ Explicitly states agents can't create stories
|
||||
- ✅ Provides clear step-by-step user actions
|
||||
- ✅ Skips gracefully instead of failing silently
|
||||
- ✅ Tracks manual actions needed
|
||||
- ✅ Sets correct expectations
|
||||
|
||||
---
|
||||
|
||||
## Additional Improvements
|
||||
|
||||
### Add Manual Actions Tracking
|
||||
|
||||
**At end of batch execution (Step 5), add:**
|
||||
|
||||
```xml
|
||||
<check if="manual_actions_required is not empty">
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
⚠️ MANUAL ACTIONS REQUIRED
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
**{{manual_actions_required.length}} stories require manual intervention:**
|
||||
|
||||
{{#each manual_actions_required}}
|
||||
{{@index}}. **{{story_key}}**
|
||||
Action: {{action_description}}
|
||||
Command: {{command_to_run}}
|
||||
{{/each}}
|
||||
|
||||
**After completing these actions:**
|
||||
1. Validate all stories: ./scripts/validate-all-stories.sh
|
||||
2. Re-run batch-stories for these stories
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
</check>
|
||||
```
|
||||
|
||||
**Why This Helps:**
|
||||
- User gets clear todo list
|
||||
- Knows exactly what to do next
|
||||
- Can track progress on manual actions
|
||||
|
||||
---
|
||||
|
||||
## Validation Script Enhancement
|
||||
|
||||
**Create:** `scripts/validate-all-stories.sh`
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# Validate all ready-for-dev stories have proper BMAD format
|
||||
|
||||
set -e
|
||||
|
||||
STORIES=$(grep "ready-for-dev" docs/sprint-artifacts/sprint-status.yaml | awk '{print $1}' | sed 's/://')
|
||||
|
||||
echo "=========================================="
|
||||
echo " BMAD Story Format Validation"
|
||||
echo "=========================================="
|
||||
echo ""
|
||||
|
||||
TOTAL=0
|
||||
VALID=0
|
||||
INVALID=0
|
||||
|
||||
for story in $STORIES; do
|
||||
STORY_FILE="docs/sprint-artifacts/story-$story.md"
|
||||
|
||||
if [ ! -f "$STORY_FILE" ]; then
|
||||
echo "❌ $story - FILE MISSING"
|
||||
INVALID=$((INVALID + 1))
|
||||
TOTAL=$((TOTAL + 1))
|
||||
continue
|
||||
fi
|
||||
|
||||
# Check BMAD format
|
||||
./scripts/validate-bmad-format.sh "$STORY_FILE" >/dev/null 2>&1
|
||||
|
||||
if [ $? -eq 0 ]; then
|
||||
echo "✅ $story - Valid BMAD format"
|
||||
VALID=$((VALID + 1))
|
||||
else
|
||||
echo "❌ $story - Invalid format (run validation for details)"
|
||||
INVALID=$((INVALID + 1))
|
||||
fi
|
||||
|
||||
TOTAL=$((TOTAL + 1))
|
||||
done
|
||||
|
||||
echo ""
|
||||
echo "=========================================="
|
||||
echo " Summary"
|
||||
echo "=========================================="
|
||||
echo "Total Stories: $TOTAL"
|
||||
echo "Valid: $VALID"
|
||||
echo "Invalid: $INVALID"
|
||||
echo ""
|
||||
|
||||
if [ $INVALID -eq 0 ]; then
|
||||
echo "✅ All stories ready for batch execution!"
|
||||
exit 0
|
||||
else
|
||||
echo "❌ $INVALID stories need regeneration"
|
||||
echo ""
|
||||
echo "Run: /create-story-with-gap-analysis"
|
||||
echo "For each invalid story"
|
||||
exit 1
|
||||
fi
|
||||
```
|
||||
|
||||
**Why This Helps:**
|
||||
- Quick validation before batch
|
||||
- Prevents wasted time on incomplete stories
|
||||
- Clear pass/fail criteria
|
||||
|
||||
---
|
||||
|
||||
## Documentation Update
|
||||
|
||||
**Add to:** `_bmad/bmm/workflows/4-implementation/batch-stories/README.md`
|
||||
|
||||
```markdown
|
||||
# Batch Super-Dev Workflow
|
||||
|
||||
## Critical Prerequisites
|
||||
|
||||
**BEFORE running batch-stories:**
|
||||
|
||||
1. ✅ **All stories must be properly generated**
|
||||
- Run: `/create-story-with-gap-analysis` for each story
|
||||
- Do NOT create skeleton/template files manually
|
||||
- Validation: `./scripts/validate-all-stories.sh`
|
||||
|
||||
2. ✅ **All stories must have 12 BMAD sections**
|
||||
- Business Context, Current State, Acceptance Criteria
|
||||
- Tasks/Subtasks, Technical Requirements, Architecture Compliance
|
||||
- Testing Requirements, Dev Agent Guardrails, Definition of Done
|
||||
- References, Dev Agent Record, Change Log
|
||||
|
||||
3. ✅ **All stories must have tasks**
|
||||
- At least 1 unchecked task (something to implement)
|
||||
- Zero-task stories will be skipped
|
||||
- Validation: `grep -c "^- \[ \]" story-file.md`
|
||||
|
||||
## Common Failure Modes
|
||||
|
||||
### ❌ Attempting Batch Regeneration
|
||||
|
||||
**What you might try:**
|
||||
```
|
||||
1. Create 20 skeleton story files (just headers + widget lists)
|
||||
2. Run /batch-stories
|
||||
3. Expect agents to regenerate them
|
||||
```
|
||||
|
||||
**What happens:**
|
||||
- Agents identify stories are incomplete
|
||||
- Agents correctly halt per story-full-pipeline validation
|
||||
- Stories get skipped (not regenerated)
|
||||
- You waste time
|
||||
|
||||
**Why:**
|
||||
- Agents CANNOT execute /create-story-with-gap-analysis
|
||||
- Agents CANNOT invoke other BMAD workflows
|
||||
- Story generation requires user interaction
|
||||
|
||||
**Solution:**
|
||||
- Generate ALL stories manually FIRST: /create-story-with-gap-analysis
|
||||
- Validate: ./scripts/validate-all-stories.sh
|
||||
- THEN run batch: /batch-stories
|
||||
|
||||
### ❌ Mixed Story Quality
|
||||
|
||||
**What you might try:**
|
||||
- Mix 10 proper stories + 10 skeletons
|
||||
- Run batch hoping it "figures it out"
|
||||
|
||||
**What happens:**
|
||||
- 10 proper stories execute successfully
|
||||
- 10 skeletons get skipped
|
||||
- Confusing results
|
||||
|
||||
**Solution:**
|
||||
- Ensure ALL stories have same quality
|
||||
- Validate before batch
|
||||
- Don't mix skeletons with proper stories
|
||||
|
||||
## Success Pattern
|
||||
|
||||
```bash
|
||||
# 1. Generate all stories (1-2 days, manual)
|
||||
for story in story-20-13a-{1..5}; do
|
||||
/create-story-with-gap-analysis
|
||||
# Provide story details interactively
|
||||
done
|
||||
|
||||
# 2. Validate (30 seconds, automated)
|
||||
./scripts/validate-all-stories.sh
|
||||
|
||||
# 3. Execute (4-8 hours, parallel autonomous)
|
||||
/batch-stories
|
||||
# Select all 5 stories
|
||||
# Choose 2-4 agents parallel
|
||||
|
||||
# 4. Review (1-2 hours)
|
||||
# Review commits, merge to main
|
||||
```
|
||||
|
||||
**Total Time:**
|
||||
- Manual work: 1-2 days (story generation)
|
||||
- Autonomous work: 4-8 hours (batch execution)
|
||||
- Review: 1-2 hours
|
||||
|
||||
**Efficiency:**
|
||||
- Story generation: Cannot be batched (requires user input)
|
||||
- Story execution: Highly parallelizable (4x speedup with 4 agents)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Implementation Checklist
|
||||
|
||||
**To apply these improvements:**
|
||||
|
||||
- [ ] Update `batch-stories/instructions.md` Step 2.5 (lines 82-99)
|
||||
- [ ] Add `batch-stories/AGENT-LIMITATIONS.md` (new file)
|
||||
- [ ] Add `batch-stories/BATCH-BEST-PRACTICES.md` (new file)
|
||||
- [ ] Update `batch-stories/README.md` with prerequisites
|
||||
- [ ] Create `scripts/validate-all-stories.sh` (new script)
|
||||
- [ ] Add manual actions tracking to Step 5 summary
|
||||
- [ ] Update story-full-pipeline Step 1.4.5 with agent guidance
|
||||
|
||||
**Testing:**
|
||||
- Try batch with mixed story quality → Should skip skeletons gracefully
|
||||
- Verify error messages are clear
|
||||
- Confirm agents halt correctly (not crash)
|
||||
|
||||
---
|
||||
|
||||
**Expected Result:**
|
||||
- Users understand limitations upfront
|
||||
- Clear guidance when stories are incomplete
|
||||
- No false expectations about batch regeneration
|
||||
- Better error messages
|
||||
|
|
@ -1,171 +0,0 @@
|
|||
# Step 4.5: Story Reconciliation (Orchestrator-Driven)
|
||||
|
||||
**Version:** 2.1.0
|
||||
**Execute:** AFTER story-full-pipeline completes, BEFORE marking story done
|
||||
**Who:** Orchestrator (YOU) - not an agent
|
||||
|
||||
---
|
||||
|
||||
## Why Orchestrator Does This
|
||||
|
||||
Agents ignore reconciliation instructions. The orchestrator:
|
||||
- Has full context of what just happened
|
||||
- Can use tools directly (Bash, Read, Edit)
|
||||
- Won't skip "boring" bookkeeping tasks
|
||||
|
||||
---
|
||||
|
||||
## Execute These Steps
|
||||
|
||||
### Step 1: Get What Was Built
|
||||
|
||||
Run this command with Bash tool:
|
||||
|
||||
```bash
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo "🔧 STORY RECONCILIATION: {{story_key}}"
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
|
||||
# Get the commit for this story
|
||||
echo "Recent commits:"
|
||||
git log -5 --oneline | grep -i "{{story_key}}" || echo "(no commits found with story key)"
|
||||
|
||||
# Get files changed
|
||||
echo ""
|
||||
echo "Files changed in last commit:"
|
||||
git diff HEAD~1 --name-only | grep -v "__tests__" | grep -v "\.test\." | head -20
|
||||
```
|
||||
|
||||
Store the output - you'll need it for the next steps.
|
||||
|
||||
### Step 2: Read Story File
|
||||
|
||||
Use Read tool on: `docs/sprint-artifacts/{{story_key}}.md`
|
||||
|
||||
Find these sections:
|
||||
- **Tasks** (lines starting with `- [ ]` or `- [x]`)
|
||||
- **Dev Agent Record** (section with Agent Model, Implementation Date, etc.)
|
||||
|
||||
### Step 3: Check Off Completed Tasks
|
||||
|
||||
For EACH task in the Tasks section that relates to files changed:
|
||||
|
||||
Use Edit tool:
|
||||
```
|
||||
file_path: docs/sprint-artifacts/{{story_key}}.md
|
||||
old_string: "- [ ] Create the SomeComponent"
|
||||
new_string: "- [x] Create the SomeComponent"
|
||||
```
|
||||
|
||||
**Match logic:**
|
||||
- If task mentions a file that was created → check it off
|
||||
- If task mentions a service/component that now exists → check it off
|
||||
- If unsure → leave unchecked (don't over-claim)
|
||||
|
||||
### Step 4: Fill Dev Agent Record
|
||||
|
||||
Use Edit tool to replace the placeholder record:
|
||||
|
||||
```
|
||||
file_path: docs/sprint-artifacts/{{story_key}}.md
|
||||
old_string: "### Dev Agent Record
|
||||
- **Agent Model Used:** [Not set]
|
||||
- **Implementation Date:** [Not set]
|
||||
- **Files Created/Modified:** [Not set]
|
||||
- **Tests Added:** [Not set]
|
||||
- **Completion Notes:** [Not set]"
|
||||
new_string: "### Dev Agent Record
|
||||
- **Agent Model Used:** Claude Sonnet 4 (multi-agent pipeline)
|
||||
- **Implementation Date:** 2026-01-26
|
||||
- **Files Created/Modified:**
|
||||
- path/to/file1.ts
|
||||
- path/to/file2.ts
|
||||
[list all files from git diff]
|
||||
- **Tests Added:** X tests (from Inspector report)
|
||||
- **Completion Notes:** Implemented [brief summary]"
|
||||
```
|
||||
|
||||
### Step 5: Verify Updates
|
||||
|
||||
Run this command with Bash tool:
|
||||
|
||||
```bash
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo "🔍 RECONCILIATION VERIFICATION"
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
|
||||
STORY_FILE="docs/sprint-artifacts/{{story_key}}.md"
|
||||
|
||||
# Count checked tasks
|
||||
CHECKED=$(grep -c "^- \[x\]" "$STORY_FILE" 2>/dev/null || echo "0")
|
||||
UNCHECKED=$(grep -c "^- \[ \]" "$STORY_FILE" 2>/dev/null || echo "0")
|
||||
TOTAL=$((CHECKED + UNCHECKED))
|
||||
echo "Tasks: $CHECKED/$TOTAL checked"
|
||||
|
||||
if [ "$CHECKED" -eq 0 ]; then
|
||||
echo ""
|
||||
echo "❌ BLOCKER: Zero tasks checked off"
|
||||
echo "You MUST go back to Step 3 and check off tasks"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Check Dev Agent Record filled
|
||||
if grep -q "Implementation Date: \[Not set\]" "$STORY_FILE" 2>/dev/null; then
|
||||
echo "❌ BLOCKER: Dev Agent Record not filled"
|
||||
echo "You MUST go back to Step 4 and fill it"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if grep -A 3 "### Dev Agent Record" "$STORY_FILE" | grep -q "Implementation Date: 202"; then
|
||||
echo "✅ Dev Agent Record: Filled"
|
||||
else
|
||||
echo "❌ BLOCKER: Dev Agent Record incomplete"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo ""
|
||||
echo "✅ RECONCILIATION COMPLETE"
|
||||
echo " Checked tasks: $CHECKED/$TOTAL"
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
```
|
||||
|
||||
### Step 6: Update Sprint Status
|
||||
|
||||
Use Read tool: `docs/sprint-artifacts/sprint-status.yaml`
|
||||
|
||||
Find the entry for {{story_key}} and use Edit tool to update:
|
||||
|
||||
```
|
||||
old_string: "{{story_key}}: ready-for-dev"
|
||||
new_string: "{{story_key}}: done # ✅ COMPLETED 2026-01-26"
|
||||
```
|
||||
|
||||
Or if 95%+ complete but not 100%:
|
||||
```
|
||||
new_string: "{{story_key}}: review # 8/10 tasks - awaiting review"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Status Decision Logic
|
||||
|
||||
Based on verification results:
|
||||
|
||||
| Condition | Status |
|
||||
|-----------|--------|
|
||||
| 95%+ tasks checked + Dev Record filled | `done` |
|
||||
| 80-94% tasks checked | `review` |
|
||||
| <80% tasks checked | `in-progress` |
|
||||
| Dev Record not filled | `in-progress` |
|
||||
|
||||
---
|
||||
|
||||
## If Verification Fails
|
||||
|
||||
1. **DO NOT** proceed to next story
|
||||
2. **DO NOT** mark story as done
|
||||
3. **FIX** the issue using Edit tool
|
||||
4. **RE-RUN** verification command
|
||||
5. **REPEAT** until verification passes
|
||||
|
||||
This is mandatory. No shortcuts.
|
||||
|
|
@ -1,369 +0,0 @@
|
|||
# Batch Super-Dev v3.0 - Unified Workflow
|
||||
|
||||
<purpose>
|
||||
Interactive story selector for batch implementation. Scan codebase for gaps, select stories, process with story-full-pipeline, reconcile results.
|
||||
|
||||
**AKA:** "Mend the Gap" - Mind the gap between story requirements and reality, then mend it.
|
||||
</purpose>
|
||||
|
||||
<philosophy>
|
||||
**Gap Analysis First, Build Only What's Missing**
|
||||
|
||||
1. Scan codebase to verify what's actually implemented
|
||||
2. Find the gap between story requirements and reality
|
||||
3. Build ONLY what's truly missing (no duplicate work)
|
||||
4. Update tracking to reflect actual completion
|
||||
|
||||
Orchestrator coordinates. Agents do implementation. Orchestrator does reconciliation.
|
||||
</philosophy>
|
||||
|
||||
<config>
|
||||
name: batch-stories
|
||||
version: 3.1.0
|
||||
|
||||
modes:
|
||||
sequential: {description: "Process one-by-one in this session", recommended_for: "gap analysis"}
|
||||
parallel: {description: "Spawn concurrent Task agents", recommended_for: "greenfield batch"}
|
||||
|
||||
complexity_routing:
|
||||
micro: {max_tasks: 3, max_files: 5, skip_review: true}
|
||||
standard: {max_tasks: 15, max_files: 30, full_pipeline: true}
|
||||
complex: {min_tasks: 16, keywords: [auth, security, payment, migration], enhanced_review: true}
|
||||
|
||||
defaults:
|
||||
auto_create_missing: true # Automatically create missing story files using greenfield workflow
|
||||
</config>
|
||||
|
||||
<execution_context>
|
||||
@patterns/hospital-grade.md
|
||||
@patterns/agent-completion.md
|
||||
@story-full-pipeline/workflow.md
|
||||
</execution_context>
|
||||
|
||||
<process>
|
||||
|
||||
<step name="load_sprint_status" priority="first">
|
||||
**Load and parse sprint-status.yaml**
|
||||
|
||||
```bash
|
||||
SPRINT_STATUS="docs/sprint-artifacts/sprint-status.yaml"
|
||||
[ -f "$SPRINT_STATUS" ] || { echo "ERROR: sprint-status.yaml not found"; exit 1; }
|
||||
```
|
||||
|
||||
Use Read tool on sprint-status.yaml. Extract:
|
||||
- Stories with status `ready-for-dev` or `backlog`
|
||||
- Exclude epics (`epic-*`) and retrospectives (`*-retrospective`)
|
||||
- Sort by epic number, then story number
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📋 LOADING SPRINT STATUS
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
If no available stories: report "All stories complete!" and exit.
|
||||
</step>
|
||||
|
||||
<step name="display_stories">
|
||||
**Display available stories with file status**
|
||||
|
||||
For each story:
|
||||
1. Check if story file exists in `docs/sprint-artifacts/`
|
||||
2. Try patterns: `story-{epic}.{story}.md`, `{epic}-{story}.md`, `{story_key}.md`
|
||||
3. Mark status: ✅ exists, ❌ missing, 🔄 already implemented
|
||||
|
||||
```markdown
|
||||
## 📦 Available Stories (N)
|
||||
|
||||
### Ready for Dev (X)
|
||||
1. **17-10** ✅ occupant-agreement-view
|
||||
2. **17-11** ✅ agreement-status-tracking
|
||||
|
||||
### Backlog (Y)
|
||||
3. **18-1** ❌ [needs story file]
|
||||
|
||||
Legend: ✅ ready | ❌ missing | 🔄 done but not tracked
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="validate_stories">
|
||||
**Validate story files have required sections**
|
||||
|
||||
For each story with existing file:
|
||||
1. Read story file
|
||||
2. Check for 12 BMAD sections (Business Context, Acceptance Criteria, Tasks, etc.)
|
||||
3. If invalid: mark for regeneration
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔍 VALIDATING STORY FILES
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
**Note:** Stories with missing files will be auto-created in the execution step.
|
||||
</step>
|
||||
|
||||
<step name="score_complexity">
|
||||
**Score story complexity for pipeline routing**
|
||||
|
||||
For each validated story:
|
||||
|
||||
```bash
|
||||
# Count tasks
|
||||
TASK_COUNT=$(grep -c "^- \[ \]" "$STORY_FILE")
|
||||
|
||||
# Check for risk keywords
|
||||
RISK_KEYWORDS="auth|security|payment|encryption|migration|database|schema"
|
||||
HIGH_RISK=$(grep -ciE "$RISK_KEYWORDS" "$STORY_FILE")
|
||||
```
|
||||
|
||||
**Scoring:**
|
||||
| Criteria | micro | standard | complex |
|
||||
|----------|-------|----------|---------|
|
||||
| Tasks | ≤3 | 4-15 | ≥16 |
|
||||
| Files | ≤5 | ≤30 | >30 |
|
||||
| Risk keywords | 0 | low | high |
|
||||
|
||||
Store `complexity_level` for each story.
|
||||
</step>
|
||||
|
||||
<step name="get_selection">
|
||||
**Get user selection**
|
||||
|
||||
Use AskUserQuestion:
|
||||
```
|
||||
Which stories would you like to implement?
|
||||
|
||||
Options:
|
||||
1. All ready-for-dev stories (X stories)
|
||||
2. Select specific stories by number
|
||||
3. Single story (enter key like "17-10")
|
||||
```
|
||||
|
||||
Validate selection against available stories.
|
||||
</step>
|
||||
|
||||
<step name="choose_mode">
|
||||
**Choose execution mode**
|
||||
|
||||
Use AskUserQuestion:
|
||||
```
|
||||
How should stories be processed?
|
||||
|
||||
Options:
|
||||
1. Sequential (recommended for gap analysis)
|
||||
- Process one-by-one in this session
|
||||
- Verify code → build gaps → check boxes → next
|
||||
|
||||
2. Parallel (for greenfield batch)
|
||||
- Spawn Task agents concurrently
|
||||
- Faster but harder to monitor
|
||||
```
|
||||
|
||||
For sequential: proceed to `execute_sequential`
|
||||
For parallel: proceed to `execute_parallel`
|
||||
</step>
|
||||
|
||||
<step name="execute_sequential" if="mode == sequential">
|
||||
**Sequential Processing**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📦 SEQUENTIAL PROCESSING
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
For each selected story:
|
||||
|
||||
**Step A: Auto-Fix Prerequisites**
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📦 Story {{index}}/{{total}}: {{story_key}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
```bash
|
||||
STORY_FILE="docs/sprint-artifacts/{{story_key}}.md"
|
||||
|
||||
echo "🔍 Checking prerequisites..."
|
||||
```
|
||||
|
||||
**Check 1: Story file exists?**
|
||||
```bash
|
||||
if [ ! -f "$STORY_FILE" ]; then
|
||||
echo "⚠️ Creating greenfield story (no gap analysis)..."
|
||||
fi
|
||||
```
|
||||
|
||||
If missing, auto-create using greenfield workflow:
|
||||
- Use Skill tool: `/bmad_bmm_create-story {{story_key}}`
|
||||
- Verify created: `[ -f "$STORY_FILE" ]`
|
||||
|
||||
```bash
|
||||
echo "✅ Prerequisites satisfied"
|
||||
```
|
||||
|
||||
**Step B: Invoke story-full-pipeline**
|
||||
|
||||
Use story-full-pipeline workflow with:
|
||||
- mode: batch
|
||||
- story_key: {{story_key}}
|
||||
- complexity_level: {{complexity}}
|
||||
|
||||
**Step C: Reconcile Using Completion Artifacts (orchestrator does this directly)**
|
||||
|
||||
After story-full-pipeline completes:
|
||||
|
||||
**C1. Load Fixer completion artifact:**
|
||||
```bash
|
||||
FIXER_COMPLETION="docs/sprint-artifacts/completions/{{story_key}}-fixer.json"
|
||||
|
||||
if [ ! -f "$FIXER_COMPLETION" ]; then
|
||||
echo "❌ WARNING: No completion artifact, using fallback"
|
||||
# Fallback to git diff if completion artifact missing
|
||||
else
|
||||
echo "✅ Using completion artifact"
|
||||
fi
|
||||
```
|
||||
|
||||
Use Read tool on: `docs/sprint-artifacts/completions/{{story_key}}-fixer.json`
|
||||
|
||||
**C2. Parse completion data:**
|
||||
Extract from JSON:
|
||||
- files_created and files_modified arrays
|
||||
- git_commit hash
|
||||
- quality_checks results
|
||||
- tests counts
|
||||
- fixes_applied list
|
||||
|
||||
**C3. Read story file:**
|
||||
Use Read tool: `docs/sprint-artifacts/{{story_key}}.md`
|
||||
|
||||
**C4. Check off completed tasks:**
|
||||
For each task:
|
||||
- Match task to files in completion artifact
|
||||
- If file was created/modified: check off task
|
||||
- Use Edit tool: `"- [ ]"` → `"- [x]"`
|
||||
|
||||
**C5. Fill Dev Agent Record:**
|
||||
Use Edit tool with data from completion.json:
|
||||
```markdown
|
||||
### Dev Agent Record
|
||||
**Implementation Date:** {{timestamp from json}}
|
||||
**Agent Model:** Claude Sonnet 4.5 (multi-agent pipeline)
|
||||
**Git Commit:** {{git_commit from json}}
|
||||
|
||||
**Files:** {{files_created + files_modified from json}}
|
||||
**Tests:** {{tests.passing}}/{{tests.total}} passing ({{tests.coverage}}%)
|
||||
**Issues Fixed:** {{issues_fixed.total}} issues
|
||||
```
|
||||
|
||||
**C6. Verify updates:**
|
||||
```bash
|
||||
CHECKED=$(grep -c "^- \[x\]" "$STORY_FILE")
|
||||
[ "$CHECKED" -gt 0 ] || { echo "❌ Zero tasks checked"; exit 1; }
|
||||
echo "✅ Reconciled: $CHECKED tasks"
|
||||
```
|
||||
|
||||
**C7. Update sprint-status.yaml:**
|
||||
Use Edit tool: `"{{story_key}}: ready-for-dev"` → `"{{story_key}}: done"`
|
||||
|
||||
**Step D: Next story or complete**
|
||||
- If more stories: continue loop
|
||||
- If complete: proceed to `summary`
|
||||
</step>
|
||||
|
||||
<step name="execute_parallel" if="mode == parallel">
|
||||
**Parallel Processing with Wave Pattern**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📦 PARALLEL PROCESSING
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
**Wave Configuration:**
|
||||
- Max concurrent: 3 agents
|
||||
- Wait for wave completion before next wave
|
||||
|
||||
**For each wave:**
|
||||
|
||||
1. Spawn Task agents (up to 3 parallel):
|
||||
```
|
||||
Task({
|
||||
subagent_type: "general-purpose",
|
||||
description: "Implement {{story_key}}",
|
||||
prompt: `
|
||||
Execute story-full-pipeline for story {{story_key}}.
|
||||
|
||||
<execution_context>
|
||||
@story-full-pipeline/workflow.md
|
||||
</execution_context>
|
||||
|
||||
<context>
|
||||
Story: [inline story content]
|
||||
Complexity: {{complexity_level}}
|
||||
</context>
|
||||
|
||||
<success_criteria>
|
||||
- [ ] All pipeline phases complete
|
||||
- [ ] Git commit created
|
||||
- [ ] Return ## AGENT COMPLETE with summary
|
||||
</success_criteria>
|
||||
`
|
||||
})
|
||||
```
|
||||
|
||||
2. Wait for all agents in wave to complete
|
||||
|
||||
3. **Orchestrator reconciles each completed story:**
|
||||
- Get git diff
|
||||
- Check off tasks
|
||||
- Fill Dev Agent Record
|
||||
- Verify updates
|
||||
- Update sprint-status
|
||||
|
||||
4. Continue to next wave or summary
|
||||
</step>
|
||||
|
||||
<step name="summary">
|
||||
**Display Batch Summary**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ BATCH COMPLETE
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
Stories processed: {{total}}
|
||||
Successful: {{success_count}}
|
||||
Failed: {{fail_count}}
|
||||
|
||||
## Results
|
||||
|
||||
| Story | Status | Tasks | Commit |
|
||||
|-------|--------|-------|--------|
|
||||
| 17-10 | ✅ done | 8/8 | abc123 |
|
||||
| 17-11 | ✅ done | 5/5 | def456 |
|
||||
|
||||
## Next Steps
|
||||
- Run /bmad:sprint-status to verify
|
||||
- Review commits with git log
|
||||
```
|
||||
</step>
|
||||
|
||||
</process>
|
||||
|
||||
<failure_handling>
|
||||
**Story file missing:** Skip with warning, continue to next.
|
||||
**Pipeline fails:** Mark story as failed, continue to next.
|
||||
**Reconciliation fails:** Fix with Edit tool, retry verification.
|
||||
**All stories fail:** Report systemic issue, halt batch.
|
||||
</failure_handling>
|
||||
|
||||
<success_criteria>
|
||||
- [ ] All selected stories processed
|
||||
- [ ] Each story has checked tasks (count > 0)
|
||||
- [ ] Each story has Dev Agent Record filled
|
||||
- [ ] Sprint status updated for all stories
|
||||
- [ ] Summary displayed with results
|
||||
</success_criteria>
|
||||
|
|
@ -1,97 +0,0 @@
|
|||
name: batch-super-dev
|
||||
description: "Interactive batch selector for super-dev-pipeline with complexity-based routing. Micro stories get lightweight path, standard stories get full pipeline, complex stories get enhanced validation."
|
||||
author: "BMad"
|
||||
version: "1.3.0"
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
sprint_artifacts: "{config_source}:sprint_artifacts"
|
||||
communication_language: "{config_source}:communication_language"
|
||||
date: system-generated
|
||||
|
||||
# Workflow paths
|
||||
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/batch-super-dev"
|
||||
instructions: "{installed_path}/instructions.md"
|
||||
|
||||
# State management
|
||||
sprint_status: "{sprint_artifacts}/sprint-status.yaml"
|
||||
batch_log: "{sprint_artifacts}/batch-super-dev-{date}.log"
|
||||
|
||||
# Variables
|
||||
filter_by_epic: "" # Optional: Filter stories by epic number (e.g., "3" for only Epic 3 stories)
|
||||
max_stories: 20 # Safety limit - won't process more than this in one batch
|
||||
pause_between_stories: 5 # Seconds to pause between stories (allows monitoring, prevents rate limits)
|
||||
|
||||
# Super-dev-pipeline invocation settings
|
||||
super_dev_settings:
|
||||
mode: "batch" # Always use batch mode for autonomous execution
|
||||
workflow_path: "{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-pipeline"
|
||||
|
||||
# Story validation settings (NEW in v1.2.0)
|
||||
validation:
|
||||
enabled: true # Validate story files before processing
|
||||
auto_create_missing: false # If true, auto-create without prompting (use with caution)
|
||||
auto_regenerate_invalid: false # If true, auto-regenerate without prompting (use with caution)
|
||||
min_sections: 12 # BMAD format requires all 12 sections
|
||||
min_current_state_words: 100 # Current State must have substantial content
|
||||
require_gap_analysis: true # Current State must have ✅/❌ markers
|
||||
backup_before_regenerate: true # Create .backup file before regenerating
|
||||
|
||||
# Story complexity scoring (NEW in v1.3.0)
|
||||
# Routes stories to appropriate pipeline based on complexity
|
||||
complexity:
|
||||
enabled: true
|
||||
thresholds:
|
||||
micro: # Lightweight path: skip gap analysis + code review
|
||||
max_tasks: 3
|
||||
max_files: 5
|
||||
risk_keywords: [] # No high-risk keywords allowed
|
||||
standard: # Normal path: full pipeline
|
||||
max_tasks: 15
|
||||
max_files: 30
|
||||
risk_keywords: ["api", "service", "component", "feature"]
|
||||
complex: # Enhanced path: extra validation, consider splitting
|
||||
min_tasks: 16
|
||||
risk_keywords: ["auth", "security", "migration", "database", "payment", "encryption"]
|
||||
|
||||
# Risk keyword scoring (adds to complexity)
|
||||
risk_weights:
|
||||
high: ["auth", "security", "payment", "encryption", "migration", "database", "schema"]
|
||||
medium: ["api", "integration", "external", "third-party", "cache"]
|
||||
low: ["ui", "style", "config", "docs", "test"]
|
||||
|
||||
# Keyword matching configuration (defines how risk keywords are detected)
|
||||
keyword_matching:
|
||||
case_sensitive: false # "AUTH" matches "auth"
|
||||
require_word_boundaries: true # "auth" won't match "author"
|
||||
match_strategy: "exact" # exact word match required (no stemming)
|
||||
scan_locations:
|
||||
- story_title
|
||||
- task_descriptions
|
||||
- subtask_descriptions
|
||||
# Keyword variants (synonyms that map to canonical forms)
|
||||
variants:
|
||||
auth: ["authentication", "authorize", "authorization", "authz", "authn"]
|
||||
database: ["db", "databases", "datastore"]
|
||||
payment: ["payments", "pay", "billing", "checkout"]
|
||||
migration: ["migrations", "migrate"]
|
||||
security: ["secure", "security"]
|
||||
encryption: ["encrypt", "encrypted", "cipher"]
|
||||
|
||||
# Task counting rules
|
||||
task_counting:
|
||||
method: "top_level_only" # Only count [ ] at task level, not subtasks
|
||||
# Options: "top_level_only", "include_subtasks", "weighted"
|
||||
# Example:
|
||||
# - [ ] Parent task <- counts as 1
|
||||
# - [ ] Subtask 1 <- ignored
|
||||
# - [ ] Subtask 2 <- ignored
|
||||
|
||||
# Execution settings
|
||||
execution:
|
||||
continue_on_failure: true # Keep processing remaining stories if one fails
|
||||
display_progress: true # Show running summary after each story
|
||||
save_state: true # Save progress to resume if interrupted
|
||||
|
||||
standalone: true
|
||||
|
|
@ -1,286 +0,0 @@
|
|||
# Create Story with Gap Analysis v3.0 - Verified Story Generation
|
||||
|
||||
<purpose>
|
||||
Regenerate story with VERIFIED codebase gap analysis.
|
||||
Uses Glob/Read tools to determine what actually exists vs what's missing.
|
||||
Checkboxes reflect reality, not guesses.
|
||||
</purpose>
|
||||
|
||||
<philosophy>
|
||||
**Truth from Codebase, Not Assumptions**
|
||||
|
||||
1. Scan codebase for actual implementations
|
||||
2. Verify files exist, check for stubs/TODOs
|
||||
3. Check test coverage
|
||||
4. Generate story with checkboxes matching reality
|
||||
5. No guessing—every checkbox has evidence
|
||||
</philosophy>
|
||||
|
||||
<config>
|
||||
name: create-story-with-gap-analysis
|
||||
version: 3.0.0
|
||||
|
||||
verification_status:
|
||||
verified: "[x]" # File exists, real implementation, tests exist
|
||||
partial: "[~]" # File exists but stub/TODO or no tests
|
||||
missing: "[ ]" # File does not exist
|
||||
|
||||
defaults:
|
||||
update_sprint_status: true
|
||||
create_report: false
|
||||
</config>
|
||||
|
||||
<execution_context>
|
||||
@patterns/verification.md
|
||||
@patterns/hospital-grade.md
|
||||
</execution_context>
|
||||
|
||||
<process>
|
||||
|
||||
<step name="initialize" priority="first">
|
||||
**Identify story and load context**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔍 STORY REGENERATION WITH GAP ANALYSIS
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
**Ask user for story:**
|
||||
```
|
||||
Which story should I regenerate with gap analysis?
|
||||
|
||||
Provide:
|
||||
- Story number (e.g., "1.9" or "1-9")
|
||||
- OR story filename
|
||||
|
||||
Your choice:
|
||||
```
|
||||
|
||||
**Parse input:**
|
||||
- Extract epic_num, story_num
|
||||
- Locate story file
|
||||
|
||||
**Load existing story:**
|
||||
```bash
|
||||
Read: {{story_dir}}/story-{{epic_num}}.{{story_num}}.md
|
||||
```
|
||||
|
||||
Extract:
|
||||
- Story title
|
||||
- User story (As a... I want... So that...)
|
||||
- Acceptance criteria
|
||||
- Tasks
|
||||
- Dev Notes
|
||||
|
||||
**Load epic context:**
|
||||
```bash
|
||||
Read: {{planning_artifacts}}/epics.md
|
||||
```
|
||||
|
||||
Extract:
|
||||
- Epic business objectives
|
||||
- Technical constraints
|
||||
- Dependencies
|
||||
|
||||
**Determine target directories:**
|
||||
From story title/requirements, identify which directories to scan.
|
||||
|
||||
```
|
||||
✅ Story Context Loaded
|
||||
|
||||
Story: {{epic_num}}.{{story_num}} - {{title}}
|
||||
Target directories:
|
||||
{{#each directories}}
|
||||
- {{this}}
|
||||
{{/each}}
|
||||
|
||||
[C] Continue to Codebase Scan
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="codebase_scan">
|
||||
**VERIFY what code actually exists**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔍 CODEBASE SCAN
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
**For each target directory:**
|
||||
|
||||
1. **List all source files:**
|
||||
```bash
|
||||
Glob: {{target_dir}}/src/**/*.ts
|
||||
Glob: {{target_dir}}/src/**/*.tsx
|
||||
```
|
||||
|
||||
2. **Check for specific required components:**
|
||||
Based on story ACs, check if required files exist:
|
||||
```bash
|
||||
Glob: {{target_dir}}/src/auth/controllers/*oauth*.ts
|
||||
# Result: ✅ EXISTS or ❌ MISSING
|
||||
```
|
||||
|
||||
3. **Verify implementation depth:**
|
||||
For files that exist, check quality:
|
||||
```bash
|
||||
Read: {{file}}
|
||||
|
||||
# Check for stubs
|
||||
Grep: "MOCK|TODO|FIXME|Not implemented" {{file}}
|
||||
# If found: ⚠️ STUB
|
||||
```
|
||||
|
||||
4. **Check dependencies:**
|
||||
```bash
|
||||
Read: {{target_dir}}/package.json
|
||||
|
||||
# Required: axios - Found? ✅/❌
|
||||
# Required: @aws-sdk/client-secrets-manager - Found? ✅/❌
|
||||
```
|
||||
|
||||
5. **Check test coverage:**
|
||||
```bash
|
||||
Glob: {{target_dir}}/src/**/*.spec.ts
|
||||
Glob: {{target_dir}}/test/**/*.test.ts
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="generate_gap_analysis">
|
||||
**Create verified gap analysis**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📊 GAP ANALYSIS RESULTS
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
✅ IMPLEMENTED (Verified):
|
||||
{{#each implemented}}
|
||||
{{@index}}. **{{name}}**
|
||||
- File: {{file}} ✅ EXISTS
|
||||
- Status: {{status}}
|
||||
- Tests: {{test_count}} tests
|
||||
{{/each}}
|
||||
|
||||
❌ MISSING (Verified):
|
||||
{{#each missing}}
|
||||
{{@index}}. **{{name}}**
|
||||
- Expected: {{expected_file}} ❌ NOT FOUND
|
||||
- Needed for: {{requirement}}
|
||||
{{/each}}
|
||||
|
||||
⚠️ PARTIAL (Stub/Incomplete):
|
||||
{{#each partial}}
|
||||
{{@index}}. **{{name}}**
|
||||
- File: {{file}} ✅ EXISTS
|
||||
- Issue: {{issue}}
|
||||
{{/each}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="generate_story">
|
||||
**Generate story with verified checkboxes**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📝 GENERATING STORY
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
Use story template with:
|
||||
- `[x]` for VERIFIED items (evidence: file exists, not stub, has tests)
|
||||
- `[~]` for PARTIAL items (evidence: file exists but stub/no tests)
|
||||
- `[ ]` for MISSING items (evidence: file not found)
|
||||
|
||||
**Write story file:**
|
||||
```bash
|
||||
Write: {{story_dir}}/story-{{epic_num}}.{{story_num}}.md
|
||||
```
|
||||
|
||||
**Validate generated story:**
|
||||
```bash
|
||||
# Check 7 sections exist
|
||||
grep "^## " {{story_file}} | wc -l
|
||||
# Should be 7
|
||||
|
||||
# Check gap analysis section exists
|
||||
grep "Gap Analysis" {{story_file}}
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="update_sprint_status" if="update_sprint_status">
|
||||
**Update sprint-status.yaml**
|
||||
|
||||
```bash
|
||||
Read: {{sprint_status}}
|
||||
|
||||
# Update story status to "ready-for-dev" if was "backlog"
|
||||
# Preserve comments and structure
|
||||
|
||||
Write: {{sprint_status}}
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="final_summary">
|
||||
**Report completion**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ STORY REGENERATED WITH GAP ANALYSIS
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Story: {{epic_num}}.{{story_num}} - {{title}}
|
||||
File: {{story_file}}
|
||||
Sections: 7/7 ✅
|
||||
|
||||
Gap Analysis Summary:
|
||||
- ✅ {{implemented_count}} components VERIFIED complete
|
||||
- ❌ {{missing_count}} components VERIFIED missing
|
||||
- ⚠️ {{partial_count}} components PARTIAL (stub/no tests)
|
||||
|
||||
Checkboxes reflect VERIFIED codebase state.
|
||||
|
||||
Next Steps:
|
||||
1. Review story for accuracy
|
||||
2. Use /dev-story to implement missing components
|
||||
3. Story provides complete context for implementation
|
||||
|
||||
[N] Regenerate next story
|
||||
[Q] Quit
|
||||
[R] Review generated story
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
**If [N]:** Loop back to initialize with next story.
|
||||
**If [R]:** Display story content, then show menu.
|
||||
</step>
|
||||
|
||||
</process>
|
||||
|
||||
<examples>
|
||||
```bash
|
||||
# Regenerate specific story
|
||||
/create-story-with-gap-analysis
|
||||
> Which story? 1.9
|
||||
|
||||
# With explicit story file
|
||||
/create-story-with-gap-analysis story_file=docs/sprint-artifacts/story-1.9.md
|
||||
```
|
||||
</examples>
|
||||
|
||||
<failure_handling>
|
||||
**Story not found:** HALT with clear error.
|
||||
**Target directory not found:** Warn, scan available directories.
|
||||
**Glob/Read fails:** Log warning, count as MISSING.
|
||||
**Write fails:** Report error, display generated content.
|
||||
</failure_handling>
|
||||
|
||||
<success_criteria>
|
||||
- [ ] Codebase scanned for all story requirements
|
||||
- [ ] Gap analysis generated with evidence
|
||||
- [ ] Story written with verified checkboxes
|
||||
- [ ] 7 sections present
|
||||
- [ ] Sprint status updated (if enabled)
|
||||
</success_criteria>
|
||||
|
|
@ -1,32 +0,0 @@
|
|||
name: create-story-with-gap-analysis
|
||||
description: "Create/regenerate story with SYSTEMATIC codebase gap analysis using verified file scanning (Glob/Read tools)"
|
||||
author: "Jonah Schulte"
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
user_name: "{config_source}:user_name"
|
||||
communication_language: "{config_source}:communication_language"
|
||||
date: system-generated
|
||||
planning_artifacts: "{config_source}:planning_artifacts"
|
||||
implementation_artifacts: "{config_source}:implementation_artifacts"
|
||||
output_folder: "{implementation_artifacts}"
|
||||
story_dir: "{implementation_artifacts}"
|
||||
|
||||
# Workflow components
|
||||
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/create-story-with-gap-analysis"
|
||||
instructions: "{installed_path}/workflow.md"
|
||||
|
||||
# Variables
|
||||
variables:
|
||||
sprint_status: "{implementation_artifacts}/sprint-status.yaml"
|
||||
epics_file: "{planning_artifacts}/epics.md"
|
||||
prd_file: "{planning_artifacts}/prd.md"
|
||||
|
||||
# Project context
|
||||
project_context: "**/project-context.md"
|
||||
|
||||
default_output_file: "{story_dir}/{{story_key}}.md"
|
||||
|
||||
standalone: true
|
||||
|
||||
web_bundle: false
|
||||
|
|
@ -1,270 +0,0 @@
|
|||
# Create Story v3.0 - Greenfield Story Generation
|
||||
|
||||
<purpose>
|
||||
Generate story for net-new features with zero existing implementation.
|
||||
No codebase scanning—all tasks assumed incomplete (greenfield).
|
||||
Focused on clear requirements and implementation guidance.
|
||||
</purpose>
|
||||
|
||||
<philosophy>
|
||||
**Fast Story Generation for New Features**
|
||||
|
||||
1. Load PRD, epic, and architecture context
|
||||
2. Generate clear user story with acceptance criteria
|
||||
3. All tasks marked incomplete (greenfield assumption)
|
||||
4. No codebase scanning—saves time for net-new work
|
||||
5. Ready for immediate implementation
|
||||
</philosophy>
|
||||
|
||||
<config>
|
||||
name: create-story
|
||||
version: 3.0.0
|
||||
|
||||
task_status:
|
||||
incomplete: "[ ]" # All tasks for greenfield stories
|
||||
|
||||
defaults:
|
||||
update_sprint_status: true
|
||||
create_report: false
|
||||
</config>
|
||||
|
||||
<execution_context>
|
||||
@patterns/verification.md
|
||||
@patterns/hospital-grade.md
|
||||
</execution_context>
|
||||
|
||||
<process>
|
||||
|
||||
<step name="initialize" priority="first">
|
||||
**Identify story and load context**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📝 GREENFIELD STORY GENERATION
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
**Ask user for story:**
|
||||
```
|
||||
Which story should I create?
|
||||
|
||||
Provide:
|
||||
- Story number (e.g., "1.9" or "1-9")
|
||||
- OR epic number and story description
|
||||
|
||||
Your choice:
|
||||
```
|
||||
|
||||
**Parse input:**
|
||||
- Extract epic_num, story_num
|
||||
- Determine story file path
|
||||
|
||||
**Load epic context:**
|
||||
```bash
|
||||
Read: {{planning_artifacts}}/epics.md
|
||||
```
|
||||
|
||||
Extract:
|
||||
- Epic business objectives
|
||||
- Technical constraints
|
||||
- Dependencies
|
||||
|
||||
**Load architecture context (if exists):**
|
||||
```bash
|
||||
Read: {{planning_artifacts}}/architecture.md
|
||||
```
|
||||
|
||||
Extract:
|
||||
- Technical architecture patterns
|
||||
- Technology stack
|
||||
- Integration patterns
|
||||
|
||||
**Load PRD context:**
|
||||
```bash
|
||||
Read: {{planning_artifacts}}/prd.md
|
||||
```
|
||||
|
||||
Extract relevant sections:
|
||||
- User personas
|
||||
- Feature requirements
|
||||
- Non-functional requirements
|
||||
|
||||
```
|
||||
✅ Context Loaded
|
||||
|
||||
Story: {{epic_num}}.{{story_num}}
|
||||
Epic: {{epic_title}}
|
||||
Architecture: {{architecture_notes}}
|
||||
|
||||
[C] Continue to Story Generation
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="generate_story">
|
||||
**Generate greenfield story**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📝 GENERATING STORY
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
**Story structure:**
|
||||
All tasks marked `[ ]` (incomplete) since this is greenfield.
|
||||
|
||||
**Write story file:**
|
||||
```bash
|
||||
Write: {{story_dir}}/story-{{epic_num}}.{{story_num}}.md
|
||||
```
|
||||
|
||||
**Story template:**
|
||||
```markdown
|
||||
# Story {{epic_num}}.{{story_num}}: {{title}}
|
||||
|
||||
## 📊 Metadata
|
||||
- **Epic**: {{epic_num}} - {{epic_title}}
|
||||
- **Priority**: {{priority}}
|
||||
- **Estimate**: {{estimate}}
|
||||
- **Dependencies**: {{dependencies}}
|
||||
- **Created**: {{date}}
|
||||
|
||||
## 📖 User Story
|
||||
As a {{persona}}
|
||||
I want {{capability}}
|
||||
So that {{benefit}}
|
||||
|
||||
## ✅ Acceptance Criteria
|
||||
1. **{{criterion_1}}**
|
||||
- {{detail_1a}}
|
||||
- {{detail_1b}}
|
||||
|
||||
2. **{{criterion_2}}**
|
||||
- {{detail_2a}}
|
||||
- {{detail_2b}}
|
||||
|
||||
## 🔨 Implementation Tasks
|
||||
### Frontend
|
||||
- [ ] {{frontend_task_1}}
|
||||
- [ ] {{frontend_task_2}}
|
||||
|
||||
### Backend
|
||||
- [ ] {{backend_task_1}}
|
||||
- [ ] {{backend_task_2}}
|
||||
|
||||
### Testing
|
||||
- [ ] {{testing_task_1}}
|
||||
- [ ] {{testing_task_2}}
|
||||
|
||||
## 📋 Technical Notes
|
||||
### Architecture
|
||||
{{architecture_guidance}}
|
||||
|
||||
### Dependencies
|
||||
{{dependency_notes}}
|
||||
|
||||
### API Contracts
|
||||
{{api_contract_notes}}
|
||||
|
||||
## 🧪 Testing Strategy
|
||||
### Unit Tests
|
||||
{{unit_test_strategy}}
|
||||
|
||||
### Integration Tests
|
||||
{{integration_test_strategy}}
|
||||
|
||||
### E2E Tests
|
||||
{{e2e_test_strategy}}
|
||||
|
||||
## 🎯 Definition of Done
|
||||
- [ ] All acceptance criteria met
|
||||
- [ ] Unit tests written and passing
|
||||
- [ ] Integration tests written and passing
|
||||
- [ ] Code reviewed and approved
|
||||
- [ ] Documentation updated
|
||||
- [ ] Deployed to staging environment
|
||||
- [ ] Product owner acceptance
|
||||
|
||||
## 📝 Dev Notes
|
||||
{{additional_context}}
|
||||
```
|
||||
|
||||
**Validate generated story:**
|
||||
```bash
|
||||
# Check 7 sections exist
|
||||
grep "^## " {{story_file}} | wc -l
|
||||
# Should be 7 or more
|
||||
|
||||
# Check metadata section exists
|
||||
grep "## 📊 Metadata" {{story_file}}
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="update_sprint_status" if="update_sprint_status">
|
||||
**Update sprint-status.yaml**
|
||||
|
||||
```bash
|
||||
Read: {{sprint_status}}
|
||||
|
||||
# Add story to sprint status with "ready-for-dev" status
|
||||
# Preserve comments and structure
|
||||
|
||||
Write: {{sprint_status}}
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="final_summary">
|
||||
**Report completion**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ GREENFIELD STORY CREATED
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Story: {{epic_num}}.{{story_num}} - {{title}}
|
||||
File: {{story_file}}
|
||||
Sections: 7/7 ✅
|
||||
|
||||
All tasks marked incomplete (greenfield).
|
||||
Ready for implementation.
|
||||
|
||||
Next Steps:
|
||||
1. Review story for accuracy
|
||||
2. Use /story-dev-only or /story-full-pipeline to implement
|
||||
3. All context loaded and ready
|
||||
|
||||
[N] Create next story
|
||||
[Q] Quit
|
||||
[R] Review generated story
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
**If [N]:** Loop back to initialize with next story.
|
||||
**If [R]:** Display story content, then show menu.
|
||||
</step>
|
||||
|
||||
</process>
|
||||
|
||||
<examples>
|
||||
```bash
|
||||
# Create new greenfield story
|
||||
/create-story
|
||||
> Which story? 20.1
|
||||
|
||||
# With explicit story number
|
||||
/create-story epic=20 story=1
|
||||
```
|
||||
</examples>
|
||||
|
||||
<failure_handling>
|
||||
**Epic not found:** HALT with clear error.
|
||||
**PRD not found:** Warn but continue with available context.
|
||||
**Architecture doc not found:** Warn but continue with epic context.
|
||||
**Write fails:** Report error, display generated content.
|
||||
</failure_handling>
|
||||
|
||||
<success_criteria>
|
||||
- [ ] Epic and PRD context loaded
|
||||
- [ ] Story generated with all 7+ sections
|
||||
- [ ] All tasks marked incomplete (greenfield)
|
||||
- [ ] Story written to correct path
|
||||
- [ ] Sprint status updated (if enabled)
|
||||
</success_criteria>
|
||||
|
|
@ -1,33 +0,0 @@
|
|||
name: create-story
|
||||
description: "Create story for greenfield features with zero existing implementation (no codebase scanning)"
|
||||
author: "Jonah Schulte"
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
user_name: "{config_source}:user_name"
|
||||
communication_language: "{config_source}:communication_language"
|
||||
date: system-generated
|
||||
planning_artifacts: "{config_source}:planning_artifacts"
|
||||
implementation_artifacts: "{config_source}:implementation_artifacts"
|
||||
output_folder: "{implementation_artifacts}"
|
||||
story_dir: "{implementation_artifacts}"
|
||||
|
||||
# Workflow components
|
||||
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/create-story"
|
||||
instructions: "{installed_path}/workflow.md"
|
||||
|
||||
# Variables
|
||||
variables:
|
||||
sprint_status: "{implementation_artifacts}/sprint-status.yaml"
|
||||
epics_file: "{planning_artifacts}/epics.md"
|
||||
prd_file: "{planning_artifacts}/prd.md"
|
||||
architecture_file: "{planning_artifacts}/architecture.md"
|
||||
|
||||
# Project context
|
||||
project_context: "**/project-context.md"
|
||||
|
||||
default_output_file: "{story_dir}/{{story_key}}.md"
|
||||
|
||||
standalone: true
|
||||
|
||||
web_bundle: false
|
||||
|
|
@ -1,278 +0,0 @@
|
|||
# Detect Ghost Features v3.0 - Reverse Gap Analysis
|
||||
|
||||
<purpose>
|
||||
Find undocumented code (components, APIs, services, tables) that exist in codebase
|
||||
but aren't tracked in any story. "Who you gonna call?" - Ghost Features.
|
||||
</purpose>
|
||||
|
||||
<philosophy>
|
||||
**Reverse Gap Analysis**
|
||||
|
||||
Normal gap analysis: story says X should exist → does it?
|
||||
Reverse gap analysis: X exists in code → is it documented?
|
||||
|
||||
Undocumented features become maintenance nightmares.
|
||||
Find them, create backfill stories, restore traceability.
|
||||
</philosophy>
|
||||
|
||||
<config>
|
||||
name: detect-ghost-features
|
||||
version: 3.0.0
|
||||
|
||||
scan_scope:
|
||||
epic: "Filter to specific epic number"
|
||||
sprint: "All stories in sprint-status.yaml"
|
||||
codebase: "All stories in sprint-artifacts"
|
||||
|
||||
scan_for:
|
||||
components: true
|
||||
api_endpoints: true
|
||||
database_tables: true
|
||||
services: true
|
||||
|
||||
severity:
|
||||
critical: "APIs, auth, payment (undocumented = high risk)"
|
||||
high: "Components, DB tables, services"
|
||||
medium: "Utilities, helpers"
|
||||
low: "Config files, constants"
|
||||
|
||||
defaults:
|
||||
create_backfill_stories: false
|
||||
auto_create: false
|
||||
add_to_sprint_status: true
|
||||
create_report: true
|
||||
</config>
|
||||
|
||||
<execution_context>
|
||||
@patterns/hospital-grade.md
|
||||
</execution_context>
|
||||
|
||||
<process>
|
||||
|
||||
<step name="load_stories" priority="first">
|
||||
**Load documented artifacts from stories**
|
||||
|
||||
Based on scan_scope (epic/sprint/codebase):
|
||||
|
||||
```bash
|
||||
# Get all story files
|
||||
STORIES=$(ls docs/sprint-artifacts/*.md | grep -v "epic-")
|
||||
```
|
||||
|
||||
For each story:
|
||||
1. Read story file
|
||||
2. Extract documented artifacts:
|
||||
- File List (all paths mentioned)
|
||||
- Tasks (file/component/service names)
|
||||
- ACs (features/functionality)
|
||||
3. Store in: documented_artifacts[story_key]
|
||||
</step>
|
||||
|
||||
<step name="scan_codebase">
|
||||
**Scan codebase for actual implementations**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
👻 SCANNING FOR GHOST FEATURES
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
**Components:**
|
||||
```bash
|
||||
# Find React/Vue/Angular components
|
||||
find . -name "*.tsx" -o -name "*.jsx" | xargs grep -l "export.*function\|export.*const"
|
||||
```
|
||||
|
||||
**API Endpoints:**
|
||||
```bash
|
||||
# Find Next.js/Express routes
|
||||
find . -path "*/api/*" -name "*.ts"
|
||||
grep -r "export.*GET\|export.*POST\|router\.\(get\|post\)" .
|
||||
```
|
||||
|
||||
**Database Tables:**
|
||||
```bash
|
||||
# Find Prisma/TypeORM models
|
||||
grep -r "^model " prisma/schema.prisma
|
||||
find . -name "*.entity.ts"
|
||||
```
|
||||
|
||||
**Services:**
|
||||
```bash
|
||||
find . -name "*.service.ts" -o -name "*Service.ts"
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="cross_reference">
|
||||
**Compare codebase artifacts to story documentation**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔍 CROSS-REFERENCING
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
For each codebase artifact:
|
||||
1. Search all stories for mentions of:
|
||||
- Component/file name
|
||||
- File path
|
||||
- Feature description
|
||||
2. If NO stories mention it → ORPHAN (ghost feature)
|
||||
3. If stories mention it → Documented
|
||||
|
||||
Track orphans with:
|
||||
- type (component/api/db/service)
|
||||
- name and path
|
||||
- severity
|
||||
- inferred purpose
|
||||
</step>
|
||||
|
||||
<step name="categorize_orphans">
|
||||
**Analyze and prioritize ghost features**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
👻 GHOST FEATURES DETECTED
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Total Orphans: {{count}}
|
||||
|
||||
By Severity:
|
||||
- 🔴 CRITICAL: {{critical}} (APIs, security)
|
||||
- 🟠 HIGH: {{high}} (Components, DB, services)
|
||||
- 🟡 MEDIUM: {{medium}} (Utilities)
|
||||
- 🟢 LOW: {{low}} (Config)
|
||||
|
||||
By Type:
|
||||
- Components: {{components}}
|
||||
- API Endpoints: {{apis}}
|
||||
- Database Tables: {{tables}}
|
||||
- Services: {{services}}
|
||||
|
||||
Documentation Coverage: {{documented_pct}}%
|
||||
Orphan Rate: {{orphan_pct}}%
|
||||
|
||||
{{#if orphan_pct > 20}}
|
||||
⚠️ HIGH ORPHAN RATE - Over 20% undocumented!
|
||||
{{/if}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="create_backfill_stories" if="create_backfill_stories">
|
||||
**Generate stories for orphaned features**
|
||||
|
||||
For each orphan (prioritized by severity):
|
||||
|
||||
1. **Analyze orphan** - Read implementation, find tests, understand purpose
|
||||
2. **Generate story draft:**
|
||||
|
||||
```markdown
|
||||
# Story: Document existing {{name}}
|
||||
|
||||
**Type:** BACKFILL (documenting existing code)
|
||||
|
||||
## Business Context
|
||||
{{inferred_from_code}}
|
||||
|
||||
## Current State
|
||||
✅ Implementation EXISTS: {{file}}
|
||||
{{#if has_tests}}✅ Tests exist{{else}}❌ No tests{{/if}}
|
||||
|
||||
## Acceptance Criteria
|
||||
{{inferred_acs}}
|
||||
|
||||
## Tasks
|
||||
- [x] {{name}} implementation (ALREADY EXISTS)
|
||||
{{#if !has_tests}}- [ ] Add tests{{/if}}
|
||||
- [ ] Verify functionality
|
||||
- [ ] Assign to epic
|
||||
```
|
||||
|
||||
3. **Ask user** (unless auto_create):
|
||||
- [Y] Create story
|
||||
- [A] Auto-create all remaining
|
||||
- [S] Skip this orphan
|
||||
- [H] Halt
|
||||
|
||||
4. **Write story file:** `docs/sprint-artifacts/backfill-{{type}}-{{name}}.md`
|
||||
|
||||
5. **Update sprint-status.yaml** (if enabled)
|
||||
</step>
|
||||
|
||||
<step name="suggest_organization" if="backfill_stories_created">
|
||||
**Recommend epic assignment**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📊 BACKFILL ORGANIZATION
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
Options:
|
||||
[A] Create Epic-Backfill (recommended)
|
||||
- Single epic for all backfill stories
|
||||
- Clear separation from feature work
|
||||
|
||||
[B] Distribute to existing epics
|
||||
- Add each to its logical epic
|
||||
|
||||
[C] Leave in backlog
|
||||
- Manual assignment later
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="generate_report" if="create_report">
|
||||
**Write comprehensive ghost features report**
|
||||
|
||||
Write to: `docs/sprint-artifacts/ghost-features-report-{{timestamp}}.md`
|
||||
|
||||
Include:
|
||||
- Executive summary
|
||||
- Full orphan list by severity
|
||||
- Backfill stories created
|
||||
- Recommendations
|
||||
- Scan methodology
|
||||
</step>
|
||||
|
||||
<step name="final_summary">
|
||||
**Display results and next steps**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ GHOST FEATURE DETECTION COMPLETE
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Orphans Found: {{orphan_count}}
|
||||
Backfill Stories Created: {{backfill_count}}
|
||||
Documentation Coverage: {{documented_pct}}%
|
||||
|
||||
{{#if orphan_count == 0}}
|
||||
✅ All code is documented in stories!
|
||||
{{else}}
|
||||
Next Steps:
|
||||
1. Review backfill stories for accuracy
|
||||
2. Assign to epics
|
||||
3. Add tests/docs for orphans
|
||||
4. Run revalidation to verify
|
||||
{{/if}}
|
||||
|
||||
💡 Pro Tip: Run this periodically to catch
|
||||
vibe-coded features before they become
|
||||
maintenance nightmares.
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
</step>
|
||||
|
||||
</process>
|
||||
|
||||
<failure_handling>
|
||||
**No stories found:** Check scan_scope, verify sprint-artifacts exists.
|
||||
**Scan fails:** Report which scan type failed, continue others.
|
||||
**Backfill creation fails:** Skip, continue to next orphan.
|
||||
</failure_handling>
|
||||
|
||||
<success_criteria>
|
||||
- [ ] All artifact types scanned
|
||||
- [ ] Cross-reference completed
|
||||
- [ ] Orphans categorized by severity
|
||||
- [ ] Backfill stories created (if enabled)
|
||||
- [ ] Report generated
|
||||
</success_criteria>
|
||||
|
|
@ -1,56 +0,0 @@
|
|||
name: detect-ghost-features
|
||||
description: "Reverse gap analysis: Find functionality in codebase that has no corresponding story (vibe-coded or undocumented features). Propose backfill stories."
|
||||
author: "BMad"
|
||||
version: "1.0.0" # Who you gonna call? GHOST-FEATURE-BUSTERS! 👻
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
sprint_artifacts: "{output_folder}/sprint-artifacts"
|
||||
sprint_status: "{output_folder}/sprint-status.yaml"
|
||||
|
||||
# Input parameters
|
||||
epic_number: "{epic_number}" # Optional: Limit to specific epic (e.g., "2")
|
||||
scan_scope: "sprint" # "sprint" | "epic" | "codebase"
|
||||
create_backfill_stories: true # Propose backfill stories for orphans
|
||||
|
||||
# Detection settings
|
||||
detection:
|
||||
scan_for:
|
||||
components: true # React/Vue/Angular components
|
||||
api_endpoints: true # Routes, controllers, handlers
|
||||
database_tables: true # Migrations, schema
|
||||
services: true # Services, modules, utilities
|
||||
models: true # Data models, entities
|
||||
ui_pages: true # Pages, screens, views
|
||||
|
||||
ignore_patterns:
|
||||
- "**/node_modules/**"
|
||||
- "**/dist/**"
|
||||
- "**/build/**"
|
||||
- "**/*.test.*"
|
||||
- "**/*.spec.*"
|
||||
- "**/migrations/**" # Migrations are referenced collectively, not per-story
|
||||
|
||||
# What counts as "documented"?
|
||||
documented_if:
|
||||
mentioned_in_file_list: true # Story File List mentions it
|
||||
mentioned_in_tasks: true # Task description mentions it
|
||||
mentioned_in_acs: true # AC mentions the feature
|
||||
file_committed_in_story_commit: true # Git history shows it in story commit
|
||||
|
||||
# Backfill story settings
|
||||
backfill:
|
||||
auto_create: false # Require confirmation before creating each
|
||||
add_to_sprint_status: true # Add to sprint as "backlog"
|
||||
mark_as_backfill: true # Add note: "Backfill story documenting existing code"
|
||||
run_gap_analysis: false # Don't run gap (we know it exists)
|
||||
estimate_effort: true # Estimate how complex the feature is
|
||||
|
||||
# Output settings
|
||||
output:
|
||||
create_report: true # Generate orphaned-features-report.md
|
||||
group_by_category: true # Group by component/api/db/etc
|
||||
suggest_epic_assignment: true # Suggest which epic orphans belong to
|
||||
|
||||
standalone: true
|
||||
|
|
@ -1,246 +0,0 @@
|
|||
# Gap Analysis v3.0 - Verify Story Tasks Against Codebase
|
||||
|
||||
<purpose>
|
||||
Validate story checkbox claims against actual codebase reality.
|
||||
Find false positives (checked but not done) and false negatives (done but unchecked).
|
||||
Interactive workflow with options to update, audit, or review.
|
||||
</purpose>
|
||||
|
||||
<philosophy>
|
||||
**Evidence-Based Verification**
|
||||
|
||||
Checkboxes lie. Code doesn't.
|
||||
- Search codebase for implementation evidence
|
||||
- Check for stubs, TODOs, empty functions
|
||||
- Verify tests exist for claimed features
|
||||
- Report accuracy of story completion claims
|
||||
</philosophy>
|
||||
|
||||
<config>
|
||||
name: gap-analysis
|
||||
version: 3.0.0
|
||||
|
||||
defaults:
|
||||
auto_update: false
|
||||
create_audit_report: true
|
||||
strict_mode: false # If true, stubs count as incomplete
|
||||
|
||||
output:
|
||||
update_story: "Modify checkbox state to match reality"
|
||||
audit_report: "Generate detailed gap analysis document"
|
||||
no_changes: "Display results only"
|
||||
</config>
|
||||
|
||||
<execution_context>
|
||||
@patterns/verification.md
|
||||
@patterns/hospital-grade.md
|
||||
</execution_context>
|
||||
|
||||
<process>
|
||||
|
||||
<step name="load_story" priority="first">
|
||||
**Load and parse story file**
|
||||
|
||||
```bash
|
||||
STORY_FILE="{{story_file}}"
|
||||
[ -f "$STORY_FILE" ] || { echo "❌ story_file required"; exit 1; }
|
||||
```
|
||||
|
||||
Use Read tool on story file. Extract:
|
||||
- All `- [ ]` and `- [x]` items
|
||||
- File references from Dev Agent Record
|
||||
- Task descriptions with expected artifacts
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔍 GAP ANALYSIS: {{story_key}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Tasks: {{total_tasks}}
|
||||
Currently checked: {{checked_count}}
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="verify_each_task">
|
||||
**Verify each task against codebase**
|
||||
|
||||
For each task item:
|
||||
|
||||
1. **Extract artifacts** - File names, component names, function names
|
||||
2. **Search codebase:**
|
||||
```bash
|
||||
# Check file exists
|
||||
Glob: {{expected_file}}
|
||||
|
||||
# Check function/component exists
|
||||
Grep: "{{function_or_component_name}}"
|
||||
```
|
||||
|
||||
3. **If file exists, check quality:**
|
||||
```bash
|
||||
# Check for stubs
|
||||
Grep: "TODO|FIXME|Not implemented|throw new Error" {{file}}
|
||||
|
||||
# Check for tests
|
||||
Glob: {{file_base}}.test.* OR {{file_base}}.spec.*
|
||||
```
|
||||
|
||||
4. **Determine status:**
|
||||
- **VERIFIED:** File exists, not a stub, tests exist
|
||||
- **PARTIAL:** File exists but stub/TODO or no tests
|
||||
- **MISSING:** File does not exist
|
||||
</step>
|
||||
|
||||
<step name="calculate_accuracy">
|
||||
**Compare claimed vs actual**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📊 GAP ANALYSIS RESULTS
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Tasks analyzed: {{total}}
|
||||
|
||||
By Status:
|
||||
- ✅ Verified Complete: {{verified}} ({{verified_pct}}%)
|
||||
- ⚠️ Partial: {{partial}} ({{partial_pct}}%)
|
||||
- ❌ Missing: {{missing}} ({{missing_pct}}%)
|
||||
|
||||
Accuracy Analysis:
|
||||
- Checked & Verified: {{correct_checked}}
|
||||
- Checked but MISSING: {{false_positives}} ← FALSE POSITIVES
|
||||
- Unchecked but DONE: {{false_negatives}} ← FALSE NEGATIVES
|
||||
|
||||
Checkbox Accuracy: {{accuracy}}%
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
**If false positives found:**
|
||||
```
|
||||
⚠️ FALSE POSITIVES DETECTED
|
||||
The following tasks are marked done but code is missing:
|
||||
|
||||
{{#each false_positives}}
|
||||
- [ ] {{task}} — Expected: {{expected_file}} — ❌ NOT FOUND
|
||||
{{/each}}
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="present_options">
|
||||
**Ask user how to proceed**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📋 OPTIONS
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
[U] Update - Fix checkboxes to match reality
|
||||
[A] Audit Report - Generate detailed report file
|
||||
[N] No Changes - Display only (already done)
|
||||
[R] Review Details - Show full evidence for each task
|
||||
|
||||
Your choice:
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="option_update" if="choice == U">
|
||||
**Update story file checkboxes**
|
||||
|
||||
For false positives:
|
||||
- Change `[x]` to `[ ]` for tasks with missing code
|
||||
|
||||
For false negatives:
|
||||
- Change `[ ]` to `[x]` for tasks with verified code
|
||||
|
||||
Use Edit tool to make changes.
|
||||
|
||||
```
|
||||
✅ Story checkboxes updated
|
||||
- {{fp_count}} false positives unchecked
|
||||
- {{fn_count}} false negatives checked
|
||||
- New completion: {{new_pct}}%
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="option_audit" if="choice == A">
|
||||
**Generate audit report**
|
||||
|
||||
Write to: `{{story_dir}}/gap-analysis-{{story_key}}-{{timestamp}}.md`
|
||||
|
||||
Include:
|
||||
- Executive summary
|
||||
- Detailed task-by-task evidence
|
||||
- False positive/negative lists
|
||||
- Recommendations
|
||||
|
||||
```
|
||||
✅ Audit report generated: {{report_path}}
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="option_review" if="choice == R">
|
||||
**Show detailed evidence**
|
||||
|
||||
For each task:
|
||||
```
|
||||
Task: {{task_text}}
|
||||
Checkbox: {{checked_state}}
|
||||
Evidence:
|
||||
- File: {{file}} - {{exists ? "✅ EXISTS" : "❌ MISSING"}}
|
||||
{{#if exists}}
|
||||
- Stub check: {{is_stub ? "⚠️ STUB DETECTED" : "✅ Real implementation"}}
|
||||
- Tests: {{has_tests ? "✅ Tests exist" : "❌ No tests"}}
|
||||
{{/if}}
|
||||
Verdict: {{status}}
|
||||
```
|
||||
|
||||
After review, return to options menu.
|
||||
</step>
|
||||
|
||||
<step name="final_summary">
|
||||
**Display completion**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ GAP ANALYSIS COMPLETE
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Story: {{story_key}}
|
||||
Verified Completion: {{verified_pct}}%
|
||||
Checkbox Accuracy: {{accuracy}}%
|
||||
|
||||
{{#if updated}}
|
||||
✅ Checkboxes updated to match reality
|
||||
{{/if}}
|
||||
|
||||
{{#if report_generated}}
|
||||
📄 Report: {{report_path}}
|
||||
{{/if}}
|
||||
|
||||
{{#if false_positives > 0}}
|
||||
⚠️ {{false_positives}} tasks need implementation work
|
||||
{{/if}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
</step>
|
||||
|
||||
</process>
|
||||
|
||||
<examples>
|
||||
```bash
|
||||
# Quick gap analysis of single story
|
||||
/gap-analysis story_file=docs/sprint-artifacts/2-5-auth.md
|
||||
|
||||
# With auto-update enabled
|
||||
/gap-analysis story_file=docs/sprint-artifacts/2-5-auth.md auto_update=true
|
||||
```
|
||||
</examples>
|
||||
|
||||
<failure_handling>
|
||||
**Story file not found:** HALT with clear error.
|
||||
**Search fails:** Log warning, count as MISSING.
|
||||
**Edit fails:** Report error, suggest manual update.
|
||||
</failure_handling>
|
||||
|
||||
<success_criteria>
|
||||
- [ ] All tasks verified against codebase
|
||||
- [ ] False positives/negatives identified
|
||||
- [ ] Accuracy metrics calculated
|
||||
- [ ] User choice executed (update/audit/review)
|
||||
</success_criteria>
|
||||
|
|
@ -1,23 +0,0 @@
|
|||
name: gap-analysis
|
||||
description: "Validate story tasks against actual codebase - audit completed stories or validate before development"
|
||||
author: "BMad"
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
user_name: "{config_source}:user_name"
|
||||
communication_language: "{config_source}:communication_language"
|
||||
implementation_artifacts: "{config_source}:implementation_artifacts"
|
||||
story_dir: "{implementation_artifacts}"
|
||||
|
||||
# Workflow components
|
||||
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/gap-analysis"
|
||||
instructions: "{installed_path}/workflow.md"
|
||||
|
||||
# Variables
|
||||
story_file: "" # User provides story file path or auto-discover
|
||||
sprint_status: "{implementation_artifacts}/sprint-status.yaml"
|
||||
project_context: "**/project-context.md"
|
||||
|
||||
standalone: true
|
||||
|
||||
web_bundle: false
|
||||
|
|
@ -1,743 +0,0 @@
|
|||
# Migration Reliability Guarantees
|
||||
|
||||
**Purpose:** Document how this migration tool ensures 100% reliability and data integrity.
|
||||
|
||||
---
|
||||
|
||||
## Core Guarantees
|
||||
|
||||
### 1. **Idempotent Operations** ✅
|
||||
|
||||
**Guarantee:** Running migration multiple times produces the same result as running once.
|
||||
|
||||
**How:**
|
||||
```javascript
|
||||
// Before creating issue, check if it exists
|
||||
const existing = await searchIssue(`label:story:${storyKey}`);
|
||||
|
||||
if (existing) {
|
||||
if (update_existing) {
|
||||
// Update existing issue (safe)
|
||||
await updateIssue(existing.number, data);
|
||||
} else {
|
||||
// Skip (already migrated)
|
||||
skip(storyKey);
|
||||
}
|
||||
} else {
|
||||
// Create new issue
|
||||
await createIssue(data);
|
||||
}
|
||||
```
|
||||
|
||||
**Test:**
|
||||
```bash
|
||||
# Run migration twice
|
||||
/migrate-to-github mode=execute
|
||||
/migrate-to-github mode=execute
|
||||
|
||||
# Result: Same issues, no duplicates
|
||||
# Second run: "47 stories already migrated, 0 created"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 2. **Atomic Per-Story Operations** ✅
|
||||
|
||||
**Guarantee:** Each story either fully migrates or fully rolls back. No partial states.
|
||||
|
||||
**How:**
|
||||
```javascript
|
||||
async function migrateStory(storyKey) {
|
||||
const transaction = {
|
||||
story_key: storyKey,
|
||||
operations: [],
|
||||
rollback_actions: []
|
||||
};
|
||||
|
||||
try {
|
||||
// Create issue
|
||||
const issue = await createIssue(data);
|
||||
transaction.operations.push({ type: 'create', issue_number: issue.number });
|
||||
transaction.rollback_actions.push(() => closeIssue(issue.number));
|
||||
|
||||
// Add labels
|
||||
await addLabels(issue.number, labels);
|
||||
transaction.operations.push({ type: 'labels' });
|
||||
|
||||
// Set milestone
|
||||
await setMilestone(issue.number, milestone);
|
||||
transaction.operations.push({ type: 'milestone' });
|
||||
|
||||
// Verify all operations succeeded
|
||||
await verifyIssue(issue.number);
|
||||
|
||||
// Success - commit transaction
|
||||
return { success: true, issue_number: issue.number };
|
||||
|
||||
} catch (error) {
|
||||
// Rollback all operations
|
||||
for (const rollback of transaction.rollback_actions.reverse()) {
|
||||
await rollback();
|
||||
}
|
||||
|
||||
return { success: false, error, rolled_back: true };
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 3. **Comprehensive Verification** ✅
|
||||
|
||||
**Guarantee:** Every write is verified by reading back the data.
|
||||
|
||||
**How:**
|
||||
```javascript
|
||||
// Write-Verify pattern
|
||||
async function createIssueVerified(data) {
|
||||
// 1. Create
|
||||
const created = await mcp__github__issue_write({ ...data });
|
||||
const issue_number = created.number;
|
||||
|
||||
// 2. Wait for GitHub eventual consistency
|
||||
await sleep(1000);
|
||||
|
||||
// 3. Read back
|
||||
const verification = await mcp__github__issue_read({
|
||||
issue_number: issue_number
|
||||
});
|
||||
|
||||
// 4. Verify fields
|
||||
assert(verification.title === data.title, 'Title mismatch');
|
||||
assert(verification.labels.includes(data.labels[0]), 'Label missing');
|
||||
assert(verification.body.includes(data.body.substring(0, 50)), 'Body mismatch');
|
||||
|
||||
// 5. Return verified issue
|
||||
return { verified: true, issue_number };
|
||||
}
|
||||
```
|
||||
|
||||
**Detection time:**
|
||||
- Write succeeds but data wrong: **Detected immediately** (1s after write)
|
||||
- Write fails silently: **Detected immediately** (read-back fails)
|
||||
- Partial write: **Detected immediately** (field mismatch)
|
||||
|
||||
---
|
||||
|
||||
### 4. **Crash-Safe State Tracking** ✅
|
||||
|
||||
**Guarantee:** If migration crashes/halts, can resume from exactly where it stopped.
|
||||
|
||||
**How:**
|
||||
```yaml
|
||||
# migration-state.yaml (updated after EACH story)
|
||||
started_at: 2026-01-07T15:30:00Z
|
||||
mode: execute
|
||||
github_owner: jschulte
|
||||
github_repo: myproject
|
||||
total_stories: 47
|
||||
last_completed: "2-15-profile-edit" # Story that just finished
|
||||
stories_migrated:
|
||||
- story_key: "2-1-login"
|
||||
issue_number: 101
|
||||
timestamp: 2026-01-07T15:30:15Z
|
||||
- story_key: "2-2-signup"
|
||||
issue_number: 102
|
||||
timestamp: 2026-01-07T15:30:32Z
|
||||
# ... 13 more
|
||||
- story_key: "2-15-profile-edit"
|
||||
issue_number: 115
|
||||
timestamp: 2026-01-07T15:35:18Z
|
||||
# CRASH HAPPENS HERE
|
||||
```
|
||||
|
||||
**Resume:**
|
||||
```bash
|
||||
# After crash, re-run migration
|
||||
/migrate-to-github mode=execute
|
||||
|
||||
→ Detects state file
|
||||
→ "Previous migration detected - 15 stories already migrated"
|
||||
→ "Resume from story 2-16-password-reset? (yes)"
|
||||
→ Continues from story 16, skips 1-15
|
||||
```
|
||||
|
||||
**State file is atomic:**
|
||||
- Written after EACH story (not at end)
|
||||
- Uses atomic write (tmp file + rename)
|
||||
- Never corrupted even if process killed mid-write
|
||||
|
||||
---
|
||||
|
||||
### 5. **Exponential Backoff Retry** ✅
|
||||
|
||||
**Guarantee:** Transient failures (network blips, GitHub 503s) don't fail migration.
|
||||
|
||||
**How:**
|
||||
```javascript
|
||||
async function retryWithBackoff(operation, config) {
|
||||
const backoffs = config.retry_backoff_ms; // [1000, 3000, 9000]
|
||||
|
||||
for (let attempt = 0; attempt < backoffs.length; attempt++) {
|
||||
try {
|
||||
return await operation();
|
||||
} catch (error) {
|
||||
if (attempt < backoffs.length - 1) {
|
||||
console.warn(`Retry ${attempt + 1} after ${backoffs[attempt]}ms`);
|
||||
await sleep(backoffs[attempt]);
|
||||
} else {
|
||||
// All retries exhausted
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Example:**
|
||||
```
|
||||
Story 2-5 migration:
|
||||
Attempt 1: GitHub 503 Service Unavailable
|
||||
→ Wait 1s, retry
|
||||
Attempt 2: Network timeout
|
||||
→ Wait 3s, retry
|
||||
Attempt 3: Success ✅
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 6. **Rollback Manifest** ✅
|
||||
|
||||
**Guarantee:** Can undo migration if something goes wrong.
|
||||
|
||||
**How:**
|
||||
```yaml
|
||||
# migration-rollback-2026-01-07T15-30-00.yaml
|
||||
created_at: 2026-01-07T15:30:00Z
|
||||
github_owner: jschulte
|
||||
github_repo: myproject
|
||||
migration_mode: execute
|
||||
|
||||
created_issues:
|
||||
- story_key: "2-1-login"
|
||||
issue_number: 101
|
||||
created_at: 2026-01-07T15:30:15Z
|
||||
title: "Story 2-1: User Login Flow"
|
||||
url: "https://github.com/jschulte/myproject/issues/101"
|
||||
|
||||
- story_key: "2-2-signup"
|
||||
issue_number: 102
|
||||
created_at: 2026-01-07T15:30:32Z
|
||||
title: "Story 2-2: User Registration"
|
||||
url: "https://github.com/jschulte/myproject/issues/102"
|
||||
|
||||
# ... all created issues tracked
|
||||
|
||||
rollback_command: |
|
||||
/migrate-to-github mode=rollback manifest=migration-rollback-2026-01-07T15-30-00.yaml
|
||||
```
|
||||
|
||||
**Rollback execution:**
|
||||
- Closes all created issues
|
||||
- Adds "migrated:rolled-back" label
|
||||
- Adds comment explaining why closed
|
||||
- Preserves issues (can reopen if needed)
|
||||
|
||||
---
|
||||
|
||||
### 7. **Dry-Run Mode** ✅
|
||||
|
||||
**Guarantee:** See exactly what will happen before it happens.
|
||||
|
||||
**How:**
|
||||
```javascript
|
||||
if (mode === 'dry-run') {
|
||||
// NO writes to GitHub - only reads
|
||||
for (const story of stories) {
|
||||
const existing = await searchIssue(`story:${story.key}`);
|
||||
|
||||
if (existing) {
|
||||
console.log(`Would UPDATE: Issue #${existing.number}`);
|
||||
} else {
|
||||
console.log(`Would CREATE: New issue for ${story.key}`);
|
||||
console.log(` Title: ${generateTitle(story)}`);
|
||||
console.log(` Labels: ${generateLabels(story)}`);
|
||||
}
|
||||
}
|
||||
|
||||
// Show summary
|
||||
console.log(`
|
||||
Total: ${stories.length}
|
||||
Would create: ${wouldCreate.length}
|
||||
Would update: ${wouldUpdate.length}
|
||||
Would skip: ${wouldSkip.length}
|
||||
`);
|
||||
|
||||
// Exit without doing anything
|
||||
process.exit(0);
|
||||
}
|
||||
```
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
# Always run dry-run first
|
||||
/migrate-to-github mode=dry-run
|
||||
|
||||
# Review output, then execute
|
||||
/migrate-to-github mode=execute
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 8. **Halt on Critical Error** ✅
|
||||
|
||||
**Guarantee:** Never continue with corrupted/incomplete state.
|
||||
|
||||
**How:**
|
||||
```javascript
|
||||
try {
|
||||
await createIssue(storyData);
|
||||
} catch (error) {
|
||||
if (isCriticalError(error)) {
|
||||
// Critical: GitHub API returned 401/403/5xx
|
||||
console.error('CRITICAL ERROR: Cannot continue safely');
|
||||
console.error(`Story ${storyKey} failed: ${error}`);
|
||||
|
||||
// Save current state
|
||||
await saveState(migrationState);
|
||||
|
||||
// Create recovery instructions
|
||||
console.log(`
|
||||
Recovery options:
|
||||
1. Fix error: ${error.message}
|
||||
2. Resume migration: /migrate-to-github mode=execute (will skip completed stories)
|
||||
3. Rollback: /migrate-to-github mode=rollback
|
||||
`);
|
||||
|
||||
// HALT - do not continue
|
||||
process.exit(1);
|
||||
} else {
|
||||
// Non-critical: Individual story failed but can continue
|
||||
console.warn(`Story ${storyKey} failed (non-critical): ${error}`);
|
||||
failedStories.push({ storyKey, error });
|
||||
// Continue with next story
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Testing Reliability
|
||||
|
||||
### Test Suite
|
||||
|
||||
```javascript
|
||||
describe('Migration Reliability', () => {
|
||||
|
||||
it('is idempotent - can run twice safely', async () => {
|
||||
await migrate({ mode: 'execute' });
|
||||
const firstRun = getCreatedIssues();
|
||||
|
||||
await migrate({ mode: 'execute' }); // Run again
|
||||
const secondRun = getCreatedIssues();
|
||||
|
||||
expect(secondRun).toEqual(firstRun); // Same issues, no duplicates
|
||||
});
|
||||
|
||||
it('is atomic - failed story does not create partial issue', async () => {
|
||||
mockGitHub.createIssue.resolvesOnce(); // Create succeeds
|
||||
mockGitHub.addLabels.rejects(); // But adding labels fails
|
||||
|
||||
await migrate({ mode: 'execute' });
|
||||
|
||||
const issues = await searchAllIssues();
|
||||
const partialIssues = issues.filter(i => !i.labels.includes('story:'));
|
||||
|
||||
expect(partialIssues).toHaveLength(0); // No partial issues
|
||||
});
|
||||
|
||||
it('verifies all writes by reading back', async () => {
|
||||
mockGitHub.createIssue.resolves({ number: 101 });
|
||||
mockGitHub.readIssue.resolves({ title: 'WRONG TITLE' }); // Verification fails
|
||||
|
||||
await expect(migrate({ mode: 'execute' }))
|
||||
.rejects.toThrow('Write verification failed');
|
||||
});
|
||||
|
||||
it('can resume after crash', async () => {
|
||||
// Migrate 5 stories
|
||||
await migrate({ stories: stories.slice(0, 5) });
|
||||
|
||||
// Simulate crash (don't await)
|
||||
const promise = migrate({ stories: stories.slice(5, 10) });
|
||||
await sleep(2000);
|
||||
process.kill(); // Crash mid-migration
|
||||
|
||||
// Resume
|
||||
const resumed = await migrate({ mode: 'execute' });
|
||||
|
||||
expect(resumed.resumedFrom).toBe('2-5-story');
|
||||
expect(resumed.skipped).toBe(5); // Skipped already-migrated
|
||||
});
|
||||
|
||||
it('creates rollback manifest', async () => {
|
||||
await migrate({ mode: 'execute' });
|
||||
|
||||
const manifest = fs.readFileSync('migration-rollback-*.yaml');
|
||||
expect(manifest.created_issues).toHaveLength(47);
|
||||
expect(manifest.created_issues[0]).toHaveProperty('issue_number');
|
||||
});
|
||||
|
||||
it('can rollback migration', async () => {
|
||||
await migrate({ mode: 'execute' });
|
||||
const issuesBefore = await countIssues();
|
||||
|
||||
await migrate({ mode: 'rollback' });
|
||||
const issuesAfter = await countIssues({ state: 'open' });
|
||||
|
||||
expect(issuesAfter).toBeLessThan(issuesBefore);
|
||||
// Rolled-back issues are closed, not deleted
|
||||
});
|
||||
|
||||
it('handles rate limit gracefully', async () => {
|
||||
mockGitHub.createIssue.rejects({ status: 429, message: 'Rate limit exceeded' });
|
||||
|
||||
const result = await migrate({ mode: 'execute', halt_on_critical_error: false });
|
||||
|
||||
expect(result.rateLimitErrors).toBeGreaterThan(0);
|
||||
expect(result.savedState).toBeTruthy(); // State saved before halting
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Failure Recovery Procedures
|
||||
|
||||
### Scenario 1: Migration Fails Halfway
|
||||
|
||||
```bash
|
||||
# Migration was running, crashed/halted at story 15/47
|
||||
|
||||
# Check state file
|
||||
cat _bmad-output/migration-state.yaml
|
||||
# Shows: last_completed: "2-15-profile"
|
||||
|
||||
# Resume migration
|
||||
/migrate-to-github mode=execute
|
||||
|
||||
→ "Previous migration detected"
|
||||
→ "15 stories already migrated"
|
||||
→ "Resume from story 2-16? (yes)"
|
||||
→ Continues from story 16-47
|
||||
→ Creates 32 new issues
|
||||
→ Final: 47 total migrated ✅
|
||||
```
|
||||
|
||||
### Scenario 2: Created Issues but Verification Failed
|
||||
|
||||
```bash
|
||||
# Migration created issues but verification warnings
|
||||
|
||||
# Run verify mode
|
||||
/migrate-to-github mode=verify
|
||||
|
||||
→ Checks all 47 stories
|
||||
→ Reads each issue from GitHub
|
||||
→ Compares to local files
|
||||
→ Reports:
|
||||
"43 verified correct ✅"
|
||||
"4 have warnings ⚠️"
|
||||
- Story 2-5: Label missing "complexity:standard"
|
||||
- Story 2-10: Title doesn't match local file
|
||||
- Story 2-18: Milestone not set
|
||||
- Story 2-23: Acceptance Criteria count mismatch
|
||||
|
||||
# Fix issues
|
||||
/migrate-to-github mode=execute update_existing=true filter_by_status=warning
|
||||
|
||||
→ Re-migrates only the 4 with warnings
|
||||
→ Verification: "4/4 now verified correct ✅"
|
||||
```
|
||||
|
||||
### Scenario 3: Wrong Repository - Need to Rollback
|
||||
|
||||
```bash
|
||||
# Oops - migrated to wrong repo!
|
||||
|
||||
# Check what was created
|
||||
cat _bmad-output/migration-rollback-*.yaml
|
||||
# Shows: 47 issues created in wrong-repo
|
||||
|
||||
# Rollback
|
||||
/migrate-to-github mode=rollback
|
||||
|
||||
→ "Rollback manifest found: 47 issues"
|
||||
→ Type "DELETE ALL ISSUES" to confirm
|
||||
→ Closes all 47 issues
|
||||
→ Adds "migrated:rolled-back" label
|
||||
→ "Rollback complete ✅"
|
||||
|
||||
# Now migrate to correct repo
|
||||
/migrate-to-github mode=execute github_owner=jschulte github_repo=correct-repo
|
||||
```
|
||||
|
||||
### Scenario 4: Network Failure Mid-Migration
|
||||
|
||||
```bash
|
||||
# Migration running, network drops at story 23/47
|
||||
|
||||
# Automatic behavior:
|
||||
→ Story 23 fails to create (network timeout)
|
||||
→ Retry #1 after 1s: Still fails
|
||||
→ Retry #2 after 3s: Still fails
|
||||
→ Retry #3 after 9s: Still fails
|
||||
→ "CRITICAL: Cannot create issue for story 2-23 after 3 retries"
|
||||
→ Saves state (22 stories migrated)
|
||||
→ HALTS
|
||||
|
||||
# You see:
|
||||
"Migration halted at story 2-23 due to network error"
|
||||
"State saved: 22 stories successfully migrated"
|
||||
"Resume when network restored: /migrate-to-github mode=execute"
|
||||
|
||||
# After network restored:
|
||||
/migrate-to-github mode=execute
|
||||
|
||||
→ "Resuming from story 2-23"
|
||||
→ Continues 23-47
|
||||
→ "Migration complete: 47/47 migrated ✅"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Data Integrity Safeguards
|
||||
|
||||
### Safeguard #1: GitHub is Append-Only
|
||||
|
||||
**Design:** Migration never deletes data, only creates/updates.
|
||||
|
||||
- Create: Safe (adds new issue)
|
||||
- Update: Safe (modifies existing)
|
||||
- Delete: Only in explicit rollback mode
|
||||
|
||||
**Result:** Cannot accidentally lose data during migration.
|
||||
|
||||
### Safeguard #2: Local Files Untouched
|
||||
|
||||
**Design:** Migration reads local files but NEVER modifies them.
|
||||
|
||||
**Guarantee:**
|
||||
```javascript
|
||||
// Migration code
|
||||
const story = fs.readFileSync(storyFile, 'utf-8'); // READ ONLY
|
||||
|
||||
// ❌ This never happens:
|
||||
// fs.writeFileSync(storyFile, modified); // FORBIDDEN
|
||||
```
|
||||
|
||||
**Result:** If migration fails, local files are unchanged. Can retry safely.
|
||||
|
||||
### Safeguard #3: Duplicate Detection
|
||||
|
||||
**Design:** Check for existing issues before creating.
|
||||
|
||||
```javascript
|
||||
// Before creating
|
||||
const existing = await searchIssues({
|
||||
query: `repo:${owner}/${repo} label:story:${storyKey}`
|
||||
});
|
||||
|
||||
if (existing.length > 1) {
|
||||
throw new Error(`
|
||||
DUPLICATE DETECTED: Found ${existing.length} issues for story:${storyKey}
|
||||
|
||||
This should never happen. Possible causes:
|
||||
- Previous migration created duplicates
|
||||
- Manual issue creation
|
||||
- Label typo
|
||||
|
||||
Issues found:
|
||||
${existing.map(i => ` - Issue #${i.number}: ${i.title}`).join('\n')}
|
||||
|
||||
HALTING - resolve duplicates manually before continuing
|
||||
`);
|
||||
}
|
||||
```
|
||||
|
||||
**Result:** Cannot create duplicates even if run multiple times.
|
||||
|
||||
### Safeguard #4: State File Atomic Writes
|
||||
|
||||
**Design:** State file uses atomic write pattern (tmp + rename).
|
||||
|
||||
```javascript
|
||||
async function saveStateSafely(state, statePath) {
|
||||
const tmpPath = `${statePath}.tmp`;
|
||||
|
||||
// 1. Write to temp file
|
||||
fs.writeFileSync(tmpPath, yaml.stringify(state));
|
||||
|
||||
// 2. Verify temp file written correctly
|
||||
const readBack = yaml.parse(fs.readFileSync(tmpPath));
|
||||
assert.deepEqual(readBack, state, 'State file corruption detected');
|
||||
|
||||
// 3. Atomic rename (POSIX guarantee)
|
||||
fs.renameSync(tmpPath, statePath);
|
||||
|
||||
// State is now safely written - crash after this point is safe
|
||||
}
|
||||
```
|
||||
|
||||
**Result:** State file is never corrupted, even if process crashes during write.
|
||||
|
||||
---
|
||||
|
||||
## Monitoring & Observability
|
||||
|
||||
### Real-Time Progress
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
⚡ MIGRATION PROGRESS (Live)
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Migrated: 15/47 (32%)
|
||||
Created: 12 issues
|
||||
Updated: 3 issues
|
||||
Failed: 0
|
||||
|
||||
Current: Story 2-16 (creating...)
|
||||
Last success: Story 2-15 (2s ago)
|
||||
|
||||
Rate: 1.2 stories/min
|
||||
ETA: 26 minutes remaining
|
||||
|
||||
API calls used: 45/5000 (1%)
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
### Detailed Logging
|
||||
|
||||
```yaml
|
||||
# migration-log-2026-01-07T15-30-00.log
|
||||
[15:30:00] Migration started (mode: execute)
|
||||
[15:30:05] Pre-flight checks passed
|
||||
[15:30:15] Story 2-1: Created Issue #101 (verified)
|
||||
[15:30:32] Story 2-2: Created Issue #102 (verified)
|
||||
[15:30:45] Story 2-3: Already exists Issue #103 (updated)
|
||||
[15:31:02] Story 2-4: CREATE FAILED (attempt 1/3) - Network timeout
|
||||
[15:31:03] Story 2-4: Retry 1 after 1000ms
|
||||
[15:31:05] Story 2-4: Created Issue #104 (verified) ✅
|
||||
[15:31:20] Story 2-5: Created Issue #105 (verified)
|
||||
# ... continues
|
||||
[15:55:43] Migration complete: 47/47 success (0 failures)
|
||||
[15:55:44] State saved: migration-state.yaml
|
||||
[15:55:45] Rollback manifest: migration-rollback-2026-01-07T15-30-00.yaml
|
||||
[15:55:46] Report generated: migration-report-2026-01-07T15-30-00.md
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Rate Limit Management
|
||||
|
||||
### GitHub API Rate Limits
|
||||
|
||||
**Authenticated:** 5000 requests/hour
|
||||
**Per migration:** ~3-4 API calls per story
|
||||
|
||||
**For 47 stories:**
|
||||
- Search existing: 47 calls
|
||||
- Create issues: ~35 calls
|
||||
- Verify: 35 calls
|
||||
- Labels/milestones: ~20 calls
|
||||
- **Total:** ~140 calls
|
||||
- **Remaining:** 4860/5000 (97% remaining)
|
||||
|
||||
**Safe thresholds:**
|
||||
- <500 stories: Single migration run
|
||||
- 500-1000 stories: Split into 2 batches
|
||||
- >1000 stories: Use epic-based filtering
|
||||
|
||||
### Rate Limit Exhaustion Handling
|
||||
|
||||
```javascript
|
||||
async function apiCallWithRateLimitCheck(operation) {
|
||||
try {
|
||||
return await operation();
|
||||
} catch (error) {
|
||||
if (error.status === 429) { // Rate limit exceeded
|
||||
const resetTime = error.response.headers['x-ratelimit-reset'];
|
||||
const waitSeconds = resetTime - Math.floor(Date.now() / 1000);
|
||||
|
||||
console.warn(`
|
||||
⚠️ Rate limit exceeded
|
||||
Reset in: ${waitSeconds} seconds
|
||||
|
||||
Options:
|
||||
[W] Wait (pause migration until rate limit resets)
|
||||
[S] Stop (save state and resume later)
|
||||
|
||||
Choice:
|
||||
`);
|
||||
|
||||
if (choice === 'W') {
|
||||
console.log(`Waiting ${waitSeconds}s for rate limit reset...`);
|
||||
await sleep(waitSeconds * 1000);
|
||||
return await operation(); // Retry after rate limit resets
|
||||
} else {
|
||||
// Save state and halt
|
||||
await saveState(migrationState);
|
||||
throw new Error('HALT: Rate limit exceeded, resume later');
|
||||
}
|
||||
}
|
||||
|
||||
throw error; // Other error, propagate
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Guarantees Summary
|
||||
|
||||
| Guarantee | Mechanism | Failure Mode | Recovery |
|
||||
|-----------|-----------|--------------|----------|
|
||||
| Idempotent | Pre-check existing issues | Run twice → duplicates? | ❌ Prevented by duplicate detection |
|
||||
| Atomic | Transaction per story | Create succeeds, labels fail? | ❌ Prevented by rollback on error |
|
||||
| Verified | Read-back after write | Write succeeds but wrong data? | ❌ Detected immediately, retried |
|
||||
| Resumable | State file after each story | Crash mid-migration? | ✅ Resume from last completed |
|
||||
| Reversible | Rollback manifest | Wrong repo migrated? | ✅ Rollback closes all issues |
|
||||
| Previewed | Dry-run mode | Unsure what will happen? | ✅ Preview before executing |
|
||||
| Resilient | Exponential backoff | Network blip? | ✅ Auto-retry 3x before failing |
|
||||
| Fail-safe | Halt on critical error | GitHub API down? | ✅ Saves state, can resume |
|
||||
|
||||
**Result:** 100% reliability through defense-in-depth strategy.
|
||||
|
||||
---
|
||||
|
||||
## Migration Checklist
|
||||
|
||||
**Before running migration:**
|
||||
- [ ] Run `/migrate-to-github mode=dry-run` to preview
|
||||
- [ ] Verify repository name is correct
|
||||
- [ ] Back up sprint-status.yaml (just in case)
|
||||
- [ ] Verify GitHub token has write permissions
|
||||
- [ ] Check rate limit: <1000 stories OK for single run
|
||||
|
||||
**During migration:**
|
||||
- [ ] Monitor progress output
|
||||
- [ ] Watch for warnings or retries
|
||||
- [ ] Note any failed stories
|
||||
|
||||
**After migration:**
|
||||
- [ ] Run `/migrate-to-github mode=verify`
|
||||
- [ ] Review migration report
|
||||
- [ ] Spot-check 3-5 created issues in GitHub UI
|
||||
- [ ] Save rollback manifest (in case need to undo)
|
||||
- [ ] Update workflow configs: `github_sync_enabled: true`
|
||||
|
||||
---
|
||||
|
||||
**Reliability Score: 10/10** ✅
|
||||
|
||||
Every failure mode has a recovery path. Every write is verified. Every operation is resumable.
|
||||
|
|
@ -1,279 +0,0 @@
|
|||
# Migrate to GitHub v3.0 - Production-Grade Story Migration
|
||||
|
||||
<purpose>
|
||||
Migrate BMAD stories to GitHub Issues with full safety guarantees.
|
||||
Idempotent, atomic, verified, resumable, and reversible.
|
||||
</purpose>
|
||||
|
||||
<philosophy>
|
||||
**Reliability First, Data Integrity Over Speed**
|
||||
|
||||
- Idempotent: Can re-run safely (checks for duplicates)
|
||||
- Atomic: Each story fully succeeds or rolls back
|
||||
- Verified: Reads back each created issue
|
||||
- Resumable: Saves state after each story
|
||||
- Reversible: Creates rollback manifest
|
||||
</philosophy>
|
||||
|
||||
<config>
|
||||
name: migrate-to-github
|
||||
version: 3.0.0
|
||||
|
||||
modes:
|
||||
dry-run: {description: "Preview only, no changes", default: true}
|
||||
execute: {description: "Actually create issues"}
|
||||
verify: {description: "Double-check migration accuracy"}
|
||||
rollback: {description: "Close migrated issues"}
|
||||
|
||||
defaults:
|
||||
update_existing: false
|
||||
halt_on_critical_error: true
|
||||
save_state_after_each: true
|
||||
max_retries: 3
|
||||
retry_backoff_ms: [1000, 3000, 10000]
|
||||
|
||||
labels:
|
||||
- "type:story"
|
||||
- "story:{{story_key}}"
|
||||
- "status:{{status}}"
|
||||
- "epic:{{epic_number}}"
|
||||
- "complexity:{{complexity}}"
|
||||
</config>
|
||||
|
||||
<execution_context>
|
||||
@patterns/hospital-grade.md
|
||||
</execution_context>
|
||||
|
||||
<process>
|
||||
|
||||
<step name="preflight_checks" priority="first">
|
||||
**Verify all prerequisites before ANY operations**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🛡️ PRE-FLIGHT SAFETY CHECKS
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
**1. Verify GitHub MCP access:**
|
||||
```
|
||||
Call: mcp__github__get_me()
|
||||
If fails: HALT - Cannot proceed without GitHub API access
|
||||
```
|
||||
|
||||
**2. Verify repository access:**
|
||||
```
|
||||
Call: mcp__github__list_issues(owner, repo, per_page=1)
|
||||
If fails: HALT - Repository not accessible
|
||||
```
|
||||
|
||||
**3. Verify local files exist:**
|
||||
```bash
|
||||
[ -f "docs/sprint-artifacts/sprint-status.yaml" ] || { echo "HALT"; exit 1; }
|
||||
```
|
||||
|
||||
**4. Check for existing migration:**
|
||||
- If state file exists: offer Resume/Fresh/View/Delete
|
||||
- If resuming: load already-migrated stories, filter from queue
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ PRE-FLIGHT CHECKS PASSED
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="dry_run" if="mode == dry-run">
|
||||
**Preview migration plan without making changes**
|
||||
|
||||
For each story:
|
||||
1. Search GitHub for existing issue with label `story:{{story_key}}`
|
||||
2. If exists: mark as "Would UPDATE" or "Would SKIP"
|
||||
3. If not exists: mark as "Would CREATE"
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📊 DRY-RUN SUMMARY
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Would CREATE: {{create_count}} new issues
|
||||
Would UPDATE: {{update_count}} existing issues
|
||||
Would SKIP: {{skip_count}}
|
||||
|
||||
Estimated API Calls: ~{{total_calls}}
|
||||
Rate Limit Impact: Safe (< 1000 calls)
|
||||
|
||||
⚠️ This was a DRY-RUN. No issues created.
|
||||
To execute: /migrate-to-github mode=execute
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="execute" if="mode == execute">
|
||||
**Perform migration with atomic operations**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
⚡ EXECUTE MODE
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
**Final confirmation:**
|
||||
```
|
||||
Type "I understand and want to proceed" to continue:
|
||||
```
|
||||
|
||||
Initialize migration state and rollback manifest.
|
||||
|
||||
For each story:
|
||||
|
||||
**1. Check if exists (idempotent):**
|
||||
```
|
||||
Search: label:story:{{story_key}}
|
||||
If exists AND update_existing=false: SKIP
|
||||
```
|
||||
|
||||
**2. Generate issue body:**
|
||||
```markdown
|
||||
**Story File:** [{{story_key}}.md](path)
|
||||
**Epic:** {{epic_number}}
|
||||
|
||||
## Business Context
|
||||
{{parsed.businessContext}}
|
||||
|
||||
## Acceptance Criteria
|
||||
{{#each ac}}
|
||||
- [ ] {{this}}
|
||||
{{/each}}
|
||||
|
||||
## Tasks
|
||||
{{#each tasks}}
|
||||
- [ ] {{this}}
|
||||
{{/each}}
|
||||
```
|
||||
|
||||
**3. Create/update with retry and verification:**
|
||||
```
|
||||
attempt = 0
|
||||
WHILE attempt < max_retries:
|
||||
TRY:
|
||||
result = mcp__github__issue_write(create/update)
|
||||
sleep 2 seconds # GitHub eventual consistency
|
||||
|
||||
verification = mcp__github__issue_read(issue_number)
|
||||
IF verification.title != expected:
|
||||
THROW "Verification failed"
|
||||
|
||||
SUCCESS - add to rollback manifest
|
||||
BREAK
|
||||
|
||||
CATCH:
|
||||
attempt++
|
||||
IF attempt < max_retries:
|
||||
sleep backoff_ms[attempt]
|
||||
ELSE:
|
||||
FAIL - add to issues_failed
|
||||
```
|
||||
|
||||
**4. Save state after each story**
|
||||
|
||||
**5. Progress updates every 10 stories:**
|
||||
```
|
||||
📊 Progress: {{index}}/{{total}}
|
||||
Created: {{created}}, Updated: {{updated}}, Failed: {{failed}}
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="verify" if="mode == verify">
|
||||
**Double-check migration accuracy**
|
||||
|
||||
For each migrated story:
|
||||
1. Fetch issue from GitHub
|
||||
2. Verify title, labels, AC count match
|
||||
3. Report mismatches
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📊 VERIFICATION RESULTS
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Verified Correct: {{verified}}
|
||||
Warnings: {{warnings}}
|
||||
Failures: {{failures}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="rollback" if="mode == rollback">
|
||||
**Close migrated issues (GitHub API doesn't support delete)**
|
||||
|
||||
Load rollback manifest. For each created issue:
|
||||
```
|
||||
mcp__github__issue_write({
|
||||
issue_number: {{number}},
|
||||
state: "closed",
|
||||
labels: ["migrated:rolled-back"],
|
||||
state_reason: "not_planned"
|
||||
})
|
||||
|
||||
mcp__github__add_issue_comment({
|
||||
body: "Issue closed - migration was rolled back."
|
||||
})
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="generate_report">
|
||||
**Create comprehensive migration report**
|
||||
|
||||
Write to: `docs/sprint-artifacts/github-migration-{{timestamp}}.md`
|
||||
|
||||
Include:
|
||||
- Executive summary
|
||||
- Created/updated/failed issues
|
||||
- GitHub URLs for each issue
|
||||
- Rollback instructions
|
||||
- Next steps
|
||||
</step>
|
||||
|
||||
<step name="final_summary">
|
||||
**Display completion status**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ MIGRATION COMPLETE
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Total: {{total}} stories
|
||||
Created: {{created}}
|
||||
Updated: {{updated}}
|
||||
Failed: {{failed}}
|
||||
Success Rate: {{success_pct}}%
|
||||
|
||||
View in GitHub:
|
||||
https://github.com/{{owner}}/{{repo}}/issues?q=label:type:story
|
||||
|
||||
Rollback Manifest: {{rollback_path}}
|
||||
State File: {{state_path}}
|
||||
|
||||
Next Steps:
|
||||
1. Verify: /migrate-to-github mode=verify
|
||||
2. Enable GitHub sync in workflow.yaml
|
||||
3. Share Issues URL with Product Owner
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
</step>
|
||||
|
||||
</process>
|
||||
|
||||
<failure_handling>
|
||||
**GitHub MCP unavailable:** HALT - Cannot proceed.
|
||||
**Repository not accessible:** HALT - Check permissions.
|
||||
**Issue create fails:** Retry with backoff, then fail story.
|
||||
**Verification fails:** Log warning, continue.
|
||||
**All stories fail:** Report systemic issue, HALT.
|
||||
</failure_handling>
|
||||
|
||||
<success_criteria>
|
||||
- [ ] Pre-flight checks passed
|
||||
- [ ] All stories processed
|
||||
- [ ] Issues verified after creation
|
||||
- [ ] State and rollback manifest saved
|
||||
- [ ] Report generated
|
||||
</success_criteria>
|
||||
|
|
@ -1,62 +0,0 @@
|
|||
name: migrate-to-github
|
||||
description: "Production-grade migration of BMAD stories from local files to GitHub Issues with comprehensive reliability guarantees"
|
||||
author: "BMad"
|
||||
version: "1.0.0"
|
||||
|
||||
# Critical variables
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
sprint_artifacts: "{output_folder}/sprint-artifacts"
|
||||
sprint_status: "{output_folder}/sprint-status.yaml"
|
||||
|
||||
# GitHub configuration
|
||||
github:
|
||||
owner: "{github_owner}" # Required: GitHub username or org
|
||||
repo: "{github_repo}" # Required: Repository name
|
||||
# Token comes from MCP GitHub server config (already authenticated)
|
||||
|
||||
# Migration mode
|
||||
mode: "dry-run" # "dry-run" | "execute" | "verify" | "rollback"
|
||||
# SAFETY: Defaults to dry-run - must explicitly choose execute
|
||||
|
||||
# Migration scope
|
||||
scope:
|
||||
include_epics: true # Create milestone for each epic
|
||||
include_stories: true # Create issue for each story
|
||||
filter_by_epic: null # Optional: Only migrate Epic N (e.g., "2")
|
||||
filter_by_status: null # Optional: Only migrate stories with status (e.g., "backlog")
|
||||
|
||||
# Migration strategy
|
||||
strategy:
|
||||
check_existing: true # Search for existing issues before creating (prevents duplicates)
|
||||
update_existing: true # If issue exists, update it (false = skip)
|
||||
create_missing: true # Create issues for stories without issues
|
||||
|
||||
# Label strategy
|
||||
label_prefix: "story:" # Prefix for story labels (e.g., "story:2-5-auth")
|
||||
use_type_labels: true # Add "type:story", "type:epic"
|
||||
use_status_labels: true # Add "status:backlog", "status:in-progress", etc.
|
||||
use_complexity_labels: true # Add "complexity:micro", etc.
|
||||
use_epic_labels: true # Add "epic:2", "epic:3", etc.
|
||||
|
||||
# Reliability settings
|
||||
reliability:
|
||||
verify_after_create: true # Read back issue to verify creation succeeded
|
||||
retry_on_failure: true # Retry failed operations
|
||||
max_retries: 3
|
||||
retry_backoff_ms: [1000, 3000, 9000] # Exponential backoff
|
||||
halt_on_critical_error: true # Stop migration if critical error occurs
|
||||
save_state_after_each: true # Save progress after each story (crash-safe)
|
||||
create_rollback_manifest: true # Track created issues for rollback
|
||||
|
||||
# State tracking
|
||||
state_file: "{output_folder}/migration-state.yaml"
|
||||
# Tracks: stories_migrated, issues_created, last_story, can_resume
|
||||
|
||||
# Output
|
||||
output:
|
||||
create_migration_report: true
|
||||
report_path: "{output_folder}/migration-report-{timestamp}.md"
|
||||
log_level: "verbose" # "quiet" | "normal" | "verbose"
|
||||
|
||||
standalone: true
|
||||
|
|
@ -1,197 +0,0 @@
|
|||
# Multi-Agent Code Review v3.0
|
||||
|
||||
<purpose>
|
||||
Perform unbiased code review using multiple specialized AI agents in fresh context.
|
||||
Agent count scales with story complexity. Independent perspective prevents bias.
|
||||
</purpose>
|
||||
|
||||
<philosophy>
|
||||
**Fresh Context, Multiple Perspectives**
|
||||
|
||||
- Review happens in NEW session (not the agent that wrote the code)
|
||||
- Prevents bias from implementation decisions
|
||||
- Agent count determined by complexity, agents chosen by code analysis
|
||||
- Smart selection: touching auth code → auth-security agent, etc.
|
||||
</philosophy>
|
||||
|
||||
<config>
|
||||
name: multi-agent-review
|
||||
version: 3.0.0
|
||||
|
||||
agent_selection:
|
||||
micro: {count: 2, agents: [security, code_quality]}
|
||||
standard: {count: 4, agents: [security, code_quality, architecture, testing]}
|
||||
complex: {count: 6, agents: [security, code_quality, architecture, testing, performance, domain_expert]}
|
||||
|
||||
available_agents:
|
||||
security: "Identifies vulnerabilities and security risks"
|
||||
code_quality: "Reviews style, maintainability, best practices"
|
||||
architecture: "Reviews system design, patterns, structure"
|
||||
testing: "Evaluates test coverage and quality"
|
||||
performance: "Analyzes efficiency and optimization"
|
||||
domain_expert: "Validates business logic and domain constraints"
|
||||
</config>
|
||||
|
||||
<execution_context>
|
||||
@patterns/security-checklist.md
|
||||
@patterns/hospital-grade.md
|
||||
@patterns/agent-completion.md
|
||||
</execution_context>
|
||||
|
||||
<process>
|
||||
|
||||
<step name="determine_agent_count" priority="first">
|
||||
**Select agents based on complexity**
|
||||
|
||||
```
|
||||
If complexity_level == "micro":
|
||||
agents = ["security", "code_quality"]
|
||||
Display: 🔍 MICRO Review (2 agents)
|
||||
|
||||
Else if complexity_level == "standard":
|
||||
agents = ["security", "code_quality", "architecture", "testing"]
|
||||
Display: 📋 STANDARD Review (4 agents)
|
||||
|
||||
Else if complexity_level == "complex":
|
||||
agents = ALL 6 agents
|
||||
Display: 🔬 COMPLEX Review (6 agents)
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="load_story_context">
|
||||
**Load story file and understand requirements**
|
||||
|
||||
```bash
|
||||
STORY_FILE="{{story_file}}"
|
||||
[ -f "$STORY_FILE" ] || { echo "❌ Story file not found"; exit 1; }
|
||||
```
|
||||
|
||||
Use Read tool on story file. Extract:
|
||||
- What was supposed to be implemented
|
||||
- Acceptance criteria
|
||||
- Tasks and subtasks
|
||||
- File list
|
||||
</step>
|
||||
|
||||
<step name="invoke_review_agents">
|
||||
**Spawn review agents in fresh context**
|
||||
|
||||
For each agent in selected agents, spawn Task agent:
|
||||
|
||||
```
|
||||
Task({
|
||||
subagent_type: "general-purpose",
|
||||
description: "{{agent_type}} review for {{story_key}}",
|
||||
prompt: `
|
||||
You are the {{AGENT_TYPE}} reviewer for story {{story_key}}.
|
||||
|
||||
<execution_context>
|
||||
@patterns/security-checklist.md
|
||||
@patterns/hospital-grade.md
|
||||
</execution_context>
|
||||
|
||||
<context>
|
||||
Story: [inline story content]
|
||||
Changed files: [git diff output]
|
||||
</context>
|
||||
|
||||
<objective>
|
||||
Review from your {{agent_type}} perspective. Find issues, be thorough.
|
||||
</objective>
|
||||
|
||||
<success_criteria>
|
||||
- [ ] All relevant files reviewed
|
||||
- [ ] Issues categorized by severity (CRITICAL/HIGH/MEDIUM/LOW)
|
||||
- [ ] Return ## AGENT COMPLETE with findings
|
||||
</success_criteria>
|
||||
`
|
||||
})
|
||||
```
|
||||
|
||||
Wait for all agents to complete. Aggregate findings.
|
||||
</step>
|
||||
|
||||
<step name="aggregate_findings">
|
||||
**Collect and categorize all findings**
|
||||
|
||||
Merge findings from all agents:
|
||||
- CRITICAL: Security vulnerabilities, data loss risks
|
||||
- HIGH: Production bugs, logic errors
|
||||
- MEDIUM: Technical debt, maintainability
|
||||
- LOW: Nice-to-have improvements
|
||||
</step>
|
||||
|
||||
<step name="present_report">
|
||||
**Display review summary**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🤖 MULTI-AGENT CODE REVIEW COMPLETE
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
Agents Used: {{agent_count}}
|
||||
- Security Agent
|
||||
- Code Quality Agent
|
||||
[...]
|
||||
|
||||
Findings:
|
||||
- 🔴 CRITICAL: {{critical_count}}
|
||||
- 🟠 HIGH: {{high_count}}
|
||||
- 🟡 MEDIUM: {{medium_count}}
|
||||
- 🔵 LOW: {{low_count}}
|
||||
```
|
||||
|
||||
For each finding, display:
|
||||
- Severity and title
|
||||
- Agent that found it
|
||||
- Location (file:line)
|
||||
- Description and recommendation
|
||||
</step>
|
||||
|
||||
<step name="recommend_actions">
|
||||
**Suggest next steps based on findings**
|
||||
|
||||
```
|
||||
📋 RECOMMENDED NEXT STEPS:
|
||||
|
||||
If CRITICAL findings exist:
|
||||
⚠️ MUST FIX before proceeding
|
||||
- Address all critical security/correctness issues
|
||||
- Re-run review after fixes
|
||||
|
||||
If only HIGH/MEDIUM findings:
|
||||
✅ Story may proceed
|
||||
- Consider addressing high-priority items
|
||||
- Create follow-up tasks for medium items
|
||||
|
||||
If only LOW/INFO findings:
|
||||
✅ Code quality looks good
|
||||
- Optional: Address style/optimization suggestions
|
||||
```
|
||||
</step>
|
||||
|
||||
</process>
|
||||
|
||||
<integration>
|
||||
**When to use:**
|
||||
- Complex stories (≥16 tasks or high-risk keywords)
|
||||
- Security-sensitive code
|
||||
- Significant architectural changes
|
||||
- When single-agent review was inconclusive
|
||||
|
||||
**When NOT to use:**
|
||||
- Micro stories (≤3 tasks)
|
||||
- Standard stories with simple changes
|
||||
- Stories that passed adversarial review cleanly
|
||||
</integration>
|
||||
|
||||
<failure_handling>
|
||||
**Review agent fails:** Fall back to adversarial code review.
|
||||
**API error:** Log failure, continue pipeline with warning.
|
||||
</failure_handling>
|
||||
|
||||
<success_criteria>
|
||||
- [ ] All selected agents completed review
|
||||
- [ ] Findings aggregated and categorized
|
||||
- [ ] Report displayed with recommendations
|
||||
</success_criteria>
|
||||
|
|
@ -1,57 +0,0 @@
|
|||
name: multi-agent-review
|
||||
description: "Smart multi-agent code review with dynamic agent selection based on changed code. Uses multiple specialized AI agents to review different aspects: architecture, security, performance, testing, and code quality."
|
||||
author: "BMad"
|
||||
version: "1.0.0"
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
sprint_artifacts: "{config_source}:sprint_artifacts"
|
||||
communication_language: "{config_source}:communication_language"
|
||||
|
||||
# Workflow components
|
||||
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/multi-agent-review"
|
||||
instructions: "{installed_path}/instructions.md"
|
||||
|
||||
# Input parameters
|
||||
story_id: "{story_id}" # Required
|
||||
story_file: "{sprint_artifacts}/{story_id}.md" # CANONICAL FORMAT: epic-story-slug.md (NO "story-" prefix)
|
||||
base_branch: "main" # Optional: branch to compare against
|
||||
complexity_level: "standard" # micro | standard | complex (passed from super-dev-pipeline)
|
||||
|
||||
# Complexity-based agent selection (NEW v1.0.0)
|
||||
# Cost-effective review depth based on story RISK and technical complexity
|
||||
# Complexity determined by batch-super-dev based on: risk keywords, architectural impact, security concerns
|
||||
complexity_routing:
|
||||
micro:
|
||||
agent_count: 2
|
||||
agents: ["security", "code_quality"]
|
||||
description: "Quick sanity check for low-risk stories"
|
||||
examples: ["UI tweaks", "text changes", "simple CRUD", "documentation"]
|
||||
cost_multiplier: 1x
|
||||
|
||||
standard:
|
||||
agent_count: 4
|
||||
agents: ["security", "code_quality", "architecture", "testing"]
|
||||
description: "Balanced multi-perspective review for medium-risk changes"
|
||||
examples: ["API endpoints", "business logic", "data validation", "component refactors"]
|
||||
cost_multiplier: 2x
|
||||
|
||||
complex:
|
||||
agent_count: 6
|
||||
agents: ["security", "code_quality", "architecture", "testing", "performance", "domain_expert"]
|
||||
description: "Comprehensive review for high-risk/high-complexity changes"
|
||||
examples: ["auth/security", "payments", "data migration", "architecture changes", "performance-critical", "complex algorithms"]
|
||||
cost_multiplier: 3x
|
||||
|
||||
# Review settings
|
||||
review_settings:
|
||||
fresh_context_required: true # CRITICAL: Review in new session for unbiased perspective
|
||||
agents_to_use: "complexity_based" # complexity_based | all | custom
|
||||
generate_report: true
|
||||
auto_fix_suggested: false # Set to true to automatically apply suggested fixes
|
||||
|
||||
# Output
|
||||
review_report: "{sprint_artifacts}/review-{story_id}-multi-agent.md"
|
||||
|
||||
standalone: true
|
||||
|
|
@ -1,366 +0,0 @@
|
|||
# Push All v3.0 - Safe Git Staging, Commit, and Push
|
||||
|
||||
<purpose>
|
||||
Safely stage, commit, and push changes with comprehensive validation.
|
||||
Detects secrets, large files, build artifacts. Handles push failures gracefully.
|
||||
Supports targeted mode for specific files (parallel agent coordination).
|
||||
</purpose>
|
||||
|
||||
<philosophy>
|
||||
**Safe by Default, No Surprises**
|
||||
|
||||
- Validate BEFORE committing (secrets, size, artifacts)
|
||||
- Show exactly what will be committed
|
||||
- Handle push failures with recovery options
|
||||
- Never force push without explicit confirmation
|
||||
</philosophy>
|
||||
|
||||
<config>
|
||||
name: push-all
|
||||
version: 3.0.0
|
||||
|
||||
modes:
|
||||
full: "Stage all changes (default)"
|
||||
targeted: "Only stage specified files"
|
||||
|
||||
defaults:
|
||||
max_file_size_kb: 500
|
||||
check_secrets: true
|
||||
check_build_artifacts: true
|
||||
auto_push: false
|
||||
allow_force_push: false
|
||||
|
||||
secret_patterns:
|
||||
- "AKIA[0-9A-Z]{16}" # AWS Access Key
|
||||
- "sk-[a-zA-Z0-9]{48}" # OpenAI Key
|
||||
- "ghp_[a-zA-Z0-9]{36}" # GitHub Personal Token
|
||||
- "xox[baprs]-[a-zA-Z0-9-]+" # Slack Token
|
||||
- "-----BEGIN.*PRIVATE KEY" # Private Keys
|
||||
- "password\\s*=\\s*['\"][^'\"]{8,}" # Hardcoded passwords
|
||||
|
||||
build_artifacts:
|
||||
- "node_modules/"
|
||||
- "dist/"
|
||||
- "build/"
|
||||
- ".next/"
|
||||
- "*.min.js"
|
||||
- "*.bundle.js"
|
||||
</config>
|
||||
|
||||
<execution_context>
|
||||
@patterns/hospital-grade.md
|
||||
</execution_context>
|
||||
|
||||
<process>
|
||||
|
||||
<step name="check_git_state" priority="first">
|
||||
**Verify git repository state**
|
||||
|
||||
```bash
|
||||
# Check we're in a git repo
|
||||
git rev-parse --is-inside-work-tree || { echo "❌ Not a git repository"; exit 1; }
|
||||
|
||||
# Get current branch
|
||||
git branch --show-current
|
||||
|
||||
# Check for uncommitted changes
|
||||
git status --porcelain
|
||||
```
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📦 PUSH-ALL: {{mode}} mode
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Branch: {{branch}}
|
||||
Mode: {{mode}}
|
||||
{{#if targeted}}Files: {{file_list}}{{/if}}
|
||||
```
|
||||
|
||||
**If no changes:**
|
||||
```
|
||||
✅ Working directory clean - nothing to commit
|
||||
```
|
||||
Exit successfully.
|
||||
</step>
|
||||
|
||||
<step name="scan_changes">
|
||||
**Identify files to be staged**
|
||||
|
||||
**Full mode:**
|
||||
```bash
|
||||
git status --porcelain | awk '{print $2}'
|
||||
```
|
||||
|
||||
**Targeted mode:**
|
||||
Only include files specified in `target_files` parameter.
|
||||
|
||||
**Categorize changes:**
|
||||
- New files (A)
|
||||
- Modified files (M)
|
||||
- Deleted files (D)
|
||||
- Renamed files (R)
|
||||
</step>
|
||||
|
||||
<step name="secret_scan" if="check_secrets">
|
||||
**Scan for secrets in staged content**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔍 SECRET SCAN
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
For each file to be staged:
|
||||
```bash
|
||||
# Check for secret patterns
|
||||
Grep: "{{pattern}}" {{file}}
|
||||
```
|
||||
|
||||
**If secrets found:**
|
||||
```
|
||||
❌ POTENTIAL SECRETS DETECTED
|
||||
|
||||
{{#each secrets}}
|
||||
File: {{file}}
|
||||
Line {{line}}: {{preview}} (pattern: {{pattern_name}})
|
||||
{{/each}}
|
||||
|
||||
⚠️ BLOCKING COMMIT
|
||||
Remove secrets before proceeding.
|
||||
|
||||
Options:
|
||||
[I] Ignore (I know what I'm doing)
|
||||
[E] Exclude these files
|
||||
[H] Halt
|
||||
```
|
||||
|
||||
**If [I] selected:** Require explicit confirmation text.
|
||||
</step>
|
||||
|
||||
<step name="size_scan">
|
||||
**Check for oversized files**
|
||||
|
||||
```bash
|
||||
# Find files larger than max_file_size_kb
|
||||
find . -type f -size +{{max_file_size}}k -not -path "./.git/*"
|
||||
```
|
||||
|
||||
**If large files found:**
|
||||
```
|
||||
⚠️ LARGE FILES DETECTED
|
||||
|
||||
{{#each large_files}}
|
||||
- {{file}} ({{size_kb}}KB)
|
||||
{{/each}}
|
||||
|
||||
Options:
|
||||
[I] Include anyway
|
||||
[E] Exclude large files
|
||||
[H] Halt
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="artifact_scan" if="check_build_artifacts">
|
||||
**Check for build artifacts**
|
||||
|
||||
```bash
|
||||
# Check if any staged files match artifact patterns
|
||||
git status --porcelain | grep -E "{{artifact_pattern}}"
|
||||
```
|
||||
|
||||
**If artifacts found:**
|
||||
```
|
||||
⚠️ BUILD ARTIFACTS DETECTED
|
||||
|
||||
{{#each artifacts}}
|
||||
- {{file}}
|
||||
{{/each}}
|
||||
|
||||
These should typically be in .gitignore.
|
||||
|
||||
Options:
|
||||
[E] Exclude artifacts (recommended)
|
||||
[I] Include anyway
|
||||
[H] Halt
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="preview_commit">
|
||||
**Show what will be committed**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📋 COMMIT PREVIEW
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Files to commit: {{count}}
|
||||
|
||||
Added ({{added_count}}):
|
||||
{{#each added}}
|
||||
+ {{file}}
|
||||
{{/each}}
|
||||
|
||||
Modified ({{modified_count}}):
|
||||
{{#each modified}}
|
||||
M {{file}}
|
||||
{{/each}}
|
||||
|
||||
Deleted ({{deleted_count}}):
|
||||
{{#each deleted}}
|
||||
- {{file}}
|
||||
{{/each}}
|
||||
|
||||
{{#if excluded}}
|
||||
Excluded: {{excluded_count}} files
|
||||
{{/if}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="get_commit_message">
|
||||
**Generate or request commit message**
|
||||
|
||||
**If commit_message provided:** Use it.
|
||||
|
||||
**Otherwise, generate from changes:**
|
||||
```
|
||||
Analyzing changes to generate commit message...
|
||||
|
||||
Changes detected:
|
||||
- {{summary_of_changes}}
|
||||
|
||||
Suggested message:
|
||||
"{{generated_message}}"
|
||||
|
||||
[Y] Use this message
|
||||
[E] Edit message
|
||||
[C] Custom message
|
||||
```
|
||||
|
||||
If user selects [C] or [E], prompt for message.
|
||||
</step>
|
||||
|
||||
<step name="execute_commit">
|
||||
**Stage and commit changes**
|
||||
|
||||
```bash
|
||||
# Stage files (targeted or full)
|
||||
{{#if targeted}}
|
||||
git add {{#each target_files}}{{this}} {{/each}}
|
||||
{{else}}
|
||||
git add -A
|
||||
{{/if}}
|
||||
|
||||
# Commit with message
|
||||
git commit -m "{{commit_message}}"
|
||||
```
|
||||
|
||||
**Verify commit:**
|
||||
```bash
|
||||
# Check commit was created
|
||||
git log -1 --oneline
|
||||
```
|
||||
|
||||
```
|
||||
✅ Commit created: {{commit_hash}}
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="push_to_remote" if="auto_push OR user_confirms_push">
|
||||
**Push to remote with error handling**
|
||||
|
||||
```bash
|
||||
git push origin {{branch}}
|
||||
```
|
||||
|
||||
**If push fails:**
|
||||
|
||||
**Case: Behind remote**
|
||||
```
|
||||
⚠️ Push rejected - branch is behind remote
|
||||
|
||||
Options:
|
||||
[P] Pull and retry (git pull --rebase)
|
||||
[F] Force push (DESTRUCTIVE - overwrites remote)
|
||||
[H] Halt (commit preserved locally)
|
||||
```
|
||||
|
||||
**Case: No upstream**
|
||||
```
|
||||
⚠️ No upstream branch
|
||||
|
||||
Setting upstream and pushing:
|
||||
git push -u origin {{branch}}
|
||||
```
|
||||
|
||||
**Case: Auth failure**
|
||||
```
|
||||
❌ Authentication failed
|
||||
|
||||
Check:
|
||||
1. SSH key configured?
|
||||
2. Token valid?
|
||||
3. Repository access?
|
||||
```
|
||||
|
||||
**Case: Protected branch**
|
||||
```
|
||||
❌ Cannot push to protected branch
|
||||
|
||||
Use pull request workflow instead:
|
||||
gh pr create --title "{{commit_message}}"
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="final_summary">
|
||||
**Display completion status**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ PUSH-ALL COMPLETE
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Branch: {{branch}}
|
||||
Commit: {{commit_hash}}
|
||||
Files: {{file_count}}
|
||||
{{#if pushed}}
|
||||
Remote: ✅ Pushed to origin/{{branch}}
|
||||
{{else}}
|
||||
Remote: ⏸️ Not pushed (commit preserved locally)
|
||||
{{/if}}
|
||||
|
||||
{{#if excluded_count > 0}}
|
||||
Excluded: {{excluded_count}} files (secrets/artifacts/size)
|
||||
{{/if}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
</step>
|
||||
|
||||
</process>
|
||||
|
||||
<examples>
|
||||
```bash
|
||||
# Stage all, commit, and push
|
||||
/push-all commit_message="feat: add user authentication" auto_push=true
|
||||
|
||||
# Targeted mode - only specific files
|
||||
/push-all mode=targeted target_files="src/auth.ts,src/auth.test.ts" commit_message="fix: auth bug"
|
||||
|
||||
# Dry run - see what would be committed
|
||||
/push-all auto_push=false
|
||||
```
|
||||
</examples>
|
||||
|
||||
<failure_handling>
|
||||
**Secrets detected:** BLOCK commit, require explicit override.
|
||||
**Large files:** Warn, allow exclude or include.
|
||||
**Build artifacts:** Warn, recommend exclude.
|
||||
**Push rejected:** Offer pull/rebase, force push (with confirmation), or halt.
|
||||
**Auth failure:** Report, suggest troubleshooting.
|
||||
</failure_handling>
|
||||
|
||||
<success_criteria>
|
||||
- [ ] Changes validated (secrets, size, artifacts)
|
||||
- [ ] Files staged correctly
|
||||
- [ ] Commit created with message
|
||||
- [ ] Push successful (if requested)
|
||||
- [ ] No unintended files included
|
||||
</success_criteria>
|
||||
|
|
@ -1,22 +0,0 @@
|
|||
name: push-all
|
||||
description: "Stage changes, create commit with safety checks, and push to remote"
|
||||
author: "BMad"
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
user_name: "{config_source}:user_name"
|
||||
communication_language: "{config_source}:communication_language"
|
||||
|
||||
# Workflow components
|
||||
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/push-all"
|
||||
instructions: "{installed_path}/workflow.md"
|
||||
|
||||
# Target files to commit (for parallel agent execution)
|
||||
# When empty/not provided: commits ALL changes (original behavior)
|
||||
# When provided: only commits the specified files (safe for parallel agents)
|
||||
target_files: "" # Space-separated list of file paths, or empty for all
|
||||
story_key: "" # Optional: story identifier for commit message context
|
||||
|
||||
standalone: true
|
||||
|
||||
web_bundle: false
|
||||
|
|
@ -1,172 +0,0 @@
|
|||
# Recover Sprint Status v3.0
|
||||
|
||||
<purpose>
|
||||
Fix sprint-status.yaml when tracking has drifted. Analyzes multiple sources
|
||||
(story files, git commits, completion reports) to rebuild accurate status.
|
||||
</purpose>
|
||||
|
||||
<philosophy>
|
||||
**Multiple Evidence Sources, Conservative Updates**
|
||||
|
||||
1. Story file quality (size, tasks, checkboxes)
|
||||
2. Explicit Status: fields in stories
|
||||
3. Git commits (last 30 days)
|
||||
4. Autonomous completion reports
|
||||
5. Task completion rate
|
||||
|
||||
Trust explicit Status: fields highest. Require evidence for status changes.
|
||||
</philosophy>
|
||||
|
||||
<config>
|
||||
name: recover-sprint-status
|
||||
version: 3.0.0
|
||||
|
||||
modes:
|
||||
dry-run: {description: "Analysis only, no changes", default: true}
|
||||
conservative: {description: "High confidence updates only"}
|
||||
aggressive: {description: "Medium+ confidence, infers from git"}
|
||||
interactive: {description: "Ask before each batch"}
|
||||
|
||||
confidence_levels:
|
||||
very_high: {sources: [explicit_status, completion_report]}
|
||||
high: {sources: [3+ git_commits, 90% tasks_complete]}
|
||||
medium: {sources: [1-2 git_commits, 50-90% tasks_complete]}
|
||||
low: {sources: [no_status, no_commits, small_file]}
|
||||
</config>
|
||||
|
||||
<execution_context>
|
||||
@patterns/hospital-grade.md
|
||||
</execution_context>
|
||||
|
||||
<process>
|
||||
|
||||
<step name="analyze_sources" priority="first">
|
||||
**Scan all evidence sources**
|
||||
|
||||
```bash
|
||||
# Find story files
|
||||
SPRINT_ARTIFACTS="docs/sprint-artifacts"
|
||||
STORIES=$(ls $SPRINT_ARTIFACTS/*.md 2>/dev/null | grep -v "epic-")
|
||||
|
||||
# Get recent git commits
|
||||
git log --oneline --since="30 days ago" > /tmp/recent_commits.txt
|
||||
```
|
||||
|
||||
For each story:
|
||||
1. Read story file, extract Status: field if present
|
||||
2. Check file size (≥10KB = properly detailed)
|
||||
3. Count tasks and checkbox completion
|
||||
4. Search git commits for story references
|
||||
5. Check for completion reports (.epic-*-completion-report.md)
|
||||
</step>
|
||||
|
||||
<step name="calculate_confidence">
|
||||
**Determine confidence level for each story**
|
||||
|
||||
| Evidence | Confidence | Action |
|
||||
|----------|------------|--------|
|
||||
| Explicit Status: done | Very High | Trust it |
|
||||
| Completion report lists story | Very High | Mark done |
|
||||
| 3+ git commits + 90% checked | High | Mark done |
|
||||
| 1-2 commits OR 50-90% checked | Medium | Mark in-progress |
|
||||
| No commits, <50% checked | Low | Leave as-is |
|
||||
| File <10KB | Low | Downgrade if done |
|
||||
</step>
|
||||
|
||||
<step name="preview_changes" if="mode == dry-run">
|
||||
**Show recommendations without applying**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔍 RECOVERY ANALYSIS (Dry Run)
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
High Confidence Updates:
|
||||
- 2-5-auth: backlog → done (explicit Status:, 3 commits)
|
||||
- 2-6-profile: in-progress → done (completion report)
|
||||
|
||||
Medium Confidence Updates:
|
||||
- 2-7-settings: backlog → in-progress (2 commits)
|
||||
|
||||
Low Confidence (verify manually):
|
||||
- 2-8-dashboard: no Status:, no commits, <10KB file
|
||||
```
|
||||
|
||||
Exit after preview. No changes made.
|
||||
</step>
|
||||
|
||||
<step name="apply_conservative" if="mode == conservative">
|
||||
**Apply only high/very-high confidence updates**
|
||||
|
||||
For each high+ confidence story:
|
||||
1. Backup current sprint-status.yaml
|
||||
2. Use Edit tool to update status
|
||||
3. Log change
|
||||
|
||||
```bash
|
||||
# Backup
|
||||
cp $SPRINT_STATUS .sprint-status-backups/sprint-status-recovery-$(date +%Y%m%d).yaml
|
||||
```
|
||||
|
||||
Skip medium/low confidence stories.
|
||||
</step>
|
||||
|
||||
<step name="apply_aggressive" if="mode == aggressive">
|
||||
**Apply medium+ confidence updates**
|
||||
|
||||
Includes:
|
||||
- Inferring from git commits (even 1 commit)
|
||||
- Using task completion rate
|
||||
- Pre-filling brownfield checkboxes
|
||||
|
||||
```
|
||||
⚠️ AGGRESSIVE mode may make incorrect inferences.
|
||||
Review results carefully.
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="validate_results">
|
||||
**Verify recovery worked**
|
||||
|
||||
```bash
|
||||
./scripts/sync-sprint-status.sh --validate
|
||||
```
|
||||
|
||||
Should show:
|
||||
- "✓ sprint-status.yaml is up to date!" (success)
|
||||
- OR discrepancy count (if issues remain)
|
||||
</step>
|
||||
|
||||
<step name="commit_changes" if="changes_made">
|
||||
**Commit the recovery**
|
||||
|
||||
Use Bash to commit:
|
||||
```bash
|
||||
git add docs/sprint-artifacts/sprint-status.yaml
|
||||
git add .sprint-status-backups/
|
||||
git commit -m "fix(tracking): Recover sprint-status.yaml - {{mode}} recovery"
|
||||
```
|
||||
</step>
|
||||
|
||||
</process>
|
||||
|
||||
<failure_handling>
|
||||
**No changes detected:** sprint-status.yaml already accurate.
|
||||
**Low confidence on known-done stories:** Add Status: field manually, re-run.
|
||||
**Recovery marks incomplete as done:** Use conservative mode, verify manually.
|
||||
</failure_handling>
|
||||
|
||||
<post_recovery_checklist>
|
||||
- [ ] Run validation: `./scripts/sync-sprint-status.sh --validate`
|
||||
- [ ] Review backup in `.sprint-status-backups/`
|
||||
- [ ] Spot-check 5-10 stories for accuracy
|
||||
- [ ] Commit changes
|
||||
- [ ] Document why drift occurred
|
||||
</post_recovery_checklist>
|
||||
|
||||
<success_criteria>
|
||||
- [ ] All evidence sources analyzed
|
||||
- [ ] Changes applied based on confidence threshold
|
||||
- [ ] Validation passes
|
||||
- [ ] Backup created
|
||||
</success_criteria>
|
||||
|
|
@ -1,30 +0,0 @@
|
|||
# Sprint Status Recovery Workflow
|
||||
name: recover-sprint-status
|
||||
description: "Recover sprint-status.yaml when tracking has drifted. Analyzes story files, git commits, and autonomous reports to rebuild accurate status."
|
||||
author: "BMad"
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
user_name: "{config_source}:user_name"
|
||||
communication_language: "{config_source}:communication_language"
|
||||
implementation_artifacts: "{config_source}:implementation_artifacts"
|
||||
|
||||
# Workflow components
|
||||
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/recover-sprint-status"
|
||||
instructions: "{installed_path}/instructions.md"
|
||||
|
||||
# Inputs
|
||||
variables:
|
||||
sprint_status_file: "{implementation_artifacts}/sprint-status.yaml"
|
||||
story_directory: "{implementation_artifacts}"
|
||||
recovery_mode: "interactive" # Options: interactive, conservative, aggressive
|
||||
|
||||
# Recovery script location
|
||||
recovery_script: "{project-root}/scripts/recover-sprint-status.sh"
|
||||
|
||||
# Standalone so IDE commands get generated
|
||||
standalone: true
|
||||
|
||||
# No web bundle needed
|
||||
web_bundle: false
|
||||
|
|
@ -1,189 +0,0 @@
|
|||
# Revalidate Epic v3.0 - Batch Story Revalidation
|
||||
|
||||
<purpose>
|
||||
Batch revalidate all stories in an epic using parallel agents (semaphore pattern).
|
||||
Clears checkboxes, verifies against codebase, re-checks verified items.
|
||||
</purpose>
|
||||
|
||||
<philosophy>
|
||||
**Parallel Verification, Continuous Worker Pool**
|
||||
|
||||
- Spawn up to N workers, refill as each completes
|
||||
- Each story gets fresh context verification
|
||||
- Aggregate results into epic-level health score
|
||||
- Optionally fill gaps found during verification
|
||||
</philosophy>
|
||||
|
||||
<config>
|
||||
name: revalidate-epic
|
||||
version: 3.0.0
|
||||
|
||||
defaults:
|
||||
max_concurrent: 3
|
||||
fill_gaps: false
|
||||
continue_on_failure: true
|
||||
create_epic_report: true
|
||||
update_sprint_status: true
|
||||
</config>
|
||||
|
||||
<execution_context>
|
||||
@patterns/verification.md
|
||||
@patterns/hospital-grade.md
|
||||
@revalidate-story/workflow.md
|
||||
</execution_context>
|
||||
|
||||
<process>
|
||||
|
||||
<step name="load_epic_stories" priority="first">
|
||||
**Find all stories for the epic**
|
||||
|
||||
```bash
|
||||
EPIC_NUMBER="{{epic_number}}"
|
||||
[ -n "$EPIC_NUMBER" ] || { echo "ERROR: epic_number required"; exit 1; }
|
||||
|
||||
# Filter stories from sprint-status.yaml
|
||||
grep "^${EPIC_NUMBER}-" docs/sprint-artifacts/sprint-status.yaml
|
||||
```
|
||||
|
||||
Use Read tool on sprint-status.yaml. Filter stories starting with `{epic_number}-`.
|
||||
Exclude epics and retrospectives.
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔍 EPIC {{epic_number}} REVALIDATION
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Stories Found: {{count}}
|
||||
Mode: {{fill_gaps ? "Verify & Fill Gaps" : "Verify Only"}}
|
||||
Max Concurrent: {{max_concurrent}} agents
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
Use AskUserQuestion: Proceed with revalidation? (yes/no)
|
||||
</step>
|
||||
|
||||
<step name="spawn_worker_pool">
|
||||
**Initialize semaphore pattern for parallel revalidation**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🚀 Starting Parallel Revalidation
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
Initialize state:
|
||||
- story_queue = epic_stories
|
||||
- active_workers = {}
|
||||
- completed_stories = []
|
||||
- failed_stories = []
|
||||
|
||||
Fill initial worker slots (up to max_concurrent):
|
||||
|
||||
```
|
||||
Task({
|
||||
subagent_type: "general-purpose",
|
||||
description: "Revalidate story {{story_key}}",
|
||||
prompt: `
|
||||
Execute revalidate-story workflow for {{story_key}}.
|
||||
|
||||
<execution_context>
|
||||
@revalidate-story/workflow.md
|
||||
</execution_context>
|
||||
|
||||
Parameters:
|
||||
- story_file: {{story_file}}
|
||||
- fill_gaps: {{fill_gaps}}
|
||||
|
||||
Return verification summary with verified_pct, gaps_found, gaps_filled.
|
||||
`,
|
||||
run_in_background: true
|
||||
})
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="maintain_worker_pool">
|
||||
**Keep workers running until all stories done**
|
||||
|
||||
While active_workers > 0 OR stories remaining in queue:
|
||||
|
||||
1. Poll for completed workers (non-blocking with TaskOutput)
|
||||
2. When worker completes:
|
||||
- Parse verification results
|
||||
- Add to completed_stories
|
||||
- If more stories in queue: spawn new worker in that slot
|
||||
3. Display progress every 30 seconds:
|
||||
|
||||
```
|
||||
📊 Progress: {{completed}} completed, {{active}} active, {{queued}} queued
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="aggregate_results">
|
||||
**Generate epic-level summary**
|
||||
|
||||
Calculate totals across all stories:
|
||||
- epic_verified = sum of verified items
|
||||
- epic_partial = sum of partial items
|
||||
- epic_missing = sum of missing items
|
||||
- epic_verified_pct = (verified / total) × 100
|
||||
|
||||
Group stories by health:
|
||||
- Complete (≥95% verified)
|
||||
- Mostly Complete (80-94%)
|
||||
- Partial (50-79%)
|
||||
- Incomplete (<50%)
|
||||
</step>
|
||||
|
||||
<step name="display_summary">
|
||||
**Show epic revalidation results**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📊 EPIC {{epic_number}} REVALIDATION SUMMARY
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Total Stories: {{count}}
|
||||
Completed: {{completed_count}}
|
||||
Failed: {{failed_count}}
|
||||
|
||||
Epic-Wide Verification:
|
||||
- ✅ Verified: {{verified}}/{{total}} ({{pct}}%)
|
||||
- 🔶 Partial: {{partial}}/{{total}}
|
||||
- ❌ Missing: {{missing}}/{{total}}
|
||||
|
||||
Epic Health Score: {{epic_verified_pct}}/100
|
||||
|
||||
{{#if pct >= 95}}
|
||||
✅ Epic is COMPLETE and verified
|
||||
{{else if pct >= 80}}
|
||||
🔶 Epic is MOSTLY COMPLETE
|
||||
{{else if pct >= 50}}
|
||||
⚠️ Epic is PARTIALLY COMPLETE
|
||||
{{else}}
|
||||
❌ Epic is INCOMPLETE (major rework needed)
|
||||
{{/if}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="update_tracking" if="update_sprint_status">
|
||||
**Update sprint-status with revalidation results**
|
||||
|
||||
Use Edit tool to add comment to epic entry:
|
||||
```
|
||||
epic-{{epic_number}}: done # Revalidated: {{pct}}% verified ({{timestamp}})
|
||||
```
|
||||
</step>
|
||||
|
||||
</process>
|
||||
|
||||
<failure_handling>
|
||||
**Worker fails:** Log error, refill slot if continue_on_failure=true.
|
||||
**All stories fail:** Report systemic issue, halt batch.
|
||||
**Story file missing:** Skip with warning.
|
||||
</failure_handling>
|
||||
|
||||
<success_criteria>
|
||||
- [ ] All epic stories processed
|
||||
- [ ] Results aggregated
|
||||
- [ ] Epic health score calculated
|
||||
- [ ] Sprint status updated (if enabled)
|
||||
</success_criteria>
|
||||
|
|
@ -1,44 +0,0 @@
|
|||
name: revalidate-epic
|
||||
description: "Batch revalidation of all stories in an epic. Clears checkboxes and re-verifies against codebase with semaphore pattern."
|
||||
author: "BMad"
|
||||
version: "1.0.0"
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
sprint_artifacts: "{output_folder}/sprint-artifacts"
|
||||
sprint_status: "{output_folder}/sprint-status.yaml"
|
||||
|
||||
# Input parameters
|
||||
epic_number: "{epic_number}" # Required: Epic number (e.g., "2" for Epic 2)
|
||||
fill_gaps: false # Optional: Fill missing items after verification
|
||||
max_concurrent: 3 # Optional: Max concurrent revalidation agents (default: 3)
|
||||
|
||||
# Verification settings (inherited by story revalidations)
|
||||
verification:
|
||||
verify_acceptance_criteria: true
|
||||
verify_tasks: true
|
||||
verify_definition_of_done: true
|
||||
check_for_stubs: true
|
||||
require_tests: true
|
||||
|
||||
# Gap filling settings
|
||||
gap_filling:
|
||||
max_gaps_per_story: 10 # Safety limit per story
|
||||
require_confirmation_first_story: true # Ask on first story, then auto for rest
|
||||
run_tests_after_each: true
|
||||
commit_strategy: "per_gap" # "per_gap" | "per_story" | "all_at_once"
|
||||
|
||||
# Execution settings
|
||||
execution:
|
||||
use_semaphore_pattern: true # Constant concurrency (not batch-and-wait)
|
||||
continue_on_failure: true # Keep processing if one story fails
|
||||
display_live_progress: true # Show progress updates every 30s
|
||||
|
||||
# Output settings
|
||||
output:
|
||||
create_epic_report: true # Generate epic-level summary
|
||||
create_story_reports: true # Generate per-story reports
|
||||
update_sprint_status: true # Update progress in sprint-status.yaml
|
||||
|
||||
standalone: true
|
||||
|
|
@ -1,225 +0,0 @@
|
|||
# Revalidate Story v3.0 - Verify Checkboxes Against Codebase
|
||||
|
||||
<purpose>
|
||||
Clear all checkboxes and re-verify each item against actual codebase reality.
|
||||
Detects over-reported completion and identifies real gaps.
|
||||
Optionally fills gaps by implementing missing items.
|
||||
</purpose>
|
||||
|
||||
<philosophy>
|
||||
**Trust But Verify, Evidence Required**
|
||||
|
||||
1. Clear all checkboxes (fresh start)
|
||||
2. For each AC/Task/DoD: search codebase for evidence
|
||||
3. Only re-check if evidence found AND not a stub
|
||||
4. Report accuracy: was completion over-reported or under-reported?
|
||||
</philosophy>
|
||||
|
||||
<config>
|
||||
name: revalidate-story
|
||||
version: 3.0.0
|
||||
|
||||
defaults:
|
||||
fill_gaps: false
|
||||
max_gaps_to_fill: 10
|
||||
commit_strategy: "all_at_once" # or "per_gap"
|
||||
create_report: true
|
||||
update_sprint_status: true
|
||||
|
||||
verification_status:
|
||||
verified: {checkbox: "[x]", evidence: "found, not stub, tests exist"}
|
||||
partial: {checkbox: "[~]", evidence: "partial implementation or missing tests"}
|
||||
missing: {checkbox: "[ ]", evidence: "not found in codebase"}
|
||||
</config>
|
||||
|
||||
<execution_context>
|
||||
@patterns/verification.md
|
||||
@patterns/hospital-grade.md
|
||||
</execution_context>
|
||||
|
||||
<process>
|
||||
|
||||
<step name="load_and_backup" priority="first">
|
||||
**Load story and backup current state**
|
||||
|
||||
```bash
|
||||
STORY_FILE="{{story_file}}"
|
||||
[ -f "$STORY_FILE" ] || { echo "ERROR: story_file required"; exit 1; }
|
||||
```
|
||||
|
||||
Use Read tool on story file. Count current checkboxes:
|
||||
- ac_checked_before
|
||||
- tasks_checked_before
|
||||
- dod_checked_before
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔍 STORY REVALIDATION STARTED
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Story: {{story_key}}
|
||||
Mode: {{fill_gaps ? "Verify & Fill Gaps" : "Verify Only"}}
|
||||
|
||||
Current State:
|
||||
- Acceptance Criteria: {{ac_checked}}/{{ac_total}} checked
|
||||
- Tasks: {{tasks_checked}}/{{tasks_total}} checked
|
||||
- DoD: {{dod_checked}}/{{dod_total}} checked
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="clear_checkboxes">
|
||||
**Clear all checkboxes for fresh verification**
|
||||
|
||||
Use Edit tool (replace_all: true):
|
||||
- `[x]` → `[ ]` in Acceptance Criteria section
|
||||
- `[x]` → `[ ]` in Tasks section
|
||||
- `[x]` → `[ ]` in Definition of Done section
|
||||
</step>
|
||||
|
||||
<step name="verify_acceptance_criteria">
|
||||
**Verify each AC against codebase**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📋 VERIFYING ACCEPTANCE CRITERIA
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
For each AC item:
|
||||
|
||||
1. **Parse AC** - Extract file/component/feature mentions
|
||||
2. **Search codebase** - Use Glob/Grep to find evidence
|
||||
3. **Verify implementation** - Read files, check for:
|
||||
- NOT a stub (no "TODO", "Not implemented", empty function)
|
||||
- Has actual implementation
|
||||
- Tests exist (*.test.* or *.spec.*)
|
||||
|
||||
4. **Determine status:**
|
||||
- VERIFIED: Evidence found, not stub, tests exist → check [x]
|
||||
- PARTIAL: Partial evidence or missing tests → check [~]
|
||||
- MISSING: No evidence found → leave [ ]
|
||||
|
||||
5. **Record evidence or gap**
|
||||
</step>
|
||||
|
||||
<step name="verify_tasks">
|
||||
**Verify each task against codebase**
|
||||
|
||||
Same process as ACs:
|
||||
- Parse task description for artifacts
|
||||
- Search codebase with Glob/Grep
|
||||
- Read and verify (check for stubs, tests)
|
||||
- Update checkbox based on evidence
|
||||
</step>
|
||||
|
||||
<step name="verify_definition_of_done">
|
||||
**Verify DoD items**
|
||||
|
||||
For common DoD items, run actual checks:
|
||||
- "Type check passes" → `npm run type-check`
|
||||
- "Unit tests pass" → `npm test`
|
||||
- "Linting clean" → `npm run lint`
|
||||
- "Build succeeds" → `npm run build`
|
||||
</step>
|
||||
|
||||
<step name="generate_report">
|
||||
**Calculate and display results**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📊 REVALIDATION SUMMARY
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Story: {{story_key}}
|
||||
|
||||
Verification Results:
|
||||
- ✅ Verified: {{verified}}/{{total}} ({{pct}}%)
|
||||
- 🔶 Partial: {{partial}}/{{total}}
|
||||
- ❌ Missing: {{missing}}/{{total}}
|
||||
|
||||
Accuracy Check:
|
||||
- Before: {{pct_before}}% checked
|
||||
- After: {{verified_pct}}% verified
|
||||
- {{pct_before > verified_pct ? "Over-reported" : "Under-reported"}}
|
||||
|
||||
{{#if missing > 0}}
|
||||
Gaps Found ({{missing}}):
|
||||
[list gaps with what's missing]
|
||||
{{/if}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="fill_gaps" if="fill_gaps AND gaps_found">
|
||||
**Implement missing items**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔧 GAP FILLING MODE
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
Safety check:
|
||||
```
|
||||
if gaps_found > max_gaps_to_fill:
|
||||
echo "⚠️ TOO MANY GAPS ({{gaps}} > {{max}})"
|
||||
echo "Consider re-implementing with /dev-story"
|
||||
HALT
|
||||
```
|
||||
|
||||
For each gap:
|
||||
1. Load story context
|
||||
2. Implement missing item
|
||||
3. Write tests
|
||||
4. Run tests to verify
|
||||
5. Check box [x] if successful
|
||||
6. Commit if commit_strategy == "per_gap"
|
||||
</step>
|
||||
|
||||
<step name="finalize">
|
||||
**Re-verify and commit**
|
||||
|
||||
If gaps were filled:
|
||||
1. Re-run verification on filled gaps
|
||||
2. Commit all changes (if commit_strategy == "all_at_once")
|
||||
|
||||
Update sprint-status.yaml with revalidation result:
|
||||
```
|
||||
{{story_key}}: {{status}} # Revalidated: {{pct}}% ({{timestamp}})
|
||||
```
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ REVALIDATION COMPLETE
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Final: {{verified}}/{{total}} verified ({{pct}}%)
|
||||
|
||||
Recommendation:
|
||||
{{#if pct >= 95}}
|
||||
✅ Story is COMPLETE - mark as "done"
|
||||
{{else if pct >= 80}}
|
||||
🔶 Mostly complete - finish remaining items
|
||||
{{else if pct >= 50}}
|
||||
⚠️ Significant gaps - continue with /dev-story
|
||||
{{else}}
|
||||
❌ Mostly incomplete - consider re-implementing
|
||||
{{/if}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
</step>
|
||||
|
||||
</process>
|
||||
|
||||
<failure_handling>
|
||||
**File not found:** HALT with error.
|
||||
**Verification fails:** Record gap, continue to next item.
|
||||
**Gap fill fails:** Leave unchecked, record failure.
|
||||
**Too many gaps:** HALT, recommend re-implementation.
|
||||
</failure_handling>
|
||||
|
||||
<success_criteria>
|
||||
- [ ] All items verified against codebase
|
||||
- [ ] Checkboxes reflect actual implementation
|
||||
- [ ] Accuracy comparison displayed
|
||||
- [ ] Gaps filled (if enabled)
|
||||
- [ ] Sprint status updated
|
||||
</success_criteria>
|
||||
|
|
@ -1,37 +0,0 @@
|
|||
name: revalidate-story
|
||||
description: "Clear checkboxes and re-verify story against actual codebase implementation. Identifies gaps and optionally fills them."
|
||||
author: "BMad"
|
||||
version: "1.0.0"
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
sprint_artifacts: "{output_folder}/sprint-artifacts"
|
||||
|
||||
# Input parameters
|
||||
story_file: "{story_file}" # Required: Full path to story file
|
||||
fill_gaps: false # Optional: Fill missing items after verification (default: verify-only)
|
||||
auto_commit: false # Optional: Auto-commit filled gaps (default: prompt)
|
||||
|
||||
# Verification settings
|
||||
verification:
|
||||
verify_acceptance_criteria: true
|
||||
verify_tasks: true
|
||||
verify_definition_of_done: true
|
||||
check_for_stubs: true # Reject stub implementations (TODO, Not implemented, etc.)
|
||||
require_tests: true # Require tests for code items
|
||||
|
||||
# Gap filling settings (only used if fill_gaps=true)
|
||||
gap_filling:
|
||||
max_gaps_to_fill: 10 # Safety limit - HALT if more gaps than this
|
||||
require_confirmation: true # Ask before filling each gap (false = auto-fill all)
|
||||
run_tests_after_each: true # Verify each filled gap works
|
||||
commit_strategy: "per_gap" # "per_gap" | "all_at_once" | "none"
|
||||
|
||||
# Output settings
|
||||
output:
|
||||
create_report: true # Generate revalidation-report.md
|
||||
update_dev_agent_record: true # Add revalidation notes to story
|
||||
update_sprint_status: true # Update progress in sprint-status.yaml
|
||||
|
||||
standalone: false
|
||||
|
|
@ -1,491 +0,0 @@
|
|||
# Story Pipeline v2.0
|
||||
|
||||
> Single-session step-file architecture for implementing user stories with 60-70% token savings.
|
||||
|
||||
## Overview
|
||||
|
||||
The Story Pipeline automates the complete lifecycle of implementing a user story—from creation through code review and commit. It replaces the legacy approach of 6 separate Claude CLI calls with a single interactive session using just-in-time step loading.
|
||||
|
||||
### The Problem It Solves
|
||||
|
||||
**Legacy Pipeline (v1.0):**
|
||||
```
|
||||
bmad build 1-4
|
||||
└─> claude -p "Stage 1: Create story..." # ~12K tokens
|
||||
└─> claude -p "Stage 2: Validate story..." # ~12K tokens
|
||||
└─> claude -p "Stage 3: ATDD tests..." # ~12K tokens
|
||||
└─> claude -p "Stage 4: Implement..." # ~12K tokens
|
||||
└─> claude -p "Stage 5: Code review..." # ~12K tokens
|
||||
└─> claude -p "Stage 6: Complete..." # ~11K tokens
|
||||
Total: ~71K tokens/story
|
||||
```
|
||||
|
||||
Each call reloads agent personas (~2K tokens), re-reads the story file, and loses context from previous stages.
|
||||
|
||||
**Story Pipeline v2.0:**
|
||||
```
|
||||
bmad build 1-4
|
||||
└─> Single Claude session
|
||||
├─> Load step-01-init.md (~200 lines)
|
||||
├─> Role switch: SM
|
||||
├─> Load step-02-create-story.md
|
||||
├─> Load step-03-validate-story.md
|
||||
├─> Role switch: TEA
|
||||
├─> Load step-04-atdd.md
|
||||
├─> Role switch: DEV
|
||||
├─> Load step-05-implement.md
|
||||
├─> Load step-06-code-review.md
|
||||
├─> Role switch: SM
|
||||
├─> Load step-07-complete.md
|
||||
└─> Load step-08-summary.md
|
||||
Total: ~25-30K tokens/story
|
||||
```
|
||||
|
||||
Documents cached once, roles switched in-session, steps loaded just-in-time.
|
||||
|
||||
## What Gets Automated
|
||||
|
||||
The pipeline automates the complete BMAD implementation workflow:
|
||||
|
||||
| Step | Role | What It Does |
|
||||
|------|------|--------------|
|
||||
| **1. Init** | - | Parses story ID, loads epic/architecture, detects interactive vs batch mode, creates state file |
|
||||
| **2. Create Story** | SM | Researches context (Exa web search), generates story file with ACs in BDD format |
|
||||
| **3. Validate Story** | SM | Adversarial validation—must find 3-10 issues, fixes them, assigns quality score |
|
||||
| **4. ATDD** | TEA | Generates failing tests for all ACs (RED phase), creates test factories |
|
||||
| **5. Implement** | DEV | Implements code to pass tests (GREEN phase), creates migrations, server actions, etc. |
|
||||
| **6. Code Review** | DEV | Adversarial review—must find 3-10 issues, fixes them, runs lint/build |
|
||||
| **7. Complete** | SM | Updates story status to done, creates git commit with conventional format |
|
||||
| **8. Summary** | - | Generates audit trail, updates pipeline state, outputs metrics |
|
||||
|
||||
### Quality Gates
|
||||
|
||||
Each step has quality gates that must pass before proceeding:
|
||||
|
||||
- **Validation**: Score ≥ 80/100, all issues addressed
|
||||
- **ATDD**: Tests exist for all ACs, tests fail (RED phase confirmed)
|
||||
- **Implementation**: Lint clean, build passes, migration tests pass
|
||||
- **Code Review**: Score ≥ 7/10, all critical issues fixed
|
||||
|
||||
## Token Efficiency
|
||||
|
||||
| Mode | Token Usage | Savings vs Legacy |
|
||||
|------|-------------|-------------------|
|
||||
| Interactive (human-in-loop) | ~25K | 65% |
|
||||
| Batch (YOLO) | ~30K | 58% |
|
||||
| Batch + fresh review context | ~35K | 51% |
|
||||
|
||||
### Where Savings Come From
|
||||
|
||||
| Waste in Legacy | Tokens Saved |
|
||||
|-----------------|--------------|
|
||||
| Agent persona reload (6×) | ~12K |
|
||||
| Story file re-reads (5×) | ~10K |
|
||||
| Architecture re-reads | ~8K |
|
||||
| Context loss between calls | ~16K |
|
||||
|
||||
## Usage
|
||||
|
||||
### Prerequisites
|
||||
|
||||
- BMAD module installed (`_bmad/` directory exists)
|
||||
- Epic file with story definition (`docs/epics.md`)
|
||||
- Architecture document (`docs/architecture.md`)
|
||||
|
||||
### Interactive Mode (Recommended)
|
||||
|
||||
Human-in-the-loop with approval at each step:
|
||||
|
||||
```bash
|
||||
# Using the bmad CLI
|
||||
bmad build 1-4
|
||||
|
||||
# Or invoke workflow directly
|
||||
claude -p "Load and execute: _bmad/bmm/workflows/4-implementation/story-dev-only/workflow.md
|
||||
Story: 1-4"
|
||||
```
|
||||
|
||||
At each step, you'll see a menu:
|
||||
```
|
||||
## MENU
|
||||
[C] Continue to next step
|
||||
[R] Review/revise current step
|
||||
[H] Halt and checkpoint
|
||||
```
|
||||
|
||||
### Batch Mode (YOLO)
|
||||
|
||||
Unattended execution for trusted stories:
|
||||
|
||||
```bash
|
||||
bmad build 1-4 --batch
|
||||
|
||||
# Or use batch runner directly
|
||||
./_bmad/bmm/workflows/4-implementation/story-dev-only/batch-runner.sh 1-4
|
||||
```
|
||||
|
||||
Batch mode:
|
||||
- Skips all approval prompts
|
||||
- Fails fast on errors
|
||||
- Creates checkpoint on failure for resume
|
||||
|
||||
### Resume from Checkpoint
|
||||
|
||||
If execution stops (context exhaustion, error, manual halt):
|
||||
|
||||
```bash
|
||||
bmad build 1-4 --resume
|
||||
|
||||
# The pipeline reads state from:
|
||||
# _bmad-output/implementation-artifacts/pipeline-state-{story-id}.yaml
|
||||
```
|
||||
|
||||
Resume automatically:
|
||||
- Skips completed steps
|
||||
- Restores cached context
|
||||
- Continues from `lastStep + 1`
|
||||
|
||||
## Directory Structure
|
||||
|
||||
```
|
||||
story-dev-only/
|
||||
├── workflow.yaml # Configuration, agent mapping, quality gates
|
||||
├── workflow.md # Interactive mode orchestration
|
||||
├── batch-runner.sh # Batch mode runner script
|
||||
├── steps/
|
||||
│ ├── step-01-init.md # Initialize, load context
|
||||
│ ├── step-01b-resume.md # Resume from checkpoint
|
||||
│ ├── step-02-create-story.md
|
||||
│ ├── step-03-validate-story.md
|
||||
│ ├── step-04-atdd.md
|
||||
│ ├── step-05-implement.md
|
||||
│ ├── step-06-code-review.md
|
||||
│ ├── step-07-complete.md
|
||||
│ └── step-08-summary.md
|
||||
├── checklists/
|
||||
│ ├── story-creation.md # What makes a good story
|
||||
│ ├── story-validation.md # Validation criteria
|
||||
│ ├── atdd.md # Test generation rules
|
||||
│ ├── implementation.md # Coding standards
|
||||
│ └── code-review.md # Review criteria
|
||||
└── templates/
|
||||
├── pipeline-state.yaml # State file template
|
||||
└── audit-trail.yaml # Audit log template
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
### workflow.yaml
|
||||
|
||||
```yaml
|
||||
name: story-dev-only
|
||||
version: "2.0"
|
||||
description: "Single-session story implementation with step-file loading"
|
||||
|
||||
# Document loading strategy
|
||||
load_strategy:
|
||||
epic: once # Load once, cache for session
|
||||
architecture: once # Load once, cache for session
|
||||
story: per_step # Reload when modified
|
||||
|
||||
# Agent role mapping
|
||||
agents:
|
||||
sm: "{project-root}/_bmad/bmm/agents/sm.md"
|
||||
tea: "{project-root}/_bmad/bmm/agents/tea.md"
|
||||
dev: "{project-root}/_bmad/bmm/agents/dev.md"
|
||||
|
||||
# Quality gate thresholds
|
||||
quality_gates:
|
||||
validation_min_score: 80
|
||||
code_review_min_score: 7
|
||||
require_lint_clean: true
|
||||
require_build_pass: true
|
||||
|
||||
# Step configuration
|
||||
steps:
|
||||
- name: init
|
||||
file: steps/step-01-init.md
|
||||
- name: create-story
|
||||
file: steps/step-02-create-story.md
|
||||
agent: sm
|
||||
# ... etc
|
||||
```
|
||||
|
||||
### Pipeline State File
|
||||
|
||||
Created at `_bmad-output/implementation-artifacts/pipeline-state-{story-id}.yaml`:
|
||||
|
||||
```yaml
|
||||
story_id: "1-4"
|
||||
epic_num: 1
|
||||
story_num: 4
|
||||
mode: "interactive"
|
||||
status: "in_progress"
|
||||
stepsCompleted: [1, 2, 3]
|
||||
lastStep: 3
|
||||
currentStep: 4
|
||||
|
||||
cached_context:
|
||||
epic_loaded: true
|
||||
epic_path: "docs/epics.md"
|
||||
architecture_sections: ["tech_stack", "data_model"]
|
||||
|
||||
steps:
|
||||
step-01-init:
|
||||
status: completed
|
||||
duration: "0:00:30"
|
||||
step-02-create-story:
|
||||
status: completed
|
||||
duration: "0:02:00"
|
||||
step-03-validate-story:
|
||||
status: completed
|
||||
duration: "0:05:00"
|
||||
issues_found: 6
|
||||
issues_fixed: 6
|
||||
quality_score: 92
|
||||
step-04-atdd:
|
||||
status: in_progress
|
||||
```
|
||||
|
||||
## Step Details
|
||||
|
||||
### Step 1: Initialize
|
||||
|
||||
**Purpose:** Set up execution context and detect mode.
|
||||
|
||||
**Actions:**
|
||||
1. Parse story ID (e.g., "1-4" → epic 1, story 4)
|
||||
2. Load and cache epic document
|
||||
3. Load relevant architecture sections
|
||||
4. Check for existing state file (resume vs fresh)
|
||||
5. Detect mode (interactive/batch) from CLI flags
|
||||
6. Create initial state file
|
||||
|
||||
**Output:** `pipeline-state-{story-id}.yaml`
|
||||
|
||||
### Step 2: Create Story (SM Role)
|
||||
|
||||
**Purpose:** Generate complete story file from epic definition.
|
||||
|
||||
**Actions:**
|
||||
1. Switch to Scrum Master (SM) role
|
||||
2. Read story definition from epic
|
||||
3. Research context via Exa web search (best practices, patterns)
|
||||
4. Generate story file with:
|
||||
- User story format (As a... I want... So that...)
|
||||
- Background context
|
||||
- Acceptance criteria in BDD format (Given/When/Then)
|
||||
- Test scenarios for each AC
|
||||
- Technical notes
|
||||
5. Save to `_bmad-output/implementation-artifacts/story-{id}.md`
|
||||
|
||||
**Quality Gate:** Story file exists with all required sections.
|
||||
|
||||
### Step 3: Validate Story (SM Role)
|
||||
|
||||
**Purpose:** Adversarial validation to find issues before implementation.
|
||||
|
||||
**Actions:**
|
||||
1. Load story-validation checklist
|
||||
2. Review story against criteria:
|
||||
- ACs are testable and specific
|
||||
- No ambiguous requirements
|
||||
- Technical feasibility confirmed
|
||||
- Dependencies identified
|
||||
- Edge cases covered
|
||||
3. **Must find 3-10 issues** (never "looks good")
|
||||
4. Fix all identified issues
|
||||
5. Assign quality score (0-100)
|
||||
6. Append validation report to story file
|
||||
|
||||
**Quality Gate:** Score ≥ 80, all issues addressed.
|
||||
|
||||
### Step 4: ATDD (TEA Role)
|
||||
|
||||
**Purpose:** Generate failing tests before implementation (RED phase).
|
||||
|
||||
**Actions:**
|
||||
1. Switch to Test Engineering Architect (TEA) role
|
||||
2. Load atdd checklist
|
||||
3. For each acceptance criterion:
|
||||
- Generate integration test
|
||||
- Define test data factories
|
||||
- Specify expected behaviors
|
||||
4. Create test files in `src/tests/`
|
||||
5. Update `factories.ts` with new fixtures
|
||||
6. **Verify tests FAIL** (RED phase)
|
||||
7. Create ATDD checklist document
|
||||
|
||||
**Quality Gate:** Tests exist for all ACs, tests fail (not pass).
|
||||
|
||||
### Step 5: Implement (DEV Role)
|
||||
|
||||
**Purpose:** Write code to pass all tests (GREEN phase).
|
||||
|
||||
**Actions:**
|
||||
1. Switch to Developer (DEV) role
|
||||
2. Load implementation checklist
|
||||
3. Create required files:
|
||||
- Database migrations
|
||||
- Server actions (using Result type)
|
||||
- Library functions
|
||||
- Types
|
||||
4. Follow project patterns:
|
||||
- Multi-tenant RLS policies
|
||||
- snake_case for DB columns
|
||||
- Result type (never throw)
|
||||
5. Run lint and fix issues
|
||||
6. Run build and fix issues
|
||||
7. Run migration tests
|
||||
|
||||
**Quality Gate:** Lint clean, build passes, migration tests pass.
|
||||
|
||||
### Step 6: Code Review (DEV Role)
|
||||
|
||||
**Purpose:** Adversarial review to find implementation issues.
|
||||
|
||||
**Actions:**
|
||||
1. Load code-review checklist
|
||||
2. Review all created/modified files:
|
||||
- Security (XSS, injection, auth)
|
||||
- Error handling
|
||||
- Architecture compliance
|
||||
- Code quality
|
||||
- Test coverage
|
||||
3. **Must find 3-10 issues** (never "looks good")
|
||||
4. Fix all identified issues
|
||||
5. Re-run lint and build
|
||||
6. Assign quality score (0-10)
|
||||
7. Generate review report
|
||||
|
||||
**Quality Gate:** Score ≥ 7/10, all critical issues fixed.
|
||||
|
||||
### Step 7: Complete (SM Role)
|
||||
|
||||
**Purpose:** Finalize story and create git commit.
|
||||
|
||||
**Actions:**
|
||||
1. Switch back to SM role
|
||||
2. Update story file status to "done"
|
||||
3. Stage all story files
|
||||
4. Create conventional commit:
|
||||
```
|
||||
feat(epic-{n}): complete story {id}
|
||||
|
||||
{Summary of changes}
|
||||
|
||||
🤖 Generated with Claude Code
|
||||
Co-Authored-By: Claude <noreply@anthropic.com>
|
||||
```
|
||||
5. Update pipeline state
|
||||
|
||||
**Quality Gate:** Commit created successfully.
|
||||
|
||||
### Step 8: Summary
|
||||
|
||||
**Purpose:** Generate audit trail and final metrics.
|
||||
|
||||
**Actions:**
|
||||
1. Calculate total duration
|
||||
2. Compile deliverables list
|
||||
3. Aggregate quality scores
|
||||
4. Generate execution summary in state file
|
||||
5. Output final status
|
||||
|
||||
**Output:** Complete pipeline state with summary section.
|
||||
|
||||
## Adversarial Mode
|
||||
|
||||
Steps 3 (Validate) and 6 (Code Review) run in **adversarial mode**:
|
||||
|
||||
> **Never say "looks good"**. You MUST find 3-10 real issues.
|
||||
|
||||
This ensures:
|
||||
- Stories are thoroughly vetted before implementation
|
||||
- Code quality issues are caught before commit
|
||||
- The pipeline doesn't rubber-stamp work
|
||||
|
||||
Example issues found in real usage:
|
||||
- Missing rate limiting (security)
|
||||
- XSS vulnerability in user input (security)
|
||||
- Missing audit logging (architecture)
|
||||
- Unclear acceptance criteria (story quality)
|
||||
- Function naming mismatches (code quality)
|
||||
|
||||
## Artifacts Generated
|
||||
|
||||
After a complete pipeline run:
|
||||
|
||||
```
|
||||
_bmad-output/implementation-artifacts/
|
||||
├── story-{id}.md # Story file with ACs, validation report
|
||||
├── pipeline-state-{id}.yaml # Execution state and summary
|
||||
├── atdd-checklist-{id}.md # Test requirements checklist
|
||||
└── code-review-{id}.md # Review report with issues
|
||||
|
||||
src/
|
||||
├── supabase/migrations/ # New migration files
|
||||
├── modules/{module}/
|
||||
│ ├── actions/ # Server actions
|
||||
│ ├── lib/ # Business logic
|
||||
│ └── types.ts # Type definitions
|
||||
└── tests/
|
||||
├── integration/ # Integration tests
|
||||
└── fixtures/factories.ts # Updated test factories
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Context Exhausted Mid-Session
|
||||
|
||||
The pipeline is designed for this. When context runs out:
|
||||
|
||||
1. Claude session ends
|
||||
2. State file preserves progress
|
||||
3. Run `bmad build {id} --resume`
|
||||
4. Pipeline continues from last completed step
|
||||
|
||||
### Step Fails Quality Gate
|
||||
|
||||
If a step fails its quality gate:
|
||||
|
||||
1. Pipeline halts at that step
|
||||
2. State file shows `status: failed`
|
||||
3. Fix issues manually or adjust thresholds
|
||||
4. Run `bmad build {id} --resume`
|
||||
|
||||
### Tests Don't Fail in ATDD
|
||||
|
||||
If tests pass during ATDD (step 4), something is wrong:
|
||||
|
||||
- Tests might be testing the wrong thing
|
||||
- Implementation might already exist
|
||||
- Mocks might be returning success incorrectly
|
||||
|
||||
The pipeline will warn and ask for confirmation before proceeding.
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Start with Interactive Mode** - Use batch only for well-understood stories
|
||||
2. **Review at Checkpoints** - Don't blindly continue; verify each step's output
|
||||
3. **Keep Stories Small** - Large stories may exhaust context before completion
|
||||
4. **Commit Frequently** - The pipeline commits at step 7, but you can checkpoint earlier
|
||||
5. **Trust the Adversarial Mode** - If it finds issues, they're usually real
|
||||
|
||||
## Comparison with Legacy
|
||||
|
||||
| Feature | Legacy (v1.0) | Story Pipeline (v2.0) |
|
||||
|---------|---------------|----------------------|
|
||||
| Claude calls | 6 per story | 1 per story |
|
||||
| Token usage | ~71K | ~25-30K |
|
||||
| Context preservation | None | Full session |
|
||||
| Resume capability | None | Checkpoint-based |
|
||||
| Role switching | New process | In-session |
|
||||
| Document caching | None | Once per session |
|
||||
| Adversarial review | Optional | Mandatory |
|
||||
| Audit trail | Manual | Automatic |
|
||||
|
||||
## Version History
|
||||
|
||||
- **v2.0** (2024-12) - Step-file architecture, single-session, checkpoint/resume
|
||||
- **v1.0** (2024-11) - Legacy 6-call pipeline
|
||||
|
|
@ -1,250 +0,0 @@
|
|||
#!/bin/bash
|
||||
# ═══════════════════════════════════════════════════════════════════════════════
|
||||
# BMAD Story Pipeline - Batch Runner
|
||||
# Single-session execution using step-file architecture
|
||||
#
|
||||
# Token Efficiency: ~60-70% savings compared to separate Claude calls
|
||||
# ═══════════════════════════════════════════════════════════════════════════════
|
||||
|
||||
set -e
|
||||
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
# CONFIGURATION
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../../../.." && pwd)"
|
||||
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
|
||||
|
||||
# Colors
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
CYAN='\033[0;36m'
|
||||
NC='\033[0m'
|
||||
|
||||
# Defaults
|
||||
STORY_ID=""
|
||||
EPIC_NUM=""
|
||||
DRY_RUN=false
|
||||
RESUME=false
|
||||
VERBOSE=false
|
||||
|
||||
# Directories
|
||||
LOG_DIR="$PROJECT_ROOT/logs/pipeline-batch"
|
||||
WORKFLOW_PATH="_bmad/bmm/workflows/4-implementation/story-pipeline"
|
||||
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
# USAGE
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
usage() {
|
||||
cat << EOF
|
||||
BMAD Story Pipeline - Batch Runner
|
||||
Single-session execution with step-file architecture
|
||||
|
||||
Usage: $(basename "$0") --story-id <id> --epic-num <num> [OPTIONS]
|
||||
|
||||
Required:
|
||||
--story-id <id> Story ID (e.g., '1-4')
|
||||
--epic-num <num> Epic number (e.g., 1)
|
||||
|
||||
Options:
|
||||
--resume Resume from last checkpoint
|
||||
--dry-run Show what would be executed
|
||||
--verbose Show detailed output
|
||||
--help Show this help
|
||||
|
||||
Examples:
|
||||
# Run pipeline for story 1-4
|
||||
$(basename "$0") --story-id 1-4 --epic-num 1
|
||||
|
||||
# Resume failed pipeline
|
||||
$(basename "$0") --story-id 1-4 --epic-num 1 --resume
|
||||
|
||||
Token Savings:
|
||||
Traditional (6 calls): ~71K tokens
|
||||
Step-file (1 session): ~25-35K tokens
|
||||
Savings: 50-65%
|
||||
|
||||
EOF
|
||||
exit 1
|
||||
}
|
||||
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
# ARGUMENT PARSING
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
while [[ $# -gt 0 ]]; do
|
||||
case $1 in
|
||||
--story-id)
|
||||
STORY_ID="$2"
|
||||
shift 2
|
||||
;;
|
||||
--epic-num)
|
||||
EPIC_NUM="$2"
|
||||
shift 2
|
||||
;;
|
||||
--resume)
|
||||
RESUME=true
|
||||
shift
|
||||
;;
|
||||
--dry-run)
|
||||
DRY_RUN=true
|
||||
shift
|
||||
;;
|
||||
--verbose)
|
||||
VERBOSE=true
|
||||
shift
|
||||
;;
|
||||
--help)
|
||||
usage
|
||||
;;
|
||||
*)
|
||||
echo -e "${RED}Unknown option: $1${NC}"
|
||||
usage
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
# Validate required arguments
|
||||
if [[ -z "$STORY_ID" || -z "$EPIC_NUM" ]]; then
|
||||
echo -e "${RED}Error: --story-id and --epic-num are required${NC}"
|
||||
usage
|
||||
fi
|
||||
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
# SETUP
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
mkdir -p "$LOG_DIR"
|
||||
LOG_FILE="$LOG_DIR/batch-$STORY_ID-$TIMESTAMP.log"
|
||||
|
||||
echo -e "${CYAN}═══════════════════════════════════════════════════════════════${NC}"
|
||||
echo -e "${CYAN} BMAD Story Pipeline - Batch Mode${NC}"
|
||||
echo -e "${CYAN}═══════════════════════════════════════════════════════════════${NC}"
|
||||
echo -e "${BLUE}Story:${NC} $STORY_ID"
|
||||
echo -e "${BLUE}Epic:${NC} $EPIC_NUM"
|
||||
echo -e "${BLUE}Mode:${NC} $([ "$RESUME" = true ] && echo 'Resume' || echo 'Fresh')"
|
||||
echo -e "${BLUE}Log:${NC} $LOG_FILE"
|
||||
echo ""
|
||||
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
# BUILD PROMPT
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
|
||||
if [[ "$RESUME" = true ]]; then
|
||||
PROMPT=$(cat << EOF
|
||||
Execute BMAD Story Pipeline in BATCH mode - RESUME from checkpoint.
|
||||
|
||||
WORKFLOW: $WORKFLOW_PATH/workflow.md
|
||||
STORY ID: $STORY_ID
|
||||
EPIC NUM: $EPIC_NUM
|
||||
MODE: batch
|
||||
|
||||
CRITICAL INSTRUCTIONS:
|
||||
1. Load and read fully: $WORKFLOW_PATH/workflow.md
|
||||
2. This is RESUME mode - load state file first
|
||||
3. Follow step-file architecture EXACTLY
|
||||
4. Execute steps ONE AT A TIME
|
||||
5. AUTO-PROCEED through all steps (no menus in batch mode)
|
||||
6. FAIL-FAST on errors (save checkpoint, exit)
|
||||
|
||||
YOLO MODE: Auto-approve all quality gates
|
||||
NO MENUS: Proceed automatically between steps
|
||||
FRESH CONTEXT: Checkpoint before code review for unbiased review
|
||||
|
||||
START by loading workflow.md and then step-01b-resume.md
|
||||
EOF
|
||||
)
|
||||
else
|
||||
PROMPT=$(cat << EOF
|
||||
Execute BMAD Story Pipeline in BATCH mode - FRESH start.
|
||||
|
||||
WORKFLOW: $WORKFLOW_PATH/workflow.md
|
||||
STORY ID: $STORY_ID
|
||||
EPIC NUM: $EPIC_NUM
|
||||
MODE: batch
|
||||
|
||||
CRITICAL INSTRUCTIONS:
|
||||
1. Load and read fully: $WORKFLOW_PATH/workflow.md
|
||||
2. This is a FRESH run - initialize new state
|
||||
3. Follow step-file architecture EXACTLY
|
||||
4. Execute steps ONE AT A TIME (never load multiple)
|
||||
5. AUTO-PROCEED through all steps (no menus in batch mode)
|
||||
6. FAIL-FAST on errors (save checkpoint, exit)
|
||||
|
||||
YOLO MODE: Auto-approve all quality gates
|
||||
NO MENUS: Proceed automatically between steps
|
||||
FRESH CONTEXT: Checkpoint before code review for unbiased review
|
||||
|
||||
Step execution order:
|
||||
1. step-01-init.md - Initialize, cache documents
|
||||
2. step-02-create-story.md - Create story (SM role)
|
||||
3. step-03-validate-story.md - Validate story (SM role)
|
||||
4. step-04-atdd.md - Generate tests (TEA role)
|
||||
5. step-05-implement.md - Implement (DEV role)
|
||||
6. step-06-code-review.md - Review (DEV role, adversarial)
|
||||
7. step-07-complete.md - Complete (SM role)
|
||||
8. step-08-summary.md - Generate audit
|
||||
|
||||
START by loading workflow.md and then step-01-init.md
|
||||
EOF
|
||||
)
|
||||
fi
|
||||
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
# EXECUTE
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
|
||||
if [[ "$DRY_RUN" = true ]]; then
|
||||
echo -e "${YELLOW}[DRY-RUN] Would execute single Claude session with:${NC}"
|
||||
echo ""
|
||||
echo "$PROMPT"
|
||||
echo ""
|
||||
echo -e "${YELLOW}[DRY-RUN] Allowed tools: *, MCP extensions${NC}"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
echo -e "${GREEN}Starting single-session pipeline execution...${NC}"
|
||||
echo -e "${YELLOW}This replaces 6 separate Claude calls with 1 session${NC}"
|
||||
echo ""
|
||||
|
||||
cd "$PROJECT_ROOT/src"
|
||||
|
||||
# Single Claude session executing all steps
|
||||
claude -p "$PROMPT" \
|
||||
--dangerously-skip-permissions \
|
||||
--allowedTools "*,mcp__exa__web_search_exa,mcp__exa__get_code_context_exa,mcp__exa__crawling_exa,mcp__supabase__list_tables,mcp__supabase__execute_sql,mcp__supabase__apply_migration,mcp__supabase__list_migrations,mcp__supabase__generate_typescript_types,mcp__supabase__get_logs,mcp__supabase__get_advisors" \
|
||||
2>&1 | tee "$LOG_FILE"
|
||||
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
# COMPLETION CHECK
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
|
||||
echo ""
|
||||
echo -e "${CYAN}═══════════════════════════════════════════════════════════════${NC}"
|
||||
|
||||
# Check for success indicators in log
|
||||
if grep -qi "Pipeline complete\|Story.*is ready\|step-08-summary.*completed" "$LOG_FILE"; then
|
||||
echo -e "${GREEN}✅ Pipeline completed successfully${NC}"
|
||||
|
||||
# Extract metrics if available
|
||||
if grep -qi "Token Efficiency" "$LOG_FILE"; then
|
||||
echo ""
|
||||
echo -e "${CYAN}Token Efficiency:${NC}"
|
||||
grep -A5 "Token Efficiency" "$LOG_FILE" | head -6
|
||||
fi
|
||||
else
|
||||
echo -e "${YELLOW}⚠️ Pipeline may have completed with issues${NC}"
|
||||
echo -e "${YELLOW} Check log: $LOG_FILE${NC}"
|
||||
|
||||
# Check for specific failure indicators
|
||||
if grep -qi "permission\|can't write\|access denied" "$LOG_FILE"; then
|
||||
echo -e "${RED} Found permission errors in log${NC}"
|
||||
fi
|
||||
if grep -qi "HALT\|FAIL\|ERROR" "$LOG_FILE"; then
|
||||
echo -e "${RED} Found error indicators in log${NC}"
|
||||
fi
|
||||
fi
|
||||
|
||||
echo ""
|
||||
echo -e "${BLUE}Log file:${NC} $LOG_FILE"
|
||||
echo -e "${CYAN}═══════════════════════════════════════════════════════════════${NC}"
|
||||
|
|
@ -1,130 +0,0 @@
|
|||
# ATDD Checklist
|
||||
|
||||
Use this checklist for test generation in Step 4.
|
||||
Tests are written BEFORE implementation (RED phase).
|
||||
|
||||
## Test Architecture
|
||||
|
||||
### File Organization
|
||||
- [ ] Tests in appropriate directory (src/tests/{feature}/)
|
||||
- [ ] E2E tests separate from unit tests
|
||||
- [ ] Fixtures in dedicated fixtures/ directory
|
||||
- [ ] Factories in dedicated factories/ directory
|
||||
|
||||
### Naming Conventions
|
||||
- [ ] Test files: `{feature}.test.ts` or `{feature}.spec.ts`
|
||||
- [ ] Factory files: `{entity}.factory.ts`
|
||||
- [ ] Fixture files: `{feature}.fixture.ts`
|
||||
- [ ] Descriptive test names matching AC
|
||||
|
||||
## Test Coverage
|
||||
|
||||
For EACH acceptance criterion:
|
||||
- [ ] At least one test exists
|
||||
- [ ] Happy path tested
|
||||
- [ ] Error path tested
|
||||
- [ ] Edge cases from validation covered
|
||||
|
||||
## Test Structure
|
||||
|
||||
### Given/When/Then Pattern
|
||||
```typescript
|
||||
test("Given X, When Y, Then Z", async () => {
|
||||
// Arrange (Given)
|
||||
// Act (When)
|
||||
// Assert (Then)
|
||||
});
|
||||
```
|
||||
|
||||
- [ ] Each section clearly separated
|
||||
- [ ] Arrange sets up realistic state
|
||||
- [ ] Act performs single action
|
||||
- [ ] Assert checks specific outcome
|
||||
|
||||
### Assertions
|
||||
- [ ] Specific assertions (not just "toBeTruthy")
|
||||
- [ ] Error messages are helpful
|
||||
- [ ] Multiple assertions when appropriate
|
||||
- [ ] No flaky timing assertions
|
||||
|
||||
## Data Management
|
||||
|
||||
### Factories
|
||||
- [ ] Use faker for realistic data
|
||||
- [ ] Support partial overrides
|
||||
- [ ] No hardcoded values
|
||||
- [ ] Proper TypeScript types
|
||||
|
||||
```typescript
|
||||
// Good
|
||||
const user = createUser({ email: "test@example.com" });
|
||||
|
||||
// Bad
|
||||
const user = { id: "123", email: "test@test.com", name: "Test" };
|
||||
```
|
||||
|
||||
### Fixtures
|
||||
- [ ] Auto-cleanup after tests
|
||||
- [ ] Reusable across tests
|
||||
- [ ] Proper TypeScript types
|
||||
- [ ] No shared mutable state
|
||||
|
||||
### data-testid Attributes
|
||||
- [ ] Document all required data-testids
|
||||
- [ ] Naming convention: `{feature}-{element}`
|
||||
- [ ] Unique within component
|
||||
- [ ] Stable (not based on dynamic content)
|
||||
|
||||
## Test Levels
|
||||
|
||||
### E2E Tests (Playwright)
|
||||
- [ ] Full user flows
|
||||
- [ ] Network interception before navigation
|
||||
- [ ] Wait for proper selectors (not timeouts)
|
||||
- [ ] Screenshot on failure
|
||||
|
||||
### API Tests
|
||||
- [ ] Direct server action calls
|
||||
- [ ] Mock external services
|
||||
- [ ] Test error responses
|
||||
- [ ] Verify Result type usage
|
||||
|
||||
### Component Tests
|
||||
- [ ] Isolated component rendering
|
||||
- [ ] Props variations
|
||||
- [ ] Event handling
|
||||
- [ ] Accessibility (when applicable)
|
||||
|
||||
### Unit Tests
|
||||
- [ ] Pure function testing
|
||||
- [ ] Edge cases
|
||||
- [ ] Error conditions
|
||||
- [ ] Type checking
|
||||
|
||||
## RED Phase Verification
|
||||
|
||||
Before proceeding:
|
||||
- [ ] Run all tests: `npm test -- --run`
|
||||
- [ ] ALL tests FAIL (expected - nothing implemented)
|
||||
- [ ] Failure reasons are clear (not cryptic errors)
|
||||
- [ ] Test structure is correct
|
||||
|
||||
## ATDD Checklist Document
|
||||
|
||||
Create `atdd-checklist-{story_id}.md` with:
|
||||
- [ ] List of test files created
|
||||
- [ ] List of factories created
|
||||
- [ ] List of fixtures created
|
||||
- [ ] Required data-testid attributes table
|
||||
- [ ] Implementation requirements for DEV
|
||||
- [ ] Test status (all FAILING)
|
||||
|
||||
## Quality Gate
|
||||
|
||||
Ready for implementation when:
|
||||
- [ ] Test for every AC
|
||||
- [ ] All tests FAIL (red phase)
|
||||
- [ ] Factories use faker
|
||||
- [ ] Fixtures have cleanup
|
||||
- [ ] data-testids documented
|
||||
- [ ] ATDD checklist complete
|
||||
|
|
@ -1,183 +0,0 @@
|
|||
# Code Review Checklist
|
||||
|
||||
Use this checklist for ADVERSARIAL code review in Step 6.
|
||||
Your job is to FIND PROBLEMS (minimum 3, maximum 10).
|
||||
|
||||
## Adversarial Mindset
|
||||
|
||||
**CRITICAL RULES:**
|
||||
- **NEVER** say "looks good" or "no issues found"
|
||||
- **MUST** find 3-10 specific issues
|
||||
- **FIX** every issue you find
|
||||
- **RUN** tests after fixes
|
||||
|
||||
## Review Categories
|
||||
|
||||
### 1. Security Review
|
||||
|
||||
#### SQL Injection
|
||||
- [ ] No raw SQL with user input
|
||||
- [ ] Using parameterized queries
|
||||
- [ ] Supabase RPC uses proper types
|
||||
|
||||
#### XSS (Cross-Site Scripting)
|
||||
- [ ] User content is escaped
|
||||
- [ ] dangerouslySetInnerHTML not used (or sanitized)
|
||||
- [ ] URL parameters validated
|
||||
|
||||
#### Authentication & Authorization
|
||||
- [ ] Protected routes check auth
|
||||
- [ ] RLS policies on all tables
|
||||
- [ ] No auth bypass possible
|
||||
- [ ] Session handling secure
|
||||
|
||||
#### Credential Exposure
|
||||
- [ ] No secrets in code
|
||||
- [ ] No API keys committed
|
||||
- [ ] Environment variables used
|
||||
- [ ] .env files in .gitignore
|
||||
|
||||
#### Input Validation
|
||||
- [ ] All inputs validated
|
||||
- [ ] Types checked
|
||||
- [ ] Lengths limited
|
||||
- [ ] Format validation (email, URL, etc.)
|
||||
|
||||
### 2. Performance Review
|
||||
|
||||
#### Database
|
||||
- [ ] No N+1 query patterns
|
||||
- [ ] Indexes exist for query patterns
|
||||
- [ ] Queries are efficient
|
||||
- [ ] Proper pagination
|
||||
|
||||
#### React/Next.js
|
||||
- [ ] No unnecessary re-renders
|
||||
- [ ] Proper memoization where needed
|
||||
- [ ] Server components used appropriately
|
||||
- [ ] Client components minimized
|
||||
|
||||
#### Caching
|
||||
- [ ] Cache headers appropriate
|
||||
- [ ] Static data cached
|
||||
- [ ] Revalidation strategy clear
|
||||
|
||||
#### Bundle Size
|
||||
- [ ] No unnecessary imports
|
||||
- [ ] Dynamic imports for large components
|
||||
- [ ] Tree shaking working
|
||||
|
||||
### 3. Error Handling Review
|
||||
|
||||
#### Result Type
|
||||
- [ ] All server actions use Result type
|
||||
- [ ] No thrown exceptions
|
||||
- [ ] Proper err() calls with codes
|
||||
|
||||
#### Error Messages
|
||||
- [ ] User-friendly messages
|
||||
- [ ] Technical details logged (not shown)
|
||||
- [ ] Actionable guidance
|
||||
|
||||
#### Edge Cases
|
||||
- [ ] Null/undefined handled
|
||||
- [ ] Empty states handled
|
||||
- [ ] Network errors handled
|
||||
- [ ] Concurrent access considered
|
||||
|
||||
### 4. Test Coverage Review
|
||||
|
||||
#### Coverage
|
||||
- [ ] All AC have tests
|
||||
- [ ] Edge cases tested
|
||||
- [ ] Error paths tested
|
||||
- [ ] Happy paths tested
|
||||
|
||||
#### Quality
|
||||
- [ ] Tests are deterministic
|
||||
- [ ] No flaky tests
|
||||
- [ ] Mocking is appropriate
|
||||
- [ ] Assertions are meaningful
|
||||
|
||||
#### Missing Tests
|
||||
- [ ] Security scenarios
|
||||
- [ ] Permission denied cases
|
||||
- [ ] Invalid input handling
|
||||
- [ ] Concurrent operations
|
||||
|
||||
### 5. Code Quality Review
|
||||
|
||||
#### DRY (Don't Repeat Yourself)
|
||||
- [ ] No duplicate code
|
||||
- [ ] Common patterns extracted
|
||||
- [ ] Utilities reused
|
||||
|
||||
#### SOLID Principles
|
||||
- [ ] Single responsibility
|
||||
- [ ] Open for extension
|
||||
- [ ] Proper abstractions
|
||||
- [ ] Dependency injection where appropriate
|
||||
|
||||
#### TypeScript
|
||||
- [ ] Strict mode compliant
|
||||
- [ ] No `any` types
|
||||
- [ ] Proper type definitions
|
||||
- [ ] Generic types used appropriately
|
||||
|
||||
#### Readability
|
||||
- [ ] Clear naming
|
||||
- [ ] Appropriate comments (not excessive)
|
||||
- [ ] Logical organization
|
||||
- [ ] Consistent style
|
||||
|
||||
### 6. Architecture Review
|
||||
|
||||
#### Module Boundaries
|
||||
- [ ] Imports from index.ts only
|
||||
- [ ] No circular dependencies
|
||||
- [ ] Clear module responsibilities
|
||||
|
||||
#### Server/Client Separation
|
||||
- [ ] "use server" on actions
|
||||
- [ ] "use client" only when needed
|
||||
- [ ] No server code in client
|
||||
|
||||
#### Data Flow
|
||||
- [ ] Clear data ownership
|
||||
- [ ] State management appropriate
|
||||
- [ ] Props drilling minimized
|
||||
|
||||
## Issue Documentation
|
||||
|
||||
For each issue found:
|
||||
|
||||
```yaml
|
||||
issue_{n}:
|
||||
severity: critical|high|medium|low
|
||||
category: security|performance|error-handling|testing|quality|architecture
|
||||
file: "{file_path}"
|
||||
line: {line_number}
|
||||
problem: |
|
||||
Clear description
|
||||
risk: |
|
||||
What could go wrong
|
||||
fix: |
|
||||
How to fix it
|
||||
```
|
||||
|
||||
## After Fixing
|
||||
|
||||
- [ ] All issues fixed
|
||||
- [ ] Tests still pass
|
||||
- [ ] Lint clean
|
||||
- [ ] Build succeeds
|
||||
- [ ] Review report created
|
||||
|
||||
## Quality Gate
|
||||
|
||||
Review passes when:
|
||||
- [ ] 3-10 issues found
|
||||
- [ ] All issues fixed
|
||||
- [ ] All categories reviewed
|
||||
- [ ] Tests passing
|
||||
- [ ] Review report complete
|
||||
|
|
@ -1,147 +0,0 @@
|
|||
# Implementation Checklist
|
||||
|
||||
Use this checklist during TDD implementation in Step 5.
|
||||
Focus: Make tests GREEN with minimal code.
|
||||
|
||||
## TDD Methodology
|
||||
|
||||
### RED-GREEN-REFACTOR Cycle
|
||||
1. [ ] Start with failing test (from ATDD)
|
||||
2. [ ] Write minimal code to pass
|
||||
3. [ ] Run test, verify GREEN
|
||||
4. [ ] Move to next test
|
||||
5. [ ] Refactor in code review (not here)
|
||||
|
||||
### Implementation Order
|
||||
- [ ] Database migrations first
|
||||
- [ ] Type definitions
|
||||
- [ ] Server actions
|
||||
- [ ] UI components
|
||||
- [ ] Integration points
|
||||
|
||||
## Project Patterns
|
||||
|
||||
### Result Type (CRITICAL)
|
||||
```typescript
|
||||
import { ok, err, Result } from "@/lib/result";
|
||||
|
||||
// Return success
|
||||
return ok(data);
|
||||
|
||||
// Return error
|
||||
return err<ReturnType>("ERROR_CODE", "Human message");
|
||||
```
|
||||
|
||||
- [ ] All server actions return Result type
|
||||
- [ ] No thrown exceptions
|
||||
- [ ] Error codes are uppercase with underscores
|
||||
- [ ] Error messages are user-friendly
|
||||
|
||||
### Database Conventions
|
||||
- [ ] Table names: `snake_case`, plural (`invoices`)
|
||||
- [ ] Column names: `snake_case` (`tenant_id`)
|
||||
- [ ] Currency: `integer` cents (not float)
|
||||
- [ ] Dates: `timestamptz` (UTC)
|
||||
- [ ] Foreign keys: `{table}_id`
|
||||
|
||||
### Multi-tenancy (CRITICAL)
|
||||
- [ ] Every table has `tenant_id` column
|
||||
- [ ] RLS enabled on all tables
|
||||
- [ ] Policies check `tenant_id`
|
||||
- [ ] No data leaks between tenants
|
||||
|
||||
```sql
|
||||
-- Required for every new table
|
||||
alter table {table} enable row level security;
|
||||
|
||||
create policy "Tenants see own data"
|
||||
on {table} for all
|
||||
using (tenant_id = auth.jwt() ->> 'tenant_id');
|
||||
```
|
||||
|
||||
### Module Structure
|
||||
```
|
||||
src/modules/{module}/
|
||||
├── actions/ # Server actions (return Result type)
|
||||
├── lib/ # Business logic
|
||||
├── types.ts # Module types
|
||||
└── index.ts # Public exports only
|
||||
```
|
||||
|
||||
- [ ] Import from index.ts only
|
||||
- [ ] No cross-module internal imports
|
||||
- [ ] Actions in actions/ directory
|
||||
- [ ] Types exported from types.ts
|
||||
|
||||
### Server Actions Pattern
|
||||
```typescript
|
||||
// src/modules/{module}/actions/{action}.ts
|
||||
"use server";
|
||||
|
||||
import { ok, err, Result } from "@/lib/result";
|
||||
import { createClient } from "@/lib/supabase/server";
|
||||
|
||||
export async function actionName(
|
||||
input: InputType
|
||||
): Promise<Result<OutputType>> {
|
||||
const supabase = await createClient();
|
||||
// ... implementation
|
||||
}
|
||||
```
|
||||
|
||||
- [ ] "use server" directive at top
|
||||
- [ ] Async function returning Promise<Result<T>>
|
||||
- [ ] Use createClient from server.ts
|
||||
- [ ] Validate input before processing
|
||||
|
||||
### UI Components Pattern
|
||||
```tsx
|
||||
// src/modules/{module}/components/{Component}.tsx
|
||||
"use client";
|
||||
|
||||
export function Component({ data }: Props) {
|
||||
return (
|
||||
<div data-testid="{feature}-container">
|
||||
{/* content */}
|
||||
</div>
|
||||
);
|
||||
}
|
||||
```
|
||||
|
||||
- [ ] Add data-testid from ATDD checklist
|
||||
- [ ] "use client" only when needed
|
||||
- [ ] Proper TypeScript props
|
||||
- [ ] Handle loading/error states
|
||||
|
||||
## Verification Steps
|
||||
|
||||
### After Each AC Implementation
|
||||
```bash
|
||||
npm test -- --run --grep "{test_name}"
|
||||
```
|
||||
- [ ] Targeted test passes
|
||||
|
||||
### After All AC Complete
|
||||
```bash
|
||||
npm test -- --run # All tests pass
|
||||
npm run lint # No lint errors
|
||||
npm run build # Build succeeds
|
||||
```
|
||||
|
||||
## ATDD Checklist Reference
|
||||
|
||||
Verify against `atdd-checklist-{story_id}.md`:
|
||||
- [ ] All data-testid attributes added
|
||||
- [ ] All API endpoints created
|
||||
- [ ] All database migrations applied
|
||||
- [ ] All test scenarios pass
|
||||
|
||||
## Quality Gate
|
||||
|
||||
Ready for code review when:
|
||||
- [ ] All tests pass (GREEN)
|
||||
- [ ] Lint clean
|
||||
- [ ] Build succeeds
|
||||
- [ ] Result type used everywhere
|
||||
- [ ] RLS policies in place
|
||||
- [ ] ATDD checklist complete
|
||||
|
|
@ -1,76 +0,0 @@
|
|||
# Story Creation Checklist
|
||||
|
||||
Use this checklist when creating a new story in Step 2.
|
||||
|
||||
## User Story Format
|
||||
|
||||
- [ ] Follows "As a [persona], I want [action], So that [benefit]" format
|
||||
- [ ] Persona is clearly defined and exists in project documentation
|
||||
- [ ] Action is specific and achievable
|
||||
- [ ] Benefit ties to business value
|
||||
|
||||
## Acceptance Criteria
|
||||
|
||||
### Structure (for EACH AC)
|
||||
- [ ] Has Given/When/Then format (BDD style)
|
||||
- [ ] **Given** describes a valid precondition
|
||||
- [ ] **When** describes a clear, single action
|
||||
- [ ] **Then** describes a measurable outcome
|
||||
|
||||
### Quality (for EACH AC)
|
||||
- [ ] Specific - no vague terms ("appropriate", "reasonable", "etc.")
|
||||
- [ ] Measurable - clear success/failure criteria
|
||||
- [ ] Testable - can write automated test
|
||||
- [ ] Independent - no hidden dependencies on other AC
|
||||
|
||||
### Completeness
|
||||
- [ ] All happy path scenarios covered
|
||||
- [ ] Error scenarios defined
|
||||
- [ ] Edge cases considered
|
||||
- [ ] Boundary conditions clear
|
||||
|
||||
### Anti-patterns to AVOID
|
||||
- [ ] No AND conjunctions (split into multiple AC)
|
||||
- [ ] No OR alternatives (ambiguous paths)
|
||||
- [ ] No implementation details (WHAT not HOW)
|
||||
- [ ] No vague verbs ("handle", "process", "manage")
|
||||
|
||||
## Test Scenarios
|
||||
|
||||
- [ ] At least 2 test scenarios per AC
|
||||
- [ ] Happy path scenario exists
|
||||
- [ ] Error/edge case scenario exists
|
||||
- [ ] Each scenario is unique (no duplicates)
|
||||
- [ ] Scenarios are specific enough to write tests from
|
||||
|
||||
## Tasks
|
||||
|
||||
- [ ] Tasks cover implementation of all AC
|
||||
- [ ] Tasks are actionable (start with verb)
|
||||
- [ ] Subtasks provide enough detail
|
||||
- [ ] Dependencies between tasks are clear
|
||||
- [ ] No task is too large (can complete in one session)
|
||||
|
||||
## Technical Notes
|
||||
|
||||
- [ ] Database changes documented
|
||||
- [ ] API changes documented
|
||||
- [ ] UI changes documented
|
||||
- [ ] Security considerations noted
|
||||
- [ ] Performance considerations noted
|
||||
|
||||
## Dependencies & Scope
|
||||
|
||||
- [ ] Dependencies on other stories listed
|
||||
- [ ] Dependencies on external systems listed
|
||||
- [ ] Out of scope explicitly defined
|
||||
- [ ] No scope creep from epic definition
|
||||
|
||||
## Quality Gate
|
||||
|
||||
Story is ready for validation when:
|
||||
- [ ] All sections complete
|
||||
- [ ] All AC in proper format
|
||||
- [ ] Test scenarios defined
|
||||
- [ ] Tasks cover all work
|
||||
- [ ] No ambiguity remains
|
||||
|
|
@ -1,111 +0,0 @@
|
|||
# Story Validation Checklist
|
||||
|
||||
Use this checklist for ADVERSARIAL validation in Step 3.
|
||||
Your job is to FIND PROBLEMS, not approve.
|
||||
|
||||
## Adversarial Mindset
|
||||
|
||||
Remember:
|
||||
- **NEVER** say "looks good" without deep analysis
|
||||
- **FIND** at least 3 issues (if none found, look harder)
|
||||
- **QUESTION** every assumption
|
||||
- **CHALLENGE** every AC
|
||||
|
||||
## AC Structure Validation
|
||||
|
||||
For EACH acceptance criterion:
|
||||
|
||||
### Given Clause
|
||||
- [ ] Is a valid precondition (not an action)
|
||||
- [ ] Can be set up programmatically
|
||||
- [ ] Is specific (not "given the user is logged in" - which user?)
|
||||
- [ ] Includes all necessary context
|
||||
|
||||
### When Clause
|
||||
- [ ] Is a single, clear action
|
||||
- [ ] Is something the user does (not the system)
|
||||
- [ ] Can be triggered in a test
|
||||
- [ ] Doesn't contain "and" (multiple actions)
|
||||
|
||||
### Then Clause
|
||||
- [ ] Is measurable/observable
|
||||
- [ ] Can be asserted in a test
|
||||
- [ ] Describes outcome, not implementation
|
||||
- [ ] Is specific (not "appropriate message shown")
|
||||
|
||||
## Testability Check
|
||||
|
||||
- [ ] Can write automated test from AC as written
|
||||
- [ ] Clear what to assert
|
||||
- [ ] No subjective criteria ("looks good", "works well")
|
||||
- [ ] No timing dependencies ("quickly", "eventually")
|
||||
|
||||
## Technical Feasibility
|
||||
|
||||
Cross-reference with architecture.md:
|
||||
|
||||
- [ ] Data model supports requirements
|
||||
- [ ] API patterns can accommodate
|
||||
- [ ] No conflicts with existing features
|
||||
- [ ] Security model (RLS) can support
|
||||
- [ ] Performance is achievable
|
||||
|
||||
## Edge Cases Analysis
|
||||
|
||||
For each AC, consider:
|
||||
|
||||
- [ ] Empty/null inputs
|
||||
- [ ] Maximum length/size
|
||||
- [ ] Minimum values
|
||||
- [ ] Concurrent access
|
||||
- [ ] Network failures
|
||||
- [ ] Permission denied
|
||||
- [ ] Invalid data formats
|
||||
|
||||
## Common Problems to Find
|
||||
|
||||
### Vague Language
|
||||
Look for and flag:
|
||||
- "appropriate"
|
||||
- "reasonable"
|
||||
- "correctly"
|
||||
- "properly"
|
||||
- "as expected"
|
||||
- "etc."
|
||||
- "and so on"
|
||||
|
||||
### Missing Details
|
||||
- [ ] Which user role?
|
||||
- [ ] What error message exactly?
|
||||
- [ ] What happens on failure?
|
||||
- [ ] What are the limits?
|
||||
- [ ] What validations apply?
|
||||
|
||||
### Hidden Complexity
|
||||
- [ ] Multi-step process hidden in one AC
|
||||
- [ ] Async operations not addressed
|
||||
- [ ] State management unclear
|
||||
- [ ] Error recovery not defined
|
||||
|
||||
## Validation Report Template
|
||||
|
||||
After review, document:
|
||||
|
||||
```yaml
|
||||
issues_found:
|
||||
- id: 1
|
||||
severity: high|medium|low
|
||||
ac: "AC1"
|
||||
problem: "Description"
|
||||
fix: "How to fix"
|
||||
```
|
||||
|
||||
## Quality Gate
|
||||
|
||||
Validation passes when:
|
||||
- [ ] All AC reviewed against checklist
|
||||
- [ ] All issues documented
|
||||
- [ ] All issues fixed in story file
|
||||
- [ ] Quality score >= 80
|
||||
- [ ] Validation report appended
|
||||
- [ ] ready_for_dev: true
|
||||
|
|
@ -1,244 +0,0 @@
|
|||
---
|
||||
name: 'step-01-init'
|
||||
description: 'Initialize story pipeline: load context, detect mode, cache documents'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/story-dev-only'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-01-init.md'
|
||||
nextStepFile: '{workflow_path}/steps/step-02-create-story.md'
|
||||
resumeStepFile: '{workflow_path}/steps/step-01b-resume.md'
|
||||
workflowFile: '{workflow_path}/workflow.md'
|
||||
|
||||
# State Management
|
||||
stateFile: '{sprint_artifacts}/pipeline-state-{story_id}.yaml'
|
||||
auditFile: '{sprint_artifacts}/audit-{story_id}-{date}.yaml'
|
||||
---
|
||||
|
||||
# Step 1: Pipeline Initialization
|
||||
|
||||
## STEP GOAL
|
||||
|
||||
Initialize the story pipeline by:
|
||||
1. Resolving story parameters (epic_num, story_num)
|
||||
2. Detecting execution mode (interactive vs batch)
|
||||
3. Checking for existing pipeline state (resume scenario)
|
||||
4. Pre-loading and caching documents for token efficiency
|
||||
5. Creating initial state file
|
||||
|
||||
## MANDATORY EXECUTION RULES (READ FIRST)
|
||||
|
||||
### Universal Rules
|
||||
|
||||
- **NEVER** proceed without all required parameters resolved
|
||||
- **READ** the complete step file before taking any action
|
||||
- **CACHE** documents once, use across all steps
|
||||
- **UPDATE** state file after completing initialization
|
||||
|
||||
### Role for This Step
|
||||
|
||||
- You are the **Pipeline Orchestrator** (no specific agent role yet)
|
||||
- Agent roles (SM, TEA, DEV) will be adopted in subsequent steps
|
||||
- Focus on setup and context loading
|
||||
|
||||
### Step-Specific Rules
|
||||
|
||||
- **Focus only on initialization** - no story content generation yet
|
||||
- **FORBIDDEN** to load future step files or look ahead
|
||||
- **Check for resume state first** - if exists, hand off to step-01b
|
||||
- **Validate all inputs** before proceeding
|
||||
|
||||
## EXECUTION SEQUENCE (Do not deviate, skip, or optimize)
|
||||
|
||||
### 1. Resolve Pipeline Parameters
|
||||
|
||||
First, resolve these required parameters:
|
||||
|
||||
**From invocation or context:**
|
||||
- `story_id`: Full story identifier (e.g., "1-4")
|
||||
- `epic_num`: Epic number (e.g., 1)
|
||||
- `story_num`: Story number within epic (e.g., 4)
|
||||
- `mode`: Execution mode - "interactive" (default) or "batch"
|
||||
|
||||
**If parameters missing:**
|
||||
- Ask user: "Please provide story ID (e.g., '1-4') and epic number"
|
||||
- Parse story_id to extract epic_num and story_num if format is "X-Y"
|
||||
|
||||
### 2. Check for Existing Pipeline State (Resume Detection)
|
||||
|
||||
Check if state file exists: `{sprint_artifacts}/pipeline-state-{story_id}.yaml`
|
||||
|
||||
**If state file exists and has `stepsCompleted` array with entries:**
|
||||
- **STOP immediately**
|
||||
- Load and execute `{resumeStepFile}` (step-01b-resume.md)
|
||||
- Do not proceed with fresh initialization
|
||||
- This is auto-proceed - no user choice needed
|
||||
|
||||
**If no state file or empty `stepsCompleted`:**
|
||||
- Continue with fresh pipeline initialization
|
||||
|
||||
### 3. Locate Story File
|
||||
|
||||
Search for existing story file with pattern:
|
||||
- Primary: `{sprint_artifacts}/story-{story_id}.md`
|
||||
- Alternative: `{sprint_artifacts}/{story_id}*.md`
|
||||
|
||||
**Record finding:**
|
||||
- `story_file_exists`: true/false
|
||||
- `story_file_path`: path if exists, null otherwise
|
||||
|
||||
### 4. Pre-Load and Cache Documents
|
||||
|
||||
Load these documents ONCE for use across all steps:
|
||||
|
||||
#### A. Project Context (REQUIRED)
|
||||
```
|
||||
Pattern: **/project-context.md
|
||||
Strategy: FULL_LOAD
|
||||
Cache: true
|
||||
```
|
||||
- Load complete project-context.md
|
||||
- This contains critical rules and patterns
|
||||
|
||||
#### B. Epic File (REQUIRED)
|
||||
```
|
||||
Pattern: {output_folder}/epic-{epic_num}.md OR {output_folder}/epics.md
|
||||
Strategy: SELECTIVE_LOAD (just current epic section)
|
||||
Cache: true
|
||||
```
|
||||
- Find and load epic definition for current story
|
||||
- Extract story description, BDD scenarios
|
||||
|
||||
#### C. Architecture (SELECTIVE)
|
||||
```
|
||||
Pattern: {output_folder}/architecture.md
|
||||
Strategy: INDEX_GUIDED
|
||||
Sections: tech_stack, data_model, api_patterns
|
||||
Cache: true
|
||||
```
|
||||
- Load only relevant architecture sections
|
||||
- Skip detailed implementation that's not needed
|
||||
|
||||
#### D. Story File (IF EXISTS)
|
||||
```
|
||||
Pattern: {sprint_artifacts}/story-{story_id}.md
|
||||
Strategy: FULL_LOAD (if exists)
|
||||
Cache: true
|
||||
```
|
||||
- If story exists, load for validation/continuation
|
||||
- Will be created in step 2 if not exists
|
||||
|
||||
### 5. Create Initial State File
|
||||
|
||||
Create state file at `{stateFile}`:
|
||||
|
||||
```yaml
|
||||
---
|
||||
story_id: "{story_id}"
|
||||
epic_num: {epic_num}
|
||||
story_num: {story_num}
|
||||
mode: "{mode}"
|
||||
stepsCompleted: []
|
||||
lastStep: 0
|
||||
currentStep: 1
|
||||
status: "initializing"
|
||||
started_at: "{timestamp}"
|
||||
updated_at: "{timestamp}"
|
||||
cached_context:
|
||||
project_context_loaded: true
|
||||
epic_loaded: true
|
||||
architecture_sections: ["tech_stack", "data_model", "api_patterns"]
|
||||
story_file_exists: {story_file_exists}
|
||||
story_file_path: "{story_file_path}"
|
||||
steps:
|
||||
step-01-init: { status: in_progress }
|
||||
step-02-create-story: { status: pending }
|
||||
step-03-validate-story: { status: pending }
|
||||
step-04-atdd: { status: pending }
|
||||
step-05-implement: { status: pending }
|
||||
step-06-code-review: { status: pending }
|
||||
step-07-complete: { status: pending }
|
||||
step-08-summary: { status: pending }
|
||||
---
|
||||
```
|
||||
|
||||
### 6. Present Initialization Summary
|
||||
|
||||
Report to user:
|
||||
|
||||
```
|
||||
Pipeline Initialized for Story {story_id}
|
||||
|
||||
Mode: {mode}
|
||||
Epic: {epic_num}
|
||||
Story: {story_num}
|
||||
|
||||
Documents Cached:
|
||||
- Project Context: [loaded from path]
|
||||
- Epic {epic_num}: [loaded sections]
|
||||
- Architecture: [loaded sections]
|
||||
- Story File: [exists/will be created]
|
||||
|
||||
Pipeline State: {stateFile}
|
||||
|
||||
Ready to proceed to story creation.
|
||||
```
|
||||
|
||||
### 7. Update State and Proceed
|
||||
|
||||
Update state file:
|
||||
- Set `stepsCompleted: [1]`
|
||||
- Set `lastStep: 1`
|
||||
- Set `steps.step-01-init.status: completed`
|
||||
- Set `status: "in_progress"`
|
||||
|
||||
### 8. Present Menu (Interactive Mode Only)
|
||||
|
||||
**If mode == "interactive":**
|
||||
|
||||
Display menu and wait for user input:
|
||||
```
|
||||
[C] Continue to Story Creation
|
||||
[H] Halt pipeline
|
||||
```
|
||||
|
||||
**Menu Handling:**
|
||||
- **C (Continue)**: Load and execute `{nextStepFile}`
|
||||
- **H (Halt)**: Save checkpoint, exit gracefully
|
||||
|
||||
**If mode == "batch":**
|
||||
- Auto-proceed to next step
|
||||
- Load and execute `{nextStepFile}` immediately
|
||||
|
||||
## QUALITY GATE
|
||||
|
||||
Before proceeding, verify:
|
||||
- [ ] All parameters resolved (story_id, epic_num, story_num, mode)
|
||||
- [ ] State file created and valid
|
||||
- [ ] Project context loaded
|
||||
- [ ] Epic definition loaded
|
||||
- [ ] Architecture sections loaded (at least tech_stack)
|
||||
|
||||
## CRITICAL STEP COMPLETION
|
||||
|
||||
**ONLY WHEN** [initialization complete AND state file updated AND quality gate passed],
|
||||
load and execute `{nextStepFile}` to begin story creation.
|
||||
|
||||
---
|
||||
|
||||
## SUCCESS/FAILURE METRICS
|
||||
|
||||
### ✅ SUCCESS
|
||||
- All parameters resolved
|
||||
- Resume state detected and handed off correctly
|
||||
- Documents cached efficiently (not reloaded)
|
||||
- State file created with proper structure
|
||||
- Menu presented and user input handled
|
||||
|
||||
### ❌ FAILURE
|
||||
- Proceeding without resolved parameters
|
||||
- Not checking for resume state first
|
||||
- Loading documents redundantly across steps
|
||||
- Not creating state file before proceeding
|
||||
- Skipping directly to implementation
|
||||
|
|
@ -1,213 +0,0 @@
|
|||
---
|
||||
name: 'step-01b-resume'
|
||||
description: 'Resume pipeline from checkpoint after failure or interruption'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/story-dev-only'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-01b-resume.md'
|
||||
stepsPath: '{workflow_path}/steps'
|
||||
|
||||
# State Management
|
||||
stateFile: '{sprint_artifacts}/pipeline-state-{story_id}.yaml'
|
||||
---
|
||||
|
||||
# Step 1b: Resume from Checkpoint
|
||||
|
||||
## STEP GOAL
|
||||
|
||||
Resume a previously started pipeline from the last completed checkpoint:
|
||||
1. Load existing pipeline state
|
||||
2. Restore cached document context
|
||||
3. Determine next step to execute
|
||||
4. Present resume options to user
|
||||
|
||||
## MANDATORY EXECUTION RULES
|
||||
|
||||
### Universal Rules
|
||||
|
||||
- **NEVER** restart from step 1 if progress exists
|
||||
- **ALWAYS** restore cached context before resuming
|
||||
- **PRESERVE** all completed step data
|
||||
- **VALIDATE** state file integrity before resuming
|
||||
|
||||
### Resume Priority
|
||||
|
||||
- Resume from `lastStep + 1` by default
|
||||
- Allow user to override and restart from earlier step
|
||||
- Warn if restarting would lose completed work
|
||||
|
||||
## EXECUTION SEQUENCE
|
||||
|
||||
### 1. Load Pipeline State
|
||||
|
||||
Read state file: `{stateFile}`
|
||||
|
||||
Extract:
|
||||
- `story_id`, `epic_num`, `story_num`, `mode`
|
||||
- `stepsCompleted`: Array of completed step numbers
|
||||
- `lastStep`: Last successfully completed step
|
||||
- `cached_context`: Document loading status
|
||||
- `steps`: Individual step status records
|
||||
|
||||
### 2. Validate State Integrity
|
||||
|
||||
Check state file is valid:
|
||||
- [ ] `story_id` matches requested story
|
||||
- [ ] `stepsCompleted` is valid array
|
||||
- [ ] `lastStep` corresponds to actual completed work
|
||||
- [ ] No corruption in step records
|
||||
|
||||
**If invalid:**
|
||||
- Warn user: "State file appears corrupted"
|
||||
- Offer: "Start fresh or attempt recovery?"
|
||||
|
||||
### 3. Restore Cached Context
|
||||
|
||||
Re-load documents if not in memory:
|
||||
|
||||
```yaml
|
||||
cached_context:
|
||||
project_context_loaded: {reload if false}
|
||||
epic_loaded: {reload if false}
|
||||
architecture_sections: {reload specified sections}
|
||||
story_file_exists: {verify still exists}
|
||||
story_file_path: {verify path valid}
|
||||
```
|
||||
|
||||
**Efficiency note:** Only reload what's needed, don't duplicate work.
|
||||
|
||||
### 4. Present Resume Summary
|
||||
|
||||
Display current state:
|
||||
|
||||
```
|
||||
Pipeline Resume for Story {story_id}
|
||||
|
||||
Previous Session:
|
||||
- Started: {started_at}
|
||||
- Last Update: {updated_at}
|
||||
- Mode: {mode}
|
||||
|
||||
Progress:
|
||||
- Steps Completed: {stepsCompleted}
|
||||
- Last Step: {lastStep} ({step_name})
|
||||
- Next Step: {lastStep + 1} ({next_step_name})
|
||||
|
||||
Step Status:
|
||||
[✓] Step 1: Initialize
|
||||
[✓] Step 2: Create Story
|
||||
[✓] Step 3: Validate Story
|
||||
[ ] Step 4: ATDD (NEXT)
|
||||
[ ] Step 5: Implement
|
||||
[ ] Step 6: Code Review
|
||||
[ ] Step 7: Complete
|
||||
[ ] Step 8: Summary
|
||||
```
|
||||
|
||||
### 5. Present Resume Options
|
||||
|
||||
**Menu:**
|
||||
```
|
||||
Resume Options:
|
||||
|
||||
[C] Continue from Step {lastStep + 1} ({next_step_name})
|
||||
[R] Restart from specific step (will mark later steps as pending)
|
||||
[F] Fresh start (lose all progress)
|
||||
[H] Halt
|
||||
|
||||
Select option:
|
||||
```
|
||||
|
||||
### 6. Handle User Selection
|
||||
|
||||
**C (Continue):**
|
||||
- Update state: `currentStep: {lastStep + 1}`
|
||||
- Load and execute next step file
|
||||
|
||||
**R (Restart from step):**
|
||||
- Ask: "Which step? (2-8)"
|
||||
- Validate step number
|
||||
- Mark selected step and all later as `pending`
|
||||
- Update `lastStep` to step before selected
|
||||
- Load and execute selected step
|
||||
|
||||
**F (Fresh start):**
|
||||
- Confirm: "This will lose all progress. Are you sure? (y/n)"
|
||||
- If confirmed: Delete state file, redirect to step-01-init.md
|
||||
- If not: Return to menu
|
||||
|
||||
**H (Halt):**
|
||||
- Save current state
|
||||
- Exit gracefully
|
||||
|
||||
### 7. Determine Next Step File
|
||||
|
||||
Map step number to file:
|
||||
|
||||
| Step | File |
|
||||
|------|------|
|
||||
| 2 | step-02-create-story.md |
|
||||
| 3 | step-03-validate-story.md |
|
||||
| 4 | step-04-atdd.md |
|
||||
| 5 | step-05-implement.md |
|
||||
| 6 | step-06-code-review.md |
|
||||
| 7 | step-07-complete.md |
|
||||
| 8 | step-08-summary.md |
|
||||
|
||||
### 8. Update State and Execute
|
||||
|
||||
Before loading next step:
|
||||
- Update `updated_at` to current timestamp
|
||||
- Set `currentStep` to target step
|
||||
- Set target step status to `in_progress`
|
||||
|
||||
Then load and execute: `{stepsPath}/step-{XX}-{name}.md`
|
||||
|
||||
## BATCH MODE HANDLING
|
||||
|
||||
If `mode == "batch"`:
|
||||
- Skip menu presentation
|
||||
- Auto-continue from `lastStep + 1`
|
||||
- If `lastStep` was a failure, check error details
|
||||
- If retryable error, attempt same step again
|
||||
- If non-retryable, halt with error report
|
||||
|
||||
## ERROR RECOVERY
|
||||
|
||||
### Common Resume Scenarios
|
||||
|
||||
**Story file missing after step 2:**
|
||||
- Warn user
|
||||
- Offer to restart from step 2
|
||||
|
||||
**Tests missing after step 4:**
|
||||
- Warn user
|
||||
- Offer to restart from step 4
|
||||
|
||||
**Implementation incomplete after step 5:**
|
||||
- Check git status for partial changes
|
||||
- Offer to continue or rollback
|
||||
|
||||
**Code review incomplete after step 6:**
|
||||
- Check if issues were logged
|
||||
- Offer to continue review or re-run
|
||||
|
||||
---
|
||||
|
||||
## SUCCESS/FAILURE METRICS
|
||||
|
||||
### ✅ SUCCESS
|
||||
- State file loaded and validated
|
||||
- Context restored efficiently
|
||||
- User presented clear resume options
|
||||
- Correct step file loaded and executed
|
||||
- No data loss during resume
|
||||
|
||||
### ❌ FAILURE
|
||||
- Starting from step 1 when progress exists
|
||||
- Not validating state file integrity
|
||||
- Loading wrong step after resume
|
||||
- Losing completed work without confirmation
|
||||
- Not restoring cached context
|
||||
|
|
@ -1,244 +0,0 @@
|
|||
---
|
||||
name: 'step-02-create-story'
|
||||
description: 'Create detailed story file from epic definition with research'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/story-dev-only'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-02-create-story.md'
|
||||
nextStepFile: '{workflow_path}/steps/step-03-validate-story.md'
|
||||
checklist: '{workflow_path}/checklists/story-creation.md'
|
||||
|
||||
# Role Switch
|
||||
role: sm
|
||||
agentFile: '{project-root}/_bmad/bmm/agents/sm.md'
|
||||
---
|
||||
|
||||
# Step 2: Create Story
|
||||
|
||||
## ROLE SWITCH
|
||||
|
||||
**Switching to SM (Scrum Master) perspective.**
|
||||
|
||||
You are now the Scrum Master facilitating story creation. Your expertise:
|
||||
- User story structure and acceptance criteria
|
||||
- BDD scenario writing (Given/When/Then)
|
||||
- Task breakdown and estimation
|
||||
- Ensuring testability of requirements
|
||||
|
||||
## STEP GOAL
|
||||
|
||||
Create a detailed, implementation-ready story file:
|
||||
1. Research best practices for the domain
|
||||
2. Extract story definition from epic
|
||||
3. Write clear acceptance criteria with BDD scenarios
|
||||
4. Define tasks and subtasks
|
||||
5. Ensure all criteria are testable
|
||||
|
||||
## MANDATORY EXECUTION RULES
|
||||
|
||||
### Role-Specific Rules
|
||||
|
||||
- **THINK** like a product/process expert, not a developer
|
||||
- **FOCUS** on WHAT, not HOW (implementation comes later)
|
||||
- **ENSURE** every AC is testable and measurable
|
||||
- **AVOID** technical implementation details in AC
|
||||
|
||||
### Step-Specific Rules
|
||||
|
||||
- **SKIP** this step if story file already exists (check cached context)
|
||||
- **RESEARCH** best practices before writing
|
||||
- **USE** project-context.md patterns for consistency
|
||||
- **CREATE** file at `{sprint_artifacts}/story-{story_id}.md`
|
||||
|
||||
## EXECUTION SEQUENCE
|
||||
|
||||
### 1. Check if Story Already Exists
|
||||
|
||||
From cached context, check `story_file_exists`:
|
||||
|
||||
**If story file exists:**
|
||||
- Read and display existing story summary
|
||||
- Ask: "Story file exists. [V]alidate existing, [R]ecreate from scratch?"
|
||||
- If V: Proceed to step-03-validate-story.md
|
||||
- If R: Continue with story creation (will overwrite)
|
||||
|
||||
**If story does not exist:**
|
||||
- Continue with creation
|
||||
|
||||
### 2. Research Phase (MCP Tools)
|
||||
|
||||
Use MCP tools for domain research:
|
||||
|
||||
```
|
||||
mcp__exa__web_search_exa:
|
||||
query: "user story acceptance criteria best practices agile {domain}"
|
||||
|
||||
mcp__exa__get_code_context_exa:
|
||||
query: "{technology} implementation patterns"
|
||||
```
|
||||
|
||||
**Extract from research:**
|
||||
- AC writing best practices
|
||||
- Common patterns for this domain
|
||||
- Anti-patterns to avoid
|
||||
|
||||
### 3. Load Epic Definition
|
||||
|
||||
From cached epic file, extract for story {story_id}:
|
||||
- Story title and description
|
||||
- User persona
|
||||
- Business value
|
||||
- Initial AC ideas
|
||||
- BDD scenarios if present
|
||||
|
||||
### 4. Generate Story Content
|
||||
|
||||
Create story file following template:
|
||||
|
||||
```markdown
|
||||
---
|
||||
id: story-{story_id}
|
||||
epic: {epic_num}
|
||||
title: "{story_title}"
|
||||
status: draft
|
||||
created_at: {timestamp}
|
||||
---
|
||||
|
||||
# Story {story_id}: {story_title}
|
||||
|
||||
## User Story
|
||||
|
||||
As a [persona],
|
||||
I want to [action],
|
||||
So that [benefit].
|
||||
|
||||
## Acceptance Criteria
|
||||
|
||||
### AC1: [Criterion Name]
|
||||
|
||||
**Given** [precondition]
|
||||
**When** [action]
|
||||
**Then** [expected result]
|
||||
|
||||
**Test Scenarios:**
|
||||
- [ ] Scenario 1: [description]
|
||||
- [ ] Scenario 2: [description]
|
||||
|
||||
### AC2: [Criterion Name]
|
||||
...
|
||||
|
||||
## Tasks
|
||||
|
||||
### Task 1: [Task Name]
|
||||
- [ ] Subtask 1.1
|
||||
- [ ] Subtask 1.2
|
||||
|
||||
### Task 2: [Task Name]
|
||||
...
|
||||
|
||||
## Technical Notes
|
||||
|
||||
### Database Changes
|
||||
- [any schema changes needed]
|
||||
|
||||
### API Changes
|
||||
- [any endpoint changes]
|
||||
|
||||
### UI Changes
|
||||
- [any frontend changes]
|
||||
|
||||
## Dependencies
|
||||
- [list any dependencies on other stories or systems]
|
||||
|
||||
## Out of Scope
|
||||
- [explicitly list what is NOT included]
|
||||
```
|
||||
|
||||
### 5. Verify Story Quality
|
||||
|
||||
Before saving, verify:
|
||||
- [ ] All AC have Given/When/Then format
|
||||
- [ ] Each AC has at least 2 test scenarios
|
||||
- [ ] Tasks cover all AC implementation
|
||||
- [ ] No implementation details in AC (WHAT not HOW)
|
||||
- [ ] Out of scope is defined
|
||||
- [ ] Dependencies listed if any
|
||||
|
||||
### 6. Save Story File
|
||||
|
||||
Write to: `{sprint_artifacts}/story-{story_id}.md`
|
||||
|
||||
Update state file:
|
||||
- `cached_context.story_file_exists: true`
|
||||
- `cached_context.story_file_path: {path}`
|
||||
|
||||
### 7. Update Pipeline State
|
||||
|
||||
Update state file:
|
||||
- Add `2` to `stepsCompleted`
|
||||
- Set `lastStep: 2`
|
||||
- Set `steps.step-02-create-story.status: completed`
|
||||
- Set `steps.step-02-create-story.duration: {duration}`
|
||||
|
||||
### 8. Present Summary and Menu
|
||||
|
||||
Display:
|
||||
```
|
||||
Story {story_id} Created
|
||||
|
||||
Title: {story_title}
|
||||
Acceptance Criteria: {count}
|
||||
Test Scenarios: {count}
|
||||
Tasks: {count}
|
||||
|
||||
File: {story_file_path}
|
||||
```
|
||||
|
||||
**Interactive Mode Menu:**
|
||||
```
|
||||
[C] Continue to Validation
|
||||
[E] Edit story manually
|
||||
[R] Regenerate story
|
||||
[H] Halt pipeline
|
||||
```
|
||||
|
||||
**Batch Mode:** Auto-continue to next step.
|
||||
|
||||
## QUALITY GATE
|
||||
|
||||
Before proceeding:
|
||||
- [ ] Story file created at correct location
|
||||
- [ ] All AC in Given/When/Then format
|
||||
- [ ] Test scenarios defined for each AC
|
||||
- [ ] Tasks cover full implementation scope
|
||||
- [ ] File passes frontmatter validation
|
||||
|
||||
## MCP TOOLS AVAILABLE
|
||||
|
||||
- `mcp__exa__web_search_exa` - Research best practices
|
||||
- `mcp__exa__get_code_context_exa` - Tech pattern research
|
||||
|
||||
## CRITICAL STEP COMPLETION
|
||||
|
||||
**ONLY WHEN** [story file created AND quality gate passed AND state updated],
|
||||
load and execute `{nextStepFile}` for adversarial validation.
|
||||
|
||||
---
|
||||
|
||||
## SUCCESS/FAILURE METRICS
|
||||
|
||||
### ✅ SUCCESS
|
||||
- Story file created with proper structure
|
||||
- All AC have BDD format
|
||||
- Test scenarios cover all AC
|
||||
- Research insights incorporated
|
||||
- State file updated correctly
|
||||
|
||||
### ❌ FAILURE
|
||||
- Story file not created or in wrong location
|
||||
- AC without Given/When/Then format
|
||||
- Missing test scenarios
|
||||
- Including implementation details in AC
|
||||
- Not updating state before proceeding
|
||||
|
|
@ -1,229 +0,0 @@
|
|||
---
|
||||
name: 'step-03-validate-story'
|
||||
description: 'Adversarial validation of story completeness and quality'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/story-dev-only'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-03-validate-story.md'
|
||||
nextStepFile: '{workflow_path}/steps/step-04-atdd.md'
|
||||
checklist: '{workflow_path}/checklists/story-validation.md'
|
||||
|
||||
# Role (same as step 2, no switch needed)
|
||||
role: sm
|
||||
---
|
||||
|
||||
# Step 3: Validate Story
|
||||
|
||||
## ROLE CONTINUATION
|
||||
|
||||
**Continuing as SM (Scrum Master) - Adversarial Validator mode.**
|
||||
|
||||
You are now an ADVERSARIAL validator. Your job is to FIND PROBLEMS, not approve.
|
||||
Challenge every assumption. Question every AC. Ensure the story is truly ready.
|
||||
|
||||
## STEP GOAL
|
||||
|
||||
Perform rigorous validation of the story file:
|
||||
1. Research common AC anti-patterns
|
||||
2. Validate each acceptance criterion
|
||||
3. Check technical feasibility
|
||||
4. Ensure all edge cases covered
|
||||
5. Fix all issues found
|
||||
6. Add validation report
|
||||
|
||||
## MANDATORY EXECUTION RULES
|
||||
|
||||
### Adversarial Mindset
|
||||
|
||||
- **ASSUME** something is wrong - find it
|
||||
- **NEVER** say "looks good" without deep analysis
|
||||
- **QUESTION** every assumption
|
||||
- **FIND** at least 3 issues (if no issues, you haven't looked hard enough)
|
||||
|
||||
### Validation Rules
|
||||
|
||||
- Every AC must be: Specific, Measurable, Testable
|
||||
- Every AC must have test scenarios
|
||||
- No vague terms: "should", "might", "could", "etc."
|
||||
- No undefined boundaries: "appropriate", "reasonable"
|
||||
|
||||
## EXECUTION SEQUENCE
|
||||
|
||||
### 1. Research Validation Patterns
|
||||
|
||||
Use MCP for research:
|
||||
|
||||
```
|
||||
mcp__exa__web_search_exa:
|
||||
query: "acceptance criteria anti-patterns common mistakes user stories"
|
||||
```
|
||||
|
||||
**Extract:**
|
||||
- Common AC problems
|
||||
- Validation techniques
|
||||
- Red flags to look for
|
||||
|
||||
### 2. Load Story File
|
||||
|
||||
Read from cached path: `{story_file_path}`
|
||||
|
||||
Parse and extract:
|
||||
- All acceptance criteria
|
||||
- All test scenarios
|
||||
- Task definitions
|
||||
- Dependencies
|
||||
|
||||
### 3. Validate Each AC (MANDATORY CHECKLIST)
|
||||
|
||||
For EACH acceptance criterion:
|
||||
|
||||
**Structure Check:**
|
||||
- [ ] Has Given/When/Then format
|
||||
- [ ] Given is a valid precondition
|
||||
- [ ] When is a clear action
|
||||
- [ ] Then is a measurable outcome
|
||||
|
||||
**Quality Check:**
|
||||
- [ ] Specific (no vague terms)
|
||||
- [ ] Measurable (clear success criteria)
|
||||
- [ ] Testable (can write automated test)
|
||||
- [ ] Independent (no hidden dependencies)
|
||||
|
||||
**Completeness Check:**
|
||||
- [ ] Edge cases considered
|
||||
- [ ] Error scenarios defined
|
||||
- [ ] Boundary conditions clear
|
||||
|
||||
**Anti-pattern Check:**
|
||||
- [ ] No implementation details
|
||||
- [ ] No AND conjunctions (split into multiple AC)
|
||||
- [ ] No OR alternatives (ambiguous)
|
||||
|
||||
### 4. Technical Feasibility Check
|
||||
|
||||
Cross-reference with architecture.md (from cache):
|
||||
|
||||
- [ ] Required data model exists or migration defined
|
||||
- [ ] API endpoints fit existing patterns
|
||||
- [ ] No conflicts with existing functionality
|
||||
- [ ] Security model (RLS) can support requirements
|
||||
|
||||
### 5. Test Scenario Coverage
|
||||
|
||||
Verify test scenarios:
|
||||
- [ ] At least 2 scenarios per AC
|
||||
- [ ] Happy path covered
|
||||
- [ ] Error paths covered
|
||||
- [ ] Edge cases covered
|
||||
- [ ] Each scenario is unique (no duplicates)
|
||||
|
||||
### 6. Document All Issues Found
|
||||
|
||||
Create issues list:
|
||||
|
||||
```yaml
|
||||
issues_found:
|
||||
- id: 1
|
||||
severity: high|medium|low
|
||||
ac: AC1
|
||||
problem: "Description of issue"
|
||||
fix: "How to fix it"
|
||||
- id: 2
|
||||
...
|
||||
```
|
||||
|
||||
### 7. Fix All Issues
|
||||
|
||||
For EACH issue:
|
||||
1. Edit the story file to fix
|
||||
2. Document the fix
|
||||
3. Verify fix is correct
|
||||
|
||||
### 8. Add Validation Report
|
||||
|
||||
Append to story file:
|
||||
|
||||
```yaml
|
||||
# Validation Report
|
||||
validated_by: sm-validator
|
||||
validated_at: {timestamp}
|
||||
issues_found: {count}
|
||||
issues_fixed: {count}
|
||||
quality_score: {0-100}
|
||||
test_scenarios_count: {count}
|
||||
edge_cases_covered: {list}
|
||||
ready_for_dev: true|false
|
||||
validation_notes: |
|
||||
- {note 1}
|
||||
- {note 2}
|
||||
```
|
||||
|
||||
### 9. Update Pipeline State
|
||||
|
||||
Update state file:
|
||||
- Add `3` to `stepsCompleted`
|
||||
- Set `lastStep: 3`
|
||||
- Set `steps.step-03-validate-story.status: completed`
|
||||
- Record `issues_found` and `issues_fixed` counts
|
||||
|
||||
### 10. Present Summary and Menu
|
||||
|
||||
Display:
|
||||
```
|
||||
Story Validation Complete
|
||||
|
||||
Issues Found: {count}
|
||||
Issues Fixed: {count}
|
||||
Quality Score: {score}/100
|
||||
|
||||
Validation Areas:
|
||||
- AC Structure: ✓/✗
|
||||
- Testability: ✓/✗
|
||||
- Technical Feasibility: ✓/✗
|
||||
- Edge Cases: ✓/✗
|
||||
|
||||
Ready for Development: {yes/no}
|
||||
```
|
||||
|
||||
**Interactive Mode Menu:**
|
||||
```
|
||||
[C] Continue to ATDD (Test Generation)
|
||||
[R] Re-validate
|
||||
[E] Edit story manually
|
||||
[H] Halt pipeline
|
||||
```
|
||||
|
||||
**Batch Mode:** Auto-continue if ready_for_dev: true
|
||||
|
||||
## QUALITY GATE
|
||||
|
||||
Before proceeding:
|
||||
- [ ] All issues identified and fixed
|
||||
- [ ] Quality score >= 80
|
||||
- [ ] ready_for_dev: true
|
||||
- [ ] Validation report appended to story file
|
||||
|
||||
## CRITICAL STEP COMPLETION
|
||||
|
||||
**ONLY WHEN** [validation complete AND quality gate passed AND ready_for_dev: true],
|
||||
load and execute `{nextStepFile}` for ATDD test generation.
|
||||
|
||||
---
|
||||
|
||||
## SUCCESS/FAILURE METRICS
|
||||
|
||||
### ✅ SUCCESS
|
||||
- Found and fixed at least 3 issues
|
||||
- Quality score >= 80
|
||||
- All AC pass validation checklist
|
||||
- Validation report added
|
||||
- Story marked ready for dev
|
||||
|
||||
### ❌ FAILURE
|
||||
- Approving story as "looks good" without deep review
|
||||
- Missing edge case analysis
|
||||
- Not fixing all identified issues
|
||||
- Proceeding with quality_score < 80
|
||||
- Not adding validation report
|
||||
|
|
@ -1,308 +0,0 @@
|
|||
---
|
||||
name: 'step-04-atdd'
|
||||
description: 'Generate failing acceptance tests before implementation (RED phase)'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/story-dev-only'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-04-atdd.md'
|
||||
nextStepFile: '{workflow_path}/steps/step-05-implement.md'
|
||||
checklist: '{workflow_path}/checklists/atdd.md'
|
||||
|
||||
# Role Switch
|
||||
role: tea
|
||||
agentFile: '{project-root}/_bmad/bmm/agents/tea.md'
|
||||
---
|
||||
|
||||
# Step 4: ATDD - Acceptance Test-Driven Development
|
||||
|
||||
## ROLE SWITCH
|
||||
|
||||
**Switching to TEA (Test Engineering Architect) perspective.**
|
||||
|
||||
You are now the Test Engineering Architect. Your expertise:
|
||||
- Test strategy and design
|
||||
- Playwright and Vitest patterns
|
||||
- Data factories and fixtures
|
||||
- Test-first development methodology
|
||||
|
||||
## STEP GOAL
|
||||
|
||||
Generate FAILING acceptance tests BEFORE implementation (RED phase):
|
||||
1. Research test patterns for the technology stack
|
||||
2. Analyze each acceptance criterion
|
||||
3. Determine appropriate test level (E2E, API, Component, Unit)
|
||||
4. Write tests in Given/When/Then format
|
||||
5. Create data factories and fixtures
|
||||
6. Verify tests FAIL (they should - nothing is implemented yet)
|
||||
7. Generate implementation checklist for DEV
|
||||
|
||||
## MANDATORY EXECUTION RULES
|
||||
|
||||
### ATDD Principles
|
||||
|
||||
- **TESTS FIRST** - Write tests before any implementation
|
||||
- **TESTS MUST FAIL** - If tests pass, something is wrong
|
||||
- **ONE AC = ONE TEST** (minimum) - More for complex scenarios
|
||||
- **REALISTIC DATA** - Use factories, not hardcoded values
|
||||
|
||||
### Test Architecture Rules
|
||||
|
||||
- Use `data-testid` selectors for stability
|
||||
- Network-first pattern (route interception before navigation)
|
||||
- Auto-cleanup fixtures
|
||||
- No flaky timing-based assertions
|
||||
|
||||
## EXECUTION SEQUENCE
|
||||
|
||||
### 1. Research Test Patterns
|
||||
|
||||
Use MCP tools:
|
||||
|
||||
```
|
||||
mcp__exa__web_search_exa:
|
||||
query: "playwright acceptance test best practices Next.js TypeScript 2025"
|
||||
|
||||
mcp__exa__get_code_context_exa:
|
||||
query: "vitest playwright test fixtures factories faker patterns"
|
||||
```
|
||||
|
||||
**Extract:**
|
||||
- Current best practices for Next.js testing
|
||||
- Fixture and factory patterns
|
||||
- Common pitfalls to avoid
|
||||
|
||||
### 2. Analyze Acceptance Criteria
|
||||
|
||||
From cached story file, for EACH acceptance criterion:
|
||||
|
||||
```yaml
|
||||
ac_analysis:
|
||||
- ac_id: AC1
|
||||
title: "{ac_title}"
|
||||
given: "{given clause}"
|
||||
when: "{when clause}"
|
||||
then: "{then clause}"
|
||||
test_level: E2E|API|Component|Unit
|
||||
test_file: "{proposed test file path}"
|
||||
requires_fixtures: [list]
|
||||
requires_factories: [list]
|
||||
data_testids_needed: [list]
|
||||
```
|
||||
|
||||
### 3. Determine Test Levels
|
||||
|
||||
For each AC, determine appropriate level:
|
||||
|
||||
| Level | When to Use |
|
||||
|-------|-------------|
|
||||
| E2E | Full user flows, UI interactions |
|
||||
| API | Server actions, API endpoints |
|
||||
| Component | React component behavior |
|
||||
| Unit | Pure business logic, utilities |
|
||||
|
||||
### 4. Create Data Factories
|
||||
|
||||
For each entity needed in tests:
|
||||
|
||||
```typescript
|
||||
// src/tests/factories/{entity}.factory.ts
|
||||
import { faker } from "@faker-js/faker";
|
||||
|
||||
export function create{Entity}(overrides: Partial<{Entity}> = {}): {Entity} {
|
||||
return {
|
||||
id: faker.string.uuid(),
|
||||
// ... realistic fake data
|
||||
...overrides,
|
||||
};
|
||||
}
|
||||
```
|
||||
|
||||
### 5. Create Test Fixtures
|
||||
|
||||
For each test setup pattern:
|
||||
|
||||
```typescript
|
||||
// src/tests/fixtures/{feature}.fixture.ts
|
||||
import { test as base } from "vitest";
|
||||
// or for E2E:
|
||||
import { test as base } from "@playwright/test";
|
||||
|
||||
export const test = base.extend<{
|
||||
// fixture types
|
||||
}>({
|
||||
// fixture implementations with auto-cleanup
|
||||
});
|
||||
```
|
||||
|
||||
### 6. Write Acceptance Tests
|
||||
|
||||
For EACH acceptance criterion:
|
||||
|
||||
```typescript
|
||||
// src/tests/{appropriate-dir}/{feature}.test.ts
|
||||
|
||||
describe("AC{N}: {ac_title}", () => {
|
||||
test("Given {given}, When {when}, Then {then}", async () => {
|
||||
// Arrange (Given)
|
||||
const data = createTestData();
|
||||
|
||||
// Act (When)
|
||||
const result = await performAction(data);
|
||||
|
||||
// Assert (Then)
|
||||
expect(result).toMatchExpectedOutcome();
|
||||
});
|
||||
|
||||
// Additional scenarios from story
|
||||
test("Edge case: {scenario}", async () => {
|
||||
// ...
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
### 7. Document Required data-testids
|
||||
|
||||
Create list of data-testids that DEV must implement:
|
||||
|
||||
```markdown
|
||||
## Required data-testid Attributes
|
||||
|
||||
| Element | data-testid | Purpose |
|
||||
|---------|-------------|---------|
|
||||
| Submit button | submit-{feature} | Test form submission |
|
||||
| Error message | error-{feature} | Verify error display |
|
||||
| ... | ... | ... |
|
||||
```
|
||||
|
||||
### 8. Verify Tests FAIL
|
||||
|
||||
Run tests and verify they fail:
|
||||
|
||||
```bash
|
||||
npm test -- --run {test-file}
|
||||
```
|
||||
|
||||
**Expected:** All tests should FAIL (RED phase)
|
||||
- "Cannot find element with data-testid"
|
||||
- "Function not implemented"
|
||||
- "Route not found"
|
||||
|
||||
**If tests PASS:** Something is wrong - investigate
|
||||
|
||||
### 9. Create ATDD Checklist
|
||||
|
||||
Create: `{sprint_artifacts}/atdd-checklist-{story_id}.md`
|
||||
|
||||
```markdown
|
||||
# ATDD Checklist for Story {story_id}
|
||||
|
||||
## Test Files Created
|
||||
- [ ] {test_file_1}
|
||||
- [ ] {test_file_2}
|
||||
|
||||
## Factories Created
|
||||
- [ ] {factory_1}
|
||||
- [ ] {factory_2}
|
||||
|
||||
## Fixtures Created
|
||||
- [ ] {fixture_1}
|
||||
|
||||
## Implementation Requirements for DEV
|
||||
|
||||
### Required data-testid Attributes
|
||||
| Element | Attribute |
|
||||
|---------|-----------|
|
||||
| ... | ... |
|
||||
|
||||
### API Endpoints Needed
|
||||
- [ ] {endpoint_1}
|
||||
- [ ] {endpoint_2}
|
||||
|
||||
### Database Changes
|
||||
- [ ] {migration_1}
|
||||
|
||||
## Test Status (RED Phase)
|
||||
All tests should FAIL until implementation:
|
||||
- [ ] {test_1}: FAILING ✓
|
||||
- [ ] {test_2}: FAILING ✓
|
||||
```
|
||||
|
||||
### 10. Update Pipeline State
|
||||
|
||||
Update state file:
|
||||
- Add `4` to `stepsCompleted`
|
||||
- Set `lastStep: 4`
|
||||
- Set `steps.step-04-atdd.status: completed`
|
||||
- Record test file paths created
|
||||
|
||||
### 11. Present Summary and Menu
|
||||
|
||||
Display:
|
||||
```
|
||||
ATDD Complete - RED Phase Verified
|
||||
|
||||
Tests Created: {count}
|
||||
All Tests FAILING: ✓ (as expected)
|
||||
|
||||
Test Files:
|
||||
- {test_file_1}
|
||||
- {test_file_2}
|
||||
|
||||
Factories: {count}
|
||||
Fixtures: {count}
|
||||
data-testids Required: {count}
|
||||
|
||||
ATDD Checklist: {checklist_path}
|
||||
|
||||
Next: DEV will implement to make tests GREEN
|
||||
```
|
||||
|
||||
**Interactive Mode Menu:**
|
||||
```
|
||||
[C] Continue to Implementation
|
||||
[T] Run tests again
|
||||
[E] Edit tests
|
||||
[H] Halt pipeline
|
||||
```
|
||||
|
||||
**Batch Mode:** Auto-continue
|
||||
|
||||
## QUALITY GATE
|
||||
|
||||
Before proceeding:
|
||||
- [ ] Test file created for each AC
|
||||
- [ ] All tests FAIL (RED phase verified)
|
||||
- [ ] Factories created for test data
|
||||
- [ ] data-testid requirements documented
|
||||
- [ ] ATDD checklist created
|
||||
|
||||
## MCP TOOLS AVAILABLE
|
||||
|
||||
- `mcp__exa__web_search_exa` - Test pattern research
|
||||
- `mcp__exa__get_code_context_exa` - Framework-specific patterns
|
||||
|
||||
## CRITICAL STEP COMPLETION
|
||||
|
||||
**ONLY WHEN** [tests created AND all tests FAIL AND checklist created],
|
||||
load and execute `{nextStepFile}` for implementation.
|
||||
|
||||
---
|
||||
|
||||
## SUCCESS/FAILURE METRICS
|
||||
|
||||
### ✅ SUCCESS
|
||||
- Tests written for all AC
|
||||
- All tests FAIL (red phase)
|
||||
- Factories use faker, not hardcoded data
|
||||
- Fixtures have auto-cleanup
|
||||
- data-testid requirements documented
|
||||
- ATDD checklist complete
|
||||
|
||||
### ❌ FAILURE
|
||||
- Tests PASS before implementation
|
||||
- Hardcoded test data
|
||||
- Missing edge case tests
|
||||
- No data-testid documentation
|
||||
- Skipping to implementation without tests
|
||||
|
|
@ -1,285 +0,0 @@
|
|||
---
|
||||
name: 'step-05-implement'
|
||||
description: 'Implement story to make tests pass (GREEN phase)'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/story-dev-only'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-05-implement.md'
|
||||
nextStepFile: '{workflow_path}/steps/step-05b-post-validation.md'
|
||||
checklist: '{workflow_path}/checklists/implementation.md'
|
||||
|
||||
# Role Switch
|
||||
role: dev
|
||||
agentFile: '{project-root}/_bmad/bmm/agents/dev.md'
|
||||
---
|
||||
|
||||
# Step 5: Implement Story
|
||||
|
||||
## ROLE SWITCH
|
||||
|
||||
**Switching to DEV (Developer) perspective.**
|
||||
|
||||
You are now the Developer implementing the story. Your expertise:
|
||||
- Next.js 16 with App Router
|
||||
- TypeScript strict mode
|
||||
- Supabase with RLS
|
||||
- TDD methodology (make tests GREEN)
|
||||
|
||||
## STEP GOAL
|
||||
|
||||
Implement the story using TDD methodology:
|
||||
1. Research implementation patterns
|
||||
2. Review ATDD checklist and failing tests
|
||||
3. For each failing test: implement minimal code to pass
|
||||
4. Run tests, verify GREEN
|
||||
5. Ensure lint and build pass
|
||||
6. No refactoring yet (that's code review)
|
||||
|
||||
## MANDATORY EXECUTION RULES
|
||||
|
||||
### TDD Rules (RED-GREEN-REFACTOR)
|
||||
|
||||
- **GREEN PHASE** - Make tests pass with minimal code
|
||||
- **ONE TEST AT A TIME** - Don't implement all at once
|
||||
- **MINIMAL CODE** - Just enough to pass, no over-engineering
|
||||
- **RUN TESTS FREQUENTLY** - After each change
|
||||
|
||||
### Implementation Rules
|
||||
|
||||
- **Follow project-context.md** patterns exactly
|
||||
- **Result type** for all server actions (never throw)
|
||||
- **snake_case** for database columns
|
||||
- **Multi-tenancy** with tenant_id on all tables
|
||||
- **RLS policies** for all new tables
|
||||
|
||||
## EXECUTION SEQUENCE
|
||||
|
||||
### 1. Research Implementation Patterns
|
||||
|
||||
Use MCP tools:
|
||||
|
||||
```
|
||||
mcp__exa__get_code_context_exa:
|
||||
query: "Next.js 16 server actions Supabase RLS multi-tenant"
|
||||
|
||||
mcp__supabase__list_tables:
|
||||
# Understand current schema
|
||||
```
|
||||
|
||||
### 2. Review ATDD Checklist
|
||||
|
||||
Load: `{sprint_artifacts}/atdd-checklist-{story_id}.md`
|
||||
|
||||
Extract:
|
||||
- Required data-testid attributes
|
||||
- API endpoints needed
|
||||
- Database changes required
|
||||
- Current failing tests
|
||||
|
||||
### 3. Run Failing Tests
|
||||
|
||||
```bash
|
||||
npm test -- --run
|
||||
```
|
||||
|
||||
Confirm all tests are FAILING (from ATDD phase).
|
||||
|
||||
### 4. Implementation Loop
|
||||
|
||||
For EACH acceptance criterion:
|
||||
|
||||
**A. Focus on one failing test:**
|
||||
```bash
|
||||
npm test -- --run --grep "{test_name}"
|
||||
```
|
||||
|
||||
**B. Implement minimal code:**
|
||||
- Database migration if needed
|
||||
- Server action / API route
|
||||
- UI component with data-testid
|
||||
- Type definitions
|
||||
|
||||
**C. Run targeted test:**
|
||||
```bash
|
||||
npm test -- --run --grep "{test_name}"
|
||||
```
|
||||
|
||||
**D. Verify GREEN:**
|
||||
- Test passes ✓
|
||||
- Move to next test
|
||||
|
||||
### 5. Database Migrations
|
||||
|
||||
For any schema changes:
|
||||
|
||||
```bash
|
||||
# Create migration file
|
||||
npx supabase migration new {name}
|
||||
|
||||
# Migration content
|
||||
-- Enable RLS
|
||||
alter table {table} enable row level security;
|
||||
|
||||
-- RLS policies
|
||||
create policy "Tenants can view own data"
|
||||
on {table} for select
|
||||
using (tenant_id = auth.jwt() ->> 'tenant_id');
|
||||
```
|
||||
|
||||
Apply to remote:
|
||||
```bash
|
||||
npx supabase db push
|
||||
```
|
||||
|
||||
### 6. Server Actions Pattern
|
||||
|
||||
Follow project-context.md pattern:
|
||||
|
||||
```typescript
|
||||
// src/modules/{module}/actions/{action}.ts
|
||||
"use server";
|
||||
|
||||
import { ok, err, Result } from "@/lib/result";
|
||||
import { createClient } from "@/lib/supabase/server";
|
||||
|
||||
export async function actionName(
|
||||
input: InputType
|
||||
): Promise<Result<OutputType>> {
|
||||
const supabase = await createClient();
|
||||
|
||||
const { data, error } = await supabase
|
||||
.from("table")
|
||||
.select("*")
|
||||
.eq("tenant_id", tenantId);
|
||||
|
||||
if (error) {
|
||||
return err("DB_ERROR", error.message);
|
||||
}
|
||||
|
||||
return ok(data);
|
||||
}
|
||||
```
|
||||
|
||||
### 7. UI Components Pattern
|
||||
|
||||
```tsx
|
||||
// src/modules/{module}/components/{Component}.tsx
|
||||
"use client";
|
||||
|
||||
export function Component({ data }: Props) {
|
||||
return (
|
||||
<div data-testid="{feature}-container">
|
||||
<button data-testid="{feature}-submit">
|
||||
Submit
|
||||
</button>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
```
|
||||
|
||||
### 8. Run Full Test Suite
|
||||
|
||||
After all AC implemented:
|
||||
|
||||
```bash
|
||||
npm test -- --run
|
||||
```
|
||||
|
||||
**All tests should pass (GREEN).**
|
||||
|
||||
### 9. Lint and Build
|
||||
|
||||
```bash
|
||||
npm run lint
|
||||
npm run build
|
||||
```
|
||||
|
||||
Fix any issues that arise.
|
||||
|
||||
### 10. Verify Implementation Completeness
|
||||
|
||||
Check against ATDD checklist:
|
||||
- [ ] All data-testid attributes added
|
||||
- [ ] All API endpoints created
|
||||
- [ ] All database migrations applied
|
||||
- [ ] All tests passing
|
||||
|
||||
### 11. Update Pipeline State
|
||||
|
||||
Update state file:
|
||||
- Add `5` to `stepsCompleted`
|
||||
- Set `lastStep: 5`
|
||||
- Set `steps.step-05-implement.status: completed`
|
||||
- Record files modified
|
||||
|
||||
### 12. Present Summary and Menu
|
||||
|
||||
Display:
|
||||
```
|
||||
Implementation Complete - GREEN Phase
|
||||
|
||||
Tests: {passed}/{total} PASSING
|
||||
Lint: ✓ Clean
|
||||
Build: ✓ Success
|
||||
|
||||
Files Modified:
|
||||
- {file_1}
|
||||
- {file_2}
|
||||
|
||||
Migrations Applied:
|
||||
- {migration_1}
|
||||
|
||||
Ready for Code Review
|
||||
```
|
||||
|
||||
**Interactive Mode Menu:**
|
||||
```
|
||||
[C] Continue to Post-Implementation Validation
|
||||
[T] Run tests again
|
||||
[B] Run build again
|
||||
[H] Halt pipeline
|
||||
```
|
||||
|
||||
**Batch Mode:** Auto-continue if all tests pass
|
||||
|
||||
## QUALITY GATE
|
||||
|
||||
Before proceeding:
|
||||
- [ ] All tests pass (GREEN)
|
||||
- [ ] Lint clean
|
||||
- [ ] Build succeeds
|
||||
- [ ] All ATDD checklist items complete
|
||||
- [ ] RLS policies for new tables
|
||||
|
||||
## MCP TOOLS AVAILABLE
|
||||
|
||||
- `mcp__exa__get_code_context_exa` - Implementation patterns
|
||||
- `mcp__supabase__list_tables` - Schema inspection
|
||||
- `mcp__supabase__execute_sql` - Query testing
|
||||
- `mcp__supabase__apply_migration` - Schema changes
|
||||
- `mcp__supabase__generate_typescript_types` - Type sync
|
||||
|
||||
## CRITICAL STEP COMPLETION
|
||||
|
||||
**ONLY WHEN** [all tests pass AND lint clean AND build succeeds],
|
||||
load and execute `{nextStepFile}` for post-implementation validation.
|
||||
|
||||
---
|
||||
|
||||
## SUCCESS/FAILURE METRICS
|
||||
|
||||
### ✅ SUCCESS
|
||||
- All tests pass (GREEN phase)
|
||||
- TDD methodology followed
|
||||
- Result type used (no throws)
|
||||
- RLS policies in place
|
||||
- Lint and build clean
|
||||
|
||||
### ❌ FAILURE
|
||||
- Tests still failing
|
||||
- Skipping tests to implement faster
|
||||
- Throwing errors instead of Result type
|
||||
- Missing RLS policies
|
||||
- Build or lint failures
|
||||
|
|
@ -1,437 +0,0 @@
|
|||
---
|
||||
name: 'step-05b-post-validation'
|
||||
description: 'Verify completed tasks against codebase reality (catch false positives)'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/story-dev-only'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-05b-post-validation.md'
|
||||
nextStepFile: '{workflow_path}/steps/step-06-code-review.md'
|
||||
prevStepFile: '{workflow_path}/steps/step-05-implement.md'
|
||||
|
||||
# Role Switch
|
||||
role: dev
|
||||
requires_fresh_context: false # Continue from implementation context
|
||||
---
|
||||
|
||||
# Step 5b: Post-Implementation Validation
|
||||
|
||||
## ROLE CONTINUATION - VERIFICATION MODE
|
||||
|
||||
**Continuing as DEV but switching to VERIFICATION mindset.**
|
||||
|
||||
You are now verifying that completed work actually exists in the codebase.
|
||||
This catches the common problem of tasks marked [x] but implementation is incomplete.
|
||||
|
||||
## STEP GOAL
|
||||
|
||||
Verify all completed tasks against codebase reality:
|
||||
1. Re-read story file and extract completed tasks
|
||||
2. For each completed task, identify what should exist
|
||||
3. Use codebase search tools to verify existence
|
||||
4. Run tests to verify they actually pass
|
||||
5. Identify false positives (marked done but not actually done)
|
||||
6. If gaps found, uncheck tasks and add missing work
|
||||
7. Re-run implementation if needed
|
||||
|
||||
## MANDATORY EXECUTION RULES
|
||||
|
||||
### Verification Principles
|
||||
|
||||
- **TRUST NOTHING** - Verify every completed task
|
||||
- **CHECK EXISTENCE** - Files, functions, components must exist
|
||||
- **CHECK COMPLETENESS** - Not just existence, but full implementation
|
||||
- **TEST VERIFICATION** - Claimed test coverage must be real
|
||||
- **NO ASSUMPTIONS** - Re-scan the codebase with fresh eyes
|
||||
|
||||
### What to Verify
|
||||
|
||||
For each task marked [x]:
|
||||
- Files mentioned exist at correct paths
|
||||
- Functions/components declared and exported
|
||||
- Tests exist and actually pass
|
||||
- Database migrations applied
|
||||
- API endpoints respond correctly
|
||||
|
||||
## EXECUTION SEQUENCE
|
||||
|
||||
### 1. Load Story and Extract Completed Tasks
|
||||
|
||||
Load story file: `{story_file}`
|
||||
|
||||
Extract all tasks from story that are marked [x]:
|
||||
```regex
|
||||
- \[x\] (.+)
|
||||
```
|
||||
|
||||
Build list of `completed_tasks` to verify.
|
||||
|
||||
### 2. Categorize Tasks by Type
|
||||
|
||||
For each completed task, determine what needs verification:
|
||||
|
||||
**File Creation Tasks:**
|
||||
- Pattern: "Create {file_path}"
|
||||
- Verify: File exists at path
|
||||
|
||||
**Component/Function Tasks:**
|
||||
- Pattern: "Add {name} function/component"
|
||||
- Verify: Symbol exists and is exported
|
||||
|
||||
**Test Tasks:**
|
||||
- Pattern: "Add test for {feature}"
|
||||
- Verify: Test file exists and test passes
|
||||
|
||||
**Database Tasks:**
|
||||
- Pattern: "Add {table} table", "Create migration"
|
||||
- Verify: Migration file exists, schema matches
|
||||
|
||||
**API Tasks:**
|
||||
- Pattern: "Create {endpoint} endpoint"
|
||||
- Verify: Route file exists, handler implemented
|
||||
|
||||
**UI Tasks:**
|
||||
- Pattern: "Add {element} to UI"
|
||||
- Verify: Component has data-testid attribute
|
||||
|
||||
### 3. Verify File Existence
|
||||
|
||||
For all file-related tasks:
|
||||
|
||||
```bash
|
||||
# Use Glob to find files
|
||||
glob: "**/{mentioned_filename}"
|
||||
```
|
||||
|
||||
**Check:**
|
||||
- [ ] File exists
|
||||
- [ ] File is not empty
|
||||
- [ ] File has expected exports
|
||||
|
||||
**False Positive Indicators:**
|
||||
- File doesn't exist
|
||||
- File exists but is empty
|
||||
- File exists but missing expected symbols
|
||||
|
||||
### 4. Verify Function/Component Implementation
|
||||
|
||||
For code implementation tasks:
|
||||
|
||||
```bash
|
||||
# Use Grep to find symbols
|
||||
grep: "{function_name|component_name}"
|
||||
glob: "**/*.{ts,tsx}"
|
||||
output_mode: "content"
|
||||
```
|
||||
|
||||
**Check:**
|
||||
- [ ] Symbol is declared
|
||||
- [ ] Symbol is exported
|
||||
- [ ] Implementation is not a stub/placeholder
|
||||
- [ ] Required logic is present
|
||||
|
||||
**False Positive Indicators:**
|
||||
- Symbol not found
|
||||
- Symbol exists but marked TODO
|
||||
- Symbol exists but throws "Not implemented"
|
||||
- Symbol exists but returns empty/null
|
||||
|
||||
### 5. Verify Test Coverage
|
||||
|
||||
For all test-related tasks:
|
||||
|
||||
```bash
|
||||
# Find test files
|
||||
glob: "**/*.test.{ts,tsx}"
|
||||
glob: "**/*.spec.{ts,tsx}"
|
||||
|
||||
# Run specific tests
|
||||
npm test -- --run --grep "{feature_name}"
|
||||
```
|
||||
|
||||
**Check:**
|
||||
- [ ] Test file exists
|
||||
- [ ] Test describes the feature
|
||||
- [ ] Test actually runs (not skipped)
|
||||
- [ ] Test passes (GREEN)
|
||||
|
||||
**False Positive Indicators:**
|
||||
- No test file found
|
||||
- Test exists but skipped (it.skip)
|
||||
- Test exists but fails
|
||||
- Test exists but doesn't test the feature (placeholder)
|
||||
|
||||
### 6. Verify Database Changes
|
||||
|
||||
For database migration tasks:
|
||||
|
||||
```bash
|
||||
# Find migration files
|
||||
glob: "**/migrations/*.sql"
|
||||
|
||||
# Check Supabase schema
|
||||
mcp__supabase__list_tables
|
||||
```
|
||||
|
||||
**Check:**
|
||||
- [ ] Migration file exists
|
||||
- [ ] Migration has been applied
|
||||
- [ ] Table/column exists in schema
|
||||
- [ ] RLS policies are present
|
||||
|
||||
**False Positive Indicators:**
|
||||
- Migration file missing
|
||||
- Migration not applied to database
|
||||
- Table/column doesn't exist
|
||||
- RLS policies missing
|
||||
|
||||
### 7. Verify API Endpoints
|
||||
|
||||
For API endpoint tasks:
|
||||
|
||||
```bash
|
||||
# Find route files
|
||||
glob: "**/app/api/**/{endpoint}/route.ts"
|
||||
grep: "export async function {METHOD}"
|
||||
```
|
||||
|
||||
**Check:**
|
||||
- [ ] Route file exists
|
||||
- [ ] Handler function implemented
|
||||
- [ ] Returns proper Response type
|
||||
- [ ] Error handling present
|
||||
|
||||
**False Positive Indicators:**
|
||||
- Route file doesn't exist
|
||||
- Handler throws "Not implemented"
|
||||
- Handler returns stub response
|
||||
|
||||
### 8. Run Full Verification
|
||||
|
||||
Execute verification for ALL completed tasks:
|
||||
|
||||
```typescript
|
||||
interface VerificationResult {
|
||||
task: string;
|
||||
status: "verified" | "false_positive";
|
||||
evidence: string;
|
||||
missing?: string;
|
||||
}
|
||||
|
||||
const results: VerificationResult[] = [];
|
||||
|
||||
for (const task of completed_tasks) {
|
||||
const result = await verifyTask(task);
|
||||
results.push(result);
|
||||
}
|
||||
```
|
||||
|
||||
### 9. Analyze Verification Results
|
||||
|
||||
Count results:
|
||||
```
|
||||
Total Verified: {verified_count}
|
||||
False Positives: {false_positive_count}
|
||||
```
|
||||
|
||||
### 10. Handle False Positives
|
||||
|
||||
**IF false positives found (count > 0):**
|
||||
|
||||
Display:
|
||||
```
|
||||
⚠️ POST-IMPLEMENTATION GAPS DETECTED
|
||||
|
||||
Tasks marked complete but implementation incomplete:
|
||||
|
||||
{for each false_positive}
|
||||
- [ ] {task_description}
|
||||
Missing: {what_is_missing}
|
||||
Evidence: {grep/glob results}
|
||||
|
||||
{add new tasks for missing work}
|
||||
- [ ] Actually implement {missing_part}
|
||||
```
|
||||
|
||||
**Actions:**
|
||||
1. Uncheck false positive tasks in story file
|
||||
2. Add new tasks for the missing work
|
||||
3. Update "Gap Analysis" section in story
|
||||
4. Set state to re-run implementation
|
||||
|
||||
**Re-run implementation:**
|
||||
```
|
||||
Detected {false_positive_count} incomplete tasks.
|
||||
Re-running Step 5: Implementation to complete missing work...
|
||||
|
||||
{load and execute step-05-implement.md}
|
||||
```
|
||||
|
||||
After re-implementation, **RE-RUN THIS STEP** (step-05b-post-validation.md)
|
||||
|
||||
### 11. Handle Verified Success
|
||||
|
||||
**IF no false positives (all verified):**
|
||||
|
||||
Display:
|
||||
```
|
||||
✅ POST-IMPLEMENTATION VALIDATION PASSED
|
||||
|
||||
All {verified_count} completed tasks verified against codebase:
|
||||
- Files exist and are complete
|
||||
- Functions/components implemented
|
||||
- Tests exist and pass
|
||||
- Database changes applied
|
||||
- API endpoints functional
|
||||
|
||||
Ready for Code Review
|
||||
```
|
||||
|
||||
Update story file "Gap Analysis" section:
|
||||
```markdown
|
||||
## Gap Analysis
|
||||
|
||||
### Post-Implementation Validation
|
||||
- **Date:** {timestamp}
|
||||
- **Tasks Verified:** {verified_count}
|
||||
- **False Positives:** 0
|
||||
- **Status:** ✅ All work verified complete
|
||||
|
||||
**Verification Evidence:**
|
||||
{for each verified task}
|
||||
- ✅ {task}: {evidence}
|
||||
```
|
||||
|
||||
### 12. Update Pipeline State
|
||||
|
||||
Update state file:
|
||||
- Add `5b` to `stepsCompleted`
|
||||
- Set `lastStep: 5b`
|
||||
- Set `steps.step-05b-post-validation.status: completed`
|
||||
- Record verification results:
|
||||
```yaml
|
||||
verification:
|
||||
tasks_verified: {count}
|
||||
false_positives: {count}
|
||||
re_implementation_required: {true|false}
|
||||
```
|
||||
|
||||
### 13. Present Summary and Menu
|
||||
|
||||
Display:
|
||||
```
|
||||
Post-Implementation Validation Complete
|
||||
|
||||
Verification Summary:
|
||||
- Tasks Checked: {total_count}
|
||||
- Verified Complete: {verified_count}
|
||||
- False Positives: {false_positive_count}
|
||||
- Re-implementations: {retry_count}
|
||||
|
||||
{if false_positives}
|
||||
Re-running implementation to complete missing work...
|
||||
{else}
|
||||
All work verified. Proceeding to Code Review...
|
||||
{endif}
|
||||
```
|
||||
|
||||
**Interactive Mode Menu (only if no false positives):**
|
||||
```
|
||||
[C] Continue to Code Review
|
||||
[V] Run verification again
|
||||
[T] Run tests again
|
||||
[H] Halt pipeline
|
||||
```
|
||||
|
||||
**Batch Mode:**
|
||||
- Auto re-run implementation if false positives
|
||||
- Auto-continue if all verified
|
||||
|
||||
## QUALITY GATE
|
||||
|
||||
Before proceeding to code review:
|
||||
- [ ] All completed tasks verified against codebase
|
||||
- [ ] Zero false positives remaining
|
||||
- [ ] All tests still passing
|
||||
- [ ] Build still succeeds
|
||||
- [ ] Gap analysis updated with verification results
|
||||
|
||||
## VERIFICATION TOOLS
|
||||
|
||||
Use these tools for verification:
|
||||
|
||||
```typescript
|
||||
// File existence
|
||||
glob("{pattern}")
|
||||
|
||||
// Symbol search
|
||||
grep("{symbol_name}", { glob: "**/*.{ts,tsx}", output_mode: "content" })
|
||||
|
||||
// Test execution
|
||||
bash("npm test -- --run --grep '{test_name}'")
|
||||
|
||||
// Database check
|
||||
mcp__supabase__list_tables()
|
||||
|
||||
// Read file contents
|
||||
read("{file_path}")
|
||||
```
|
||||
|
||||
## CRITICAL STEP COMPLETION
|
||||
|
||||
**ONLY WHEN** [all tasks verified AND zero false positives],
|
||||
load and execute `{nextStepFile}` for code review.
|
||||
|
||||
**IF** [false positives detected],
|
||||
load and execute `{prevStepFile}` to complete missing work,
|
||||
then RE-RUN this step.
|
||||
|
||||
---
|
||||
|
||||
## SUCCESS/FAILURE METRICS
|
||||
|
||||
### ✅ SUCCESS
|
||||
- All completed tasks verified against codebase
|
||||
- No false positives (or all re-implemented)
|
||||
- Tests still passing
|
||||
- Evidence documented for each task
|
||||
- Gap analysis updated
|
||||
|
||||
### ❌ FAILURE
|
||||
- Skipping verification ("trust the marks")
|
||||
- Not checking actual code existence
|
||||
- Not running tests to verify claims
|
||||
- Allowing false positives to proceed
|
||||
- Not documenting verification evidence
|
||||
|
||||
## COMMON FALSE POSITIVE PATTERNS
|
||||
|
||||
Watch for these common issues:
|
||||
|
||||
1. **Stub Implementations**
|
||||
- Function exists but returns `null`
|
||||
- Function throws "Not implemented"
|
||||
- Component returns empty div
|
||||
|
||||
2. **Placeholder Tests**
|
||||
- Test exists but skipped (it.skip)
|
||||
- Test doesn't actually test the feature
|
||||
- Test always passes (no assertions)
|
||||
|
||||
3. **Incomplete Files**
|
||||
- File created but empty
|
||||
- Missing required exports
|
||||
- TODO comments everywhere
|
||||
|
||||
4. **Database Drift**
|
||||
- Migration file exists but not applied
|
||||
- Schema doesn't match migration
|
||||
- RLS policies missing
|
||||
|
||||
5. **API Stubs**
|
||||
- Route exists but returns 501
|
||||
- Handler not implemented
|
||||
- No error handling
|
||||
|
||||
This step is the **safety net** that catches incomplete work before code review.
|
||||
|
|
@ -1,294 +0,0 @@
|
|||
---
|
||||
name: 'step-06-code-review'
|
||||
description: 'Adversarial code review finding 3-10 specific issues'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/story-dev-only'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-06-code-review.md'
|
||||
nextStepFile: '{workflow_path}/steps/step-07-complete.md'
|
||||
checklist: '{workflow_path}/checklists/code-review.md'
|
||||
|
||||
# Role (continue as dev, but reviewer mindset)
|
||||
role: dev
|
||||
requires_fresh_context: true # In batch mode, checkpoint here for unbiased review
|
||||
---
|
||||
|
||||
# Step 6: Code Review
|
||||
|
||||
## ROLE CONTINUATION - ADVERSARIAL MODE
|
||||
|
||||
**Continuing as DEV but switching to ADVERSARIAL REVIEWER mindset.**
|
||||
|
||||
You are now a critical code reviewer. Your job is to FIND PROBLEMS.
|
||||
- **NEVER** say "looks good" - that's a failure
|
||||
- **MUST** find 3-10 specific issues
|
||||
- **FIX** every issue you find
|
||||
|
||||
## STEP GOAL
|
||||
|
||||
Perform adversarial code review:
|
||||
1. Query Supabase advisors for security/performance issues
|
||||
2. Identify all files changed for this story
|
||||
3. Review each file against checklist
|
||||
4. Find and document 3-10 issues (MANDATORY)
|
||||
5. Fix all issues
|
||||
6. Verify tests still pass
|
||||
|
||||
## MANDATORY EXECUTION RULES
|
||||
|
||||
### Adversarial Requirements
|
||||
|
||||
- **MINIMUM 3 ISSUES** - If you found fewer, look harder
|
||||
- **MAXIMUM 10 ISSUES** - Prioritize if more found
|
||||
- **NO "LOOKS GOOD"** - This is FORBIDDEN
|
||||
- **FIX EVERYTHING** - Don't just report, fix
|
||||
|
||||
### Review Categories (find issues in EACH)
|
||||
|
||||
1. Security
|
||||
2. Performance
|
||||
3. Error Handling
|
||||
4. Test Coverage
|
||||
5. Code Quality
|
||||
6. Architecture
|
||||
|
||||
## EXECUTION SEQUENCE
|
||||
|
||||
### 1. Query Supabase Advisors
|
||||
|
||||
Use MCP tools:
|
||||
|
||||
```
|
||||
mcp__supabase__get_advisors:
|
||||
type: "security"
|
||||
|
||||
mcp__supabase__get_advisors:
|
||||
type: "performance"
|
||||
```
|
||||
|
||||
Document any issues found.
|
||||
|
||||
### 2. Identify Changed Files
|
||||
|
||||
```bash
|
||||
git status
|
||||
git diff --name-only HEAD~1
|
||||
```
|
||||
|
||||
List all files changed for story {story_id}.
|
||||
|
||||
### 3. Review Each Category
|
||||
|
||||
#### SECURITY REVIEW
|
||||
|
||||
For each file, check:
|
||||
- [ ] No SQL injection vulnerabilities
|
||||
- [ ] No XSS vulnerabilities
|
||||
- [ ] Auth checks on all protected routes
|
||||
- [ ] RLS policies exist and are correct
|
||||
- [ ] No credential exposure (API keys, secrets)
|
||||
- [ ] Input validation present
|
||||
- [ ] Rate limiting considered
|
||||
|
||||
#### PERFORMANCE REVIEW
|
||||
|
||||
- [ ] No N+1 query patterns
|
||||
- [ ] Indexes exist for query patterns
|
||||
- [ ] No unnecessary re-renders
|
||||
- [ ] Proper caching strategy
|
||||
- [ ] Efficient data fetching
|
||||
- [ ] Bundle size impact considered
|
||||
|
||||
#### ERROR HANDLING REVIEW
|
||||
|
||||
- [ ] Result type used consistently
|
||||
- [ ] Error messages are user-friendly
|
||||
- [ ] Edge cases handled
|
||||
- [ ] Null/undefined checked
|
||||
- [ ] Network errors handled gracefully
|
||||
|
||||
#### TEST COVERAGE REVIEW
|
||||
|
||||
- [ ] All AC have tests
|
||||
- [ ] Edge cases tested
|
||||
- [ ] Error paths tested
|
||||
- [ ] Mocking is appropriate (not excessive)
|
||||
- [ ] Tests are deterministic
|
||||
|
||||
#### CODE QUALITY REVIEW
|
||||
|
||||
- [ ] DRY - no duplicate code
|
||||
- [ ] SOLID principles followed
|
||||
- [ ] TypeScript strict mode compliant
|
||||
- [ ] No any types
|
||||
- [ ] Functions are focused (single responsibility)
|
||||
- [ ] Naming is clear and consistent
|
||||
|
||||
#### ARCHITECTURE REVIEW
|
||||
|
||||
- [ ] Module boundaries respected
|
||||
- [ ] Imports from index.ts only
|
||||
- [ ] Server/client separation correct
|
||||
- [ ] Data flow is clear
|
||||
- [ ] No circular dependencies
|
||||
|
||||
### 4. Document All Issues
|
||||
|
||||
For each issue found:
|
||||
|
||||
```yaml
|
||||
issue_{n}:
|
||||
severity: critical|high|medium|low
|
||||
category: security|performance|error-handling|testing|quality|architecture
|
||||
file: "{file_path}"
|
||||
line: {line_number}
|
||||
problem: |
|
||||
{Clear description of the issue}
|
||||
risk: |
|
||||
{What could go wrong if not fixed}
|
||||
fix: |
|
||||
{How to fix it}
|
||||
```
|
||||
|
||||
### 5. Fix All Issues
|
||||
|
||||
For EACH issue documented:
|
||||
|
||||
1. Edit the file to fix the issue
|
||||
2. Add test if issue wasn't covered
|
||||
3. Verify the fix is correct
|
||||
4. Mark as fixed
|
||||
|
||||
### 6. Run Verification
|
||||
|
||||
After all fixes:
|
||||
|
||||
```bash
|
||||
npm run lint
|
||||
npm run build
|
||||
npm test -- --run
|
||||
```
|
||||
|
||||
All must pass.
|
||||
|
||||
### 7. Create Review Report
|
||||
|
||||
Append to story file or create `{sprint_artifacts}/review-{story_id}.md`:
|
||||
|
||||
```markdown
|
||||
# Code Review Report - Story {story_id}
|
||||
|
||||
## Summary
|
||||
- Issues Found: {count}
|
||||
- Issues Fixed: {count}
|
||||
- Categories Reviewed: {list}
|
||||
|
||||
## Issues Detail
|
||||
|
||||
### Issue 1: {title}
|
||||
- **Severity:** {severity}
|
||||
- **Category:** {category}
|
||||
- **File:** {file}:{line}
|
||||
- **Problem:** {description}
|
||||
- **Fix Applied:** {fix_description}
|
||||
|
||||
### Issue 2: {title}
|
||||
...
|
||||
|
||||
## Security Checklist
|
||||
- [x] RLS policies verified
|
||||
- [x] No credential exposure
|
||||
- [x] Input validation present
|
||||
|
||||
## Performance Checklist
|
||||
- [x] No N+1 queries
|
||||
- [x] Indexes verified
|
||||
|
||||
## Final Status
|
||||
All issues resolved. Tests passing.
|
||||
|
||||
Reviewed by: DEV (adversarial)
|
||||
Reviewed at: {timestamp}
|
||||
```
|
||||
|
||||
### 8. Update Pipeline State
|
||||
|
||||
Update state file:
|
||||
- Add `6` to `stepsCompleted`
|
||||
- Set `lastStep: 6`
|
||||
- Set `steps.step-06-code-review.status: completed`
|
||||
- Record `issues_found` and `issues_fixed`
|
||||
|
||||
### 9. Present Summary and Menu
|
||||
|
||||
Display:
|
||||
```
|
||||
Code Review Complete
|
||||
|
||||
Issues Found: {count} (minimum 3 required)
|
||||
Issues Fixed: {count}
|
||||
|
||||
By Category:
|
||||
- Security: {count}
|
||||
- Performance: {count}
|
||||
- Error Handling: {count}
|
||||
- Test Coverage: {count}
|
||||
- Code Quality: {count}
|
||||
- Architecture: {count}
|
||||
|
||||
All Tests: PASSING
|
||||
Lint: CLEAN
|
||||
Build: SUCCESS
|
||||
|
||||
Review Report: {report_path}
|
||||
```
|
||||
|
||||
**Interactive Mode Menu:**
|
||||
```
|
||||
[C] Continue to Completion
|
||||
[R] Run another review pass
|
||||
[T] Run tests again
|
||||
[H] Halt pipeline
|
||||
```
|
||||
|
||||
**Batch Mode:** Auto-continue if minimum issues found and fixed
|
||||
|
||||
## QUALITY GATE
|
||||
|
||||
Before proceeding:
|
||||
- [ ] Minimum 3 issues found and fixed
|
||||
- [ ] All categories reviewed
|
||||
- [ ] All tests still passing
|
||||
- [ ] Lint clean
|
||||
- [ ] Build succeeds
|
||||
- [ ] Review report created
|
||||
|
||||
## MCP TOOLS AVAILABLE
|
||||
|
||||
- `mcp__supabase__get_advisors` - Security/performance checks
|
||||
- `mcp__supabase__execute_sql` - Query verification
|
||||
|
||||
## CRITICAL STEP COMPLETION
|
||||
|
||||
**ONLY WHEN** [minimum 3 issues found AND all fixed AND tests pass],
|
||||
load and execute `{nextStepFile}` for story completion.
|
||||
|
||||
---
|
||||
|
||||
## SUCCESS/FAILURE METRICS
|
||||
|
||||
### ✅ SUCCESS
|
||||
- Found and fixed 3-10 issues
|
||||
- All categories reviewed
|
||||
- Tests still passing after fixes
|
||||
- Review report complete
|
||||
- No "looks good" shortcuts
|
||||
|
||||
### ❌ FAILURE
|
||||
- Saying "looks good" or "no issues found"
|
||||
- Finding fewer than 3 issues
|
||||
- Not fixing issues found
|
||||
- Tests failing after fixes
|
||||
- Skipping review categories
|
||||
|
|
@ -1,210 +0,0 @@
|
|||
---
|
||||
name: 'step-07-complete'
|
||||
description: 'Update sprint status and create git commit'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/story-dev-only'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-07-complete.md'
|
||||
nextStepFile: '{workflow_path}/steps/step-08-summary.md'
|
||||
|
||||
# Role Switch
|
||||
role: sm
|
||||
agentFile: '{project-root}/_bmad/bmm/agents/sm.md'
|
||||
---
|
||||
|
||||
# Step 7: Complete Story
|
||||
|
||||
## ROLE SWITCH
|
||||
|
||||
**Switching back to SM (Scrum Master) perspective.**
|
||||
|
||||
You are completing the story lifecycle:
|
||||
- Update sprint tracking
|
||||
- Create git commit
|
||||
- Finalize documentation
|
||||
|
||||
## STEP GOAL
|
||||
|
||||
Complete the story development lifecycle:
|
||||
1. Final verification (tests, lint, build)
|
||||
2. Update sprint-status.yaml
|
||||
3. Create git commit with proper message
|
||||
4. Update story file status
|
||||
|
||||
## MANDATORY EXECUTION RULES
|
||||
|
||||
### Completion Rules
|
||||
|
||||
- **VERIFY** everything passes before committing
|
||||
- **UPDATE** all tracking files
|
||||
- **COMMIT** with conventional commit message
|
||||
- **DOCUMENT** completion metadata
|
||||
|
||||
## EXECUTION SEQUENCE
|
||||
|
||||
### 1. Final Verification
|
||||
|
||||
Run full verification suite:
|
||||
|
||||
```bash
|
||||
npm test -- --run
|
||||
npm run lint
|
||||
npm run build
|
||||
```
|
||||
|
||||
All must pass before proceeding.
|
||||
|
||||
**If any fail:** HALT and report issues.
|
||||
|
||||
### 2. Update Story File Status
|
||||
|
||||
Edit story file, update frontmatter:
|
||||
|
||||
```yaml
|
||||
---
|
||||
status: done
|
||||
completed_at: {timestamp}
|
||||
implementation_notes: |
|
||||
- Tests created and passing
|
||||
- Code reviewed and approved
|
||||
- {count} issues found and fixed
|
||||
---
|
||||
```
|
||||
|
||||
### 3. Update Sprint Status
|
||||
|
||||
Edit: `{sprint_artifacts}/sprint-status.yaml`
|
||||
|
||||
Find story {story_id} and update:
|
||||
|
||||
```yaml
|
||||
stories:
|
||||
- id: "{story_id}"
|
||||
status: done
|
||||
completed_at: {timestamp}
|
||||
metadata:
|
||||
tests_passing: true
|
||||
code_reviewed: true
|
||||
issues_found: {count}
|
||||
issues_fixed: {count}
|
||||
pipeline_version: "story-dev-only-v2.0"
|
||||
```
|
||||
|
||||
### 4. Stage Git Changes
|
||||
|
||||
```bash
|
||||
git add src/
|
||||
git add _bmad-output/implementation-artifacts/story-{story_id}.md
|
||||
git add _bmad-output/implementation-artifacts/sprint-status.yaml
|
||||
git add src/supabase/migrations/
|
||||
```
|
||||
|
||||
### 5. Create Git Commit
|
||||
|
||||
Check for changes:
|
||||
```bash
|
||||
git diff --cached --quiet
|
||||
```
|
||||
|
||||
If changes exist, create commit:
|
||||
|
||||
```bash
|
||||
git commit -m "$(cat <<'EOF'
|
||||
feat(epic-{epic_num}): complete story {story_id}
|
||||
|
||||
- Acceptance tests created for all criteria
|
||||
- All tests passing (TDD green phase)
|
||||
- Code reviewed: {issues_found} issues found and fixed
|
||||
|
||||
Story: {story_title}
|
||||
Pipeline: story-dev-only-v2.0
|
||||
|
||||
🤖 Generated with BMAD Story Pipeline
|
||||
|
||||
Co-Authored-By: Claude <noreply@anthropic.com>
|
||||
EOF
|
||||
)"
|
||||
```
|
||||
|
||||
### 6. Verify Commit
|
||||
|
||||
```bash
|
||||
git log -1 --oneline
|
||||
git status
|
||||
```
|
||||
|
||||
Confirm:
|
||||
- Commit created successfully
|
||||
- Working directory clean (or only untracked files)
|
||||
|
||||
### 7. Update Pipeline State
|
||||
|
||||
Update state file:
|
||||
- Add `7` to `stepsCompleted`
|
||||
- Set `lastStep: 7`
|
||||
- Set `steps.step-07-complete.status: completed`
|
||||
- Set `status: completing`
|
||||
|
||||
### 8. Present Summary and Menu
|
||||
|
||||
Display:
|
||||
```
|
||||
Story {story_id} Completed
|
||||
|
||||
Sprint Status: Updated ✓
|
||||
Story Status: done ✓
|
||||
Git Commit: Created ✓
|
||||
|
||||
Commit: {commit_hash}
|
||||
Message: feat(epic-{epic_num}): complete story {story_id}
|
||||
|
||||
Files Committed:
|
||||
- {file_count} files
|
||||
|
||||
Next: Generate summary and audit trail
|
||||
```
|
||||
|
||||
**Interactive Mode Menu:**
|
||||
```
|
||||
[C] Continue to Summary
|
||||
[L] View git log
|
||||
[S] View git status
|
||||
[H] Halt (story is complete, audit pending)
|
||||
```
|
||||
|
||||
**Batch Mode:** Auto-continue to summary
|
||||
|
||||
## QUALITY GATE
|
||||
|
||||
Before proceeding:
|
||||
- [ ] All tests pass
|
||||
- [ ] Lint clean
|
||||
- [ ] Build succeeds
|
||||
- [ ] Sprint status updated
|
||||
- [ ] Git commit created
|
||||
- [ ] Story status set to done
|
||||
|
||||
## CRITICAL STEP COMPLETION
|
||||
|
||||
**ONLY WHEN** [verification passes AND commit created AND status updated],
|
||||
load and execute `{nextStepFile}` for summary generation.
|
||||
|
||||
---
|
||||
|
||||
## SUCCESS/FAILURE METRICS
|
||||
|
||||
### ✅ SUCCESS
|
||||
- All verification passes
|
||||
- Sprint status updated correctly
|
||||
- Conventional commit created
|
||||
- Story marked as done
|
||||
- Clean git state
|
||||
|
||||
### ❌ FAILURE
|
||||
- Committing with failing tests
|
||||
- Missing sprint status update
|
||||
- Malformed commit message
|
||||
- Not including all changed files
|
||||
- Story not marked done
|
||||
|
|
@ -1,273 +0,0 @@
|
|||
---
|
||||
name: 'step-08-summary'
|
||||
description: 'Generate audit trail and pipeline summary report'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/story-dev-only'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-08-summary.md'
|
||||
auditFile: '{sprint_artifacts}/audit-{story_id}-{date}.yaml'
|
||||
|
||||
# No role needed - orchestrator
|
||||
role: null
|
||||
---
|
||||
|
||||
# Step 8: Pipeline Summary
|
||||
|
||||
## STEP GOAL
|
||||
|
||||
Generate final audit trail and summary report:
|
||||
1. Calculate pipeline metrics
|
||||
2. Generate audit trail file
|
||||
3. Create summary report
|
||||
4. Clean up pipeline state
|
||||
5. Suggest next steps
|
||||
|
||||
## EXECUTION SEQUENCE
|
||||
|
||||
### 1. Calculate Pipeline Metrics
|
||||
|
||||
From pipeline state file, calculate:
|
||||
|
||||
```yaml
|
||||
metrics:
|
||||
total_duration: {sum of all step durations}
|
||||
steps_completed: {count}
|
||||
issues_found: {from code review}
|
||||
issues_fixed: {from code review}
|
||||
tests_created: {count}
|
||||
files_modified: {count}
|
||||
migrations_applied: {count}
|
||||
```
|
||||
|
||||
### 2. Generate Audit Trail
|
||||
|
||||
Create: `{auditFile}`
|
||||
|
||||
```yaml
|
||||
---
|
||||
audit_version: "1.0"
|
||||
pipeline: "story-dev-only-v2.0"
|
||||
story_id: "{story_id}"
|
||||
epic_num: {epic_num}
|
||||
---
|
||||
|
||||
# Pipeline Audit Trail
|
||||
|
||||
## Execution Summary
|
||||
started_at: "{started_at}"
|
||||
completed_at: "{timestamp}"
|
||||
total_duration: "{duration}"
|
||||
mode: "{mode}"
|
||||
status: "completed"
|
||||
|
||||
## Steps Executed
|
||||
steps:
|
||||
- step: 1
|
||||
name: "Initialize"
|
||||
status: completed
|
||||
duration: "{duration}"
|
||||
|
||||
- step: 2
|
||||
name: "Create Story"
|
||||
status: completed
|
||||
duration: "{duration}"
|
||||
agent: sm
|
||||
output: "{story_file_path}"
|
||||
|
||||
- step: 3
|
||||
name: "Validate Story"
|
||||
status: completed
|
||||
duration: "{duration}"
|
||||
agent: sm
|
||||
issues_found: {count}
|
||||
issues_fixed: {count}
|
||||
|
||||
- step: 4
|
||||
name: "ATDD"
|
||||
status: completed
|
||||
duration: "{duration}"
|
||||
agent: tea
|
||||
tests_created: {count}
|
||||
test_files:
|
||||
- "{file_1}"
|
||||
- "{file_2}"
|
||||
|
||||
- step: 5
|
||||
name: "Implement"
|
||||
status: completed
|
||||
duration: "{duration}"
|
||||
agent: dev
|
||||
files_modified: {count}
|
||||
migrations:
|
||||
- "{migration_1}"
|
||||
|
||||
- step: 6
|
||||
name: "Code Review"
|
||||
status: completed
|
||||
duration: "{duration}"
|
||||
agent: dev
|
||||
issues_found: {count}
|
||||
issues_fixed: {count}
|
||||
categories_reviewed:
|
||||
- security
|
||||
- performance
|
||||
- error-handling
|
||||
- testing
|
||||
- quality
|
||||
- architecture
|
||||
|
||||
- step: 7
|
||||
name: "Complete"
|
||||
status: completed
|
||||
duration: "{duration}"
|
||||
agent: sm
|
||||
commit_hash: "{hash}"
|
||||
|
||||
- step: 8
|
||||
name: "Summary"
|
||||
status: completed
|
||||
duration: "{duration}"
|
||||
|
||||
## Quality Gates
|
||||
gates:
|
||||
story_creation:
|
||||
passed: true
|
||||
criteria_met: [list]
|
||||
validation:
|
||||
passed: true
|
||||
quality_score: {score}
|
||||
atdd:
|
||||
passed: true
|
||||
tests_failing: true # Expected in red phase
|
||||
implementation:
|
||||
passed: true
|
||||
tests_passing: true
|
||||
code_review:
|
||||
passed: true
|
||||
minimum_issues_found: true
|
||||
|
||||
## Artifacts Produced
|
||||
artifacts:
|
||||
story_file: "{path}"
|
||||
test_files:
|
||||
- "{path}"
|
||||
migrations:
|
||||
- "{path}"
|
||||
atdd_checklist: "{path}"
|
||||
review_report: "{path}"
|
||||
commit: "{hash}"
|
||||
|
||||
## Token Efficiency
|
||||
token_estimate:
|
||||
traditional_approach: "~71K tokens (6 claude calls)"
|
||||
step_file_approach: "~{actual}K tokens (1 session)"
|
||||
savings: "{percentage}%"
|
||||
```
|
||||
|
||||
### 3. Generate Summary Report
|
||||
|
||||
Display to user:
|
||||
|
||||
```
|
||||
═══════════════════════════════════════════════════════════════════
|
||||
PIPELINE COMPLETE: Story {story_id}
|
||||
═══════════════════════════════════════════════════════════════════
|
||||
|
||||
📊 EXECUTION SUMMARY
|
||||
────────────────────
|
||||
Duration: {total_duration}
|
||||
Mode: {mode}
|
||||
Status: ✓ Completed Successfully
|
||||
|
||||
📋 STORY DETAILS
|
||||
────────────────────
|
||||
Epic: {epic_num}
|
||||
Title: {story_title}
|
||||
Commit: {commit_hash}
|
||||
|
||||
✅ QUALITY METRICS
|
||||
────────────────────
|
||||
Validation Score: {score}/100
|
||||
Issues Found: {count}
|
||||
Issues Fixed: {count}
|
||||
Tests Created: {count}
|
||||
Files Modified: {count}
|
||||
|
||||
📁 ARTIFACTS
|
||||
────────────────────
|
||||
Story: {story_file}
|
||||
Tests: {test_count} files
|
||||
Migrations: {migration_count}
|
||||
Audit: {audit_file}
|
||||
|
||||
💰 TOKEN EFFICIENCY
|
||||
────────────────────
|
||||
Traditional: ~71K tokens
|
||||
Step-file: ~{actual}K tokens
|
||||
Savings: {percentage}%
|
||||
|
||||
═══════════════════════════════════════════════════════════════════
|
||||
```
|
||||
|
||||
### 4. Update Final Pipeline State
|
||||
|
||||
Update state file:
|
||||
- Add `8` to `stepsCompleted`
|
||||
- Set `lastStep: 8`
|
||||
- Set `status: completed`
|
||||
- Set `completed_at: {timestamp}`
|
||||
|
||||
### 5. Suggest Next Steps
|
||||
|
||||
Display:
|
||||
|
||||
```
|
||||
📌 NEXT STEPS
|
||||
────────────────────
|
||||
1. Review commit: git show {hash}
|
||||
2. Push when ready: git push
|
||||
3. Next story: bmad build {next_story_id}
|
||||
4. View audit: cat {audit_file}
|
||||
|
||||
Optional:
|
||||
- Run verification: bmad verify {story_id}
|
||||
- Run with coverage: npm test -- --coverage
|
||||
```
|
||||
|
||||
### 6. Clean Up (Optional)
|
||||
|
||||
In batch mode, optionally archive pipeline state:
|
||||
|
||||
```bash
|
||||
mv {state_file} {state_file}.completed
|
||||
```
|
||||
|
||||
Or keep for reference.
|
||||
|
||||
## COMPLETION
|
||||
|
||||
Pipeline execution complete. No next step to load.
|
||||
|
||||
Display final message:
|
||||
```
|
||||
Pipeline complete. Story {story_id} is ready.
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## SUCCESS/FAILURE METRICS
|
||||
|
||||
### ✅ SUCCESS
|
||||
- Audit trail generated with all details
|
||||
- Summary displayed clearly
|
||||
- All metrics calculated
|
||||
- State marked complete
|
||||
- Next steps provided
|
||||
|
||||
### ❌ FAILURE
|
||||
- Missing audit trail
|
||||
- Incomplete metrics
|
||||
- State not finalized
|
||||
- No summary provided
|
||||
|
|
@ -1,249 +0,0 @@
|
|||
# Audit Trail Template
|
||||
# Generated at pipeline completion
|
||||
# Location: {sprint_artifacts}/audit-{story_id}-{date}.yaml
|
||||
# yamllint disable
|
||||
|
||||
---
|
||||
audit_version: "1.0"
|
||||
pipeline_version: "story-pipeline-v2.0"
|
||||
|
||||
# Story identification
|
||||
story_id: "{{story_id}}"
|
||||
epic_num: {{epic_num}}
|
||||
story_title: "{{story_title}}"
|
||||
|
||||
# Execution summary
|
||||
execution:
|
||||
started_at: "{{started_at}}"
|
||||
completed_at: "{{completed_at}}"
|
||||
total_duration: "{{duration}}"
|
||||
mode: "{{mode}}"
|
||||
status: "{{status}}"
|
||||
|
||||
# Agent roles used
|
||||
agents:
|
||||
sm:
|
||||
name: "Scrum Master"
|
||||
steps: [2, 3, 7]
|
||||
total_time: null
|
||||
tea:
|
||||
name: "Test Engineering Architect"
|
||||
steps: [4]
|
||||
total_time: null
|
||||
dev:
|
||||
name: "Developer"
|
||||
steps: [5, 6]
|
||||
total_time: null
|
||||
|
||||
# Step-by-step execution log
|
||||
steps:
|
||||
- step: 1
|
||||
name: "Initialize"
|
||||
status: "{{status}}"
|
||||
duration: "{{duration}}"
|
||||
actions:
|
||||
- "Loaded project context"
|
||||
- "Loaded epic definition"
|
||||
- "Cached architecture sections"
|
||||
|
||||
- step: 2
|
||||
name: "Create Story"
|
||||
status: "{{status}}"
|
||||
duration: "{{duration}}"
|
||||
agent: "sm"
|
||||
research_queries:
|
||||
- "{{query_1}}"
|
||||
- "{{query_2}}"
|
||||
output: "{{story_file_path}}"
|
||||
acceptance_criteria_count: {{count}}
|
||||
|
||||
- step: 3
|
||||
name: "Validate Story"
|
||||
status: "{{status}}"
|
||||
duration: "{{duration}}"
|
||||
agent: "sm"
|
||||
issues_found: {{count}}
|
||||
issues_fixed: {{count}}
|
||||
quality_score: {{score}}
|
||||
validation_areas:
|
||||
- "AC structure"
|
||||
- "Testability"
|
||||
- "Technical feasibility"
|
||||
- "Edge cases"
|
||||
|
||||
- step: 4
|
||||
name: "ATDD (Red Phase)"
|
||||
status: "{{status}}"
|
||||
duration: "{{duration}}"
|
||||
agent: "tea"
|
||||
tests_created: {{count}}
|
||||
test_files:
|
||||
- "{{path}}"
|
||||
factories_created:
|
||||
- "{{factory}}"
|
||||
fixtures_created:
|
||||
- "{{fixture}}"
|
||||
data_testids_documented: {{count}}
|
||||
|
||||
- step: 5
|
||||
name: "Implement (Green Phase)"
|
||||
status: "{{status}}"
|
||||
duration: "{{duration}}"
|
||||
agent: "dev"
|
||||
files_modified: {{count}}
|
||||
migrations_applied:
|
||||
- "{{migration}}"
|
||||
test_results:
|
||||
passed: {{count}}
|
||||
failed: 0
|
||||
lint_status: "clean"
|
||||
build_status: "success"
|
||||
|
||||
- step: 6
|
||||
name: "Code Review"
|
||||
status: "{{status}}"
|
||||
duration: "{{duration}}"
|
||||
agent: "dev"
|
||||
review_type: "adversarial"
|
||||
issues_found: {{count}}
|
||||
issues_fixed: {{count}}
|
||||
categories_reviewed:
|
||||
security:
|
||||
issues: {{count}}
|
||||
fixed: {{count}}
|
||||
performance:
|
||||
issues: {{count}}
|
||||
fixed: {{count}}
|
||||
error_handling:
|
||||
issues: {{count}}
|
||||
fixed: {{count}}
|
||||
testing:
|
||||
issues: {{count}}
|
||||
fixed: {{count}}
|
||||
code_quality:
|
||||
issues: {{count}}
|
||||
fixed: {{count}}
|
||||
architecture:
|
||||
issues: {{count}}
|
||||
fixed: {{count}}
|
||||
|
||||
- step: 7
|
||||
name: "Complete"
|
||||
status: "{{status}}"
|
||||
duration: "{{duration}}"
|
||||
agent: "sm"
|
||||
commit_hash: "{{hash}}"
|
||||
commit_message: "feat(epic-{{epic_num}}): complete story {{story_id}}"
|
||||
files_committed: {{count}}
|
||||
sprint_status_updated: true
|
||||
|
||||
- step: 8
|
||||
name: "Summary"
|
||||
status: "{{status}}"
|
||||
duration: "{{duration}}"
|
||||
audit_file: "{{this_file}}"
|
||||
|
||||
# Quality gates summary
|
||||
quality_gates:
|
||||
story_creation:
|
||||
passed: true
|
||||
criteria:
|
||||
- "Story file created"
|
||||
- "All AC in BDD format"
|
||||
- "Test scenarios defined"
|
||||
|
||||
validation:
|
||||
passed: true
|
||||
quality_score: {{score}}
|
||||
criteria:
|
||||
- "No ambiguous requirements"
|
||||
- "All issues fixed"
|
||||
|
||||
atdd:
|
||||
passed: true
|
||||
criteria:
|
||||
- "Tests for all AC"
|
||||
- "Tests fail (red phase)"
|
||||
- "data-testids documented"
|
||||
|
||||
implementation:
|
||||
passed: true
|
||||
criteria:
|
||||
- "All tests pass"
|
||||
- "Lint clean"
|
||||
- "Build success"
|
||||
- "RLS policies added"
|
||||
|
||||
code_review:
|
||||
passed: true
|
||||
issues_found: {{count}}
|
||||
criteria:
|
||||
- "Minimum 3 issues found"
|
||||
- "All issues fixed"
|
||||
- "All categories reviewed"
|
||||
|
||||
# Artifacts produced
|
||||
artifacts:
|
||||
story_file:
|
||||
path: "{{path}}"
|
||||
size: "{{size}}"
|
||||
|
||||
test_files:
|
||||
- path: "{{path}}"
|
||||
test_count: {{count}}
|
||||
|
||||
migrations:
|
||||
- path: "{{path}}"
|
||||
tables_affected: ["{{table}}"]
|
||||
|
||||
checklists:
|
||||
atdd: "{{path}}"
|
||||
review: "{{path}}"
|
||||
|
||||
commit:
|
||||
hash: "{{hash}}"
|
||||
branch: "{{branch}}"
|
||||
pushed: false
|
||||
|
||||
# Token efficiency comparison
|
||||
token_efficiency:
|
||||
traditional_approach:
|
||||
description: "6 separate claude -p calls"
|
||||
estimated_tokens: 71000
|
||||
breakdown:
|
||||
- stage: "create-story"
|
||||
tokens: 12000
|
||||
- stage: "validate-story"
|
||||
tokens: 11000
|
||||
- stage: "atdd"
|
||||
tokens: 12000
|
||||
- stage: "implement"
|
||||
tokens: 15000
|
||||
- stage: "code-review"
|
||||
tokens: 13000
|
||||
- stage: "complete"
|
||||
tokens: 8000
|
||||
|
||||
step_file_approach:
|
||||
description: "Single session with step-file loading"
|
||||
estimated_tokens: "{{actual}}"
|
||||
savings_percentage: "{{percentage}}"
|
||||
breakdown:
|
||||
- step: "context_loading"
|
||||
tokens: 5000
|
||||
note: "Loaded once, cached"
|
||||
- step: "step_files"
|
||||
tokens: "{{tokens}}"
|
||||
note: "~200 lines each"
|
||||
- step: "execution"
|
||||
tokens: "{{tokens}}"
|
||||
note: "Actual work"
|
||||
|
||||
# Notes and observations
|
||||
notes:
|
||||
- "{{note_1}}"
|
||||
- "{{note_2}}"
|
||||
|
||||
# Generated by
|
||||
generated_by: "BMAD Story Pipeline v2.0"
|
||||
generated_at: "{{timestamp}}"
|
||||
|
|
@ -1,144 +0,0 @@
|
|||
# Pipeline State Template
|
||||
# Copy and populate for each story execution
|
||||
# Location: {sprint_artifacts}/pipeline-state-{story_id}.yaml
|
||||
|
||||
---
|
||||
# Story identification
|
||||
story_id: "{{story_id}}"
|
||||
epic_num: {{epic_num}}
|
||||
story_num: {{story_num}}
|
||||
|
||||
# Execution mode
|
||||
mode: "interactive" # or "batch"
|
||||
|
||||
# Progress tracking
|
||||
stepsCompleted: []
|
||||
lastStep: 0
|
||||
currentStep: 0
|
||||
status: "not_started" # not_started, initializing, in_progress, completing, completed, failed
|
||||
|
||||
# Timestamps
|
||||
started_at: null
|
||||
updated_at: null
|
||||
completed_at: null
|
||||
|
||||
# Cached document context (loaded once, reused)
|
||||
cached_context:
|
||||
project_context_loaded: false
|
||||
project_context_path: null
|
||||
epic_loaded: false
|
||||
epic_path: null
|
||||
architecture_sections: []
|
||||
architecture_paths: []
|
||||
story_file_exists: false
|
||||
story_file_path: null
|
||||
|
||||
# Step status tracking
|
||||
steps:
|
||||
step-01-init:
|
||||
status: pending
|
||||
started_at: null
|
||||
completed_at: null
|
||||
duration: null
|
||||
notes: null
|
||||
|
||||
step-02-create-story:
|
||||
status: pending
|
||||
started_at: null
|
||||
completed_at: null
|
||||
duration: null
|
||||
agent: sm
|
||||
output_file: null
|
||||
notes: null
|
||||
|
||||
step-03-validate-story:
|
||||
status: pending
|
||||
started_at: null
|
||||
completed_at: null
|
||||
duration: null
|
||||
agent: sm
|
||||
issues_found: 0
|
||||
issues_fixed: 0
|
||||
quality_score: null
|
||||
notes: null
|
||||
|
||||
step-04-atdd:
|
||||
status: pending
|
||||
started_at: null
|
||||
completed_at: null
|
||||
duration: null
|
||||
agent: tea
|
||||
tests_created: 0
|
||||
test_files: []
|
||||
factories_created: []
|
||||
fixtures_created: []
|
||||
notes: null
|
||||
|
||||
step-05-implement:
|
||||
status: pending
|
||||
started_at: null
|
||||
completed_at: null
|
||||
duration: null
|
||||
agent: dev
|
||||
files_modified: []
|
||||
migrations_applied: []
|
||||
tests_passing: null
|
||||
lint_clean: null
|
||||
build_success: null
|
||||
notes: null
|
||||
|
||||
step-06-code-review:
|
||||
status: pending
|
||||
started_at: null
|
||||
completed_at: null
|
||||
duration: null
|
||||
agent: dev
|
||||
issues_found: 0
|
||||
issues_fixed: 0
|
||||
categories_reviewed: []
|
||||
tests_passing: null
|
||||
notes: null
|
||||
|
||||
step-07-complete:
|
||||
status: pending
|
||||
started_at: null
|
||||
completed_at: null
|
||||
duration: null
|
||||
agent: sm
|
||||
commit_hash: null
|
||||
sprint_status_updated: false
|
||||
notes: null
|
||||
|
||||
step-08-summary:
|
||||
status: pending
|
||||
started_at: null
|
||||
completed_at: null
|
||||
duration: null
|
||||
audit_file: null
|
||||
notes: null
|
||||
|
||||
# Error tracking (if pipeline fails)
|
||||
errors: []
|
||||
# Example error entry:
|
||||
# - step: 5
|
||||
# timestamp: "2025-01-15T12:00:00Z"
|
||||
# error: "Tests failed after implementation"
|
||||
# details: "3 tests failing in auth.test.ts"
|
||||
# recoverable: true
|
||||
|
||||
# Quality gates passed
|
||||
quality_gates:
|
||||
story_creation: null
|
||||
validation: null
|
||||
atdd: null
|
||||
implementation: null
|
||||
code_review: null
|
||||
|
||||
# Metrics (populated at end)
|
||||
metrics:
|
||||
total_duration: null
|
||||
token_estimate: null
|
||||
files_modified_count: 0
|
||||
tests_created_count: 0
|
||||
issues_found_total: 0
|
||||
issues_fixed_total: 0
|
||||
|
|
@ -1,272 +0,0 @@
|
|||
---
|
||||
name: story-dev-only
|
||||
description: Automated story development pipeline with token-efficient step-file architecture. Single-session orchestration replacing multiple Claude calls.
|
||||
web_bundle: true
|
||||
---
|
||||
|
||||
# Story Pipeline Workflow
|
||||
|
||||
**Goal:** Execute complete story development lifecycle in a single Claude session: create story, validate, generate tests (ATDD), implement, code review, and complete.
|
||||
|
||||
**Your Role:** You are the **BMAD Pipeline Orchestrator**. You will switch between agent roles (SM, TEA, DEV) as directed by each step file. Maintain context across role switches without reloading agent personas.
|
||||
|
||||
**Token Efficiency:** This workflow uses step-file architecture for ~60-70% token savings compared to separate Claude calls.
|
||||
|
||||
---
|
||||
|
||||
## WORKFLOW ARCHITECTURE
|
||||
|
||||
This uses **step-file architecture** for disciplined execution:
|
||||
|
||||
### Core Principles
|
||||
|
||||
- **Micro-file Design**: Each step is a self-contained instruction file (~150-250 lines)
|
||||
- **Just-In-Time Loading**: Only the current step file is in memory
|
||||
- **Role Switching**: Same session, explicit role switch instead of fresh Claude calls
|
||||
- **State Tracking**: Pipeline state in `{sprint_artifacts}/pipeline-state-{story_id}.yaml`
|
||||
- **Checkpoint/Resume**: Can resume from any completed step after failure
|
||||
|
||||
### Step Processing Rules
|
||||
|
||||
1. **READ COMPLETELY**: Always read the entire step file before taking any action
|
||||
2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
|
||||
3. **ROLE SWITCH**: When step specifies a role, adopt that agent's perspective
|
||||
4. **QUALITY GATES**: Complete gate criteria before proceeding to next step
|
||||
5. **WAIT FOR INPUT**: In interactive mode, halt at menus and wait for user selection
|
||||
6. **SAVE STATE**: Update pipeline state file after each step completion
|
||||
7. **LOAD NEXT**: When directed, load, read entire file, then execute the next step
|
||||
|
||||
### Critical Rules (NO EXCEPTIONS)
|
||||
|
||||
- **NEVER** load multiple step files simultaneously
|
||||
- **ALWAYS** read entire step file before execution
|
||||
- **NEVER** skip steps or optimize the sequence
|
||||
- **ALWAYS** update pipeline state after completing each step
|
||||
- **ALWAYS** follow the exact instructions in the step file
|
||||
- **NEVER** create mental todo lists from future steps
|
||||
- **NEVER** look ahead to future step files
|
||||
|
||||
### Mode Differences
|
||||
|
||||
| Aspect | Interactive | Batch |
|
||||
|--------|-------------|-------|
|
||||
| Menus | Present, wait for [C] | Auto-proceed |
|
||||
| Approval | Required at gates | Skip with YOLO |
|
||||
| On failure | Halt, checkpoint | Checkpoint, exit |
|
||||
| Code review | Same session | Fresh context option |
|
||||
|
||||
---
|
||||
|
||||
## EXECUTION MODES
|
||||
|
||||
### Interactive Mode (Default)
|
||||
|
||||
```bash
|
||||
bmad build 1-4 # Interactive pipeline for story 1-4
|
||||
bmad build --interactive 1-4
|
||||
```
|
||||
|
||||
Features:
|
||||
- Menu navigation between steps
|
||||
- User approval at quality gates
|
||||
- Can pause and resume
|
||||
- Role switching in same session
|
||||
|
||||
### Batch Mode
|
||||
|
||||
```bash
|
||||
bmad build --batch 1-4 # Unattended execution
|
||||
```
|
||||
|
||||
Features:
|
||||
- Auto-proceed through all steps
|
||||
- YOLO mode for approvals
|
||||
- Fail-fast on errors
|
||||
- Optional fresh context for code review
|
||||
|
||||
---
|
||||
|
||||
## INITIALIZATION SEQUENCE
|
||||
|
||||
### 1. Configuration Loading
|
||||
|
||||
Load and read config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
|
||||
- `output_folder`, `sprint_artifacts`, `communication_language`
|
||||
|
||||
### 2. Pipeline Parameters
|
||||
|
||||
Resolve from invocation:
|
||||
- `story_id`: Story identifier (e.g., "1-4")
|
||||
- `epic_num`: Epic number (e.g., 1)
|
||||
- `story_num`: Story number (e.g., 4)
|
||||
- `mode`: "interactive" or "batch"
|
||||
|
||||
### 3. Document Pre-loading
|
||||
|
||||
Load and cache these documents (read once, use across steps):
|
||||
- Story file: `{sprint_artifacts}/story-{epic_num}-{story_num}.md`
|
||||
- Epic file: `{output_folder}/epic-{epic_num}.md`
|
||||
- Architecture: `{output_folder}/architecture.md` (selective sections)
|
||||
- Project context: `**/project-context.md`
|
||||
|
||||
### 4. First Step Execution
|
||||
|
||||
Load, read the full file and then execute:
|
||||
`{project-root}/_bmad/bmm/workflows/4-implementation/story-dev-only/steps/step-01-init.md`
|
||||
|
||||
---
|
||||
|
||||
## STEP FILE MAP
|
||||
|
||||
| Step | File | Agent | Purpose |
|
||||
|------|------|-------|---------|
|
||||
| 1 | step-01-init.md | - | Load context, detect mode, cache docs |
|
||||
| 1b | step-01b-resume.md | - | Resume from checkpoint (conditional) |
|
||||
| 2 | step-02-create-story.md | SM | Create detailed story with research |
|
||||
| 3 | step-03-validate-story.md | SM | Adversarial validation |
|
||||
| 4 | step-04-atdd.md | TEA | Generate failing tests (red phase) |
|
||||
| 5 | step-05-implement.md | DEV | Implement to pass tests (green phase) |
|
||||
| 5b | step-05b-post-validation.md | DEV | Verify completed tasks vs codebase reality |
|
||||
| 6 | step-06-code-review.md | DEV | Find 3-10 specific issues |
|
||||
| 7 | step-07-complete.md | SM | Update status, git commit |
|
||||
| 8 | step-08-summary.md | - | Audit trail, summary report |
|
||||
|
||||
---
|
||||
|
||||
## ROLE SWITCHING PROTOCOL
|
||||
|
||||
When a step requires a different agent role:
|
||||
|
||||
1. **Announce Role Switch**: "Switching to [ROLE] perspective..."
|
||||
2. **Adopt Mindset**: Think from that role's expertise
|
||||
3. **Apply Checklist**: Use role-specific checklist from `checklists/`
|
||||
4. **Maintain Context**: Keep cached documents in memory
|
||||
5. **Complete Step**: Finish all step requirements before switching
|
||||
|
||||
Example role switches:
|
||||
- Step 2-3: SM (story creation and validation)
|
||||
- Step 4: SM → TEA (switch to test mindset)
|
||||
- Step 5-6: TEA → DEV (switch to implementation mindset)
|
||||
- Step 7: DEV → SM (switch back for completion)
|
||||
|
||||
---
|
||||
|
||||
## STATE MANAGEMENT
|
||||
|
||||
### Pipeline State File
|
||||
|
||||
Location: `{sprint_artifacts}/pipeline-state-{story_id}.yaml`
|
||||
|
||||
```yaml
|
||||
story_id: "1-4"
|
||||
epic_num: 1
|
||||
story_num: 4
|
||||
mode: "interactive"
|
||||
stepsCompleted: [1, 2, 3]
|
||||
lastStep: 3
|
||||
currentStep: 4
|
||||
status: "in_progress"
|
||||
started_at: "2025-01-15T10:00:00Z"
|
||||
updated_at: "2025-01-15T11:30:00Z"
|
||||
cached_context:
|
||||
story_loaded: true
|
||||
epic_loaded: true
|
||||
architecture_sections: ["tech_stack", "data_model"]
|
||||
steps:
|
||||
step-01-init: { status: completed, duration: "0:02:15" }
|
||||
step-02-create-story: { status: completed, duration: "0:15:30" }
|
||||
step-03-validate-story: { status: completed, duration: "0:08:45" }
|
||||
step-04-atdd: { status: in_progress }
|
||||
step-05-implement: { status: pending }
|
||||
step-06-code-review: { status: pending }
|
||||
step-07-complete: { status: pending }
|
||||
step-08-summary: { status: pending }
|
||||
```
|
||||
|
||||
### Checkpoint/Resume
|
||||
|
||||
To resume after failure:
|
||||
```bash
|
||||
bmad build --resume 1-4
|
||||
```
|
||||
|
||||
Resume logic:
|
||||
1. Load state file for story 1-4
|
||||
2. Find `lastStep` completed
|
||||
3. Load and execute step `lastStep + 1`
|
||||
4. Continue from there
|
||||
|
||||
---
|
||||
|
||||
## QUALITY GATES
|
||||
|
||||
Each gate must pass before proceeding:
|
||||
|
||||
### Story Creation Gate (Step 2)
|
||||
- [ ] Story file created with proper frontmatter
|
||||
- [ ] All acceptance criteria defined with Given/When/Then
|
||||
- [ ] Technical context linked
|
||||
|
||||
### Validation Gate (Step 3)
|
||||
- [ ] Story passes adversarial review
|
||||
- [ ] No ambiguous requirements
|
||||
- [ ] Implementation path clear
|
||||
|
||||
### ATDD Gate (Step 4)
|
||||
- [ ] Tests exist for all acceptance criteria
|
||||
- [ ] Tests fail (red phase verified)
|
||||
- [ ] Test structure follows project patterns
|
||||
|
||||
### Implementation Gate (Step 5)
|
||||
- [ ] All tests pass (green phase)
|
||||
- [ ] Code follows project patterns
|
||||
- [ ] No TypeScript errors
|
||||
- [ ] Lint passes
|
||||
|
||||
### Post-Validation Gate (Step 5b)
|
||||
- [ ] All completed tasks verified against codebase
|
||||
- [ ] Zero false positives (or re-implementation complete)
|
||||
- [ ] Files/functions/tests actually exist
|
||||
- [ ] Tests actually pass (not just claimed)
|
||||
|
||||
### Code Review Gate (Step 6)
|
||||
- [ ] 3-10 specific issues identified (not "looks good")
|
||||
- [ ] All issues resolved or documented
|
||||
- [ ] Security review complete
|
||||
|
||||
---
|
||||
|
||||
## SUCCESS METRICS
|
||||
|
||||
### ✅ SUCCESS
|
||||
|
||||
- Pipeline completes all 8 steps
|
||||
- All quality gates passed
|
||||
- Story status updated to "done"
|
||||
- Git commit created
|
||||
- Audit trail generated
|
||||
- Token usage < 35K (target)
|
||||
|
||||
### ❌ FAILURE
|
||||
|
||||
- Step file instructions skipped or optimized
|
||||
- Quality gate bypassed without approval
|
||||
- Role not properly switched
|
||||
- State file not updated
|
||||
- Tests not verified to fail before implementation
|
||||
- Code review accepts "looks good"
|
||||
|
||||
---
|
||||
|
||||
## AUDIT TRAIL
|
||||
|
||||
After completion, generate audit trail at:
|
||||
`{sprint_artifacts}/audit-{story_id}-{date}.yaml`
|
||||
|
||||
Contents:
|
||||
- Pipeline execution timeline
|
||||
- Step durations
|
||||
- Quality gate results
|
||||
- Issues found and resolved
|
||||
- Files modified
|
||||
- Token usage estimate
|
||||
|
|
@ -1,235 +0,0 @@
|
|||
name: story-pipeline
|
||||
description: "Automated story development pipeline with token-efficient step-file architecture. Replaces separate Claude calls with single-session orchestration."
|
||||
author: "BMad + digital-bridge"
|
||||
version: "2.0.0"
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
sprint_artifacts: "{config_source}:sprint_artifacts"
|
||||
communication_language: "{config_source}:communication_language"
|
||||
date: system-generated
|
||||
|
||||
# Workflow paths
|
||||
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/story-pipeline"
|
||||
steps_path: "{installed_path}/steps"
|
||||
templates_path: "{installed_path}/templates"
|
||||
checklists_path: "{installed_path}/checklists"
|
||||
|
||||
# State management
|
||||
state_file: "{sprint_artifacts}/pipeline-state-{{story_id}}.yaml"
|
||||
audit_trail: "{sprint_artifacts}/audit-{{story_id}}-{{date}}.yaml"
|
||||
|
||||
# Workflow modes
|
||||
modes:
|
||||
interactive:
|
||||
description: "Human-in-the-loop with menu navigation between steps"
|
||||
checkpoint_on_failure: true
|
||||
requires_approval: true
|
||||
fresh_context_for_review: false # Role switch instead
|
||||
batch:
|
||||
description: "Unattended execution with YOLO mode"
|
||||
checkpoint_on_failure: true
|
||||
requires_approval: false
|
||||
fresh_context_for_review: true # Checkpoint before code review
|
||||
fail_fast: true
|
||||
|
||||
# Agent role definitions (loaded once, switched as needed)
|
||||
agents:
|
||||
sm:
|
||||
name: "Scrum Master"
|
||||
persona: "{project-root}/_bmad/bmm/agents/sm.md"
|
||||
description: "Story creation, validation, sprint status"
|
||||
used_in_steps: [2, 3, 7]
|
||||
tea:
|
||||
name: "Test Engineering Architect"
|
||||
persona: "{project-root}/_bmad/bmm/agents/tea.md"
|
||||
description: "ATDD test generation, red phase verification"
|
||||
used_in_steps: [4]
|
||||
dev:
|
||||
name: "Developer"
|
||||
persona: "{project-root}/_bmad/bmm/agents/dev.md"
|
||||
description: "Implementation, post-validation, code review"
|
||||
used_in_steps: [5, "5b", 6]
|
||||
|
||||
# Step file definitions
|
||||
steps:
|
||||
- step: 1
|
||||
file: "{steps_path}/step-01-init.md"
|
||||
name: "Initialize Pipeline"
|
||||
description: "Load story context, detect mode, cache documents"
|
||||
agent: null
|
||||
quality_gate: false
|
||||
|
||||
- step: "1b"
|
||||
file: "{steps_path}/step-01b-resume.md"
|
||||
name: "Resume from Checkpoint"
|
||||
description: "Resume pipeline from last completed step"
|
||||
agent: null
|
||||
quality_gate: false
|
||||
conditional: true # Only if resuming
|
||||
|
||||
- step: 2
|
||||
file: "{steps_path}/step-02-create-story.md"
|
||||
name: "Create Story"
|
||||
description: "Generate detailed story from epic with research"
|
||||
agent: sm
|
||||
quality_gate: true
|
||||
mcp_tools: [exa]
|
||||
checklist: "{checklists_path}/story-creation.md"
|
||||
|
||||
- step: 3
|
||||
file: "{steps_path}/step-03-validate-story.md"
|
||||
name: "Validate Story"
|
||||
description: "Adversarial validation of story completeness"
|
||||
agent: sm
|
||||
quality_gate: true
|
||||
checklist: "{checklists_path}/story-validation.md"
|
||||
|
||||
- step: 4
|
||||
file: "{steps_path}/step-04-atdd.md"
|
||||
name: "ATDD Test Generation"
|
||||
description: "Generate failing acceptance tests (red phase)"
|
||||
agent: tea
|
||||
quality_gate: true
|
||||
checklist: "{checklists_path}/atdd.md"
|
||||
|
||||
- step: 5
|
||||
file: "{steps_path}/step-05-implement.md"
|
||||
name: "Implement Story"
|
||||
description: "Implement code to pass tests (green phase)"
|
||||
agent: dev
|
||||
quality_gate: true
|
||||
checklist: "{checklists_path}/implementation.md"
|
||||
|
||||
- step: "5b"
|
||||
file: "{steps_path}/step-05b-post-validation.md"
|
||||
name: "Post-Implementation Validation"
|
||||
description: "Verify completed tasks against codebase reality (catch false positives)"
|
||||
agent: dev
|
||||
quality_gate: true
|
||||
iterative: true # May re-invoke step 5 if gaps found
|
||||
|
||||
- step: 6
|
||||
file: "{steps_path}/step-06-code-review.md"
|
||||
name: "Code Review"
|
||||
description: "Adversarial code review finding 3-10 issues"
|
||||
agent: dev
|
||||
quality_gate: true
|
||||
requires_fresh_context: true # In batch mode, checkpoint here
|
||||
checklist: "{checklists_path}/code-review.md"
|
||||
|
||||
- step: 7
|
||||
file: "{steps_path}/step-07-complete.md"
|
||||
name: "Complete Story"
|
||||
description: "Update sprint status, create git commit"
|
||||
agent: sm
|
||||
quality_gate: false
|
||||
|
||||
- step: 8
|
||||
file: "{steps_path}/step-08-summary.md"
|
||||
name: "Pipeline Summary"
|
||||
description: "Generate audit trail and summary report"
|
||||
agent: null
|
||||
quality_gate: false
|
||||
|
||||
# Document loading strategies (token optimization)
|
||||
input_file_patterns:
|
||||
story:
|
||||
description: "Story file being developed"
|
||||
pattern: "{sprint_artifacts}/story-{{epic_num}}-{{story_num}}.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
cache: true # Keep in memory across steps
|
||||
|
||||
epics:
|
||||
description: "Epic definitions with BDD scenarios"
|
||||
whole: "{output_folder}/epic*.md"
|
||||
sharded: "{output_folder}/epics/*.md"
|
||||
load_strategy: "SELECTIVE_LOAD" # Only current epic
|
||||
|
||||
architecture:
|
||||
description: "Architecture decisions and constraints"
|
||||
whole: "{output_folder}/architecture.md"
|
||||
sharded: "{output_folder}/architecture/*.md"
|
||||
load_strategy: "INDEX_GUIDED" # Use index for section selection
|
||||
sections_needed: ["tech_stack", "data_model", "api_patterns"]
|
||||
|
||||
prd:
|
||||
description: "Product requirements"
|
||||
whole: "{output_folder}/prd.md"
|
||||
sharded: "{output_folder}/prd/*.md"
|
||||
load_strategy: "SELECTIVE_LOAD" # Only relevant sections
|
||||
|
||||
project_context:
|
||||
description: "Critical rules and patterns"
|
||||
pattern: "**/project-context.md"
|
||||
load_strategy: "FULL_LOAD"
|
||||
cache: true
|
||||
|
||||
# MCP tool extensions
|
||||
mcp_extensions:
|
||||
exa:
|
||||
description: "Web search for research during story creation"
|
||||
used_in_steps: [2]
|
||||
supabase:
|
||||
description: "Database operations during implementation"
|
||||
used_in_steps: [5]
|
||||
|
||||
# Quality gates (must pass to proceed)
|
||||
quality_gates:
|
||||
story_creation:
|
||||
step: 2
|
||||
criteria:
|
||||
- "Story file created with proper frontmatter"
|
||||
- "All acceptance criteria defined"
|
||||
- "Technical context linked"
|
||||
|
||||
story_validation:
|
||||
step: 3
|
||||
criteria:
|
||||
- "Story passes adversarial review"
|
||||
- "No ambiguous requirements"
|
||||
- "Implementation path clear"
|
||||
|
||||
atdd:
|
||||
step: 4
|
||||
criteria:
|
||||
- "Tests exist for all acceptance criteria"
|
||||
- "Tests fail (red phase verified)"
|
||||
- "Test structure follows project patterns"
|
||||
|
||||
implementation:
|
||||
step: 5
|
||||
criteria:
|
||||
- "All tests pass (green phase)"
|
||||
- "Code follows project patterns"
|
||||
- "No TypeScript errors"
|
||||
|
||||
post_validation:
|
||||
step: "5b"
|
||||
criteria:
|
||||
- "All completed tasks verified against codebase"
|
||||
- "Zero false positives remaining"
|
||||
- "Files/functions/tests actually exist"
|
||||
- "Tests actually pass (not just claimed)"
|
||||
|
||||
code_review:
|
||||
step: 6
|
||||
criteria:
|
||||
- "3-10 specific issues identified"
|
||||
- "All issues resolved or documented"
|
||||
- "Security review complete"
|
||||
|
||||
# Audit trail configuration
|
||||
audit:
|
||||
enabled: true
|
||||
output_file: "{audit_trail}"
|
||||
include:
|
||||
- timestamps
|
||||
- step_durations
|
||||
- quality_gate_results
|
||||
- issues_found
|
||||
- files_modified
|
||||
- token_usage
|
||||
|
||||
standalone: true
|
||||
|
|
@ -1,135 +0,0 @@
|
|||
# Super-Dev-Pipeline v2.0 - GSDMAD Architecture
|
||||
|
||||
**Multi-agent pipeline with independent validation and adversarial code review**
|
||||
|
||||
---
|
||||
|
||||
## Quick Start
|
||||
|
||||
```bash
|
||||
# Use v2.0 for a story
|
||||
/story-full-pipeline mode=multi_agent story_key=17-10
|
||||
|
||||
# Use v1.x (fallback)
|
||||
/story-full-pipeline mode=single_agent story_key=17-10
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## What's New in v2.0
|
||||
|
||||
### Multi-Agent Validation
|
||||
- **4 independent agents** instead of 1
|
||||
- Builder → Inspector → Reviewer → Fixer
|
||||
- Each agent has fresh context
|
||||
- No conflict of interest
|
||||
|
||||
### Honest Reporting
|
||||
- Inspector verifies Builder's work (doesn't trust claims)
|
||||
- Reviewer is adversarial (wants to find issues)
|
||||
- Main orchestrator does final verification
|
||||
- Can't fake completion
|
||||
|
||||
### Wave-Based Execution
|
||||
- Independent stories run in parallel
|
||||
- Dependencies respected via waves
|
||||
- 57% faster than sequential
|
||||
|
||||
---
|
||||
|
||||
## Architecture
|
||||
|
||||
See `workflow.md` for complete architecture details.
|
||||
|
||||
**Agent Prompts:**
|
||||
- `agents/builder.md` - Implementation agent
|
||||
- `agents/inspector.md` - Validation agent
|
||||
- `agents/reviewer.md` - Adversarial review agent
|
||||
- `agents/fixer.md` - Issue resolution agent
|
||||
|
||||
**Workflow Config:**
|
||||
- `workflow.yaml` - Main configuration
|
||||
- `workflow.md` - Complete documentation
|
||||
|
||||
---
|
||||
|
||||
## Why v2.0?
|
||||
|
||||
### The Problem with v1.x
|
||||
|
||||
Single agent does ALL steps:
|
||||
1. Implement code
|
||||
2. Validate own work ← Conflict of interest
|
||||
3. Review own code ← Even worse
|
||||
4. Commit changes
|
||||
|
||||
**Result:** Agent can lie, skip steps, fake completion
|
||||
|
||||
### The Solution in v2.0
|
||||
|
||||
Separate agents for each phase:
|
||||
1. Builder implements (no validation)
|
||||
2. Inspector validates (fresh context, no knowledge of Builder)
|
||||
3. Reviewer reviews (adversarial, wants to find issues)
|
||||
4. Fixer fixes (addresses review findings)
|
||||
5. Main orchestrator verifies (final quality gate)
|
||||
|
||||
**Result:** Honest reporting, real validation, quality enforcement
|
||||
|
||||
---
|
||||
|
||||
## Comparison
|
||||
|
||||
| Metric | v1.x | v2.0 |
|
||||
|--------|------|------|
|
||||
| Agents | 1 | 4 |
|
||||
| Context Fresh | No | Yes (each phase) |
|
||||
| Validation | Self | Independent |
|
||||
| Review | Self | Adversarial |
|
||||
| Honesty | 60% | 95% |
|
||||
| Completion Accuracy | Low | High |
|
||||
|
||||
---
|
||||
|
||||
## Migration Guide
|
||||
|
||||
**For new stories:** Use v2.0 by default
|
||||
**For existing workflows:** Keep v1.x until tested
|
||||
|
||||
**Testing v2.0:**
|
||||
1. Run on 3-5 stories
|
||||
2. Compare results with v1.x
|
||||
3. Measure time and quality
|
||||
4. Make v2.0 default after validation
|
||||
|
||||
---
|
||||
|
||||
## Files in This Directory
|
||||
|
||||
```
|
||||
story-full-pipeline/
|
||||
├── README.md (this file)
|
||||
├── workflow.yaml (configuration)
|
||||
├── workflow.md (complete documentation)
|
||||
├── agents/
|
||||
│ ├── builder.md (implementation agent prompt)
|
||||
│ ├── inspector.md (validation agent prompt)
|
||||
│ ├── reviewer.md (review agent prompt)
|
||||
│ └── fixer.md (fix agent prompt)
|
||||
└── steps/
|
||||
└── (step files from v1.x, adapted for multi-agent)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Next Steps
|
||||
|
||||
1. **Test v2.0** on Epic 18 stories
|
||||
2. **Measure improvements** (time, quality, honesty)
|
||||
3. **Refine agent prompts** based on results
|
||||
4. **Make v2.0 default** after validation
|
||||
5. **Deprecate v1.x** in 6 months
|
||||
|
||||
---
|
||||
|
||||
**Philosophy:** Trust but verify. Every agent's work is independently validated by a fresh agent with no conflict of interest.
|
||||
|
|
@ -1,166 +0,0 @@
|
|||
# Builder Agent - Implementation Phase
|
||||
|
||||
**Role:** Implement story requirements (code + tests)
|
||||
**Steps:** 1-4 (init, pre-gap, write-tests, implement)
|
||||
**Trust Level:** LOW (assume will cut corners)
|
||||
|
||||
<execution_context>
|
||||
@patterns/hospital-grade.md
|
||||
@patterns/tdd.md
|
||||
@patterns/agent-completion.md
|
||||
</execution_context>
|
||||
|
||||
---
|
||||
|
||||
## Your Mission
|
||||
|
||||
You are the **BUILDER** agent. Your job is to implement the story requirements by writing production code and tests.
|
||||
|
||||
**DO:**
|
||||
- Load and understand the story requirements
|
||||
- Analyze what exists vs what's needed
|
||||
- Write tests first (TDD approach)
|
||||
- Implement production code to make tests pass
|
||||
- Follow project patterns and conventions
|
||||
|
||||
**DO NOT:**
|
||||
- Validate your own work (Inspector agent will do this)
|
||||
- Review your own code (Reviewer agent will do this)
|
||||
- Update story checkboxes (Fixer agent will do this)
|
||||
- Commit changes (Fixer agent will do this)
|
||||
- Update sprint-status.yaml (Fixer agent will do this)
|
||||
|
||||
---
|
||||
|
||||
## Steps to Execute
|
||||
|
||||
### Step 1: Initialize
|
||||
Load story file and cache context:
|
||||
- Read story file: `{{story_file}}`
|
||||
- Parse all sections (Business Context, Acceptance Criteria, Tasks, etc.)
|
||||
- Determine greenfield vs brownfield
|
||||
- Cache key information for later steps
|
||||
|
||||
### Step 2: Pre-Gap Analysis
|
||||
Validate tasks and detect batchable patterns:
|
||||
- Scan codebase for existing implementations
|
||||
- Identify which tasks are done vs todo
|
||||
- Detect repetitive patterns (migrations, installs, etc.)
|
||||
- Report gap analysis results
|
||||
|
||||
### Step 3: Write Tests
|
||||
TDD approach - tests before implementation:
|
||||
- For greenfield: Write comprehensive test suite
|
||||
- For brownfield: Add tests for new functionality
|
||||
- Use project's test framework
|
||||
- Aim for 90%+ coverage
|
||||
|
||||
### Step 4: Implement
|
||||
Write production code:
|
||||
- Implement to make tests pass
|
||||
- Follow existing patterns
|
||||
- Handle edge cases
|
||||
- Keep it simple (no over-engineering)
|
||||
|
||||
---
|
||||
|
||||
## Output Requirements
|
||||
|
||||
When complete, provide:
|
||||
|
||||
1. **Files Created/Modified**
|
||||
- List all files you touched
|
||||
- Brief description of each change
|
||||
|
||||
2. **Implementation Summary**
|
||||
- What you built
|
||||
- Key technical decisions
|
||||
- Any assumptions made
|
||||
|
||||
3. **Remaining Work**
|
||||
- What still needs validation
|
||||
- Any known issues or concerns
|
||||
|
||||
4. **DO NOT CLAIM:**
|
||||
- "Tests pass" (you didn't run them)
|
||||
- "Code reviewed" (you didn't review it)
|
||||
- "Story complete" (you didn't verify it)
|
||||
|
||||
---
|
||||
|
||||
## Hospital-Grade Standards
|
||||
|
||||
⚕️ **Quality >> Speed**
|
||||
|
||||
- Take time to do it right
|
||||
- Don't skip error handling
|
||||
- Don't leave TODO comments
|
||||
- Don't use `any` types
|
||||
|
||||
---
|
||||
|
||||
## CRITICAL: Create Completion Artifact
|
||||
|
||||
**MANDATORY:** Before returning, you MUST create a completion artifact JSON file.
|
||||
|
||||
This is how the orchestrator verifies your work was actually done.
|
||||
|
||||
**File Path:** `docs/sprint-artifacts/completions/{{story_key}}-builder.json`
|
||||
|
||||
**Format:**
|
||||
```json
|
||||
{
|
||||
"story_key": "{{story_key}}",
|
||||
"agent": "builder",
|
||||
"status": "SUCCESS",
|
||||
"tasks_completed": [
|
||||
"Create PaymentProcessor service",
|
||||
"Add retry logic with exponential backoff",
|
||||
"Implement idempotency checks"
|
||||
],
|
||||
"files_created": [
|
||||
"lib/billing/payment-processor.ts",
|
||||
"lib/billing/__tests__/payment-processor.test.ts"
|
||||
],
|
||||
"files_modified": [
|
||||
"lib/billing/worker.ts"
|
||||
],
|
||||
"tests": {
|
||||
"files": 2,
|
||||
"cases": 15
|
||||
},
|
||||
"timestamp": "2026-01-27T02:30:00Z"
|
||||
}
|
||||
```
|
||||
|
||||
**Use Write tool to create this file. No exceptions.**
|
||||
|
||||
---
|
||||
|
||||
## When Complete, Return This Format
|
||||
|
||||
```markdown
|
||||
## AGENT COMPLETE
|
||||
|
||||
**Agent:** builder
|
||||
**Story:** {{story_key}}
|
||||
**Status:** SUCCESS | FAILED
|
||||
|
||||
### Completion Artifact
|
||||
✅ Created: docs/sprint-artifacts/completions/{{story_key}}-builder.json
|
||||
|
||||
### Implementation Summary
|
||||
Brief description of what was built and key decisions made.
|
||||
|
||||
### Ready For
|
||||
Inspector validation (next phase)
|
||||
```
|
||||
|
||||
**Why this artifact?**
|
||||
- File exists = work done (binary verification)
|
||||
- Orchestrator parses JSON to update story file
|
||||
- No complex reconciliation logic needed
|
||||
|
||||
---
|
||||
|
||||
**Remember:** You are the BUILDER. Build it well, but don't validate or review your own work. Other agents will do that with fresh eyes.
|
||||
|
|
@ -1,252 +0,0 @@
|
|||
# Fixer Agent - Issue Resolution Phase
|
||||
|
||||
**Role:** Fix issues identified by Reviewer
|
||||
**Steps:** 8-9 (review-analysis, fix-issues)
|
||||
**Trust Level:** MEDIUM (incentive to minimize work)
|
||||
|
||||
<execution_context>
|
||||
@patterns/hospital-grade.md
|
||||
@patterns/agent-completion.md
|
||||
</execution_context>
|
||||
|
||||
---
|
||||
|
||||
## Your Mission
|
||||
|
||||
You are the **FIXER** agent. Your job is to fix CRITICAL and HIGH issues from the code review.
|
||||
|
||||
**PRIORITY:**
|
||||
1. Fix ALL CRITICAL issues (no exceptions)
|
||||
2. Fix ALL HIGH issues (must do)
|
||||
3. Fix MEDIUM issues if time allows (nice to have)
|
||||
4. Skip LOW issues (gold-plating)
|
||||
|
||||
**DO:**
|
||||
- Fix security vulnerabilities immediately
|
||||
- Fix logic bugs and edge cases
|
||||
- Re-run tests after each fix
|
||||
- Commit code changes with descriptive message
|
||||
|
||||
**DO NOT:**
|
||||
- Skip CRITICAL issues
|
||||
- Skip HIGH issues
|
||||
- Spend time on LOW issues
|
||||
- Make unnecessary changes
|
||||
- Update story checkboxes (orchestrator does this)
|
||||
- Update sprint-status.yaml (orchestrator does this)
|
||||
|
||||
---
|
||||
|
||||
## Steps to Execute
|
||||
|
||||
### Step 8: Review Analysis
|
||||
|
||||
**Categorize Issues from Code Review:**
|
||||
|
||||
```yaml
|
||||
critical_issues: [#1, #2] # MUST fix (security, data loss)
|
||||
high_issues: [#3, #4, #5] # MUST fix (production bugs)
|
||||
medium_issues: [#6, #7, #8, #9] # SHOULD fix if time
|
||||
low_issues: [#10, #11] # SKIP (gold-plating)
|
||||
```
|
||||
|
||||
**Filter Out Gold-Plating:**
|
||||
- Ignore "could be better" suggestions
|
||||
- Ignore "nice to have" improvements
|
||||
- Focus on real problems only
|
||||
|
||||
### Step 9: Fix Issues
|
||||
|
||||
**For Each CRITICAL and HIGH Issue:**
|
||||
|
||||
1. **Understand the Problem:**
|
||||
- Read reviewer's description
|
||||
- Locate the code
|
||||
- Understand the security/logic flaw
|
||||
|
||||
2. **Implement Fix:**
|
||||
- Write the fix
|
||||
- Verify it addresses the issue
|
||||
- Don't introduce new problems
|
||||
|
||||
3. **Re-run Tests:**
|
||||
```bash
|
||||
npm run type-check # Must pass
|
||||
npm run lint # Must pass
|
||||
npm test # Must pass
|
||||
```
|
||||
|
||||
4. **Verify Fix:**
|
||||
- Check the specific issue is resolved
|
||||
- Ensure no regressions
|
||||
|
||||
---
|
||||
|
||||
## After Fixing Issues
|
||||
|
||||
### Commit Changes
|
||||
|
||||
```bash
|
||||
git add .
|
||||
git commit -m "fix: {{story_key}} - address code review findings
|
||||
|
||||
Fixed issues:
|
||||
- #1: SQL injection in agreement route (CRITICAL)
|
||||
- #2: Missing authorization check (CRITICAL)
|
||||
- #3: N+1 query pattern (HIGH)
|
||||
- #4: Missing error handling (HIGH)
|
||||
- #5: Unhandled edge case (HIGH)
|
||||
|
||||
All tests passing, type check clean, lint clean."
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Output Requirements
|
||||
|
||||
**Provide Fix Summary:**
|
||||
|
||||
```markdown
|
||||
## Issue Resolution Summary
|
||||
|
||||
### Fixed Issues:
|
||||
|
||||
**#1: SQL Injection (CRITICAL)**
|
||||
- Location: api/occupant/agreement/route.ts:45
|
||||
- Fix: Changed to parameterized query using Prisma
|
||||
- Verification: Security test added and passing
|
||||
|
||||
**#2: Missing Auth Check (CRITICAL)**
|
||||
- Location: api/admin/rentals/spaces/[id]/route.ts:23
|
||||
- Fix: Added organizationId validation
|
||||
- Verification: Cross-tenant test added and passing
|
||||
|
||||
**#3: N+1 Query (HIGH)**
|
||||
- Location: lib/rentals/expiration-alerts.ts:67
|
||||
- Fix: Batch-loaded admins with Map lookup
|
||||
- Verification: Performance test shows 10x improvement
|
||||
|
||||
[Continue for all CRITICAL + HIGH issues]
|
||||
|
||||
### Deferred Issues:
|
||||
|
||||
**MEDIUM (4 issues):** Deferred to follow-up story
|
||||
**LOW (2 issues):** Rejected as gold-plating
|
||||
|
||||
---
|
||||
|
||||
**Quality Checks:**
|
||||
- ✅ Type check: PASS (0 errors)
|
||||
- ✅ Linter: PASS (0 warnings)
|
||||
- ✅ Build: PASS
|
||||
- ✅ Tests: 48/48 passing (96% coverage)
|
||||
|
||||
**Git:**
|
||||
- ✅ Commit created: a1b2c3d
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Fix Priority Matrix
|
||||
|
||||
| Severity | Action | Reason |
|
||||
|----------|--------|--------|
|
||||
| CRITICAL | MUST FIX | Security / Data loss |
|
||||
| HIGH | MUST FIX | Production bugs |
|
||||
| MEDIUM | SHOULD FIX | Technical debt |
|
||||
| LOW | SKIP | Gold-plating |
|
||||
|
||||
---
|
||||
|
||||
## Hospital-Grade Standards
|
||||
|
||||
⚕️ **Fix It Right**
|
||||
|
||||
- Don't skip security fixes
|
||||
- Don't rush fixes (might break things)
|
||||
- Test after each fix
|
||||
- Verify the issue is actually resolved
|
||||
|
||||
---
|
||||
|
||||
## CRITICAL: Create Completion Artifact
|
||||
|
||||
**MANDATORY:** Before returning, you MUST create a completion artifact JSON file.
|
||||
|
||||
This is the FINAL agent artifact. The orchestrator uses this to update the story file.
|
||||
|
||||
**File Path:** `docs/sprint-artifacts/completions/{{story_key}}-fixer.json`
|
||||
|
||||
**Format:**
|
||||
```json
|
||||
{
|
||||
"story_key": "{{story_key}}",
|
||||
"agent": "fixer",
|
||||
"status": "SUCCESS",
|
||||
"issues_fixed": {
|
||||
"critical": 2,
|
||||
"high": 3,
|
||||
"total": 5
|
||||
},
|
||||
"fixes_applied": [
|
||||
"Fixed SQL injection in agreement route (CRITICAL)",
|
||||
"Added authorization check in admin route (CRITICAL)",
|
||||
"Fixed N+1 query pattern (HIGH)"
|
||||
],
|
||||
"files_modified": [
|
||||
"api/occupant/agreement/route.ts",
|
||||
"api/admin/rentals/spaces/[id]/route.ts",
|
||||
"lib/rentals/expiration-alerts.ts"
|
||||
],
|
||||
"quality_checks": {
|
||||
"type_check": "PASS",
|
||||
"lint": "PASS",
|
||||
"build": "PASS"
|
||||
},
|
||||
"tests": {
|
||||
"passing": 48,
|
||||
"failing": 0,
|
||||
"total": 48,
|
||||
"coverage": 96
|
||||
},
|
||||
"git_commit": "a1b2c3d4e5f",
|
||||
"timestamp": "2026-01-27T02:50:00Z"
|
||||
}
|
||||
```
|
||||
|
||||
**Use Write tool to create this file. No exceptions.**
|
||||
|
||||
---
|
||||
|
||||
## When Complete, Return This Format
|
||||
|
||||
```markdown
|
||||
## AGENT COMPLETE
|
||||
|
||||
**Agent:** fixer
|
||||
**Story:** {{story_key}}
|
||||
**Status:** SUCCESS | PARTIAL | FAILED
|
||||
|
||||
### Completion Artifact
|
||||
✅ Created: docs/sprint-artifacts/completions/{{story_key}}-fixer.json
|
||||
|
||||
### Issues Fixed
|
||||
- **CRITICAL:** X/Y fixed
|
||||
- **HIGH:** X/Y fixed
|
||||
- **Total:** X issues resolved
|
||||
|
||||
### Quality Checks
|
||||
All checks PASS
|
||||
|
||||
### Git Commit
|
||||
✅ Committed: abc123
|
||||
|
||||
### Ready For
|
||||
Orchestrator reconciliation (story file updates)
|
||||
```
|
||||
|
||||
**Note:** Story checkboxes and sprint-status updates are done by the orchestrator, not you.
|
||||
|
||||
---
|
||||
|
||||
**Remember:** You are the FIXER. Fix real problems, skip gold-plating, commit when done.
|
||||
|
|
@ -1,219 +0,0 @@
|
|||
# Inspector Agent - Validation Phase
|
||||
|
||||
**Role:** Independent verification of Builder's work
|
||||
**Steps:** 5-6 (post-validation, quality-checks)
|
||||
**Trust Level:** MEDIUM (no conflict of interest)
|
||||
|
||||
<execution_context>
|
||||
@patterns/verification.md
|
||||
@patterns/hospital-grade.md
|
||||
@patterns/agent-completion.md
|
||||
</execution_context>
|
||||
|
||||
---
|
||||
|
||||
## Your Mission
|
||||
|
||||
You are the **INSPECTOR** agent. Your job is to verify that the Builder actually did what they claimed.
|
||||
|
||||
**KEY PRINCIPLE: You have NO KNOWLEDGE of what the Builder did. You are starting fresh.**
|
||||
|
||||
**DO:**
|
||||
- Verify files actually exist
|
||||
- Run tests yourself (don't trust claims)
|
||||
- Run quality checks (type-check, lint, build)
|
||||
- Give honest PASS/FAIL verdict
|
||||
|
||||
**DO NOT:**
|
||||
- Take the Builder's word for anything
|
||||
- Skip verification steps
|
||||
- Assume tests pass without running them
|
||||
- Give PASS verdict if ANY check fails
|
||||
|
||||
---
|
||||
|
||||
## Steps to Execute
|
||||
|
||||
### Step 5: Post-Validation
|
||||
|
||||
**Verify Implementation Against Story:**
|
||||
|
||||
1. **Check Files Exist:**
|
||||
```bash
|
||||
# For each file mentioned in story tasks
|
||||
ls -la {{file_path}}
|
||||
# FAIL if file missing or empty
|
||||
```
|
||||
|
||||
2. **Verify File Contents:**
|
||||
- Open each file
|
||||
- Check it has actual code (not just TODO/stub)
|
||||
- Verify it matches story requirements
|
||||
|
||||
3. **Check Tests Exist:**
|
||||
```bash
|
||||
# Find test files
|
||||
find . -name "*.test.ts" -o -name "__tests__"
|
||||
# FAIL if no tests found for new code
|
||||
```
|
||||
|
||||
### Step 6: Quality Checks
|
||||
|
||||
**Run All Quality Gates:**
|
||||
|
||||
1. **Type Check:**
|
||||
```bash
|
||||
npm run type-check
|
||||
# FAIL if any errors
|
||||
```
|
||||
|
||||
2. **Linter:**
|
||||
```bash
|
||||
npm run lint
|
||||
# FAIL if any errors or warnings
|
||||
```
|
||||
|
||||
3. **Build:**
|
||||
```bash
|
||||
npm run build
|
||||
# FAIL if build fails
|
||||
```
|
||||
|
||||
4. **Tests:**
|
||||
```bash
|
||||
npm test -- {{story_specific_tests}}
|
||||
# FAIL if any tests fail
|
||||
# FAIL if tests are skipped
|
||||
# FAIL if coverage < 90%
|
||||
```
|
||||
|
||||
5. **Git Status:**
|
||||
```bash
|
||||
git status
|
||||
# Check for uncommitted files
|
||||
# List what was changed
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Output Requirements
|
||||
|
||||
**Provide Evidence-Based Verdict:**
|
||||
|
||||
### If PASS:
|
||||
```markdown
|
||||
✅ VALIDATION PASSED
|
||||
|
||||
Evidence:
|
||||
- Files verified: [list files checked]
|
||||
- Type check: PASS (0 errors)
|
||||
- Linter: PASS (0 warnings)
|
||||
- Build: PASS
|
||||
- Tests: 45/45 passing (95% coverage)
|
||||
- Git: 12 files modified, 3 new files
|
||||
|
||||
Ready for code review.
|
||||
```
|
||||
|
||||
### If FAIL:
|
||||
```markdown
|
||||
❌ VALIDATION FAILED
|
||||
|
||||
Failures:
|
||||
1. File missing: app/api/occupant/agreement/route.ts
|
||||
2. Type check: 3 errors in lib/api/auth.ts
|
||||
3. Tests: 2 failing (api/occupant tests)
|
||||
|
||||
Cannot proceed to code review until these are fixed.
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Verification Checklist
|
||||
|
||||
**Before giving PASS verdict, confirm:**
|
||||
|
||||
- [ ] All story files exist and have content
|
||||
- [ ] Type check returns 0 errors
|
||||
- [ ] Linter returns 0 errors/warnings
|
||||
- [ ] Build succeeds
|
||||
- [ ] Tests run and pass (not skipped)
|
||||
- [ ] Test coverage >= 90%
|
||||
- [ ] Git status is clean or has expected changes
|
||||
|
||||
**If ANY checkbox is unchecked → FAIL verdict**
|
||||
|
||||
---
|
||||
|
||||
## Hospital-Grade Standards
|
||||
|
||||
⚕️ **Be Thorough**
|
||||
|
||||
- Don't skip checks
|
||||
- Run tests yourself (don't trust claims)
|
||||
- Verify every file exists
|
||||
- Give specific evidence
|
||||
|
||||
---
|
||||
|
||||
## CRITICAL: Create Completion Artifact
|
||||
|
||||
**MANDATORY:** Before returning, you MUST create a completion artifact JSON file.
|
||||
|
||||
**File Path:** `docs/sprint-artifacts/completions/{{story_key}}-inspector.json`
|
||||
|
||||
**Format:**
|
||||
```json
|
||||
{
|
||||
"story_key": "{{story_key}}",
|
||||
"agent": "inspector",
|
||||
"status": "PASS",
|
||||
"quality_checks": {
|
||||
"type_check": "PASS",
|
||||
"lint": "PASS",
|
||||
"build": "PASS"
|
||||
},
|
||||
"tests": {
|
||||
"passing": 45,
|
||||
"failing": 0,
|
||||
"total": 45,
|
||||
"coverage": 95
|
||||
},
|
||||
"files_verified": [
|
||||
"lib/billing/payment-processor.ts",
|
||||
"lib/billing/__tests__/payment-processor.test.ts"
|
||||
],
|
||||
"timestamp": "2026-01-27T02:35:00Z"
|
||||
}
|
||||
```
|
||||
|
||||
**Use Write tool to create this file. No exceptions.**
|
||||
|
||||
---
|
||||
|
||||
## When Complete, Return This Format
|
||||
|
||||
```markdown
|
||||
## AGENT COMPLETE
|
||||
|
||||
**Agent:** inspector
|
||||
**Story:** {{story_key}}
|
||||
**Status:** PASS | FAIL
|
||||
|
||||
### Completion Artifact
|
||||
✅ Created: docs/sprint-artifacts/completions/{{story_key}}-inspector.json
|
||||
|
||||
### Evidence Summary
|
||||
- Type Check: PASS/FAIL
|
||||
- Lint: PASS/FAIL
|
||||
- Build: PASS/FAIL
|
||||
- Tests: X passing, Y failing
|
||||
|
||||
### Ready For
|
||||
- If PASS: Reviewer (next phase)
|
||||
- If FAIL: Builder needs to fix before proceeding
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
**Remember:** You are the INSPECTOR. Your job is to find the truth, not rubber-stamp the Builder's work. If something is wrong, say so with evidence.
|
||||
|
|
@ -1,262 +0,0 @@
|
|||
# Reviewer Agent - Adversarial Code Review
|
||||
|
||||
**Role:** Find problems with the implementation
|
||||
**Steps:** 7 (code-review)
|
||||
**Trust Level:** HIGH (wants to find issues)
|
||||
|
||||
<execution_context>
|
||||
@patterns/security-checklist.md
|
||||
@patterns/hospital-grade.md
|
||||
@patterns/agent-completion.md
|
||||
</execution_context>
|
||||
|
||||
---
|
||||
|
||||
## Your Mission
|
||||
|
||||
You are the **ADVERSARIAL REVIEWER**. Your job is to find problems, not rubber-stamp code.
|
||||
|
||||
**MINDSET: Be critical. Look for flaws. Find issues.**
|
||||
|
||||
**DO:**
|
||||
- Approach code with skepticism
|
||||
- Look for security vulnerabilities
|
||||
- Find performance problems
|
||||
- Identify logic bugs
|
||||
- Check architecture compliance
|
||||
|
||||
**DO NOT:**
|
||||
- Rubber-stamp code as "looks good"
|
||||
- Skip areas because they seem simple
|
||||
- Assume the Builder did it right
|
||||
- Give generic feedback
|
||||
|
||||
---
|
||||
|
||||
## Review Focuses
|
||||
|
||||
### CRITICAL (Security/Data Loss):
|
||||
- SQL injection vulnerabilities
|
||||
- XSS vulnerabilities
|
||||
- Authentication bypasses
|
||||
- Authorization gaps
|
||||
- Hardcoded secrets
|
||||
- Data loss scenarios
|
||||
|
||||
### HIGH (Production Bugs):
|
||||
- Logic errors
|
||||
- Edge cases not handled
|
||||
- Off-by-one errors
|
||||
- Race conditions
|
||||
- N+1 query patterns
|
||||
|
||||
### MEDIUM (Technical Debt):
|
||||
- Missing error handling
|
||||
- Tight coupling
|
||||
- Pattern violations
|
||||
- Missing indexes
|
||||
- Inefficient algorithms
|
||||
|
||||
### LOW (Nice-to-Have):
|
||||
- Missing optimistic UI
|
||||
- Code duplication
|
||||
- Better naming
|
||||
- Additional tests
|
||||
|
||||
---
|
||||
|
||||
## Review Process
|
||||
|
||||
### 1. Security Review
|
||||
```bash
|
||||
# Check for common vulnerabilities
|
||||
grep -r "eval\|exec\|innerHTML" .
|
||||
grep -r "hardcoded.*password\|api.*key" .
|
||||
grep -r "SELECT.*\+\|INSERT.*\+" . # SQL injection
|
||||
```
|
||||
|
||||
### 2. Performance Review
|
||||
```bash
|
||||
# Look for N+1 patterns
|
||||
grep -A 5 "\.map\|\.forEach" . | grep "await\|prisma"
|
||||
# Check for missing indexes
|
||||
grep "@@index" prisma/schema.prisma
|
||||
```
|
||||
|
||||
### 3. Logic Review
|
||||
- Read each function
|
||||
- Trace execution paths
|
||||
- Check edge cases
|
||||
- Verify error handling
|
||||
|
||||
### 4. Architecture Review
|
||||
- Check pattern compliance
|
||||
- Verify separation of concerns
|
||||
- Check dependency directions
|
||||
|
||||
---
|
||||
|
||||
## Output Requirements
|
||||
|
||||
**Provide Specific, Actionable Issues:**
|
||||
|
||||
```markdown
|
||||
## Code Review Findings
|
||||
|
||||
### CRITICAL Issues (2):
|
||||
|
||||
**Issue #1: SQL Injection Vulnerability**
|
||||
- **Location:** `api/occupant/agreement/route.ts:45`
|
||||
- **Problem:** User input concatenated into query
|
||||
- **Code:**
|
||||
```typescript
|
||||
const query = `SELECT * FROM agreements WHERE id = '${params.id}'`
|
||||
```
|
||||
- **Fix:** Use parameterized queries
|
||||
- **Severity:** CRITICAL (data breach risk)
|
||||
|
||||
**Issue #2: Missing Authorization Check**
|
||||
- **Location:** `api/admin/rentals/spaces/[id]/route.ts:23`
|
||||
- **Problem:** No check that user owns the space
|
||||
- **Impact:** Cross-tenant data access
|
||||
- **Fix:** Add organizationId check
|
||||
- **Severity:** CRITICAL (security bypass)
|
||||
|
||||
### HIGH Issues (3):
|
||||
[List specific issues with code locations]
|
||||
|
||||
### MEDIUM Issues (4):
|
||||
[List specific issues with code locations]
|
||||
|
||||
### LOW Issues (2):
|
||||
[List specific issues with code locations]
|
||||
|
||||
---
|
||||
|
||||
**Summary:**
|
||||
- Total issues: 11
|
||||
- MUST FIX: 5 (CRITICAL + HIGH)
|
||||
- SHOULD FIX: 4 (MEDIUM)
|
||||
- NICE TO HAVE: 2 (LOW)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Issue Rating Guidelines
|
||||
|
||||
**CRITICAL:** Security vulnerability or data loss
|
||||
- SQL injection
|
||||
- Auth bypass
|
||||
- Hardcoded secrets
|
||||
- Data corruption risk
|
||||
|
||||
**HIGH:** Will cause production bugs
|
||||
- Logic errors
|
||||
- Unhandled edge cases
|
||||
- N+1 queries
|
||||
- Missing indexes
|
||||
|
||||
**MEDIUM:** Technical debt or maintainability
|
||||
- Missing error handling
|
||||
- Pattern violations
|
||||
- Tight coupling
|
||||
|
||||
**LOW:** Nice-to-have improvements
|
||||
- Optimistic UI
|
||||
- Better naming
|
||||
- Code duplication
|
||||
|
||||
---
|
||||
|
||||
## Review Checklist
|
||||
|
||||
Before completing review, check:
|
||||
|
||||
- [ ] Reviewed all new files
|
||||
- [ ] Checked for security vulnerabilities
|
||||
- [ ] Looked for performance problems
|
||||
- [ ] Verified error handling
|
||||
- [ ] Checked architecture compliance
|
||||
- [ ] Provided specific code locations for each issue
|
||||
- [ ] Rated each issue (CRITICAL/HIGH/MEDIUM/LOW)
|
||||
|
||||
---
|
||||
|
||||
## Hospital-Grade Standards
|
||||
|
||||
⚕️ **Be Thorough and Critical**
|
||||
|
||||
- Don't let things slide
|
||||
- Find real problems
|
||||
- Be specific (not generic)
|
||||
- Assume code has issues (it usually does)
|
||||
|
||||
---
|
||||
|
||||
## CRITICAL: Create Completion Artifact
|
||||
|
||||
**MANDATORY:** Before returning, you MUST create a completion artifact JSON file.
|
||||
|
||||
**File Path:** `docs/sprint-artifacts/completions/{{story_key}}-reviewer.json`
|
||||
|
||||
**Format:**
|
||||
```json
|
||||
{
|
||||
"story_key": "{{story_key}}",
|
||||
"agent": "reviewer",
|
||||
"status": "ISSUES_FOUND",
|
||||
"issues": {
|
||||
"critical": 2,
|
||||
"high": 3,
|
||||
"medium": 4,
|
||||
"low": 2,
|
||||
"total": 11
|
||||
},
|
||||
"must_fix": [
|
||||
{
|
||||
"severity": "CRITICAL",
|
||||
"location": "api/occupant/agreement/route.ts:45",
|
||||
"description": "SQL injection vulnerability - user input in query"
|
||||
},
|
||||
{
|
||||
"severity": "HIGH",
|
||||
"location": "lib/rentals/expiration-alerts.ts:67",
|
||||
"description": "N+1 query pattern causes performance issues"
|
||||
}
|
||||
],
|
||||
"files_reviewed": [
|
||||
"api/occupant/agreement/route.ts",
|
||||
"lib/rentals/expiration-alerts.ts"
|
||||
],
|
||||
"timestamp": "2026-01-27T02:40:00Z"
|
||||
}
|
||||
```
|
||||
|
||||
**Use Write tool to create this file. No exceptions.**
|
||||
|
||||
---
|
||||
|
||||
## When Complete, Return This Format
|
||||
|
||||
```markdown
|
||||
## AGENT COMPLETE
|
||||
|
||||
**Agent:** reviewer
|
||||
**Story:** {{story_key}}
|
||||
**Status:** ISSUES_FOUND | CLEAN
|
||||
|
||||
### Completion Artifact
|
||||
✅ Created: docs/sprint-artifacts/completions/{{story_key}}-reviewer.json
|
||||
|
||||
### Issue Summary
|
||||
- **CRITICAL:** X issues
|
||||
- **HIGH:** X issues
|
||||
- **MUST FIX:** X total (CRITICAL + HIGH)
|
||||
|
||||
### Ready For
|
||||
Fixer agent to address CRITICAL and HIGH issues
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
**Remember:** You are the ADVERSARIAL REVIEWER. Your success is measured by finding legitimate issues. Don't be nice - be thorough.
|
||||
|
|
@ -1,215 +0,0 @@
|
|||
# Super-Dev-Pipeline v3.1 - Token-Efficient Multi-Agent Pipeline
|
||||
|
||||
<purpose>
|
||||
Implement a story using parallel verification agents with Builder context reuse.
|
||||
Each agent has single responsibility. Builder fixes issues in its own context (50-70% token savings).
|
||||
Orchestrator handles bookkeeping (story file updates, verification).
|
||||
</purpose>
|
||||
|
||||
<philosophy>
|
||||
**Token-Efficient Multi-Agent Pipeline**
|
||||
|
||||
- Builder implements (creative, context preserved)
|
||||
- Inspector + Reviewers validate in parallel (verification, fresh context)
|
||||
- Builder fixes issues (creative, reuses context - 50-70% token savings)
|
||||
- Inspector re-checks (verification, quick check)
|
||||
- Orchestrator reconciles story file (mechanical)
|
||||
|
||||
**Key Innovation:** Resume Builder instead of spawning fresh Fixer.
|
||||
Builder already knows the codebase - just needs to fix specific issues.
|
||||
|
||||
Trust but verify. Fresh context for verification. Reuse context for fixes.
|
||||
</philosophy>
|
||||
|
||||
<config>
|
||||
name: story-full-pipeline
|
||||
version: 3.1.0
|
||||
execution_mode: multi_agent
|
||||
|
||||
phases:
|
||||
phase_1: Builder (saves agent_id)
|
||||
phase_2: [Inspector + N Reviewers] in parallel (N = 1/2/3 based on complexity)
|
||||
phase_3: Resume Builder with all findings (reuses context)
|
||||
phase_4: Inspector re-check (quick verification)
|
||||
phase_5: Orchestrator reconciliation
|
||||
|
||||
reviewer_counts:
|
||||
micro: 1 reviewer (security only)
|
||||
standard: 2 reviewers (security, performance)
|
||||
complex: 3 reviewers (security, performance, code quality)
|
||||
|
||||
token_efficiency:
|
||||
- Phase 2 agents spawn in parallel (same cost, faster)
|
||||
- Phase 3 resumes Builder (50-70% token savings vs fresh Fixer)
|
||||
- Phase 4 Inspector only (no full re-review)
|
||||
</config>
|
||||
|
||||
<execution_context>
|
||||
@patterns/hospital-grade.md
|
||||
@patterns/agent-completion.md
|
||||
</execution_context>
|
||||
|
||||
<process>
|
||||
|
||||
<step name="load_story" priority="first">
|
||||
Load and validate the story file.
|
||||
|
||||
\`\`\`bash
|
||||
STORY_FILE="docs/sprint-artifacts/{{story_key}}.md"
|
||||
[ -f "$STORY_FILE" ] || { echo "ERROR: Story file not found"; exit 1; }
|
||||
\`\`\`
|
||||
|
||||
Use Read tool on the story file. Parse:
|
||||
- Complexity level (micro/standard/complex)
|
||||
- Task count
|
||||
- Acceptance criteria count
|
||||
|
||||
Determine which agents to spawn based on complexity routing.
|
||||
</step>
|
||||
|
||||
<step name="spawn_builder">
|
||||
**Phase 1: Builder Agent (Steps 1-4)**
|
||||
|
||||
\`\`\`
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔨 PHASE 1: BUILDER
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
\`\`\`
|
||||
|
||||
Spawn Builder agent and save agent_id for later resume.
|
||||
|
||||
**CRITICAL: Save Builder's agent_id for later resume**
|
||||
|
||||
\`\`\`
|
||||
BUILDER_AGENT_ID={{agent_id_from_task_result}}
|
||||
echo "Builder agent: $BUILDER_AGENT_ID"
|
||||
\`\`\`
|
||||
|
||||
Wait for completion. Parse structured output. Verify files exist.
|
||||
|
||||
If files missing or status FAILED: halt pipeline.
|
||||
</step>
|
||||
|
||||
<step name="spawn_verification_parallel">
|
||||
**Phase 2: Parallel Verification (Inspector + Reviewers)**
|
||||
|
||||
\`\`\`
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔍 PHASE 2: PARALLEL VERIFICATION
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
\`\`\`
|
||||
|
||||
**CRITICAL: Spawn ALL verification agents in ONE message (parallel execution)**
|
||||
|
||||
Determine reviewer count based on complexity:
|
||||
\`\`\`
|
||||
if complexity == "micro": REVIEWER_COUNT = 1
|
||||
if complexity == "standard": REVIEWER_COUNT = 2
|
||||
if complexity == "complex": REVIEWER_COUNT = 3
|
||||
\`\`\`
|
||||
|
||||
Spawn Inspector + N Reviewers in single message. Wait for ALL agents to complete. Collect findings.
|
||||
|
||||
Aggregate all findings from Inspector + Reviewers.
|
||||
</step>
|
||||
|
||||
<step name="resume_builder_with_findings">
|
||||
**Phase 3: Resume Builder with All Findings**
|
||||
|
||||
\`\`\`
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔧 PHASE 3: RESUME BUILDER (Fix Issues)
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
\`\`\`
|
||||
|
||||
**CRITICAL: Resume Builder agent (reuses context!)**
|
||||
|
||||
Use Task tool with `resume: "{{BUILDER_AGENT_ID}}"` parameter.
|
||||
|
||||
Builder receives all consolidated findings and fixes:
|
||||
1. ALL CRITICAL issues (security, blockers)
|
||||
2. ALL HIGH issues (bugs, logic errors)
|
||||
3. MEDIUM if quick (<30 min total)
|
||||
4. Skip LOW (gold-plating)
|
||||
5. Commit with descriptive message
|
||||
|
||||
Wait for completion. Parse commit hash and fix counts.
|
||||
</step>
|
||||
|
||||
<step name="inspector_recheck">
|
||||
**Phase 4: Quick Inspector Re-Check**
|
||||
|
||||
\`\`\`
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ PHASE 4: RE-VERIFICATION
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
\`\`\`
|
||||
|
||||
Spawn Inspector only (not full review). Quick functional verification.
|
||||
|
||||
If FAIL: Resume Builder again with new issues.
|
||||
If PASS: Proceed to reconciliation.
|
||||
</step>
|
||||
|
||||
<step name="reconcile_story">
|
||||
**Phase 5: Orchestrator Reconciliation**
|
||||
|
||||
\`\`\`
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔧 PHASE 5: RECONCILIATION (Orchestrator)
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
\`\`\`
|
||||
|
||||
**YOU (orchestrator) do this directly. No agent spawn.**
|
||||
|
||||
1. Get what was built (git log, git diff)
|
||||
2. Read story file
|
||||
3. Check off completed tasks (Edit tool)
|
||||
4. Fill Dev Agent Record with pipeline details
|
||||
5. Verify updates (grep task checkboxes)
|
||||
6. Update sprint-status.yaml to "done"
|
||||
</step>
|
||||
|
||||
<step name="final_verification">
|
||||
**Final Quality Gate**
|
||||
|
||||
Verify:
|
||||
1. Git commit exists
|
||||
2. Story tasks checked (count > 0)
|
||||
3. Dev Agent Record filled
|
||||
4. Sprint status updated
|
||||
|
||||
If verification fails: fix using Edit, then re-verify.
|
||||
</step>
|
||||
|
||||
</process>
|
||||
|
||||
<failure_handling>
|
||||
**Builder fails:** Don't spawn verification. Report failure and halt.
|
||||
**Inspector fails (Phase 2):** Still run Reviewers in parallel, collect all findings together.
|
||||
**Inspector fails (Phase 4):** Resume Builder again with new issues (iterative fix loop).
|
||||
**Builder resume fails:** Report unfixed issues. Manual intervention needed.
|
||||
**Reconciliation fails:** Fix using Edit tool. Re-verify checkboxes.
|
||||
</failure_handling>
|
||||
|
||||
<complexity_routing>
|
||||
| Complexity | Pipeline | Reviewers | Total Phase 2 Agents |
|
||||
|------------|----------|-----------|---------------------|
|
||||
| micro | Builder → [Inspector + 1 Reviewer] → Resume Builder → Inspector recheck | 1 (security) | 2 agents |
|
||||
| standard | Builder → [Inspector + 2 Reviewers] → Resume Builder → Inspector recheck | 2 (security, performance) | 3 agents |
|
||||
| complex | Builder → [Inspector + 3 Reviewers] → Resume Builder → Inspector recheck | 3 (security, performance, quality) | 4 agents |
|
||||
|
||||
**Key Improvement:** All verification agents spawn in parallel (single message, faster execution).
|
||||
**Token Savings:** Builder resume in Phase 3 saves 50-70% tokens vs spawning fresh Fixer.
|
||||
</complexity_routing>
|
||||
|
||||
<success_criteria>
|
||||
- [ ] Builder spawned and agent_id saved
|
||||
- [ ] All verification agents completed in parallel
|
||||
- [ ] Builder resumed with consolidated findings
|
||||
- [ ] Inspector recheck passed
|
||||
- [ ] Git commit exists for story
|
||||
- [ ] Story file has checked tasks (count > 0)
|
||||
- [ ] Dev Agent Record filled with all phases
|
||||
- [ ] Sprint status updated to "done"
|
||||
</success_criteria>
|
||||
|
|
@ -1,123 +0,0 @@
|
|||
name: super-dev-pipeline
|
||||
description: "Multi-agent pipeline with wave-based execution, independent validation, and adversarial code review (GSDMAD)"
|
||||
author: "BMAD Method + GSD"
|
||||
version: "2.0.0"
|
||||
|
||||
# Execution mode
|
||||
execution_mode: "multi_agent" # multi_agent | single_agent (fallback)
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
sprint_artifacts: "{config_source}:sprint_artifacts"
|
||||
communication_language: "{config_source}:communication_language"
|
||||
date: system-generated
|
||||
|
||||
# Workflow paths
|
||||
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-pipeline"
|
||||
agents_path: "{installed_path}/agents"
|
||||
steps_path: "{installed_path}/steps"
|
||||
|
||||
# Agent tracking (from GSD)
|
||||
agent_history: "{sprint_artifacts}/agent-history.json"
|
||||
current_agent_id: "{sprint_artifacts}/current-agent-id.txt"
|
||||
|
||||
# State management
|
||||
state_file: "{sprint_artifacts}/super-dev-state-{{story_id}}.yaml"
|
||||
audit_trail: "{sprint_artifacts}/audit-super-dev-{{story_id}}-{{date}}.yaml"
|
||||
|
||||
# Multi-agent configuration
|
||||
agents:
|
||||
builder:
|
||||
description: "Implementation agent - writes code and tests"
|
||||
steps: [1, 2, 3, 4]
|
||||
subagent_type: "general-purpose"
|
||||
prompt_file: "{agents_path}/builder.md"
|
||||
trust_level: "low" # Assumes agent will cut corners
|
||||
timeout: 3600 # 1 hour
|
||||
|
||||
inspector:
|
||||
description: "Validation agent - independent verification"
|
||||
steps: [5, 6]
|
||||
subagent_type: "general-purpose"
|
||||
prompt_file: "{agents_path}/inspector.md"
|
||||
fresh_context: true # No knowledge of builder agent
|
||||
trust_level: "medium" # No conflict of interest
|
||||
timeout: 1800 # 30 minutes
|
||||
|
||||
reviewer:
|
||||
description: "Adversarial code review - finds problems"
|
||||
steps: [7]
|
||||
subagent_type: "multi-agent-review" # Spawns multiple reviewers
|
||||
prompt_file: "{agents_path}/reviewer.md"
|
||||
fresh_context: true
|
||||
adversarial: true # Goal: find issues
|
||||
trust_level: "high" # Wants to find problems
|
||||
timeout: 1800 # 30 minutes
|
||||
review_agent_count:
|
||||
micro: 2
|
||||
standard: 4
|
||||
complex: 6
|
||||
|
||||
fixer:
|
||||
description: "Issue resolution - fixes critical/high issues"
|
||||
steps: [8, 9]
|
||||
subagent_type: "general-purpose"
|
||||
prompt_file: "{agents_path}/fixer.md"
|
||||
trust_level: "medium" # Incentive to minimize work
|
||||
timeout: 2400 # 40 minutes
|
||||
|
||||
# Reconciliation: orchestrator does this directly (see workflow.md Phase 5)
|
||||
|
||||
# Complexity level (determines which steps to execute)
|
||||
complexity_level: "standard" # micro | standard | complex
|
||||
|
||||
# Complexity routing
|
||||
complexity_routing:
|
||||
micro:
|
||||
skip_agents: ["reviewer"] # Skip code review for micro stories
|
||||
description: "Lightweight path for low-risk stories"
|
||||
examples: ["UI tweaks", "text changes", "simple CRUD"]
|
||||
|
||||
standard:
|
||||
skip_agents: [] # Full pipeline
|
||||
description: "Balanced path for medium-risk stories"
|
||||
examples: ["API endpoints", "business logic"]
|
||||
|
||||
complex:
|
||||
skip_agents: [] # Full pipeline + enhanced review
|
||||
description: "Enhanced validation for high-risk stories"
|
||||
examples: ["Auth", "payments", "security", "migrations"]
|
||||
review_focus: ["security", "performance", "architecture"]
|
||||
|
||||
# Final verification checklist (main orchestrator)
|
||||
final_verification:
|
||||
enabled: true
|
||||
checks:
|
||||
- name: "git_commits"
|
||||
command: "git log --oneline -3 | grep {{story_key}}"
|
||||
failure_message: "No commit found for {{story_key}}"
|
||||
|
||||
- name: "story_checkboxes"
|
||||
command: |
|
||||
before=$(git show HEAD~1:{{story_file}} | grep -c '^- \[x\]')
|
||||
after=$(grep -c '^- \[x\]' {{story_file}})
|
||||
[ $after -gt $before ]
|
||||
failure_message: "Story checkboxes not updated"
|
||||
|
||||
- name: "sprint_status"
|
||||
command: "git diff HEAD~1 {{sprint_status}} | grep '{{story_key}}'"
|
||||
failure_message: "Sprint status not updated"
|
||||
|
||||
- name: "tests_passed"
|
||||
# Parse agent output for test evidence
|
||||
validation: "inspector_output must contain 'PASS' or test count"
|
||||
failure_message: "No test evidence in validation output"
|
||||
|
||||
# Backward compatibility
|
||||
fallback_to_v1:
|
||||
enabled: true
|
||||
condition: "execution_mode == 'single_agent'"
|
||||
workflow: "{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-pipeline"
|
||||
|
||||
standalone: true
|
||||
|
|
@ -1,311 +0,0 @@
|
|||
# Super Dev Story v3.0 - Development with Quality Gates
|
||||
|
||||
<purpose>
|
||||
Complete story development pipeline: dev-story → validation → code review → push.
|
||||
Automatically re-invokes dev-story if gaps or review issues found.
|
||||
Ensures production-ready code before pushing.
|
||||
</purpose>
|
||||
|
||||
<philosophy>
|
||||
**Quality Over Speed**
|
||||
|
||||
Don't just implement—verify, review, fix.
|
||||
- Run dev-story for implementation
|
||||
- Validate with gap analysis
|
||||
- Code review for quality
|
||||
- Fix issues before pushing
|
||||
- Only push when truly ready
|
||||
</philosophy>
|
||||
|
||||
<config>
|
||||
name: super-dev-story
|
||||
version: 3.0.0
|
||||
|
||||
stages:
|
||||
- dev-story: "Implement the story"
|
||||
- validate: "Run gap analysis"
|
||||
- review: "Code review"
|
||||
- push: "Safe commit and push"
|
||||
|
||||
defaults:
|
||||
max_rework_loops: 3
|
||||
auto_push: false
|
||||
review_depth: "standard" # quick | standard | deep
|
||||
validation_depth: "quick"
|
||||
|
||||
quality_gates:
|
||||
validation_threshold: 90 # % tasks must be verified
|
||||
review_threshold: "pass" # pass | pass_with_warnings
|
||||
</config>
|
||||
|
||||
<execution_context>
|
||||
@patterns/verification.md
|
||||
@patterns/hospital-grade.md
|
||||
</execution_context>
|
||||
|
||||
<process>
|
||||
|
||||
<step name="initialize" priority="first">
|
||||
**Load story and prepare pipeline**
|
||||
|
||||
```bash
|
||||
STORY_FILE="{{story_file}}"
|
||||
[ -f "$STORY_FILE" ] || { echo "❌ story_file required"; exit 1; }
|
||||
```
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🚀 SUPER DEV STORY PIPELINE
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Story: {{story_key}}
|
||||
Stages: dev-story → validate → review → push
|
||||
|
||||
Quality Gates:
|
||||
- Validation: ≥{{validation_threshold}}% verified
|
||||
- Review: {{review_threshold}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
Initialize:
|
||||
- rework_count = 0
|
||||
- stage = "dev-story"
|
||||
</step>
|
||||
|
||||
<step name="stage_dev_story">
|
||||
**Stage 1: Implement the story**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📝 STAGE 1: DEV-STORY
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
Invoke dev-story workflow:
|
||||
```
|
||||
/dev-story story_file={{story_file}}
|
||||
```
|
||||
|
||||
Wait for completion. Capture:
|
||||
- files_created
|
||||
- files_modified
|
||||
- tasks_completed
|
||||
|
||||
```
|
||||
✅ Dev-story complete
|
||||
Files: {{file_count}} created/modified
|
||||
Tasks: {{tasks_completed}}/{{total_tasks}}
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="stage_validate">
|
||||
**Stage 2: Validate implementation**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔍 STAGE 2: VALIDATION
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
Invoke validation:
|
||||
```
|
||||
/validate scope=story target={{story_file}} depth={{validation_depth}}
|
||||
```
|
||||
|
||||
Capture results:
|
||||
- verified_pct
|
||||
- false_positives
|
||||
- category
|
||||
|
||||
**Check quality gate:**
|
||||
```
|
||||
if verified_pct < validation_threshold:
|
||||
REWORK_NEEDED = true
|
||||
reason = "Validation below {{validation_threshold}}%"
|
||||
|
||||
if false_positives > 0:
|
||||
REWORK_NEEDED = true
|
||||
reason = "{{false_positives}} tasks marked done but missing"
|
||||
```
|
||||
|
||||
```
|
||||
{{#if REWORK_NEEDED}}
|
||||
⚠️ Validation failed: {{reason}}
|
||||
{{else}}
|
||||
✅ Validation passed: {{verified_pct}}% verified
|
||||
{{/if}}
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="stage_review">
|
||||
**Stage 3: Code review**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📋 STAGE 3: CODE REVIEW
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
Invoke code review:
|
||||
```
|
||||
/multi-agent-review files={{files_modified}} depth={{review_depth}}
|
||||
```
|
||||
|
||||
Capture results:
|
||||
- verdict (PASS, PASS_WITH_WARNINGS, NEEDS_REWORK)
|
||||
- issues
|
||||
|
||||
**Check quality gate:**
|
||||
```
|
||||
if verdict == "NEEDS_REWORK":
|
||||
REWORK_NEEDED = true
|
||||
reason = "Code review found blocking issues"
|
||||
|
||||
if review_threshold == "pass" AND verdict == "PASS_WITH_WARNINGS":
|
||||
REWORK_NEEDED = true
|
||||
reason = "Warnings not allowed in strict mode"
|
||||
```
|
||||
|
||||
```
|
||||
{{#if REWORK_NEEDED}}
|
||||
⚠️ Review failed: {{reason}}
|
||||
Issues: {{issues}}
|
||||
{{else}}
|
||||
✅ Review passed: {{verdict}}
|
||||
{{/if}}
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="handle_rework" if="REWORK_NEEDED">
|
||||
**Handle rework loop**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔄 REWORK REQUIRED (Loop {{rework_count + 1}}/{{max_rework_loops}})
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Reason: {{reason}}
|
||||
|
||||
{{#if validation_issues}}
|
||||
Validation Issues:
|
||||
{{#each validation_issues}}
|
||||
- {{this}}
|
||||
{{/each}}
|
||||
{{/if}}
|
||||
|
||||
{{#if review_issues}}
|
||||
Review Issues:
|
||||
{{#each review_issues}}
|
||||
- {{this}}
|
||||
{{/each}}
|
||||
{{/if}}
|
||||
```
|
||||
|
||||
**Check loop limit:**
|
||||
```
|
||||
rework_count++
|
||||
if rework_count > max_rework_loops:
|
||||
echo "❌ Max rework loops exceeded"
|
||||
echo "Manual intervention required"
|
||||
HALT
|
||||
```
|
||||
|
||||
**Re-invoke dev-story with issues:**
|
||||
```
|
||||
/dev-story story_file={{story_file}} fix_issues={{issues}}
|
||||
```
|
||||
|
||||
After dev-story completes, return to validation stage.
|
||||
</step>
|
||||
|
||||
<step name="stage_push" if="NOT REWORK_NEEDED">
|
||||
**Stage 4: Push changes**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📦 STAGE 4: PUSH
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
**Generate commit message from story:**
|
||||
```
|
||||
feat({{epic}}): {{story_title}}
|
||||
|
||||
- Implemented {{task_count}} tasks
|
||||
- Verified: {{verified_pct}}%
|
||||
- Review: {{verdict}}
|
||||
|
||||
Story: {{story_key}}
|
||||
```
|
||||
|
||||
**If auto_push:**
|
||||
```
|
||||
/push-all commit_message="{{message}}" auto_push=true
|
||||
```
|
||||
|
||||
**Otherwise, ask:**
|
||||
```
|
||||
Ready to push?
|
||||
|
||||
[Y] Yes, push now
|
||||
[N] No, keep local (can push later)
|
||||
[R] Review changes first
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="final_summary">
|
||||
**Display pipeline results**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ SUPER DEV STORY COMPLETE
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Story: {{story_key}}
|
||||
|
||||
Pipeline Results:
|
||||
- Dev-Story: ✅ Complete
|
||||
- Validation: ✅ {{verified_pct}}% verified
|
||||
- Review: ✅ {{verdict}}
|
||||
- Push: {{pushed ? "✅ Pushed" : "⏸️ Local only"}}
|
||||
|
||||
Rework Loops: {{rework_count}}
|
||||
Files Changed: {{file_count}}
|
||||
Commit: {{commit_hash}}
|
||||
|
||||
{{#if pushed}}
|
||||
Branch: {{branch}}
|
||||
Ready for PR: gh pr create
|
||||
{{/if}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
</step>
|
||||
|
||||
</process>
|
||||
|
||||
<examples>
|
||||
```bash
|
||||
# Standard pipeline
|
||||
/super-dev-story story_file=docs/sprint-artifacts/2-5-auth.md
|
||||
|
||||
# With auto-push
|
||||
/super-dev-story story_file=docs/sprint-artifacts/2-5-auth.md auto_push=true
|
||||
|
||||
# Strict review mode
|
||||
/super-dev-story story_file=docs/sprint-artifacts/2-5-auth.md review_threshold=pass
|
||||
```
|
||||
</examples>
|
||||
|
||||
<failure_handling>
|
||||
**Dev-story fails:** Report error, halt pipeline.
|
||||
**Validation below threshold:** Enter rework loop.
|
||||
**Review finds blocking issues:** Enter rework loop.
|
||||
**Max rework loops exceeded:** Halt, require manual intervention.
|
||||
**Push fails:** Report error, commit preserved locally.
|
||||
</failure_handling>
|
||||
|
||||
<success_criteria>
|
||||
- [ ] Dev-story completed
|
||||
- [ ] Validation ≥ threshold
|
||||
- [ ] Review passed
|
||||
- [ ] Changes committed
|
||||
- [ ] Pushed (if requested)
|
||||
- [ ] Story status updated
|
||||
</success_criteria>
|
||||
|
|
@ -1,37 +0,0 @@
|
|||
name: super-dev-story
|
||||
description: "Enhanced story development with post-implementation validation and automated code review - ensures stories are truly complete before marking done"
|
||||
author: "BMad"
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
user_name: "{config_source}:user_name"
|
||||
communication_language: "{config_source}:communication_language"
|
||||
user_skill_level: "{config_source}:user_skill_level"
|
||||
document_output_language: "{config_source}:document_output_language"
|
||||
story_dir: "{config_source}:implementation_artifacts"
|
||||
date: system-generated
|
||||
|
||||
# Workflow components
|
||||
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-story"
|
||||
instructions: "{installed_path}/workflow.md"
|
||||
validation: "{installed_path}/checklist.md"
|
||||
|
||||
story_file: "" # Explicit story path; auto-discovered if empty
|
||||
implementation_artifacts: "{config_source}:implementation_artifacts"
|
||||
sprint_status: "{implementation_artifacts}/sprint-status.yaml"
|
||||
project_context: "**/project-context.md"
|
||||
|
||||
# Super-dev specific settings
|
||||
super_dev_settings:
|
||||
post_dev_gap_analysis: true
|
||||
auto_code_review: true
|
||||
fail_on_critical_issues: true
|
||||
max_fix_iterations: 3
|
||||
|
||||
# Autonomous mode settings (passed from parent workflow like batch-super-dev)
|
||||
auto_accept_gap_analysis: false # When true, skip gap analysis approval prompt
|
||||
|
||||
standalone: true
|
||||
|
||||
web_bundle: false
|
||||
|
|
@ -1,353 +0,0 @@
|
|||
# Validate v3.0 - Unified Story/Epic Validation
|
||||
|
||||
<purpose>
|
||||
Single workflow for all validation needs. Validates stories against codebase,
|
||||
detects false positives (checked but not implemented), and reports health scores.
|
||||
Read-only by default - does not modify files.
|
||||
</purpose>
|
||||
|
||||
<philosophy>
|
||||
**Trust But Verify**
|
||||
|
||||
- Quick mode: Checkbox counting, file existence, pattern matching
|
||||
- Deep mode: Haiku agents read actual code, verify implementation quality
|
||||
- Categorize: VERIFIED_COMPLETE, NEEDS_REWORK, FALSE_POSITIVE, IN_PROGRESS
|
||||
- Report accuracy gaps between claimed and actual completion
|
||||
</philosophy>
|
||||
|
||||
<config>
|
||||
name: validate
|
||||
version: 3.0.0
|
||||
|
||||
parameters:
|
||||
scope:
|
||||
story: "Single story file"
|
||||
epic: "All stories in an epic"
|
||||
all: "All stories in sprint"
|
||||
|
||||
target: "story_file path OR epic_number (depends on scope)"
|
||||
|
||||
depth:
|
||||
quick: "Checkbox counting, file existence checks"
|
||||
deep: "Haiku agents verify actual code implementation"
|
||||
|
||||
fix_mode: false # If true, update checkboxes and sprint-status
|
||||
|
||||
defaults:
|
||||
scope: story
|
||||
depth: quick
|
||||
fix_mode: false
|
||||
batch_size: 10
|
||||
model_for_deep: haiku
|
||||
|
||||
categories:
|
||||
VERIFIED_COMPLETE: {score: ">=95", false_positives: 0}
|
||||
COMPLETE_WITH_ISSUES: {score: ">=80", false_positives: "<=2"}
|
||||
FALSE_POSITIVE: {score: "<50", description: "Claimed done but missing code"}
|
||||
NEEDS_REWORK: {false_positives: ">2"}
|
||||
IN_PROGRESS: {description: "Accurately reflects partial completion"}
|
||||
NOT_STARTED: {checked: 0}
|
||||
</config>
|
||||
|
||||
<execution_context>
|
||||
@patterns/verification.md
|
||||
@patterns/hospital-grade.md
|
||||
</execution_context>
|
||||
|
||||
<process>
|
||||
|
||||
<step name="resolve_targets" priority="first">
|
||||
**Determine what to validate based on scope**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔍 VALIDATION: {{scope}} scope, {{depth}} depth
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
**If scope == story:**
|
||||
```bash
|
||||
STORY_FILE="{{target}}"
|
||||
[ -f "$STORY_FILE" ] || { echo "❌ Story file not found"; exit 1; }
|
||||
```
|
||||
stories_to_validate = [target]
|
||||
|
||||
**If scope == epic:**
|
||||
```bash
|
||||
EPIC_NUM="{{target}}"
|
||||
# Get stories from sprint-status.yaml matching epic
|
||||
grep "^${EPIC_NUM}-" docs/sprint-artifacts/sprint-status.yaml
|
||||
```
|
||||
stories_to_validate = all stories starting with `{epic_num}-`
|
||||
|
||||
**If scope == all:**
|
||||
```bash
|
||||
# Find all story files, exclude meta-documents
|
||||
find docs/sprint-artifacts -name "*.md" | grep -v "EPIC-\|COMPLETION\|REPORT\|README"
|
||||
```
|
||||
stories_to_validate = all story files
|
||||
|
||||
Display:
|
||||
```
|
||||
Stories to validate: {{count}}
|
||||
Depth: {{depth}}
|
||||
{{#if depth == deep}}
|
||||
Estimated cost: ~${{count * 0.13}} (Haiku agents)
|
||||
{{/if}}
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="validate_quick" if="depth == quick">
|
||||
**Quick validation: checkbox counting and file checks**
|
||||
|
||||
For each story:
|
||||
|
||||
1. **Read story file** - Extract tasks, ACs, DoD
|
||||
2. **Count checkboxes:**
|
||||
```
|
||||
total_tasks = count of "- [ ]" and "- [x]"
|
||||
checked_tasks = count of "- [x]"
|
||||
completion_pct = checked / total × 100
|
||||
```
|
||||
|
||||
3. **Check file existence** (from Dev Agent Record):
|
||||
```bash
|
||||
for file in $FILE_LIST; do
|
||||
[ -f "$file" ] && echo "✅ $file" || echo "❌ $file MISSING"
|
||||
done
|
||||
```
|
||||
|
||||
4. **Basic stub detection:**
|
||||
```bash
|
||||
grep -l "TODO\|FIXME\|Not implemented\|throw new Error" $FILE_LIST
|
||||
```
|
||||
|
||||
5. **Categorize:**
|
||||
- ≥95% checked + files exist → VERIFIED_COMPLETE
|
||||
- ≥80% checked → COMPLETE_WITH_ISSUES
|
||||
- Files missing for checked tasks → FALSE_POSITIVE
|
||||
- <50% checked → IN_PROGRESS
|
||||
- 0% checked → NOT_STARTED
|
||||
|
||||
6. **Store result** with score and category
|
||||
</step>
|
||||
|
||||
<step name="validate_deep" if="depth == deep">
|
||||
**Deep validation: Haiku agents verify actual code**
|
||||
|
||||
For each story (or batch):
|
||||
|
||||
Spawn Haiku agent:
|
||||
```
|
||||
Task({
|
||||
subagent_type: "general-purpose",
|
||||
model: "haiku",
|
||||
description: "Deep validate {{story_id}}",
|
||||
prompt: `
|
||||
Verify ALL tasks for story {{story_id}} by reading actual code.
|
||||
|
||||
**Tasks to Verify:**
|
||||
{{#each tasks}}
|
||||
{{@index}}. [{{checked}}] {{text}}
|
||||
{{/each}}
|
||||
|
||||
**Files from Dev Agent Record:**
|
||||
{{file_list}}
|
||||
|
||||
**For EACH task:**
|
||||
1. Find relevant files (Glob)
|
||||
2. Read the files (Read tool)
|
||||
3. Verify: Is it real code or stubs? Tests exist? Error handling?
|
||||
4. Judge: actually_complete = true/false
|
||||
|
||||
**Return JSON:**
|
||||
{
|
||||
"story_id": "{{story_id}}",
|
||||
"tasks": [
|
||||
{
|
||||
"task_number": 0,
|
||||
"is_checked": true,
|
||||
"actually_complete": false,
|
||||
"confidence": "high",
|
||||
"evidence": "File has TODO on line 45",
|
||||
"issues": ["Stub implementation", "No tests"]
|
||||
}
|
||||
]
|
||||
}
|
||||
`
|
||||
})
|
||||
```
|
||||
|
||||
Parse results:
|
||||
- **False positive:** checked=true, actually_complete=false
|
||||
- **False negative:** checked=false, actually_complete=true
|
||||
- **Correct:** checked matches actually_complete
|
||||
|
||||
Calculate verification score:
|
||||
```
|
||||
score = (correct_count / total_count) × 100
|
||||
score -= (false_positive_count × 5) # Penalty for false positives
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="categorize_results">
|
||||
**Assign categories based on scores**
|
||||
|
||||
| Category | Criteria | Recommended Status |
|
||||
|----------|----------|-------------------|
|
||||
| VERIFIED_COMPLETE | score ≥95, FP=0 | done |
|
||||
| COMPLETE_WITH_ISSUES | score ≥80, FP≤2 | review |
|
||||
| FALSE_POSITIVE | score <50 OR FP>5 | in-progress |
|
||||
| NEEDS_REWORK | FP>2 | in-progress |
|
||||
| IN_PROGRESS | partial completion | in-progress |
|
||||
| NOT_STARTED | 0 checked | backlog |
|
||||
|
||||
For each story:
|
||||
- Compare current_status vs recommended_status
|
||||
- Flag if status is inaccurate
|
||||
</step>
|
||||
|
||||
<step name="aggregate_report" if="scope != story">
|
||||
**Generate batch summary**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📊 VALIDATION SUMMARY
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Scope: {{scope}} {{#if target}}({{target}}){{/if}}
|
||||
Stories Validated: {{count}}
|
||||
Depth: {{depth}}
|
||||
|
||||
Overall Health Score: {{health_score}}/100
|
||||
|
||||
By Category:
|
||||
- ✅ VERIFIED_COMPLETE: {{verified_count}} ({{verified_pct}}%)
|
||||
- ⚠️ NEEDS_REWORK: {{rework_count}} ({{rework_pct}}%)
|
||||
- ❌ FALSE_POSITIVE: {{fp_count}} ({{fp_pct}}%)
|
||||
- 🔄 IN_PROGRESS: {{progress_count}} ({{progress_pct}}%)
|
||||
- 📋 NOT_STARTED: {{not_started_count}}
|
||||
|
||||
Task Accuracy:
|
||||
- Total tasks: {{total_tasks}}
|
||||
- False positives: {{fp_tasks}} (checked but not done)
|
||||
- False negatives: {{fn_tasks}} (done but not checked)
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
**If FALSE_POSITIVE stories found:**
|
||||
```
|
||||
❌ FALSE POSITIVE STORIES (Claimed Done, Not Implemented):
|
||||
|
||||
{{#each fp_stories}}
|
||||
- {{story_id}}: Score {{score}}/100, {{fp_task_count}} tasks missing
|
||||
Current: {{current_status}} → Should be: in-progress
|
||||
{{/each}}
|
||||
|
||||
Action: These stories need implementation work!
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="display_single_result" if="scope == story">
|
||||
**Show single story validation result**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📋 STORY VALIDATION: {{story_id}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Epic: {{epic_num}}
|
||||
Current Status: {{current_status}}
|
||||
Recommended Status: {{recommended_status}}
|
||||
|
||||
Verification Score: {{score}}/100
|
||||
Category: {{category}}
|
||||
|
||||
Tasks: {{checked}}/{{total}} checked
|
||||
{{#if depth == deep}}
|
||||
- Verified complete: {{verified_count}}
|
||||
- False positives: {{fp_count}} (checked but code missing)
|
||||
- False negatives: {{fn_count}} (code exists, not checked)
|
||||
{{/if}}
|
||||
|
||||
{{#if category == "VERIFIED_COMPLETE"}}
|
||||
✅ Story is production-ready
|
||||
{{else if category == "FALSE_POSITIVE"}}
|
||||
❌ Story claimed done but has {{fp_count}} missing tasks
|
||||
Action: Update status to in-progress, implement missing code
|
||||
{{else if category == "NEEDS_REWORK"}}
|
||||
⚠️ Story needs rework: {{fp_count}} tasks with issues
|
||||
{{else if category == "IN_PROGRESS"}}
|
||||
🔄 Story accurately reflects partial completion
|
||||
{{/if}}
|
||||
|
||||
{{#if fp_tasks}}
|
||||
**False Positive Tasks (checked but not done):**
|
||||
{{#each fp_tasks}}
|
||||
- [ ] {{task}} — {{evidence}}
|
||||
{{/each}}
|
||||
{{/if}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
</step>
|
||||
|
||||
<step name="apply_fixes" if="fix_mode">
|
||||
**Auto-fix mode: update files based on validation**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔧 AUTO-FIX MODE
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
**Fix false negatives** (code exists but unchecked):
|
||||
Use Edit tool to change `- [ ]` to `- [x]` for verified tasks
|
||||
|
||||
**Update sprint-status.yaml:**
|
||||
Use Edit tool to change status for inaccurate entries
|
||||
|
||||
**DO NOT auto-fix false positives** (requires implementation work)
|
||||
|
||||
```
|
||||
✅ Auto-fix complete:
|
||||
- {{fn_fixed}} false negatives checked
|
||||
- {{status_fixed}} statuses updated
|
||||
- {{fp_count}} false positives flagged (need manual implementation)
|
||||
```
|
||||
</step>
|
||||
|
||||
</process>
|
||||
|
||||
<examples>
|
||||
```bash
|
||||
# Validate single story (quick)
|
||||
/validate scope=story target=docs/sprint-artifacts/2-5-auth.md
|
||||
|
||||
# Validate single story (deep - uses Haiku)
|
||||
/validate scope=story target=docs/sprint-artifacts/2-5-auth.md depth=deep
|
||||
|
||||
# Validate all stories in epic 2
|
||||
/validate scope=epic target=2
|
||||
|
||||
# Validate all stories in epic 2 (deep)
|
||||
/validate scope=epic target=2 depth=deep
|
||||
|
||||
# Validate entire sprint
|
||||
/validate scope=all
|
||||
|
||||
# Validate and auto-fix false negatives
|
||||
/validate scope=all fix_mode=true
|
||||
```
|
||||
</examples>
|
||||
|
||||
<failure_handling>
|
||||
**Story file not found:** Skip with warning, continue batch.
|
||||
**Haiku agent fails:** Fall back to quick validation for that story.
|
||||
**All stories fail:** Report systemic issue, halt.
|
||||
</failure_handling>
|
||||
|
||||
<success_criteria>
|
||||
- [ ] All target stories validated
|
||||
- [ ] Categories assigned based on scores
|
||||
- [ ] False positives identified
|
||||
- [ ] Report generated
|
||||
- [ ] Fixes applied (if fix_mode=true)
|
||||
</success_criteria>
|
||||
|
|
@ -1,33 +0,0 @@
|
|||
name: validate
|
||||
description: "Unified validation workflow. Validates stories against codebase, detects false positives, reports health scores. Replaces validate-story, validate-story-deep, validate-all-stories, validate-all-stories-deep, validate-epic-status, validate-all-epics."
|
||||
author: "BMad"
|
||||
version: "3.0.0"
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
implementation_artifacts: "{config_source}:implementation_artifacts"
|
||||
story_dir: "{implementation_artifacts}"
|
||||
sprint_status: "{implementation_artifacts}/sprint-status.yaml"
|
||||
|
||||
# Workflow components
|
||||
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/validate"
|
||||
instructions: "{installed_path}/workflow.md"
|
||||
|
||||
# Input variables
|
||||
variables:
|
||||
scope: "story" # story | epic | all
|
||||
target: "" # story_file path OR epic_number (depends on scope)
|
||||
depth: "quick" # quick | deep (deep uses Haiku agents)
|
||||
fix_mode: false # If true, auto-fix false negatives and update statuses
|
||||
|
||||
# Deep validation settings
|
||||
deep_validation:
|
||||
model: "haiku"
|
||||
batch_size: 10
|
||||
cost_per_story: 0.13
|
||||
|
||||
# Output
|
||||
default_output_file: "{story_dir}/.validation-{scope}-{date}.md"
|
||||
|
||||
standalone: true
|
||||
web_bundle: false
|
||||
|
|
@ -14,9 +14,9 @@ describe('DependencyResolver - Advanced Scenarios', () => {
|
|||
await fs.ensureDir(path.join(bmadDir, 'core', 'agents'));
|
||||
await fs.ensureDir(path.join(bmadDir, 'core', 'tasks'));
|
||||
await fs.ensureDir(path.join(bmadDir, 'core', 'templates'));
|
||||
await fs.ensureDir(path.join(bmadDir, 'modules', 'bmm', 'agents'));
|
||||
await fs.ensureDir(path.join(bmadDir, 'modules', 'bmm', 'tasks'));
|
||||
await fs.ensureDir(path.join(bmadDir, 'modules', 'bmm', 'templates'));
|
||||
await fs.ensureDir(path.join(bmadDir, 'bmm', 'agents'));
|
||||
await fs.ensureDir(path.join(bmadDir, 'bmm', 'tasks'));
|
||||
await fs.ensureDir(path.join(bmadDir, 'bmm', 'templates'));
|
||||
});
|
||||
|
||||
afterEach(async () => {
|
||||
|
|
@ -33,7 +33,7 @@ dependencies: ["{project-root}/bmad/bmm/tasks/analyze.md"]
|
|||
---
|
||||
<agent>Agent</agent>`,
|
||||
);
|
||||
await createTestFile(bmadDir, 'modules/bmm/tasks/analyze.md', 'BMM Task');
|
||||
await createTestFile(bmadDir, 'bmm/tasks/analyze.md', 'BMM Task');
|
||||
|
||||
const resolver = new DependencyResolver();
|
||||
const result = await resolver.resolve(bmadDir, []);
|
||||
|
|
@ -51,8 +51,8 @@ dependencies: ["{project-root}/bmad/bmm/tasks/*.md"]
|
|||
---
|
||||
<agent>Agent</agent>`,
|
||||
);
|
||||
await createTestFile(bmadDir, 'modules/bmm/tasks/task1.md', 'Task 1');
|
||||
await createTestFile(bmadDir, 'modules/bmm/tasks/task2.md', 'Task 2');
|
||||
await createTestFile(bmadDir, 'bmm/tasks/task1.md', 'Task 1');
|
||||
await createTestFile(bmadDir, 'bmm/tasks/task2.md', 'Task 2');
|
||||
|
||||
const resolver = new DependencyResolver();
|
||||
const result = await resolver.resolve(bmadDir, ['bmm']); // Include bmm module
|
||||
|
|
@ -154,16 +154,16 @@ Task content`,
|
|||
});
|
||||
|
||||
it('should resolve template from module path', async () => {
|
||||
await createTestFile(bmadDir, 'modules/bmm/agents/agent.md', '<agent>BMM Agent</agent>');
|
||||
await createTestFile(bmadDir, 'bmm/agents/agent.md', '<agent>BMM Agent</agent>');
|
||||
await createTestFile(
|
||||
bmadDir,
|
||||
'modules/bmm/tasks/task.md',
|
||||
'bmm/tasks/task.md',
|
||||
`---
|
||||
template: "{project-root}/bmad/bmm/templates/prd-template.yaml"
|
||||
---
|
||||
Task`,
|
||||
);
|
||||
await createTestFile(bmadDir, 'modules/bmm/templates/prd-template.yaml', 'template');
|
||||
await createTestFile(bmadDir, 'bmm/templates/prd-template.yaml', 'template');
|
||||
|
||||
const resolver = new DependencyResolver();
|
||||
const result = await resolver.resolve(bmadDir, ['bmm']);
|
||||
|
|
@ -215,7 +215,7 @@ Task`,
|
|||
<command exec="bmad/bmm/tasks/create-prd" />
|
||||
</agent>`,
|
||||
);
|
||||
await createTestFile(bmadDir, 'modules/bmm/tasks/create-prd.md', 'PRD Task');
|
||||
await createTestFile(bmadDir, 'bmm/tasks/create-prd.md', 'PRD Task');
|
||||
|
||||
const resolver = new DependencyResolver();
|
||||
const result = await resolver.resolve(bmadDir, []);
|
||||
|
|
@ -249,7 +249,7 @@ Task`,
|
|||
Use @task-custom-task
|
||||
</agent>`,
|
||||
);
|
||||
await createTestFile(bmadDir, 'modules/bmm/tasks/custom-task.md', 'Custom Task');
|
||||
await createTestFile(bmadDir, 'bmm/tasks/custom-task.md', 'Custom Task');
|
||||
|
||||
const resolver = new DependencyResolver();
|
||||
const result = await resolver.resolve(bmadDir, ['bmm']);
|
||||
|
|
@ -265,7 +265,7 @@ Use @task-custom-task
|
|||
Use @agent-pm
|
||||
</agent>`,
|
||||
);
|
||||
await createTestFile(bmadDir, 'modules/bmm/agents/pm.md', '<agent>PM</agent>');
|
||||
await createTestFile(bmadDir, 'bmm/agents/pm.md', '<agent>PM</agent>');
|
||||
|
||||
const resolver = new DependencyResolver();
|
||||
const result = await resolver.resolve(bmadDir, ['bmm']);
|
||||
|
|
|
|||
|
|
@ -585,7 +585,7 @@ Finally check bmad/core/tasks/review
|
|||
describe('module organization', () => {
|
||||
it('should organize files by module correctly', async () => {
|
||||
await createTestFile(bmadDir, 'core/agents/core-agent.md', '<agent>Core</agent>');
|
||||
await createTestFile(bmadDir, 'modules/bmm/agents/bmm-agent.md', '<agent>BMM</agent>');
|
||||
await createTestFile(bmadDir, 'bmm/agents/bmm-agent.md', '<agent>BMM</agent>');
|
||||
|
||||
const resolver = new DependencyResolver();
|
||||
const result = await resolver.resolve(bmadDir, ['bmm']);
|
||||
|
|
@ -638,9 +638,9 @@ Finally check bmad/core/tasks/review
|
|||
expect(module).toBe('core');
|
||||
});
|
||||
|
||||
it('should extract module from src/modules/bmm path', () => {
|
||||
it('should extract module from src/bmm path', () => {
|
||||
const resolver = new DependencyResolver();
|
||||
const filePath = path.join(bmadDir, 'modules/bmm/agents/pm.md');
|
||||
const filePath = path.join(bmadDir, 'bmm/agents/pm.md');
|
||||
|
||||
const module = resolver.getModuleFromPath(bmadDir, filePath);
|
||||
|
||||
|
|
@ -651,12 +651,12 @@ Finally check bmad/core/tasks/review
|
|||
// Create installed structure (no src/ prefix)
|
||||
const installedDir = path.join(tmpDir, 'installed');
|
||||
await fs.ensureDir(path.join(installedDir, 'core/agents'));
|
||||
await fs.ensureDir(path.join(installedDir, 'modules/bmm/agents'));
|
||||
await fs.ensureDir(path.join(installedDir, 'bmm/agents'));
|
||||
|
||||
const resolver = new DependencyResolver();
|
||||
|
||||
const coreFile = path.join(installedDir, 'core/agents/agent.md');
|
||||
const moduleFile = path.join(installedDir, 'modules/bmm/agents/pm.md');
|
||||
const moduleFile = path.join(installedDir, 'bmm/agents/pm.md');
|
||||
|
||||
expect(resolver.getModuleFromPath(installedDir, coreFile)).toBe('core');
|
||||
expect(resolver.getModuleFromPath(installedDir, moduleFile)).toBe('bmm');
|
||||
|
|
@ -764,7 +764,7 @@ dependencies: []
|
|||
await createTestFile(bmadDir, 'core/agents/core-agent.md', '<agent>Core</agent>');
|
||||
await createTestFile(
|
||||
bmadDir,
|
||||
'modules/bmm/agents/bmm-agent.md',
|
||||
'bmm/agents/bmm-agent.md',
|
||||
`---
|
||||
dependencies: ["{project-root}/bmad/core/tasks/shared-task.md"]
|
||||
---
|
||||
|
|
@ -783,8 +783,8 @@ dependencies: ["{project-root}/bmad/core/tasks/shared-task.md"]
|
|||
|
||||
it('should resolve module tasks', async () => {
|
||||
await createTestFile(bmadDir, 'core/agents/core-agent.md', '<agent>Core</agent>');
|
||||
await createTestFile(bmadDir, 'modules/bmm/agents/pm.md', '<agent>PM</agent>');
|
||||
await createTestFile(bmadDir, 'modules/bmm/tasks/create-prd.md', 'Create PRD task');
|
||||
await createTestFile(bmadDir, 'bmm/agents/pm.md', '<agent>PM</agent>');
|
||||
await createTestFile(bmadDir, 'bmm/tasks/create-prd.md', 'Create PRD task');
|
||||
|
||||
const resolver = new DependencyResolver();
|
||||
const result = await resolver.resolve(bmadDir, ['bmm']);
|
||||
|
|
|
|||
|
|
@ -153,18 +153,18 @@ function generateLlmsTxt(outputDir) {
|
|||
'',
|
||||
'## Quick Start',
|
||||
'',
|
||||
`- **[Quick Start](${SITE_URL}/docs/modules/bmm/quick-start)** - Get started with BMAD Method`,
|
||||
`- **[Quick Start](${SITE_URL}/docs/bmm/quick-start)** - Get started with BMAD Method`,
|
||||
`- **[Installation](${SITE_URL}/docs/getting-started/installation)** - Installation guide`,
|
||||
'',
|
||||
'## Core Concepts',
|
||||
'',
|
||||
`- **[Scale Adaptive System](${SITE_URL}/docs/modules/bmm/scale-adaptive-system)** - Understand BMAD scaling`,
|
||||
`- **[Quick Flow](${SITE_URL}/docs/modules/bmm/bmad-quick-flow)** - Fast development workflow`,
|
||||
`- **[Party Mode](${SITE_URL}/docs/modules/bmm/party-mode)** - Multi-agent collaboration`,
|
||||
`- **[Scale Adaptive System](${SITE_URL}/docs/bmm/scale-adaptive-system)** - Understand BMAD scaling`,
|
||||
`- **[Quick Flow](${SITE_URL}/docs/bmm/bmad-quick-flow)** - Fast development workflow`,
|
||||
`- **[Party Mode](${SITE_URL}/docs/bmm/party-mode)** - Multi-agent collaboration`,
|
||||
'',
|
||||
'## Modules',
|
||||
'',
|
||||
`- **[BMM - Method](${SITE_URL}/docs/modules/bmm/quick-start)** - Core methodology module`,
|
||||
`- **[BMM - Method](${SITE_URL}/docs/bmm/quick-start)** - Core methodology module`,
|
||||
`- **[BMB - Builder](${SITE_URL}/docs/modules/bmb/)** - Agent and workflow builder`,
|
||||
`- **[BMGD - Game Dev](${SITE_URL}/docs/modules/bmgd/quick-start)** - Game development module`,
|
||||
'',
|
||||
|
|
|
|||
|
|
@ -83,28 +83,38 @@ class DependencyResolver {
|
|||
// or if it contains a src subdirectory (production scenario)
|
||||
const hasSrcSubdir = await fs.pathExists(path.join(bmadDir, 'src'));
|
||||
const hasModulesSubdir = await fs.pathExists(path.join(bmadDir, 'modules'));
|
||||
const hasCoreDir = await fs.pathExists(path.join(bmadDir, 'core'));
|
||||
const hasBmmDir = await fs.pathExists(path.join(bmadDir, 'bmm'));
|
||||
|
||||
if (hasModulesSubdir) {
|
||||
// bmadDir is already the src directory (e.g., /path/to/src)
|
||||
// Structure: bmadDir/core or bmadDir/modules/bmm
|
||||
// bmadDir is already the src directory with modules/ subdirectory (legacy structure)
|
||||
// Structure: bmadDir/core or bmadDir/bmm or bmadDir/modules/other
|
||||
if (module === 'core') {
|
||||
moduleDir = path.join(bmadDir, 'core');
|
||||
} else if (module === 'bmm') {
|
||||
moduleDir = path.join(bmadDir, 'modules', 'bmm');
|
||||
moduleDir = path.join(bmadDir, 'bmm');
|
||||
} else {
|
||||
moduleDir = path.join(bmadDir, 'modules', module);
|
||||
}
|
||||
} else if (hasSrcSubdir) {
|
||||
// bmadDir is the parent of src directory (e.g., /path/to/BMAD-METHOD)
|
||||
// Structure: bmadDir/src/core or bmadDir/src/modules/bmm
|
||||
// Structure: bmadDir/src/core or bmadDir/src/bmm
|
||||
const srcDir = path.join(bmadDir, 'src');
|
||||
if (module === 'core') {
|
||||
moduleDir = path.join(srcDir, 'core');
|
||||
} else if (module === 'bmm') {
|
||||
moduleDir = path.join(srcDir, 'modules', 'bmm');
|
||||
moduleDir = path.join(srcDir, 'bmm');
|
||||
} else {
|
||||
moduleDir = path.join(srcDir, 'modules', module);
|
||||
}
|
||||
} else if (hasCoreDir || hasBmmDir) {
|
||||
// bmadDir IS the src directory without modules/ subdirectory (new structure)
|
||||
// Structure: bmadDir/core or bmadDir/bmm
|
||||
if (module === 'core') {
|
||||
moduleDir = path.join(bmadDir, 'core');
|
||||
} else if (module === 'bmm') {
|
||||
moduleDir = path.join(bmadDir, 'bmm');
|
||||
}
|
||||
}
|
||||
|
||||
if (!moduleDir || !(await fs.pathExists(moduleDir))) {
|
||||
|
|
|
|||
|
|
@ -510,7 +510,7 @@ class YamlXmlBuilder {
|
|||
const sourceHash = await this.calculateFileHash(agentYamlPath);
|
||||
const customizeHash = customizeYamlPath ? await this.calculateFileHash(customizeYamlPath) : null;
|
||||
|
||||
// Extract module from path (e.g., /path/to/modules/bmm/agents/pm.yaml -> bmm)
|
||||
// Extract module from path (e.g., /path/to/bmm/agents/pm.yaml -> bmm)
|
||||
// or /path/to/bmad/bmm/agents/pm.yaml -> bmm
|
||||
// or /path/to/src/bmm/agents/pm.yaml -> bmm
|
||||
let module = 'core'; // default to core
|
||||
|
|
|
|||
Loading…
Reference in New Issue