refactor(tech-spec): unify story generation and adopt intent-based approach

Major refactoring of tech-spec workflow for quick-flow projects:

## Unified Story Generation
- Consolidated instructions-level0-story.md and instructions-level1-stories.md into single instructions-generate-stories.md
- Always generates epic + stories (minimal epic for 1 story, detailed for multiple)
- Consistent naming: story-{epic-slug}-N.md for all stories (1-5)
- Eliminated branching logic and duplicate code
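The naming convention can be sketched as a small helper (hypothetical, for illustration only — not part of the workflow code):

```python
def story_filenames(epic_slug: str, story_count: int) -> list[str]:
    """Generate filenames following story-{epic-slug}-N.md (N = 1..story_count)."""
    if not 1 <= story_count <= 5:
        raise ValueError("quick-flow projects have 1-5 stories")
    return [f"story-{epic_slug}-{n}.md" for n in range(1, story_count + 1)]
```

For example, `story_filenames("oauth-integration", 2)` yields `["story-oauth-integration-1.md", "story-oauth-integration-2.md"]`.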

## Intent-Based Intelligence
- Removed 150+ lines of hardcoded stack detection examples (Node.js, Python, Ruby, Java, Go, Rust, PHP)
- Replaced prescriptive instructions with intelligent guidance that trusts LLM capabilities
- PHASE 2 stack detection now adapts to ANY project type automatically
- Step 2 discovery changed from scripted Q&A to adaptive conversation goals

## Terminology Updates
- Replaced "Level 0" and "Level 1" with "quick-flow" terminology throughout
- Updated to "single story" vs "multiple stories (2-5)" language
- Consistent modern terminology across all files

## Variable and Structure Improvements
- Fixed variable references: now uses {instructions_generate_stories} instead of hardcoded paths
- Updated workflow.yaml with cleaner variable structure
- Removed unused variables (project_level, development_context)
- Added story_count and epic_slug runtime variables

## Files Changed
- Deleted: instructions-level0-story.md (7,259 bytes)
- Deleted: instructions-level1-stories.md (16,274 bytes)
- Created: instructions-generate-stories.md (13,109 bytes)
- Updated: instructions.md (reduced from 35,028 to 32,006 bytes)
- Updated: workflow.yaml, checklist.md

## Impact
- 50% fewer story-generation instruction files (2 → 1)
- More adaptable to any tech stack
- Clearer, more maintainable code
- Better developer and user experience
- Trusts modern LLM intelligence instead of constraining with examples
Brian Madison 2025-11-11 16:26:53 -06:00
parent 2d99833b9e
commit 4d745532aa
6 changed files with 586 additions and 966 deletions


@@ -2,20 +2,22 @@
**Purpose**: Validate tech-spec workflow outputs are context-rich, definitive, complete, and implementation-ready.
**Scope**: Quick-flow software projects (1-5 stories)
**Expected Outputs**: tech-spec.md + epics.md + story files (1-5 stories)
**New Standard**: Tech-spec should be comprehensive enough to replace story-context for most quick-flow projects
---
## 1. Output Files Exist
- [ ] tech-spec.md created in output folder
- [ ] epics.md created (minimal for 1 story, detailed for multiple)
- [ ] Story file(s) created in sprint_artifacts
  - Naming convention: story-{epic-slug}-N.md (where N = 1 to story_count)
  - 1 story: story-{epic-slug}-1.md
  - Multiple stories: story-{epic-slug}-1.md through story-{epic-slug}-N.md
- [ ] bmm-workflow-status.yaml updated (if not standalone mode)
- [ ] No unfilled {{template_variables}} in any files
@@ -134,16 +136,17 @@
---
## 6. Epic Quality (All Projects)
- [ ] **Epic title**: User-focused outcome (not implementation detail)
- [ ] **Epic slug**: Clean kebab-case slug (2-3 words)
- [ ] **Epic goal**: Clear purpose and value statement
- [ ] **Epic scope**: Boundaries clearly defined
- [ ] **Success criteria**: Measurable outcomes
- [ ] **Story map** (if multiple stories): Visual representation of epic → stories
- [ ] **Implementation sequence** (if multiple stories): Logical story ordering with dependencies
- [ ] **Tech-spec reference**: Links back to tech-spec.md
- [ ] **Detail level appropriate**: Minimal for 1 story, detailed for multiple
---


@@ -0,0 +1,434 @@
# Unified Epic and Story Generation
<workflow>
<critical>This generates epic + stories for ALL quick-flow projects</critical>
<critical>Always generates: epics.md + story files (1-5 stories based on {{story_count}})</critical>
<critical>Runs AFTER tech-spec.md completion</critical>
<critical>Story format MUST match create-story template for compatibility with story-context and dev-story workflows</critical>
<step n="1" goal="Load tech spec and extract implementation context">
<action>Read the completed tech-spec.md file from {default_output_file}</action>
<action>Load bmm-workflow-status.yaml from {workflow-status} (if exists)</action>
<action>Get story_count from workflow variables (1-5)</action>
<action>Ensure {sprint_artifacts} directory exists</action>
<action>Extract from tech-spec structure:
**From "The Change" section:**
- Problem statement and solution overview
- Scope (in/out)
**From "Implementation Details" section:**
- Source tree changes
- Technical approach
- Integration points
**From "Implementation Guide" section:**
- Implementation steps
- Testing strategy
- Acceptance criteria
- Time estimates
**From "Development Context" section:**
- Framework dependencies with versions
- Existing code references
- Internal dependencies
**From "Developer Resources" section:**
- File paths
- Key code locations
- Testing locations
Use this rich context to generate comprehensive, implementation-ready epic and stories.
</action>
</step>
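The extraction step above amounts to splitting the tech-spec markdown by its section headings. A minimal sketch, assuming top-level `## ` headings named as in the list (the heading convention is an assumption, not specified here):

```python
import re

def extract_sections(tech_spec: str) -> dict[str, str]:
    """Split a tech-spec markdown document into {heading: body} pairs."""
    sections: dict[str, str] = {}
    current = None
    for line in tech_spec.splitlines():
        m = re.match(r"^## (.+)$", line)
        if m:
            # Start a new section at each top-level heading
            current = m.group(1).strip()
            sections[current] = ""
        elif current is not None:
            sections[current] += line + "\n"
    return sections
```

The resulting mapping lets later steps pull "The Change", "Implementation Guide", and similar sections by name.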
<step n="2" goal="Generate epic slug and structure">
<action>Create epic based on the overall feature/change from tech-spec</action>
<action>Derive epic slug from the feature name:
- Use 2-3 words max
- Kebab-case format
- User-focused, not implementation-focused
Examples:
- "OAuth Integration" → "oauth-integration"
- "Fix Login Bug" → "login-fix"
- "User Profile Page" → "user-profile"
</action>
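A mechanical version of the slug rules can be sketched as follows (illustrative only; the workflow relies on judgment to produce user-focused slugs such as "login-fix" for "Fix Login Bug", which a word-for-word conversion cannot do):

```python
import re

def derive_slug(feature_name: str, max_words: int = 3) -> str:
    """Derive a kebab-case slug from a feature name, capped at max_words words."""
    words = re.findall(r"[a-z0-9]+", feature_name.lower())
    return "-".join(words[:max_words])
```

For example, `derive_slug("OAuth Integration")` gives `"oauth-integration"`.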
<action>Store as {{epic_slug}} - this will be used for all story filenames</action>
<action>Adapt epic detail to story count:
**For single story (story_count == 1):**
- Epic is minimal - just enough structure
- Goal: Brief statement of what's being accomplished
- Scope: High-level boundary
- Success criteria: Core outcomes
**For multiple stories (story_count > 1):**
- Epic is detailed - full breakdown
- Goal: Comprehensive purpose and value statement
- Scope: Clear boundaries with in/out examples
- Success criteria: Measurable, testable outcomes
- Story map: Visual representation of epic → stories
- Implementation sequence: Logical ordering with dependencies
</action>
</step>
<step n="3" goal="Generate epic document">
<action>Initialize {epics_file} using {epics_template}</action>
<action>Populate epic metadata from tech-spec context:
**Epic Title:** User-facing outcome (not implementation detail)
- Good: "OAuth Integration", "Login Bug Fix", "Icon Reliability"
- Bad: "Update recommendedLibraries.ts", "Refactor auth service"
**Epic Goal:** Why this matters to users/business
**Epic Scope:** Clear boundaries from tech-spec scope section
**Epic Success Criteria:** Measurable outcomes from tech-spec acceptance criteria
**Dependencies:** From tech-spec integration points and dependencies
</action>
<template-output file="{epics_file}">project_name</template-output>
<template-output file="{epics_file}">date</template-output>
<template-output file="{epics_file}">epic_title</template-output>
<template-output file="{epics_file}">epic_slug</template-output>
<template-output file="{epics_file}">epic_goal</template-output>
<template-output file="{epics_file}">epic_scope</template-output>
<template-output file="{epics_file}">epic_success_criteria</template-output>
<template-output file="{epics_file}">epic_dependencies</template-output>
</step>
<step n="4" goal="Intelligently break down into stories">
<action>Analyze tech-spec implementation steps and create story breakdown
**For story_count == 1:**
- Create single comprehensive story covering all implementation
- Title: Focused on the deliverable outcome
- Tasks: Map directly to tech-spec implementation steps
- Estimated points: Typically 1-5 points
**For story_count > 1:**
- Break implementation into logical story boundaries
- Each story must be:
- Independently valuable (delivers working functionality)
- Testable (has clear acceptance criteria)
- Sequentially ordered (no forward dependencies)
- Right-sized (prefer 2-4 stories over many tiny ones)
**Story Sequencing Rules (CRITICAL):**
1. Foundation → Build → Test → Polish
2. Database → API → UI
3. Backend → Frontend
4. Core → Enhancement
5. NO story can depend on a later story!
Validate sequence: Each story N should only depend on stories 1...N-1
</action>
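The sequence validation described above can be checked mechanically. A sketch, assuming each story's dependencies are captured as a set of story numbers (the data shape is an assumption for illustration):

```python
def validate_sequence(dependencies: dict[int, set[int]]) -> list[str]:
    """Return a list of forward-dependency problems; empty means the order is valid.

    dependencies maps story number N to the story numbers it depends on;
    a valid sequence lets story N depend only on stories 1..N-1.
    """
    problems = []
    for story, deps in sorted(dependencies.items()):
        forward = [d for d in deps if d >= story]
        if forward:
            problems.append(
                f"Story {story} depends on {sorted(forward)} (forward dependency)"
            )
    return problems
```

An empty result confirms stories can be implemented in order 1→2→3→...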
<action>For each story position (1 to {{story_count}}):
1. **Determine story scope from tech-spec tasks**
- Group related implementation steps
- Ensure story leaves system in working state
2. **Create story title**
- User-focused deliverable
- Active, clear language
- Good: "OAuth Backend Integration", "OAuth UI Components"
- Bad: "Write some OAuth code", "Update files"
3. **Extract acceptance criteria**
- From tech-spec testing strategy and acceptance criteria
- Must be numbered (AC #1, AC #2, etc.)
- Must be specific and testable
- Use Given/When/Then format when applicable
4. **Map tasks to implementation steps**
- Break down tech-spec implementation steps for this story
- Create checkbox list
- Reference AC numbers: (AC: #1), (AC: #2)
5. **Estimate story points**
- 1 point = < 1 day (2-4 hours)
- 2 points = 1-2 days
- 3 points = 2-3 days
- 5 points = 3-5 days
- Total across all stories should align with tech-spec estimates
</action>
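The point scale above maps roughly onto estimated days; a sketch (boundary choices at exactly 2 or 3 days are an assumption, since the ranges overlap):

```python
def points_for_days(days: float) -> int:
    """Map an estimated duration in days to story points per the scale above."""
    if days < 1:
        return 1   # < 1 day (2-4 hours)
    if days <= 2:
        return 2   # 1-2 days
    if days <= 3:
        return 3   # 2-3 days
    if days <= 5:
        return 5   # 3-5 days
    raise ValueError("more than 5 days suggests the story should be split")
```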
</step>
<step n="5" goal="Generate story files">
<for-each story="1 to story_count">
<action>Set story_filename = "story-{{epic_slug}}-{{n}}.md"</action>
<action>Set story_path = "{sprint_artifacts}/{{story_filename}}"</action>
<action>Create story file using {user_story_template}</action>
<action>Populate story with:
**Story Header:**
- N.M format (where N is always 1 for quick-flow, M is story number)
- Title: User-focused deliverable
- Status: Draft
**User Story:**
- As a [role] (developer, user, admin, system, etc.)
- I want [capability/change]
- So that [benefit/value]
**Acceptance Criteria:**
- Numbered list (AC #1, AC #2, ...)
- Specific, measurable, testable
- Derived from tech-spec testing strategy and acceptance criteria
- Cover all success conditions for this story
**Tasks/Subtasks:**
- Checkbox list mapped to tech-spec implementation steps
- Each task references AC numbers: (AC: #1)
- Include explicit testing tasks
**Technical Summary:**
- High-level approach for this story
- Key technical decisions
- Files/modules involved
**Project Structure Notes:**
- files_to_modify: From tech-spec "Developer Resources → File Paths"
- test_locations: From tech-spec "Developer Resources → Testing Locations"
- story_points: Estimated effort
- dependencies: Prerequisites (other stories, systems, data)
**Key Code References:**
- From tech-spec "Development Context → Relevant Existing Code"
- From tech-spec "Developer Resources → Key Code Locations"
- Specific file:line references when available
**Context References:**
- Link to tech-spec.md (primary context document)
- Note: Tech-spec contains brownfield analysis, framework versions, patterns, etc.
**Dev Agent Record:**
- Empty sections (populated during dev-story execution)
- Agent Model Used
- Debug Log References
- Completion Notes
- Files Modified
- Test Results
**Review Notes:**
- Empty section (populated during code review)
</action>
<template-output file="{{story_path}}">story_number</template-output>
<template-output file="{{story_path}}">story_title</template-output>
<template-output file="{{story_path}}">user_role</template-output>
<template-output file="{{story_path}}">capability</template-output>
<template-output file="{{story_path}}">benefit</template-output>
<template-output file="{{story_path}}">acceptance_criteria</template-output>
<template-output file="{{story_path}}">tasks_subtasks</template-output>
<template-output file="{{story_path}}">technical_summary</template-output>
<template-output file="{{story_path}}">files_to_modify</template-output>
<template-output file="{{story_path}}">test_locations</template-output>
<template-output file="{{story_path}}">story_points</template-output>
<template-output file="{{story_path}}">time_estimate</template-output>
<template-output file="{{story_path}}">dependencies</template-output>
<template-output file="{{story_path}}">existing_code_references</template-output>
</for-each>
</step>
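Populating a story file from the template reduces to filling `{{placeholder}}` slots; a minimal sketch that also catches the unfilled-variable case the checklist guards against (the template text here is hypothetical):

```python
import re

def render_template(template: str, values: dict[str, str]) -> str:
    """Fill {{name}} placeholders from values; raise on any unfilled variable."""
    def sub(m: re.Match) -> str:
        name = m.group(1)
        if name not in values:
            raise KeyError(f"unfilled template variable: {name}")
        return values[name]
    return re.sub(r"\{\{(\w+)\}\}", sub, template)
```

For example, `render_template("# Story {{story_number}}: {{story_title}}", {"story_number": "1.1", "story_title": "OAuth Backend"})` returns `"# Story 1.1: OAuth Backend"`.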
<step n="6" goal="Generate story map and finalize epic" if="story_count > 1">
<action>Create visual story map showing epic → stories hierarchy
Include:
- Epic title at top
- Stories listed with point estimates
- Dependencies noted
- Sequence validation confirmation
Example:
```
Epic: OAuth Integration (8 points)
├── Story 1.1: OAuth Backend (3 points)
│ Dependencies: None
├── Story 1.2: OAuth UI Components (3 points)
│ Dependencies: Story 1.1
└── Story 1.3: OAuth Testing & Polish (2 points)
Dependencies: Stories 1.1, 1.2
```
</action>
<action>Calculate totals:
- Total story points across all stories
- Estimated timeline (typically 1-2 points per day)
</action>
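The totals calculation can be sketched directly from the 1-2 points/day heuristic (the 1.5 points/day default is an assumed midpoint):

```python
def epic_totals(story_points: list[int], points_per_day: float = 1.5) -> tuple[int, str]:
    """Sum story points and derive a rough timeline from a points/day rate."""
    total = sum(story_points)
    days = total / points_per_day
    return total, f"~{days:.0f} working days"
```

For the OAuth example above, `epic_totals([3, 3, 2])` gives 8 points and roughly 5 working days.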
<action>Append to {epics_file}:
- Story summaries
- Story map visual
- Implementation sequence
- Total points and timeline
</action>
<template-output file="{epics_file}">story_map</template-output>
<template-output file="{epics_file}">story_summaries</template-output>
<template-output file="{epics_file}">total_points</template-output>
<template-output file="{epics_file}">estimated_timeline</template-output>
<template-output file="{epics_file}">implementation_sequence</template-output>
</step>
<step n="7" goal="Validate story quality">
<critical>Always run validation - NOT optional!</critical>
<action>Validate all stories against quality standards:
**Story Sequence Validation (CRITICAL):**
- For each story N, verify it doesn't depend on story N+1 or later
- Check: Can stories be implemented in order 1→2→3→...?
- If sequence invalid: Identify problem, propose reordering, ask user to confirm
**Acceptance Criteria Quality:**
- All AC are numbered (AC #1, AC #2, ...)
- Each AC is specific and testable (no "works well", "is good", "performs fast")
- AC use Given/When/Then or equivalent structure
- All success conditions are covered
**Story Completeness:**
- All stories map to tech-spec implementation steps
- Story points align with tech-spec time estimates
- Dependencies are clearly documented
- Each story has testable AC
- Files and locations reference tech-spec developer resources
**Template Compliance:**
- All required sections present
- Dev Agent Record sections exist (even if empty)
- Context references link to tech-spec.md
- Story numbering follows N.M format
</action>
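Parts of the AC quality check can be automated; a sketch that flags missing numbering and a few vague phrases (the vague-phrase list is illustrative, not exhaustive — real validation still needs judgment):

```python
import re

VAGUE = re.compile(r"\b(works well|is good|performs fast|works correctly)\b", re.I)

def check_acceptance_criteria(criteria: list[str]) -> list[str]:
    """Flag ACs that lack 'AC #N:' numbering or use vague, untestable wording."""
    issues = []
    for i, ac in enumerate(criteria, start=1):
        if not re.match(r"^AC #\d+:", ac):
            issues.append(f"Criterion {i} is not numbered as 'AC #N:'")
        if VAGUE.search(ac):
            issues.append(f"Criterion {i} uses vague wording")
    return issues
```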
<check if="validation issues found">
<output>⚠️ **Story Validation Issues:**
{{issues_list}}
**Recommended Fixes:**
{{fixes}}
Shall I fix these automatically? (yes/no)</output>
<ask>Apply fixes? (yes/no)</ask>
<check if="yes">
<action>Apply fixes (reorder stories, rewrite vague AC, add missing details)</action>
<action>Re-validate</action>
<output>✅ Validation passed after fixes!</output>
</check>
</check>
<check if="validation passes">
<output>✅ **Story Validation Passed!**
**Quality Scores:**
- Sequence: ✅ Valid (no forward dependencies)
- AC Quality: ✅ All specific and testable
- Completeness: ✅ All tech spec tasks covered
- Template Compliance: ✅ All sections present
Stories are implementation-ready!</output>
</check>
</step>
<step n="8" goal="Update workflow status and finalize">
<action>Update bmm-workflow-status.yaml (if exists):
- Mark tech-spec as complete
- Initialize story sequence tracking
- Set first story as TODO
- Track epic slug and story count
</action>
<output>**✅ Epic and Stories Generated!**
**Epic:** {{epic_title}} ({{epic_slug}})
**Total Stories:** {{story_count}}
{{#if story_count > 1}}**Total Points:** {{total_points}}
**Estimated Timeline:** {{estimated_timeline}}{{/if}}
**Files Created:**
- `{epics_file}` - Epic structure{{#if story_count == 1}} (minimal){{/if}}
- `{sprint_artifacts}/story-{{epic_slug}}-1.md`{{#if story_count > 1}}
- `{sprint_artifacts}/story-{{epic_slug}}-2.md`{{/if}}{{#if story_count > 2}}
- Through story-{{epic_slug}}-{{story_count}}.md{{/if}}
**What's Next:**
All stories reference tech-spec.md as primary context. You can proceed directly to development with the DEV agent!
Story files are ready for:
- Direct implementation (dev-story workflow)
- Optional context generation (story-context workflow for complex cases)
- Sprint planning organization (sprint-planning workflow for multi-story coordination)
</output>
</step>
</workflow>


@@ -1,200 +0,0 @@
# Level 0 - Minimal User Story Generation
<workflow>
<critical>This generates a single user story for Level 0 atomic changes</critical>
<critical>Level 0 = single file change, bug fix, or small isolated task</critical>
<critical>This workflow runs AFTER tech-spec.md has been completed</critical>
<critical>Output format MUST match create-story template for compatibility with story-context and dev-story workflows</critical>
<step n="1" goal="Load tech spec and extract the change">
<action>Read the completed tech-spec.md file from {output_folder}/tech-spec.md</action>
<action>Load bmm-workflow-status.yaml from {output_folder}/bmm-workflow-status.yaml (if exists)</action>
<action>Extract sprint_artifacts from config (where stories are stored)</action>
<action>Extract from the ENHANCED tech-spec structure:
- Problem statement from "The Change → Problem Statement" section
- Solution overview from "The Change → Proposed Solution" section
- Scope from "The Change → Scope" section
- Source tree from "Implementation Details → Source Tree Changes" section
- Time estimate from "Implementation Guide → Implementation Steps" section
- Acceptance criteria from "Implementation Guide → Acceptance Criteria" section
- Framework dependencies from "Development Context → Framework/Libraries" section
- Existing code references from "Development Context → Relevant Existing Code" section
- File paths from "Developer Resources → File Paths Reference" section
- Key code locations from "Developer Resources → Key Code Locations" section
- Testing locations from "Developer Resources → Testing Locations" section
</action>
</step>
<step n="2" goal="Generate story slug and filename">
<action>Derive a short URL-friendly slug from the feature/change name</action>
<action>Max slug length: 3-5 words, kebab-case format</action>
<example>
- "Migrate JS Library Icons" → "icon-migration"
- "Fix Login Validation Bug" → "login-fix"
- "Add OAuth Integration" → "oauth-integration"
</example>
<action>Set story_filename = "story-{slug}.md"</action>
<action>Set story_path = "{sprint_artifacts}/story-{slug}.md"</action>
</step>
<step n="3" goal="Create user story in standard format">
<action>Create 1 story that describes the technical change as a deliverable</action>
<action>Story MUST use create-story template format for compatibility</action>
<guidelines>
**Story Point Estimation:**
- 1 point = < 1 day (2-4 hours)
- 2 points = 1-2 days
- 3 points = 2-3 days
- 5 points = 3-5 days (if the estimate is this high, question whether it's truly Level 0)
**Story Title Best Practices:**
- Use active, user-focused language
- Describe WHAT is delivered, not HOW
- Good: "Icon Migration to Internal CDN"
- Bad: "Run curl commands to download PNGs"
**Story Description Format:**
- As a [role] (developer, user, admin, etc.)
- I want [capability/change]
- So that [benefit/value]
**Acceptance Criteria:**
- Extract from tech-spec "Testing Approach" section
- Must be specific, measurable, and testable
- Include performance criteria if specified
**Tasks/Subtasks:**
- Map directly to tech-spec "Implementation Guide" tasks
- Use checkboxes for tracking
- Reference AC numbers: (AC: #1), (AC: #2)
- Include explicit testing subtasks
**Dev Notes:**
- Extract technical constraints from tech-spec
- Include file paths from "Developer Resources → File Paths Reference"
- Include existing code references from "Development Context → Relevant Existing Code"
- Reference architecture patterns if applicable
- Cite tech-spec sections for implementation details
- Note dependencies (internal and external)
**NEW: Comprehensive Context**
Since tech-spec is now context-rich, populate all new template fields:
- dependencies: Extract from "Development Context" and "Implementation Details → Integration Points"
- existing_code_references: Extract from "Development Context → Relevant Existing Code" and "Developer Resources → Key Code Locations"
</guidelines>
<action>Initialize story file using user_story_template</action>
<template-output file="{story_path}">story_title</template-output>
<template-output file="{story_path}">role</template-output>
<template-output file="{story_path}">capability</template-output>
<template-output file="{story_path}">benefit</template-output>
<template-output file="{story_path}">acceptance_criteria</template-output>
<template-output file="{story_path}">tasks_subtasks</template-output>
<template-output file="{story_path}">technical_summary</template-output>
<template-output file="{story_path}">files_to_modify</template-output>
<template-output file="{story_path}">test_locations</template-output>
<template-output file="{story_path}">story_points</template-output>
<template-output file="{story_path}">time_estimate</template-output>
<template-output file="{story_path}">dependencies</template-output>
<template-output file="{story_path}">existing_code_references</template-output>
<template-output file="{story_path}">architecture_references</template-output>
</step>
<step n="4" goal="Update status - Level 0 single story">
<invoke-workflow path="{project-root}/{bmad_folder}/bmm/workflows/workflow-status">
<param>mode: update</param>
<param>action: complete_workflow</param>
<param>workflow_name: tech-spec</param>
</invoke-workflow>
<check if="success == true">
<output>✅ Tech-spec complete! Next: {{next_workflow}}</output>
</check>
<action>Load {{status_file_path}}</action>
<action>Set STORIES_SEQUENCE: [{slug}]</action>
<action>Set TODO_STORY: {slug}</action>
<action>Set TODO_TITLE: {{story_title}}</action>
<action>Set IN_PROGRESS_STORY: (empty)</action>
<action>Set STORIES_DONE: []</action>
<action>Save {{status_file_path}}</action>
<output>Story queue initialized with single story: {slug}</output>
</step>
<step n="5" goal="Provide user guidance for next steps">
<action>Display completion summary</action>
**Level 0 Planning Complete!**
**Generated Artifacts:**
- `tech-spec.md` → Technical source of truth
- `story-{slug}.md` → User story ready for implementation
**Story Location:** `{story_path}`
**Next Steps:**
**🎯 RECOMMENDED - Direct to Development (Level 0):**
Since the tech-spec is now CONTEXT-RICH with:
- ✅ Brownfield codebase analysis (if applicable)
- ✅ Framework and library details with exact versions
- ✅ Existing patterns and code references
- ✅ Complete file paths and integration points
**You can skip story-context and go straight to dev!**
1. Load DEV agent: `{project-root}/{bmad_folder}/bmm/agents/dev.md`
2. Run `dev-story` workflow
3. Begin implementation immediately
**Option B - Generate Additional Context (optional):**
Only needed for extremely complex scenarios:
1. Load SM agent: `{project-root}/{bmad_folder}/bmm/agents/sm.md`
2. Run `story-context` workflow (generates additional XML context)
3. Then load DEV agent and run `dev-story` workflow
**Progress Tracking:**
- All decisions logged in: `bmm-workflow-status.yaml`
- Next action clearly identified
<ask>Ready to proceed? Choose your path:
1. Go directly to dev-story (RECOMMENDED - tech-spec has all context)
2. Generate additional story context (for complex edge cases)
3. Exit for now
Select option (1-3)</ask>
</step>
</workflow>


@@ -1,451 +0,0 @@
# Level 1 - Epic and Stories Generation
<workflow>
<critical>This generates epic and user stories for Level 1 projects after tech-spec completion</critical>
<critical>This is a lightweight story breakdown - not a full PRD</critical>
<critical>Level 1 = coherent feature, 1-10 stories (prefer 2-3), 1 epic</critical>
<critical>This workflow runs AFTER tech-spec.md has been completed</critical>
<critical>Story format MUST match create-story template for compatibility with story-context and dev-story workflows</critical>
<step n="1" goal="Load tech spec and extract implementation tasks">
<action>Read the completed tech-spec.md file from {output_folder}/tech-spec.md</action>
<action>Load bmm-workflow-status.yaml from {output_folder}/bmm-workflow-status.yaml (if exists)</action>
<action>Extract sprint_artifacts from config (where stories are stored)</action>
<action>Extract from the ENHANCED tech-spec structure:
- Overall feature goal from "The Change → Problem Statement" and "Proposed Solution"
- Implementation tasks from "Implementation Guide → Implementation Steps"
- Time estimates from "Implementation Guide → Implementation Steps"
- Dependencies from "Implementation Details → Integration Points" and "Development Context → Dependencies"
- Source tree from "Implementation Details → Source Tree Changes"
- Framework dependencies from "Development Context → Framework/Libraries"
- Existing code references from "Development Context → Relevant Existing Code"
- File paths from "Developer Resources → File Paths Reference"
- Key code locations from "Developer Resources → Key Code Locations"
- Testing locations from "Developer Resources → Testing Locations"
- Acceptance criteria from "Implementation Guide → Acceptance Criteria"
</action>
</step>
<step n="2" goal="Create single epic">
<action>Create 1 epic that represents the entire feature</action>
<action>Epic title should be user-facing value statement</action>
<action>Epic goal should describe why this matters to users</action>
<guidelines>
**Epic Best Practices:**
- Title format: User-focused outcome (not implementation detail)
- Good: "JS Library Icon Reliability"
- Bad: "Update recommendedLibraries.ts file"
- Scope: Clearly define what's included/excluded
- Success criteria: Measurable outcomes that define "done"
</guidelines>
<example>
**Epic:** JS Library Icon Reliability
**Goal:** Eliminate external dependencies for JS library icons to ensure consistent, reliable display and improve application performance.
**Scope:** Migrate all 14 recommended JS library icons from third-party CDN URLs (GitHub, jsDelivr) to internal static asset hosting.
**Success Criteria:**
- All library icons load from internal paths
- Zero external requests for library icons
- Icons load 50-200ms faster than baseline
- No broken icons in production
</example>
<action>Derive epic slug from epic title (kebab-case, 2-3 words max)</action>
<example>
- "JS Library Icon Reliability" → "icon-reliability"
- "OAuth Integration" → "oauth-integration"
- "Admin Dashboard" → "admin-dashboard"
</example>
<action>Initialize epics.md summary document using epics_template</action>
<action>Also capture project_level for the epic template</action>
<template-output file="{output_folder}/epics.md">project_level</template-output>
<template-output file="{output_folder}/epics.md">epic_title</template-output>
<template-output file="{output_folder}/epics.md">epic_slug</template-output>
<template-output file="{output_folder}/epics.md">epic_goal</template-output>
<template-output file="{output_folder}/epics.md">epic_scope</template-output>
<template-output file="{output_folder}/epics.md">epic_success_criteria</template-output>
<template-output file="{output_folder}/epics.md">epic_dependencies</template-output>
</step>
<step n="3" goal="Determine optimal story count">
<critical>Level 1 should have 2-3 stories maximum - prefer longer stories over more stories</critical>
<action>Analyze tech spec implementation tasks and time estimates</action>
<action>Group related tasks into logical story boundaries</action>
<guidelines>
**Story Count Decision Matrix:**
**2 Stories (preferred for most Level 1):**
- Use when: Feature has clear build/verify split
- Example: Story 1 = Build feature, Story 2 = Test and deploy
- Typical points: 3-5 points per story
**3 Stories (only if necessary):**
- Use when: Feature has distinct setup, build, verify phases
- Example: Story 1 = Setup, Story 2 = Core implementation, Story 3 = Integration and testing
- Typical points: 2-3 points per story
**Never exceed 3 stories for Level 1:**
- If more needed, consider if project should be Level 2
- Better to have longer stories (5 points) than more stories (5x 1-point stories)
</guidelines>
<action>Determine story_count = 2 or 3 based on tech spec complexity</action>
</step>
<step n="4" goal="Generate user stories from tech spec tasks">
<action>For each story (2-3 total), generate separate story file</action>
<action>Story filename format: "story-{epic_slug}-{n}.md" where n = 1, 2, or 3</action>
<guidelines>
**Story Generation Guidelines:**
- Each story = multiple implementation tasks from tech spec
- Story title format: User-focused deliverable (not implementation steps)
- Include technical acceptance criteria from tech spec tasks
- Link back to tech spec sections for implementation details
**CRITICAL: Acceptance Criteria Must Be:**
1. **Numbered** - AC #1, AC #2, AC #3, etc.
2. **Specific** - No vague statements like "works well" or "is fast"
3. **Testable** - Can be verified objectively
4. **Complete** - Covers all success conditions
5. **Independent** - Each AC tests one thing
6. **Format**: Use Given/When/Then when applicable
**Good AC Examples:**
✅ AC #1: Given a valid email address, when user submits the form, then the account is created and user receives a confirmation email within 30 seconds
✅ AC #2: Given an invalid email format, when user submits, then form displays "Invalid email format" error message
✅ AC #3: All unit tests in UserService.test.ts pass with 100% coverage
**Bad AC Examples:**
❌ "User can create account" (too vague)
❌ "System performs well" (not measurable)
❌ "Works correctly" (not specific)
**Story Point Estimation:**
- 1 point = < 1 day (2-4 hours)
- 2 points = 1-2 days
- 3 points = 2-3 days
- 5 points = 3-5 days
**Level 1 Typical Totals:**
- Total story points: 5-10 points
- 2 stories: 3-5 points each
- 3 stories: 2-3 points each
- If total > 15 points, consider if this should be Level 2
**Story Structure (MUST match create-story format):**
- Status: Draft
- Story: As a [role], I want [capability], so that [benefit]
- Acceptance Criteria: Numbered list from tech spec
- Tasks / Subtasks: Checkboxes mapped to tech spec tasks (AC: #n references)
- Dev Notes: Technical summary, project structure notes, references
- Dev Agent Record: Empty sections (tech-spec provides context)
**NEW: Comprehensive Context Fields**
Since tech-spec is context-rich, populate ALL template fields:
- dependencies: Extract from tech-spec "Development Context → Dependencies" and "Integration Points"
- existing_code_references: Extract from "Development Context → Relevant Existing Code" and "Developer Resources → Key Code Locations"
</guidelines>
<for-each story="1 to story_count">
<action>Set story_path_{n} = "{sprint_artifacts}/story-{epic_slug}-{n}.md"</action>
<action>Create story file from user_story_template with the following content:</action>
<template-output file="{story_path_{n}}">
- story_title: User-focused deliverable title
- role: User role (e.g., developer, user, admin)
- capability: What they want to do
- benefit: Why it matters
- acceptance_criteria: Specific, measurable criteria from tech spec
- tasks_subtasks: Implementation tasks with AC references
- technical_summary: High-level approach, key decisions
- files_to_modify: List of files that will change (from tech-spec "Developer Resources → File Paths Reference")
- test_locations: Where tests will be added (from tech-spec "Developer Resources → Testing Locations")
- story_points: Estimated effort (1/2/3/5)
- time_estimate: Days/hours estimate
- dependencies: Internal/external dependencies (from tech-spec "Development Context" and "Integration Points")
- existing_code_references: Code to reference (from tech-spec "Development Context → Relevant Existing Code" and "Key Code Locations")
- architecture_references: Links to tech-spec.md sections
</template-output>
</for-each>
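The naming scheme above (story-{epic_slug}-N.md under {sprint_artifacts}) can be sketched as a small helper. This is purely illustrative — the workflow engine resolves these variables itself, and the function name is hypothetical:

```python
def story_paths(sprint_artifacts, epic_slug, story_count):
    """Generate story file paths following the story-{epic-slug}-N.md convention."""
    return [
        f"{sprint_artifacts}/story-{epic_slug}-{n}.md"
        for n in range(1, story_count + 1)
    ]

print(story_paths("artifacts", "icon-reliability", 2))
# → ['artifacts/story-icon-reliability-1.md', 'artifacts/story-icon-reliability-2.md']
```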
<critical>Generate exactly {story_count} story files (1-5, per the story_count decided earlier)</critical>
</step>
<step n="5" goal="Create story map and implementation sequence with dependency validation">
<critical>Stories MUST be ordered so earlier stories don't depend on later ones</critical>
<critical>Each story must have CLEAR, TESTABLE acceptance criteria</critical>
<action>Analyze dependencies between stories:
**Dependency Rules:**
1. Infrastructure/setup → Feature implementation → Testing/polish
2. Database changes → API changes → UI changes
3. Backend services → Frontend components
4. Core functionality → Enhancement features
5. No story can depend on a later story!
**Validate Story Sequence:**
For each story N, check:
- Does it require anything from Story N+1, N+2, etc.? ❌ INVALID
- Does it only use things from Story 1...N-1? ✅ VALID
- Can it be implemented independently or using only prior stories? ✅ VALID
If invalid dependencies found, REORDER stories!
</action>
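The forward-dependency rule above can be checked mechanically. Here is a minimal sketch under an assumed representation — an ordered list of (story name, set of dependency names) pairs, which is not part of the workflow itself:

```python
def validate_sequence(stories):
    """Check that no story depends on a later story.

    `stories` is an ordered list of (name, set_of_dependency_names).
    Returns a list of violation messages; an empty list means the order is valid.
    """
    seen = set()
    violations = []
    for name, deps in stories:
        forward = deps - seen  # dependencies not yet satisfied by earlier stories
        if forward:
            violations.append(f"{name} depends on later stories: {sorted(forward)}")
        seen.add(name)
    return violations

# Valid: Story 2 depends only on the earlier Story 1
print(validate_sequence([("Story 1", set()), ("Story 2", {"Story 1"})]))  # → []
```

If violations come back non-empty, reorder the offending stories and re-run the check.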
<action>Generate visual story map showing epic → stories hierarchy with dependencies</action>
<action>Calculate total story points across all stories</action>
<action>Estimate timeline based on total points (1-2 points per day typical)</action>
<action>Define implementation sequence with explicit dependency notes</action>
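The point-to-timeline arithmetic can be sketched as follows. The 1.5 points/day burn rate is an assumed midpoint of the 1-2 points per day noted above, not a fixed rule:

```python
import math

def estimate_timeline(total_points, points_per_day=1.5):
    """Rough timeline estimate from total story points (assumed burn rate)."""
    days = math.ceil(total_points / points_per_day)
    weeks = max(1, math.ceil(days / 5))  # 5 working days per week
    return days, weeks

print(estimate_timeline(5))  # → (4, 1): roughly 4 working days, fits in 1 week
```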
<example>
## Story Map
```
Epic: Icon Reliability
├── Story 1: Build Icon Infrastructure (3 points)
│ Dependencies: None (foundational work)
└── Story 2: Test and Deploy Icons (2 points)
Dependencies: Story 1 (requires infrastructure)
```
**Total Story Points:** 5
**Estimated Timeline:** 1 sprint (1 week)
## Implementation Sequence
1. **Story 1** → Build icon infrastructure (setup, download, configure)
- Dependencies: None
- Deliverable: Icon files downloaded, organized, accessible
2. **Story 2** → Test and deploy (depends on Story 1)
- Dependencies: Story 1 must be complete
- Deliverable: Icons verified, tested, deployed to production
**Dependency Validation:** ✅ Valid sequence - no forward dependencies
</example>
<template-output file="{output_folder}/epics.md">story_summaries</template-output>
<template-output file="{output_folder}/epics.md">story_map</template-output>
<template-output file="{output_folder}/epics.md">total_points</template-output>
<template-output file="{output_folder}/epics.md">estimated_timeline</template-output>
<template-output file="{output_folder}/epics.md">implementation_sequence</template-output>
</step>
<step n="6" goal="Update status and populate story backlog">
<invoke-workflow path="{project-root}/{bmad_folder}/bmm/workflows/workflow-status">
<param>mode: update</param>
<param>action: complete_workflow</param>
<param>workflow_name: tech-spec</param>
<param>populate_stories_from: {epics_output_file}</param>
</invoke-workflow>
<check if="success == true">
<output>✅ Status updated! Loaded {{total_stories}} stories from epics.</output>
<output>Next: {{next_workflow}} ({{next_agent}} agent)</output>
</check>
<check if="success == false">
<output>⚠️ Status update failed: {{error}}</output>
</check>
</step>
<step n="7" goal="Auto-validate story quality and sequence">
<critical>Auto-run validation - NOT optional!</critical>
<action>Running automatic story validation...</action>
<action>**Validate Story Sequence (CRITICAL):**
For each story, check:
1. Does Story N depend on Story N+1 or later? ❌ FAIL - Reorder required!
2. Are dependencies clearly documented? ✅ PASS
3. Can stories be implemented in order 1→2→3? ✅ PASS
If sequence validation FAILS:
- Identify the problem dependencies
- Propose new ordering
- Ask user to confirm reordering
</action>
<action>**Validate Acceptance Criteria Quality:**
For each story's AC, check:
1. Is it numbered (AC #1, AC #2, etc.)? ✅ Required
2. Is it specific and testable? ✅ Required
3. Does it use Given/When/Then or equivalent? ✅ Recommended
4. Are all success conditions covered? ✅ Required
Count vague AC (contains "works", "good", "fast", "well"):
- 0 vague AC: ✅ EXCELLENT
- 1-2 vague AC: ⚠️ WARNING - Should improve
- 3+ vague AC: ❌ FAIL - Must improve
</action>
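The vague-AC count described above could be automated along these lines. The term list and thresholds mirror this step; the function itself is an illustrative sketch, not part of the workflow:

```python
import re

VAGUE_TERMS = ("works", "good", "fast", "well")  # terms this step flags as vague

def rate_acceptance_criteria(criteria):
    """Count vague acceptance criteria and map the count to a verdict."""
    vague = [
        ac for ac in criteria
        if any(re.search(rf"\b{t}\b", ac, re.IGNORECASE) for t in VAGUE_TERMS)
    ]
    if not vague:
        verdict = "EXCELLENT"
    elif len(vague) <= 2:
        verdict = "WARNING"
    else:
        verdict = "FAIL"
    return len(vague), verdict

print(rate_acceptance_criteria(["System performs well", "AC #1: Given a valid email, when user submits, then the account is created"]))
# → (1, 'WARNING')
```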
<action>**Validate Story Completeness:**
1. Do all stories map to tech spec tasks? ✅ Required
2. Do story points align with tech spec estimates? ✅ Recommended
3. Are dependencies clearly noted? ✅ Required
4. Does each story have testable AC? ✅ Required
</action>
<action>Generate validation report</action>
<check if="sequence validation fails OR AC quality fails">
<output>❌ **Story Validation Failed:**
{{issues_found}}
**Recommended Fixes:**
{{recommended_fixes}}
Shall I fix these issues? (yes/no)</output>
<ask>Apply fixes? (yes/no)</ask>
<check if="yes">
<action>Apply fixes (reorder stories, rewrite vague AC, add missing details)</action>
<action>Re-validate</action>
<output>✅ Validation passed after fixes!</output>
</check>
</check>
<check if="validation passes">
<output>✅ **Story Validation Passed!**
**Sequence:** ✅ Valid (no forward dependencies)
**AC Quality:** ✅ All specific and testable
**Completeness:** ✅ All tech spec tasks covered
**Dependencies:** ✅ Clearly documented
Stories are implementation-ready!</output>
</check>
</step>
<step n="8" goal="Finalize and provide user guidance">
<action>Confirm all validation passed</action>
<action>Verify total story points align with tech spec time estimates</action>
<action>Confirm epic and stories are complete</action>
**Quick-Flow Planning Complete!**
**Epic:** {{epic_title}}
**Total Stories:** {{story_count}}
**Total Story Points:** {{total_points}}
**Estimated Timeline:** {{estimated_timeline}}
**Generated Artifacts:**
- `tech-spec.md` → Technical source of truth
- `epics.md` → Epic and story summary
- `story-{epic_slug}-1.md` → First story (ready for implementation)
{{#if story_count > 1}}
- `story-{epic_slug}-2.md` through `story-{epic_slug}-{{story_count}}.md` → Remaining stories
{{/if}}
**Story Location:** `{sprint_artifacts}/`
**Next Steps - Iterative Implementation:**
**🎯 RECOMMENDED - Direct to Development (quick-flow):**
Since the tech-spec is now CONTEXT-RICH with:
- ✅ Brownfield codebase analysis (if applicable)
- ✅ Framework and library details with exact versions
- ✅ Existing patterns and code references
- ✅ Complete file paths and integration points
- ✅ Dependencies clearly mapped
**You can skip story-context for most quick-flow stories!**
**1. Start with Story 1:**
a. Load DEV agent: `{project-root}/{bmad_folder}/bmm/agents/dev.md`
b. Run `dev-story` workflow (select story-{epic_slug}-1.md)
c. Tech-spec provides all context needed
d. Implement story 1
**2. After Story 1 Complete:**
- Repeat for story-{epic_slug}-2.md
- Reference completed story 1 in your work
**3. Continue Through Remaining Stories:**
- Repeat for each remaining story through story-{epic_slug}-{{story_count}}.md
- Quick-flow feature complete!
**Option B - Generate Additional Context (optional):**
Only needed for extremely complex multi-story dependencies:
1. Load SM agent: `{project-root}/{bmad_folder}/bmm/agents/sm.md`
2. Run `story-context` workflow for complex stories
3. Then load DEV agent and run `dev-story`
**Progress Tracking:**
- All decisions logged in: `bmm-workflow-status.yaml`
- Next action clearly identified
<ask>Ready to proceed? Choose your path:
1. Go directly to dev-story for story 1 (RECOMMENDED - tech-spec has all context)
2. Generate additional story context first (for complex dependencies)
3. Exit for now
Select option (1-3):</ask>
</step>
</workflow>


@@ -1,4 +1,4 @@
-# Tech-Spec Workflow - Context-Aware Technical Planning (Level 0-1)
+# Tech-Spec Workflow - Context-Aware Technical Planning (quick-flow)
 <workflow>
@@ -6,8 +6,8 @@
 <critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
 <critical>Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}</critical>
 <critical>Generate all documents in {document_output_language}</critical>
-<critical>This is for Level 0-1 projects - tech-spec with context-rich story generation</critical>
-<critical>Level 0: tech-spec + single user story | Level 1: tech-spec + epic/stories</critical>
+<critical>This is for quick-flow efforts - tech-spec with context-rich story generation</critical>
+<critical>Quick Flow: tech-spec + epic with 1-5 stories (always generates epic structure)</critical>
 <critical>LIVING DOCUMENT: Write to tech-spec.md continuously as you discover - never wait until the end</critical>
 <critical>CONTEXT IS KING: Gather ALL available context before generating specs</critical>
 <critical>DOCUMENT OUTPUT: Technical, precise, definitive. Specific versions only. User skill level ({user_skill_level}) affects conversation style ONLY, not document content.</critical>
@@ -26,34 +26,34 @@
 <output>Great! Let's quickly configure your project...</output>
-<ask>What level is this project?
-**Level 0** - Single atomic change (bug fix, small isolated feature, single file change)
-→ Generates: 1 tech-spec + 1 story
+<ask>How many user stories do you think this work requires?
+**Single Story** - Simple change (bug fix, small isolated feature, single file change)
+→ Generates: tech-spec + epic (minimal) + 1 story
 → Example: "Fix login validation bug" or "Add email field to user form"
-**Level 1** - Coherent feature (multiple related changes, small feature set)
-→ Generates: 1 tech-spec + 1 epic + 2-3 stories
+**Multiple Stories (2-5)** - Coherent feature (multiple related changes, small feature set)
+→ Generates: tech-spec + epic (detailed) + 2-5 stories
 → Example: "Add OAuth integration" or "Build user profile page"
-Enter **0** or **1**:</ask>
-<action>Capture user response as project_level (0 or 1)</action>
-<action>Validate: If not 0 or 1, ask again</action>
-<ask>Is this a **greenfield** (new/empty codebase) or **brownfield** (existing codebase) project?
-**Greenfield** - Starting fresh, no existing code
-**Brownfield** - Adding to or modifying existing code
+Enter **1** for single story, or **2-5** for number of stories you estimate</ask>
+<action>Capture user response as story_count (1-5)</action>
+<action>Validate: If not 1-5, ask for clarification. If > 5, suggest using full BMad Method instead</action>
+<ask if="not already known greenfield vs brownfield">Is this a **greenfield** (new/empty codebase) or **brownfield** (existing codebase) project?
+**Greenfield** - Starting fresh, no existing code aside from starter templates
+**Brownfield** - Adding to or modifying existing functional code or project
 Enter **greenfield** or **brownfield**:</ask>
 <action>Capture user response as field_type (greenfield or brownfield)</action>
 <action>Validate: If not greenfield or brownfield, ask again</action>
 <output>Perfect! Running as:
-- **Project Level:** {{project_level}}
+- **Story Count:** {{story_count}} {{#if story_count == 1}}story (minimal epic){{else}}stories (detailed epic){{/if}}
 - **Field Type:** {{field_type}}
 - **Mode:** Standalone (no status file tracking)
@@ -65,21 +65,17 @@ Let's build your tech-spec!</output>
 </check>
 <check if="status file found">
-<action>Load the FULL file: {output_folder}/bmm-workflow-status.yaml</action>
+<action>Load the FULL file: {workflow-status}</action>
 <action>Parse workflow_status section</action>
 <action>Check status of "tech-spec" workflow</action>
-<action>Get project_level from YAML metadata</action>
+<action>Get selected_track from YAML metadata indicating this is quick-flow-greenfield or quick-flow-brownfield</action>
 <action>Get field_type from YAML metadata (greenfield or brownfield)</action>
 <action>Find first non-completed workflow (next expected workflow)</action>
-<check if="project_level >= 2">
-<output>**Incorrect Workflow for Level {{project_level}}**
-Tech-spec is for Level 0-1 projects. Level 2-4 should use PRD workflow.
-**Correct workflow:** `create-prd` (PM agent)
+<check if="selected_track is NOT quick-flow-greenfield AND NOT quick-flow-brownfield">
+<output>**Incorrect Workflow for {{selected_track}}**
+Tech-spec is for quick-flow projects. **Correct workflow:** `create-prd` (PM agent). You should exit at this point, unless you want to force-run this workflow.
 </output>
-<action>Exit and redirect to prd</action>
 </check>
 <check if="tech-spec status is file path (already completed)">
@@ -128,8 +124,8 @@ Search for and load (using dual-strategy: whole first, then sharded):
 - If found: Load completely and extract key context
 2. **Research Documents:**
-- Search pattern: {output-folder}/\_research\*.md
-- Sharded: {output-folder}/\_research\*/index.md
+- Search pattern: {output-folder}/_research*.md
+- Sharded: {output-folder}/_research*/index.md
 - If found: Load completely and extract insights
 3. **Document-Project Output (CRITICAL for brownfield):**
@@ -137,109 +133,49 @@ Search for and load (using dual-strategy: whole first, then sharded):
 - If found: This is the brownfield codebase map - load ALL shards!
 - Extract: File structure, key modules, existing patterns, naming conventions
-Create a summary of what was found:
+Create a summary of what was found and ask user if there are other documents or information to consider before proceeding:
 - List of loaded documents
 - Key insights from each
 - Brownfield vs greenfield determination
 </action>
-<action>**PHASE 2: Detect Project Type from Setup Files**
-Search for project setup files in {project-root}:
-**Node.js/JavaScript:**
-- package.json → Parse for framework, dependencies, scripts
-**Python:**
-- requirements.txt → Parse for packages
-- pyproject.toml → Parse for modern Python projects
-- Pipfile → Parse for pipenv projects
-**Ruby:**
-- Gemfile → Parse for gems and versions
-**Java:**
-- pom.xml → Parse for Maven dependencies
-- build.gradle → Parse for Gradle dependencies
-**Go:**
-- go.mod → Parse for modules
-**Rust:**
-- Cargo.toml → Parse for crates
-**PHP:**
-- composer.json → Parse for packages
-If setup file found, extract:
+<action>**PHASE 2: Intelligently Detect Project Stack**
+Use your comprehensive knowledge as a coding-capable LLM to analyze the project:
+**Discover Setup Files:**
+- Search {project-root} for dependency manifests (package.json, requirements.txt, Gemfile, go.mod, Cargo.toml, composer.json, pom.xml, build.gradle, pyproject.toml, etc.)
+- Adapt to ANY project type - you know the ecosystem conventions
+**Extract Critical Information:**
 1. Framework name and EXACT version (e.g., "React 18.2.0", "Django 4.2.1")
-2. All production dependencies with versions
-3. Dev dependencies and tools (TypeScript, Jest, ESLint, pytest, etc.)
-4. Available scripts (npm run test, npm run build, etc.)
-5. Project type indicators (is it an API? Web app? CLI tool?)
-6. **Test framework** (Jest, pytest, RSpec, JUnit, Mocha, etc.)
-**Check for Outdated Dependencies:**
-<check if="major framework version > 2 years old">
-<action>Use WebSearch to find current recommended version</action>
-<example>
-If package.json shows "react": "16.14.0" (from 2020):
-<WebSearch query="React latest stable version 2025 migration guide" />
-Note both current version AND migration complexity in stack summary
-</example>
-</check>
+2. All production dependencies with specific versions
+3. Dev tools and testing frameworks (Jest, pytest, ESLint, etc.)
+4. Available build/test scripts
+5. Project type (web app, API, CLI, library, etc.)
+**Assess Currency:**
+- Identify if major dependencies are outdated (>2 years old)
+- Use WebSearch to find current recommended versions if needed
+- Note migration complexity in your summary
 **For Greenfield Projects:**
 <check if="field_type == greenfield">
-<action>Use WebSearch for current best practices AND starter templates</action>
-<example>
-<WebSearch query="{detected_framework} best practices {current_year}" />
-<WebSearch query="{detected_framework} recommended packages {current_year}" />
-<WebSearch query="{detected_framework} official starter template {current_year}" />
-<WebSearch query="{project_type} {detected_framework} boilerplate {current_year}" />
-</example>
-**RECOMMEND STARTER TEMPLATES:**
-Look for official or well-maintained starter templates:
-- React: Create React App, Vite, Next.js starter
-- Vue: create-vue, Nuxt starter
-- Python: cookiecutter templates, FastAPI template
-- Node.js: express-generator, NestJS CLI
-- Ruby: Rails new, Sinatra template
-- Go: go-blueprint, standard project layout
-Benefits of starters:
-- ✅ Modern best practices baked in
-- ✅ Proper project structure
-- ✅ Build tooling configured
-- ✅ Testing framework set up
-- ✅ Linting/formatting included
-- ✅ Faster time to first feature
-**Present recommendations to user:**
-"I found these starter templates for {{framework}}:
-1. {{official_template}} - Official, well-maintained
-2. {{community_template}} - Popular community template
-These provide {{benefits}}. Would you like to use one? (yes/no/show-me-more)"
-<action>Capture user preference on starter template</action>
-<action>If yes, include starter setup in implementation stack</action>
+<action>Use WebSearch to discover current best practices and official starter templates</action>
+<action>Recommend appropriate starters based on detected framework (or user's intended stack)</action>
+<action>Present benefits conversationally: setup time saved, modern patterns, testing included</action>
+<ask>Would you like to use a starter template? (yes/no/show-me-options)</ask>
+<action>Capture preference and include in implementation stack if accepted</action>
 </check>
-Store this as {{project_stack_summary}}
+**Trust Your Intelligence:**
+You understand project ecosystems deeply. Adapt your analysis to any stack - don't be constrained by examples. Extract what matters for developers.
+Store comprehensive findings as {{project_stack_summary}}
 </action>
 <action>**PHASE 3: Brownfield Codebase Reconnaissance** (if applicable)
@@ -357,106 +293,57 @@ This gives me a solid foundation for creating a context-rich tech spec!"
 <step n="2" goal="Conversational discovery of the change/feature">
-<action>Now engage in natural conversation to understand what needs to be built.
-Adapt questioning based on project_level:
+<action>Engage {user_name} in natural, adaptive conversation to deeply understand what needs to be built.
+**Discovery Approach:**
+Adapt your questioning style to the complexity:
+- For single-story changes: Focus on the specific problem, location, and approach
+- For multi-story features: Explore user value, integration strategy, and scope boundaries
+**Core Discovery Goals (accomplish through natural dialogue):**
+1. **The Problem/Need**
+- What user or technical problem are we solving?
+- Why does this matter now?
+- What's the impact if we don't do this?
+2. **The Solution Approach**
+- What's the proposed solution?
+- How should this work from a user/system perspective?
+- What alternatives were considered?
+3. **Integration & Location**
+- <check if="brownfield">Where does this fit in the existing codebase?</check>
+- What existing code/patterns should we reference or follow?
+- What are the integration points?
+4. **Scope Clarity**
+- What's IN scope for this work?
+- What's explicitly OUT of scope (future work, not needed)?
+- If multiple stories: What's MVP vs enhancement?
+5. **Constraints & Dependencies**
+- Technical limitations or requirements?
+- Dependencies on other systems, APIs, or services?
+- Performance, security, or compliance considerations?
+6. **Success Criteria**
+- How will we know this is done correctly?
+- What does "working" look like?
+- What edge cases matter?
+**Conversation Style:**
+- Be warm and collaborative, not interrogative
+- Ask follow-up questions based on their responses
+- Help them think through implications
+- Reference context from Phase 1 (existing code, stack, patterns)
+- Adapt depth to {{story_count}} complexity
+Synthesize discoveries into clear, comprehensive specifications.
 </action>
-<check if="project_level == 0">
-<action>**Level 0: Atomic Change Discovery**
-Engage warmly and get specific details:
-"Let's talk about this change. I need to understand it deeply so the tech-spec gives developers everything they need."
-**Core Questions (adapt naturally, don't interrogate):**
-1. "What problem are you solving?"
-- Listen for: Bug fix, missing feature, technical debt, improvement
-- Capture as {{change_type}}
-2. "Where in the codebase should this live?"
-- If brownfield: "I see you have [existing modules]. Does this fit in any of those?"
-- If greenfield: "Let's figure out the right structure for this."
-- Capture affected areas
-3. <check if="brownfield">
-"Are there existing patterns or similar code I should follow?"
-- Look for consistency requirements
-- Identify reference implementations
-</check>
-4. "What's the expected behavior after this change?"
-- Get specific success criteria
-- Understand edge cases
-5. "Any constraints or gotchas I should know about?"
-- Technical limitations
-- Dependencies on other systems
-- Performance requirements
-**Discovery Goals:**
-- Understand the WHY (problem)
-- Understand the WHAT (solution)
-- Understand the WHERE (location in code)
-- Understand the HOW (approach and patterns)
-Synthesize into clear problem statement and solution overview.
-</action>
-</check>
-<check if="project_level == 1">
-<action>**Level 1: Feature Discovery**
-Engage in deeper feature exploration:
-"This is a Level 1 feature - coherent but focused. Let's explore what you're building."
-**Core Questions (natural conversation):**
-1. "What user need are you addressing?"
-- Get to the core value
-- Understand the user's pain point
-2. "How should this integrate with existing code?"
-- If brownfield: "I saw [existing features]. How does this relate?"
-- Identify integration points
-- Note dependencies
-3. <check if="brownfield AND similar features exist">
-"Can you point me to similar features I can reference for patterns?"
-- Get example implementations
-- Understand established patterns
-</check>
-4. "What's IN scope vs OUT of scope for this feature?"
-- Define clear boundaries
-- Identify MVP vs future enhancements
-- Keep it focused (remind: Level 1 = 2-3 stories max)
-5. "Are there dependencies on other systems or services?"
-- External APIs
-- Databases
-- Third-party libraries
-6. "What does success look like?"
-- Measurable outcomes
-- User-facing impact
-- Technical validation
-**Discovery Goals:**
-- Feature purpose and value
-- Integration strategy
-- Scope boundaries
-- Success criteria
-- Dependencies
-Synthesize into comprehensive feature description.
-</action>
-</check>
 <template-output>problem_statement</template-output>
 <template-output>solution_overview</template-output>
 <template-output>change_type</template-output>
@@ -730,14 +617,14 @@ Pre-implementation checklist:
 **Implementation Steps:**
 Step-by-step breakdown:
-For Level 0:
+For single-story changes:
 1. [Step 1 with specific file and action]
 2. [Step 2 with specific file and action]
 3. [Write tests]
 4. [Verify acceptance criteria]
-For Level 1:
+For multi-story features:
 Organize by story/phase:
 1. Phase 1: [Foundation work]
@@ -1004,21 +891,17 @@ Tech-spec is high quality and ready for story generation!</output>
 </step>
-<step n="5" goal="Generate context-rich user stories">
-<action>Now generate stories that reference the rich tech-spec context</action>
-<check if="project_level == 0">
-<action>Invoke {installed_path}/instructions-level0-story.md to generate single user story</action>
-<action>Story will leverage tech-spec.md as primary context</action>
-<action>Developers can skip story-context workflow since tech-spec is comprehensive</action>
-</check>
-<check if="project_level == 1">
-<action>Invoke {installed_path}/instructions-level1-stories.md to generate epic and stories</action>
-<action>Stories will reference tech-spec.md for all technical details</action>
-<action>Epic provides organization, tech-spec provides implementation context</action>
-</check>
+<step n="5" goal="Generate epic and context-rich stories">
+<action>Invoke unified story generation workflow: {instructions_generate_stories}</action>
+<action>This will generate:
+- **epics.md** - Epic structure (minimal for 1 story, detailed for multiple)
+- **story-{epic-slug}-N.md** - Story files (where N = 1 to {{story_count}})
+All stories reference tech-spec.md as primary context - comprehensive enough that developers can often skip story-context workflow.
+</action>
 </step>
@@ -1028,22 +911,13 @@ Tech-spec is high quality and ready for story generation!</output>
 **Deliverables Created:**
-<check if="project_level == 0">
 - ✅ **tech-spec.md** - Context-rich technical specification
 - Includes: brownfield analysis, framework details, existing patterns
-- ✅ **story-{slug}.md** - Implementation-ready user story
-- References tech-spec as primary context
-</check>
-<check if="project_level == 1">
-- ✅ **tech-spec.md** - Context-rich technical specification
-- ✅ **epics.md** - Epic and story organization
-- ✅ **story-{epic-slug}-1.md** - First story
-- ✅ **story-{epic-slug}-2.md** - Second story
-{{#if story_3}}
-- ✅ **story-{epic-slug}-3.md** - Third story
-{{/if}}
-</check>
+- ✅ **epics.md** - Epic structure{{#if story_count == 1}} (minimal for single story){{else}} with {{story_count}} stories{{/if}}
+- ✅ **story-{epic-slug}-1.md** - First story{{#if story_count > 1}}
+- ✅ **story-{epic-slug}-2.md** - Second story{{/if}}{{#if story_count > 2}}
+- ✅ **story-{epic-slug}-3.md** - Third story{{/if}}{{#if story_count > 3}}
+- ✅ **Additional stories** through story-{epic-slug}-{{story_count}}.md{{/if}}
 **What Makes This Tech-Spec Special:**
@ -1057,55 +931,41 @@ The tech-spec is comprehensive enough to serve as the primary context document:
**Next Steps:** **Next Steps:**
<check if="project_level == 0"> **🎯 Recommended Path - Direct to Development:**
**For Single Story (Level 0):**
**Option A - With Story Context (for complex changes):** Since the tech-spec is CONTEXT-RICH, you can often skip story-context generation!
1. Ask SM agent to run `create-story-context` for the story {{#if story_count == 1}}
- This generates additional XML context if needed **For Your Single Story:**
2. Then ask DEV agent to run `dev-story` to implement
-**Option B - Direct to Dev (most Level 0):**
-1. Ask DEV agent to run `dev-story` directly
+1. Ask DEV agent to run `dev-story`
+   - Select story-{epic-slug}-1.md
    - Tech-spec provides all the context needed!
-   - Story is ready to implement
-💡 **Tip:** Most Level 0 changes don't need separate story context since tech-spec is comprehensive!
+💡 **Optional:** Only run `story-context` (SM agent) if this is unusually complex
-</check>
+{{else}}
+**For Your {{story_count}} Stories - Iterative Approach:**
-<check if="project_level == 1">
-**For Multiple Stories (Level 1):**
+1. **Start with Story 1:**
+   - Ask DEV agent to run `dev-story`
+   - Select story-{epic-slug}-1.md
+   - Tech-spec provides context
-**Recommended: Story-by-Story Approach**
+2. **After Story 1 Complete:**
+   - Repeat for story-{epic-slug}-2.md
+   - Continue through story {{story_count}}
-For the **first story** ({{first_story_name}}):
+💡 **Alternative:** Use `sprint-planning` (SM agent) to organize all stories as a coordinated sprint
-**Option A - With Story Context (recommended for first story):**
+💡 **Optional:** Run `story-context` (SM agent) for complex stories needing additional context
+{{/if}}
-1. Ask SM agent to run `create-story-context` for story 1
-   - Generates focused context for this specific story
-2. Then ask DEV agent to run `dev-story` to implement story 1
-**Option B - Direct to Dev:**
-1. Ask DEV agent to run `dev-story` for story 1
-   - Tech-spec has most context needed
-After completing story 1, repeat for stories 2 and 3.
-**Alternative: Sprint Planning Approach**
-- If managing multiple stories as a sprint, ask SM agent to run `sprint-planning`
-- This organizes all stories for coordinated implementation
-</check>
 **Your Tech-Spec:**
 - 📄 Saved to: `{output_folder}/tech-spec.md`
+- Epic & Stories: `{output_folder}/epics.md` + `{sprint_artifacts}/`
 - Contains: All context, decisions, patterns, and implementation guidance
-- Ready for: Direct development or story context generation
+- Ready for: Direct development!
 The tech-spec is your single source of truth! 🚀
 </output>

workflow.yaml

@@ -1,6 +1,6 @@
 # Technical Specification
 name: tech-spec
-description: "Technical specification workflow for Level 0 projects (single atomic changes). Creates focused tech spec for bug fixes, single endpoint additions, or small isolated changes. Tech-spec only - no PRD needed."
+description: "Technical specification workflow for quick-flow projects. Creates focused tech spec and generates epic + stories (1 story for simple changes, 2-5 stories for features). Tech-spec only - no PRD needed."
 author: "BMad"

 # Critical variables from config
@@ -13,10 +13,11 @@ document_output_language: "{config_source}:document_output_language"
 user_skill_level: "{config_source}:user_skill_level"
 date: system-generated
+workflow-status: "{output_folder}/bmm-workflow-status.yaml"

 # Runtime variables (captured during workflow execution)
-project_level: runtime-captured
-project_type: runtime-captured
-development_context: runtime-captured
+story_count: runtime-captured
+epic_slug: runtime-captured
 change_type: runtime-captured
 field_type: runtime-captured
@@ -25,23 +26,15 @@ installed_path: "{project-root}/{bmad_folder}/bmm/workflows/2-plan-workflows/tec
 instructions: "{installed_path}/instructions.md"
 template: "{installed_path}/tech-spec-template.md"

-# Story generation instructions (invoked based on level)
-instructions_level0_story: "{installed_path}/instructions-level0-story.md"
-instructions_level1_stories: "{installed_path}/instructions-level1-stories.md"
+# Story generation (unified approach - always generates epic + stories)
+instructions_generate_stories: "{installed_path}/instructions-generate-stories.md"

-# Templates
 user_story_template: "{installed_path}/user-story-template.md"
 epics_template: "{installed_path}/epics-template.md"

 # Output configuration
 default_output_file: "{output_folder}/tech-spec.md"
-user_story_file: "{output_folder}/user-story.md"
 epics_file: "{output_folder}/epics.md"
+sprint_artifacts: "{output_folder}/sprint_artifacts"

-# Recommended input documents (optional for Level 0)
-recommended_inputs:
-  - bug_report: "Bug description or issue ticket"
-  - feature_request: "Brief feature description"

 # Smart input file references - handles both whole docs and sharded docs
 # Priority: Whole document first, then sharded version
@@ -49,31 +42,12 @@ input_file_patterns:
   product_brief:
     whole: "{output_folder}/*brief*.md"
     sharded: "{output_folder}/*brief*/index.md"
   research:
     whole: "{output_folder}/*research*.md"
     sharded: "{output_folder}/*research*/index.md"
   document_project:
     sharded: "{output_folder}/docs/index.md"
     standalone: true

-web_bundle:
-  name: "tech-spec-sm"
-  description: "Technical specification workflow for Level 0-1 projects. Creates focused tech spec with story generation. Level 0: tech-spec + user story. Level 1: tech-spec + epic/stories."
-  author: "BMad"
-  instructions: "{bmad_folder}/bmm/workflows/2-plan-workflows/tech-spec/instructions.md"
-  web_bundle_files:
-    # Core workflow files
-    - "{bmad_folder}/bmm/workflows/2-plan-workflows/tech-spec/instructions.md"
-    - "{bmad_folder}/bmm/workflows/2-plan-workflows/tech-spec/instructions-level0-story.md"
-    - "{bmad_folder}/bmm/workflows/2-plan-workflows/tech-spec/instructions-level1-stories.md"
-    - "{bmad_folder}/bmm/workflows/2-plan-workflows/tech-spec/tech-spec-template.md"
-    - "{bmad_folder}/bmm/workflows/2-plan-workflows/tech-spec/user-story-template.md"
-    - "{bmad_folder}/bmm/workflows/2-plan-workflows/tech-spec/epics-template.md"
-    # Task dependencies (referenced in instructions.md)
-    - "{bmad_folder}/core/tasks/workflow.xml"
-    - "{bmad_folder}/core/tasks/adv-elicit.xml"
-    - "{bmad_folder}/core/tasks/adv-elicit-methods.csv"
+web_bundle: false
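The `input_file_patterns` section retained in the workflow.yaml declares a whole-document-first lookup. The resolution logic itself lives in the workflow engine, not in this diff; as a hypothetical Python sketch of the priority rule (the `resolve_input` helper is invented for illustration):

```python
import glob
from typing import Optional

def resolve_input(patterns: dict) -> Optional[str]:
    """Return the first matching input document, preferring a whole
    document over its sharded index, per the declared priority."""
    for key in ("whole", "sharded"):  # whole-document pattern wins
        pattern = patterns.get(key)
        if pattern:
            matches = sorted(glob.glob(pattern))
            if matches:
                return matches[0]
    return None
```

So `{output_folder}/*brief*.md` is consulted before `{output_folder}/*brief*/index.md`, and the sharded index is only used when no whole document exists.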