diff --git a/CHANGELOG.md b/CHANGELOG.md index 7266b0c1..b1608fc8 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -260,7 +260,7 @@ More documentation updates coming soon. **Workflow & Variable Fixes:** -- **Variable Naming**: Standardized from {project_root} to {project-root} across CIS, BMGD, and BMM modules +- **Variable Naming**: Standardized from {project_root} to {project-root} across CIS and BMM modules - **Workflow References**: Fixed broken .yaml → .md workflow references - **Advanced Elicitation Variables**: Fixed undefined variables in brainstorming - **Dependency Format**: Corrected dependency format and added missing frontmatter @@ -357,7 +357,6 @@ Located in `src/modules/bmb/workflows/agent/data/`: - **260+ Files Updated**: Comprehensive language integration across: - Core workflows (brainstorming, party mode, advanced elicitation) - BMB workflows (create-agent, create-module, create-workflow, edit-workflow, etc.) - - BMGD workflows (game-brief, gdd, narrative, game-architecture, etc.) - BMM workflows (research, create-ux-design, prd, create-architecture, etc.) - **Tested Languages**: Verified working with Spanish and Pirate Speak - **Natural Conversations**: AI agents respond in configured language throughout workflow @@ -392,7 +391,7 @@ Located in `src/modules/bmb/workflows/agent/data/`: - **index.md** - Core concepts introduction - **agents.md** (93 lines) - Understanding agents in BMAD - **workflows.md** (89 lines) - Understanding workflows in BMAD -- **modules.md** (76 lines) - Understanding modules (BMM, BMGD, CIS, BMB, Core) +- **modules.md** (76 lines) - Understanding modules (BMM, CIS, BMB, Core) - **installing/index.md** (77 lines) - Installation guide - **installing/upgrading.md** (144 lines) - Upgrading guide - **bmad-customization/index.md** - Customization overview @@ -467,13 +466,12 @@ Located in `src/modules/bmb/workflows/agent/data/`: - **Compound Menu Triggers**: All agents now use consistent 2-letter compound trigger format (e.g., `bmm-rd`, `bmm-ca`) - **Improved UX**: Shorter, more memorable command shortcuts across all modules -- **Module Prefixing**: Menu items properly scoped by module prefix (bmm-, bmgd-, cis-, bmb-) +- **Module Prefixing**: Menu items properly scoped by module prefix (bmm-, cis-, bmb-) - **Universal Pattern**: All 22 agents updated to follow the same menu structure **Agent Updates:** - **BMM Module**: 9 agents with standardized menus (pm, analyst, architect, dev, ux-designer, tech-writer, sm, tea, quick-flow-solo-dev) -- **BMGD Module**: 6 agents with standardized menus (game-architect, game-designer, game-dev, game-qa, game-scrum-master, game-solo-dev) - **CIS Module**: 6 agents with standardized menus (innovation-strategist, design-thinking-coach, creative-problem-solver, brainstorming-coach, presentation-master, storyteller) - **BMB Module**: 3 agents with standardized menus (bmad-builder, agent-builder, module-builder, workflow-builder) - **Core Module**: BMAD Master agent updated with consistent menu patterns @@ -497,7 +495,7 @@ Located in `src/modules/bmb/workflows/agent/data/`: - **Planning Artifacts**: Ephemeral planning documents (prd.md, product-brief.md, ux-design.md, architecture.md) - **Documentation**: Long-term project documentation (separate from planning) -- **Module Configuration**: BMM and BMGD modules updated with proper default paths +- **Module Configuration**: BMM modules updated with proper default paths ### 🪟 Windows Installer Fixes @@ -569,7 +567,7 @@ Located in `src/modules/bmb/workflows/agent/data/`: 
**Revolutionary Content Organization:** -- **Phase 1-4 Path Segregation**: Implemented new BM paths across all BMM and BMGD workflows +- **Phase 1-4 Path Segregation**: Implemented new BM paths across all BMM workflows - **Planning vs Implementation Artifacts**: Separated ephemeral Phase 4 artifacts from permanent documentation - **Optimized File Organization**: Better structure differentiating planning artifacts from long-term project documentation - **Backward Compatible**: Existing installations continue working while preparing for optimized content organization @@ -615,12 +613,6 @@ Located in `src/modules/bmb/workflows/agent/data/`: - **Enhanced Quality Assurance**: 6-step process with mode detection, context gathering, execution, self-check, review, and resolution - **578 New Lines Added**: Significant expansion of quick-dev capabilities -**BMGD Workflow Fixes:** - -- **workflow-status Filename Correction**: Fixed incorrect filename references (PR #1172) -- **sprint-planning Update**: Added workflow-status update to game-architecture completion -- **Path Corrections**: Resolved dead references and syntax errors (PR #1164) - ### 🎨 Code Quality & Refactoring **Persona Streamlining (PR #1167):** @@ -692,50 +684,6 @@ Located in `src/modules/bmb/workflows/agent/data/`: **Release: December 18, 2025** -### 🎮 BMGD Module - Complete Game Development Module Updated - -**Massive BMGD Overhaul:** - -- **New Game QA Agent (GLaDOS)**: Elite Game QA Architect with test automation specialization - - Engine-specific expertise: Unity, Unreal, Godot testing frameworks - - Comprehensive knowledge base with 15+ testing topics - - Complete testing workflows: test-framework, test-design, automate, playtest-plan, performance-test, test-review - -- **New Game Solo Dev Agent (Indie)**: Rapid prototyping and iteration specialist - - Quick-flow workflows optimized for solo/small team development - - Streamlined development process for indie game creators - -- **Production Workflow Alignment**: BMGD 4-production now fully aligned with BMM 4-implementation - - Removed obsolete workflows: story-done, story-ready, story-context, epic-tech-context - - Added sprint-status workflow for project tracking - - All workflows updated as standalone with proper XML instructions - -**Game Testing Architecture:** - -- **Complete Testing Knowledge Base**: 15 comprehensive testing guides covering: - - Engine-specific: Unity (TF 1.6.0), Unreal, Godot testing - - Game-specific: Playtesting, balance, save systems, multiplayer - - Platform: Certification (TRC/XR), localization, input systems - - QA Fundamentals: Automation, performance, regression, smoke testing - -**New Workflows & Features:** - -- **workflow-status**: Multi-mode status checker for game projects - - Game-specific project levels (Game Jam → AAA) - - Support for gamedev and quickflow paths - - Project initialization workflow - -- **create-tech-spec**: Game-focused technical specification workflow - - Engine-aware (Unity/Unreal/Godot) specifications - - Performance and gameplay feel considerations - -- **Enhanced Documentation**: Complete documentation suite with 9 guides - - agents-guide.md: Reference for all 6 agents - - workflows-guide.md: Complete workflow documentation - - game-types-guide.md: 24 game type templates - - quick-flow-guide.md: Rapid development guide - - Comprehensive troubleshooting and glossary - ### 🤖 Agent Management Improved **Agent Recompile Feature:** @@ -769,19 +717,15 @@ Located in `src/modules/bmb/workflows/agent/data/`: ### 📊 Statistics -- 
**178 files changed** with massive BMGD expansion -- **28,350+ lines added** across testing documentation and workflows -- **2 new agents** added to BMGD module -- **15 comprehensive testing guides** created -- **Complete alignment** between BMGD and BMM production workflows +- **28,350+ lines added** across documentation and workflows ### 🌟 Key Highlights -1. **BMGD Module Revolution**: Complete overhaul with professional game development workflows -2. **Game Testing Excellence**: Comprehensive testing architecture for all major game engines -3. **Agent Management**: New recompile feature allows quick agent updates without full reinstall -4. **Full Customization Support**: All agent fields now customizable via YAML -5. **Industry-Ready Documentation**: Professional-grade guides for game development teams +1. **Agent Management**: New recompile feature allows quick agent updates without full reinstall +2. **Full Customization Support**: All agent fields now customizable via YAML +3. **Custom Module Management**: Enhanced handling for custom module installation and updates +4. **Installation Improvements**: Better manifest tracking and customization integration +5. **Documentation Updates**: All documentation updated to reflect new features --- @@ -1161,7 +1105,6 @@ Located in `src/modules/bmb/workflows/agent/data/`: - **3-Track System**: Simplified from 5 levels to 3 intuitive tracks - **Web Bundles Guide**: Comprehensive documentation with 60-80% cost savings strategies - **Unified Output Structure**: Eliminated .ephemeral/ folders - single configurable output folder -- **BMGD Phase 4**: Added 10 game development workflows with BMM patterns ## [6.0.0-alpha.8] @@ -1171,14 +1114,13 @@ Located in `src/modules/bmb/workflows/agent/data/`: - **Optimized Agent Loading**: CLI loads from installed files, eliminating duplication - **Party Mode Everywhere**: All web bundles include multi-agent collaboration - **Phase 4 Artifact Separation**: Stories, code reviews, sprint plans configurable outside docs -- **Expanded Web Bundles**: All BMM, BMGD, CIS agents bundled with elicitation integration +- **Expanded Web Bundles**: All BMM and CIS agents bundled with elicitation integration ## [6.0.0-alpha.7] **Release: November 7, 2025** - **Workflow Vendoring**: Web bundler performs automatic cross-module dependency vendoring -- **BMGD Module Extraction**: Game development split into standalone 4-phase structure - **Enhanced Dependency Resolution**: Better handling of web_bundle: false workflows - **Advanced Elicitation Fix**: Added missing CSV files to workflow bundles - **Claude Code Fix**: Resolved README slash command installation regression diff --git a/FEATURE-SUMMARY.md b/FEATURE-SUMMARY.md index 7194e84a..55c5d563 100644 --- a/FEATURE-SUMMARY.md +++ b/FEATURE-SUMMARY.md @@ -118,7 +118,7 @@ Two feature branches implementing comprehensive quality automation for BMAD-METH - Files: 36 modified/created - Lines: +8,079 insertions, -93 deletions - Commits: 11 well-formed conventional commits -- Modules: BMM and BMGD (both fully supported) +- Modules: BMM (fully supported) **Validation:** - ✅ All schema validation passing diff --git a/PR-DESCRIPTION.md b/PR-DESCRIPTION.md index 546378a0..e0a7245f 100644 --- a/PR-DESCRIPTION.md +++ b/PR-DESCRIPTION.md @@ -31,5 +31,5 @@ Gap analysis detects existing code and proposes task refinements (extend vs crea **Changes:** 3 workflows, 4 new docs, 16 files total **Lines:** ~2,740 additions -**Modules:** BMM and BMGD +**Modules:** BMM **Breaking:** None - fully backwards 
compatible diff --git a/README-changes.md b/README-changes.md index 4c0eb0a4..778f92d9 100644 --- a/README-changes.md +++ b/README-changes.md @@ -241,10 +241,6 @@ This fork is published to npm under the `@jonahschulte` scope for independent in - Modified `create-story` workflow for requirements-focused planning - Enhanced `dev-story` workflow with gap analysis integration -### BMGD (BMAD Game Dev) -- Added parallel workflows for game development context -- Enhanced `game-dev.agent.yaml` and `game-scrum-master.agent.yaml` - --- ## Installation diff --git a/README.md b/README.md index ea5a8aeb..eb43e0d8 100644 --- a/README.md +++ b/README.md @@ -69,7 +69,6 @@ BMad Method extends with official modules for specialized domains. Modules are a | ------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------- | | **BMad Method (BMM)** | [bmad-code-org/BMAD-METHOD](https://github.com/bmad-code-org/BMAD-METHOD) | [bmad-method](https://www.npmjs.com/package/bmad-method) | Core framework with 34+ workflows across 4 development phases | | **BMad Builder (BMB)** | [bmad-code-org/bmad-builder](https://github.com/bmad-code-org/bmad-builder) | [bmad-builder](https://www.npmjs.com/package/bmad-builder) | Create custom BMad agents, workflows, and domain-specific modules | -| **Game Dev Studio (BMGD)** | [bmad-code-org/bmad-module-game-dev-studio](https://github.com/bmad-code-org/bmad-module-game-dev-studio) | [bmad-game-dev-studio](https://www.npmjs.com/package/bmad-game-dev-studio) | Game development workflows for Unity, Unreal, and Godot | | **Creative Intelligence Suite (CIS)** | [bmad-code-org/bmad-module-creative-intelligence-suite](https://github.com/bmad-code-org/bmad-module-creative-intelligence-suite) | [bmad-creative-intelligence-suite](https://www.npmjs.com/package/bmad-creative-intelligence-suite) | Innovation, brainstorming, design thinking, and problem-solving | * More modules are coming in the next 2 weeks from BMad Official, and a community marketplace for the installer also will be coming with the final V6 release! 
diff --git a/docs/_STYLE_GUIDE.md b/docs/_STYLE_GUIDE.md index e5fb51ff..5e389c67 100644 --- a/docs/_STYLE_GUIDE.md +++ b/docs/_STYLE_GUIDE.md @@ -224,7 +224,7 @@ your-project/ | **Deep-Dive** | `document-project.md` | | **Configuration** | `core-tasks.md` | | **Glossary** | `glossary/index.md` | -| **Comprehensive** | `bmgd-workflows.md` | +| **Comprehensive** | `bmm-workflows.md` | ### Reference Index Pages @@ -324,7 +324,6 @@ Add italic context at definition start for limited-scope terms: - `*Quick Flow only.*` - `*BMad Method/Enterprise.*` - `*Phase N.*` -- `*BMGD.*` - `*Brownfield.*` ### Glossary Checklist diff --git a/docs/gap-analysis-migration.md b/docs/gap-analysis-migration.md index df3bd99a..2c0dc128 100644 --- a/docs/gap-analysis-migration.md +++ b/docs/gap-analysis-migration.md @@ -164,7 +164,6 @@ When dev-story prompts for gap analysis approval: ```bash # In BMAD-METHOD repo git checkout v6.0.0-alpha.21 -- src/modules/bmm/workflows/4-implementation/ -git checkout v6.0.0-alpha.21 -- src/modules/bmgd/workflows/4-production/ # Reinstall in your project cd ~/git/your-project diff --git a/docs/how-to/upgrade-to-v6.md b/docs/how-to/upgrade-to-v6.md index 3d576f46..81fd240e 100644 --- a/docs/how-to/upgrade-to-v6.md +++ b/docs/how-to/upgrade-to-v6.md @@ -109,9 +109,6 @@ your-project/ | v4 Module | v6 Status | |-----------|-----------| -| `_bmad-2d-phaser-game-dev` | Integrated into BMGD Module | -| `_bmad-2d-unity-game-dev` | Integrated into BMGD Module | -| `_bmad-godot-game-dev` | Integrated into BMGD Module | | `_bmad-infrastructure-devops` | Deprecated — new DevOps agent coming soon | | `_bmad-creative-writing` | Not adapted — new v6 module coming soon | diff --git a/docs/tea/glossary/index.md b/docs/tea/glossary/index.md index 3d48d83c..2cabc396 100644 --- a/docs/tea/glossary/index.md +++ b/docs/tea/glossary/index.md @@ -30,8 +30,6 @@ Terminology reference for the BMad Method. | ------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | | **Architecture Document** | *BMad Method/Enterprise.* System-wide design document defining structure, components, data models, integration patterns, security, and deployment. | | **Epics** | High-level feature groupings containing multiple related stories. Typically 5-15 stories each representing cohesive functionality. | -| **Game Brief** | *BMGD.* Document capturing game's core vision, pillars, target audience, and scope. Foundation for the GDD. | -| **GDD** | *BMGD.* Game Design Document — comprehensive document detailing all aspects of game design: mechanics, systems, content, and more. | | **PRD** | *BMad Method/Enterprise.* Product Requirements Document containing vision, goals, FRs, NFRs, and success criteria. Focuses on WHAT to build. | | **Product Brief** | *Phase 1.* Optional strategic document capturing product vision, market context, and high-level requirements before detailed planning. | | **Tech-Spec** | *Quick Flow only.* Comprehensive technical plan with problem statement, solution approach, file-level changes, and testing strategy. | @@ -57,8 +55,6 @@ Terminology reference for the BMad Method. | **Architect** | Agent designing system architecture, creating architecture documents, and validating designs. Primary agent for Phase 3. | | **BMad Master** | Meta-level orchestrator from BMad Core facilitating party mode and providing high-level guidance across all modules. 
| | **DEV** | Developer agent implementing stories, writing code, running tests, and performing code reviews. Primary implementer in Phase 4. | -| **Game Architect** | *BMGD.* Agent designing game system architecture and validating game-specific technical designs. | -| **Game Designer** | *BMGD.* Agent creating game design documents (GDD) and running game-specific workflows. | | **Party Mode** | Multi-agent collaboration feature where agents discuss challenges together. BMad Master orchestrates, selecting 2-3 relevant agents per message. | | **PM** | Product Manager agent creating PRDs and tech-specs. Primary agent for Phase 2 planning. | | **SM** | Scrum Master agent managing sprints, creating stories, and coordinating implementation. Primary orchestrator for Phase 4. | @@ -103,24 +99,6 @@ Terminology reference for the BMad Method. | **Story File** | Markdown file containing story description, acceptance criteria, technical notes, and testing requirements. | | **Track Selection** | Automatic analysis by `bmad-help` suggesting appropriate track based on complexity indicators. User can override. | -## Game Development Terms - -| Term | Definition | -| ------------------------------ | ---------------------------------------------------------------------------------------------------- | -| **Core Fantasy** | *BMGD.* The emotional experience players seek from your game — what they want to FEEL. | -| **Core Loop** | *BMGD.* Fundamental cycle of actions players repeat throughout gameplay. The heart of your game. | -| **Design Pillar** | *BMGD.* Core principle guiding all design decisions. Typically 3-5 pillars define a game's identity. | -| **Environmental Storytelling** | *BMGD.* Narrative communicated through the game world itself rather than explicit dialogue. | -| **Game Type** | *BMGD.* Genre classification determining which specialized GDD sections are included. | -| **MDA Framework** | *BMGD.* Mechanics → Dynamics → Aesthetics — framework for analyzing and designing games. | -| **Meta-Progression** | *BMGD.* Persistent progression carrying between individual runs or sessions. | -| **Metroidvania** | *BMGD.* Genre featuring interconnected world exploration with ability-gated progression. | -| **Narrative Complexity** | *BMGD.* How central story is to the game: Critical, Heavy, Moderate, or Light. | -| **Permadeath** | *BMGD.* Game mechanic where character death is permanent, typically requiring a new run. | -| **Player Agency** | *BMGD.* Degree to which players can make meaningful choices affecting outcomes. | -| **Procedural Generation** | *BMGD.* Algorithmic creation of game content (levels, items, characters) rather than hand-crafted. | -| **Roguelike** | *BMGD.* Genre featuring procedural generation, permadeath, and run-based progression. 
| -

## Test Architect (TEA) Concepts

| Term | Definition |

diff --git a/src/bmm/workflows/1-analysis/create-product-brief/product-brief.template.md b/src/bmm/workflows/1-analysis/create-product-brief/product-brief.template.md
new file mode 100644
index 00000000..d41d5620
--- /dev/null
+++ b/src/bmm/workflows/1-analysis/create-product-brief/product-brief.template.md
@@ -0,0 +1,10 @@
---
stepsCompleted: []
inputDocuments: []
date: { system-date }
author: { user }
---

# Product Brief: {{project_name}}

diff --git a/src/bmm/workflows/1-analysis/create-product-brief/steps/step-01-init.md b/src/bmm/workflows/1-analysis/create-product-brief/steps/step-01-init.md
new file mode 100644
index 00000000..49618093
--- /dev/null
+++ b/src/bmm/workflows/1-analysis/create-product-brief/steps/step-01-init.md
@@ -0,0 +1,177 @@
---
name: 'step-01-init'
description: 'Initialize the product brief workflow by detecting continuation state and setting up the document'

# File References
nextStepFile: './step-02-vision.md'
outputFile: '{planning_artifacts}/product-brief-{{project_name}}-{{date}}.md'

# Template References
productBriefTemplate: '../product-brief.template.md'
---

# Step 1: Product Brief Initialization

## STEP GOAL:

Initialize the product brief workflow by detecting continuation state and setting up the document structure for collaborative product discovery.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS speak your output in your agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a product-focused Business Analyst facilitator
- ✅ If you have already been given a name, communication_style, and persona, continue to use them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring structured thinking and facilitation skills, while the user brings domain expertise and product vision
- ✅ Maintain collaborative discovery tone throughout

### Step-Specific Rules:

- 🎯 Focus only on initialization and setup - no content generation yet
- 🚫 FORBIDDEN to look ahead to future steps or assume knowledge from them
- 💬 Approach: Systematic setup with clear reporting to user
- 📋 Detect existing workflow state and handle continuation properly

## EXECUTION PROTOCOLS:

- 🎯 Show your analysis of current state before taking any action
- 💾 Initialize document structure and update frontmatter appropriately
- 📖 Set up frontmatter `stepsCompleted: [1]` before loading next step
- 🚫 FORBIDDEN to load the next step until the user has confirmed the discovered input documents

## CONTEXT BOUNDARIES:

- Available context: Variables from workflow.md are available in memory
- Focus: Workflow initialization and document setup only
- Limits: Don't assume knowledge from other steps or create content yet
- Dependencies: Configuration loaded from workflow.md initialization

## Sequence of Instructions (Do not deviate, skip, or optimize)

### 1. Check for Existing Workflow State

First, check if the output document already exists:

**Workflow State Detection:**

- Look for file `{outputFile}`
- If it exists, read the complete file including frontmatter
- If it does not exist, this is a fresh workflow
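The detection rule above is easy to picture in code. Here is a minimal sketch, assuming a Node.js context with the `gray-matter` frontmatter parser; the resolved path is hypothetical and the function name is invented for illustration:

```ts
import { existsSync, readFileSync } from 'node:fs';
import matter from 'gray-matter'; // assumed frontmatter parser

// Hypothetical resolved value of {outputFile}, for illustration only.
const outputFile = 'planning/product-brief-acme-2025-12-18.md';

function detectWorkflowState(path: string): 'fresh' | 'continuation' {
  if (!existsSync(path)) return 'fresh'; // no document yet
  const { data } = matter(readFileSync(path, 'utf8'));
  // A continuation is signaled by a non-empty stepsCompleted array in frontmatter.
  return Array.isArray(data.stepsCompleted) && data.stepsCompleted.length > 0
    ? 'continuation'
    : 'fresh';
}

console.log(detectWorkflowState(outputFile)); // 'fresh' on first run
```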
### 2. Handle Continuation (If Document Exists)

If the document exists and has frontmatter with `stepsCompleted`:

**Continuation Protocol:**

- **STOP immediately** and load `./step-01b-continue.md`
- Do not proceed with any initialization tasks
- Let step-01b handle all continuation logic
- This is an auto-proceed situation - no user choice needed

### 3. Fresh Workflow Setup (If No Document)

If no document exists or no `stepsCompleted` in frontmatter:

#### A. Input Document Discovery

Load context documents using smart discovery. Documents can be in the following locations:

- {planning_artifacts}/**
- {output_folder}/**
- {product_knowledge}/**
- docs/**

When searching, a document can be a single markdown file or a folder with an index and multiple files. For example, if searching for `*foo*.md` finds nothing, also search for a folder called *foo*/index.md (which indicates sharded content).

Try to discover the following:

- Brainstorming Reports (`*brainstorming*.md`)
- Research Documents (`*research*.md`)
- Project Documentation (multiple documents may be found for this in the `{product_knowledge}` or `docs` folder)
- Project Context (`**/project-context.md`)

Confirm what you have found with the user, and ask whether the user wants to provide anything else. Only after this confirmation will you proceed to follow the loading rules.

**Loading Rules:**

- Load ALL discovered files that the user confirmed or provided completely (no offset/limit)
- If a project context exists, bias the remainder of this workflow toward whatever is relevant in it
- For sharded folders, load ALL files to get the complete picture, using the index first to gauge the relevance of each document
- index.md is a guide to what's relevant whenever available
- Track all successfully loaded files in the frontmatter `inputDocuments` array

#### B. Create Initial Document

**Document Setup:**

- Copy the template from `{productBriefTemplate}` to `{outputFile}`, and update the frontmatter fields

#### C. Present Initialization Results

**Setup Report to User:**
"Welcome {{user_name}}! I've set up your product brief workspace for {{project_name}}.

**Document Setup:**

- Created: `{outputFile}` from template
- Initialized frontmatter with workflow state

**Input Documents Discovered:**

- Research: {number of research files loaded or "None found"}
- Brainstorming: {number of brainstorming files loaded or "None found"}
- Project docs: {number of project files loaded or "None found"}
- Project Context: {number of project context files loaded or "None found"}

**Files loaded:** {list of specific file names or "No additional documents found"}

Do you have any other documents you'd like me to include, or shall we continue to the next step?"

### 4. Present MENU OPTIONS

Display: "**Proceeding to product vision discovery...**"

#### Menu Handling Logic:

- After the setup report is presented and confirmed, without delay, read fully and follow: {nextStepFile}

#### EXECUTION RULES:

- This is an initialization step with auto-proceed after setup completion
- Proceed directly to next step after document setup and reporting

## CRITICAL STEP COMPLETION NOTE

ONLY WHEN [setup completion is achieved and frontmatter properly updated], will you then read fully and follow: `{nextStepFile}` to begin product vision discovery.
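As a rough illustration of the discovery-with-fallback rule in section 3A, here is a sketch assuming Node.js with the `fast-glob` package; the search roots mirror the list above but are hypothetical resolved values, and the function name is invented:

```ts
import fg from 'fast-glob'; // assumed glob library

// Hypothetical resolved search roots for the four locations above.
const roots = ['planning', 'output', 'product-knowledge', 'docs'];

// Search for whole files first; if none match, fall back to a
// sharded folder of the same name containing an index.md.
async function discover(stem: string): Promise<string[]> {
  const files = await fg(roots.map((r) => `${r}/**/*${stem}*.md`));
  if (files.length > 0) return files;
  // Sharded fallback: *foo*/index.md signals multi-file content.
  return fg(roots.map((r) => `${r}/**/*${stem}*/index.md`));
}

async function main(): Promise<void> {
  const found = {
    brainstorming: await discover('brainstorming'),
    research: await discover('research'),
  };
  console.log(found); // confirm these with the user before loading
}

main();
```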
---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Existing workflow detected and properly handed off to step-01b
- Fresh workflow initialized with template and proper frontmatter
- Input documents discovered and loaded using smart discovery with sharded fallback
- All discovered files tracked in frontmatter `inputDocuments`
- Setup report presented and input documents confirmed with the user
- Frontmatter updated with `stepsCompleted: [1]` before proceeding

### ❌ SYSTEM FAILURE:

- Proceeding with fresh initialization when an existing workflow exists
- Not updating frontmatter with discovered input documents
- Creating document without proper template structure
- Not checking for sharded folders when whole-file searches find nothing
- Not reporting discovered documents to user clearly
- Proceeding before the user confirms the discovered input documents

**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

diff --git a/src/bmm/workflows/1-analysis/create-product-brief/steps/step-01b-continue.md b/src/bmm/workflows/1-analysis/create-product-brief/steps/step-01b-continue.md
new file mode 100644
index 00000000..99b2495f
--- /dev/null
+++ b/src/bmm/workflows/1-analysis/create-product-brief/steps/step-01b-continue.md
@@ -0,0 +1,161 @@
---
name: 'step-01b-continue'
description: 'Resume the product brief workflow from where it was left off, ensuring smooth continuation'

# File References
outputFile: '{planning_artifacts}/product-brief-{{project_name}}-{{date}}.md'
---

# Step 1B: Product Brief Continuation

## STEP GOAL:

Resume the product brief workflow from where it was left off, ensuring smooth continuation with full context restoration.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS speak your output in your agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a product-focused Business Analyst facilitator
- ✅ If you have already been given a name, communication_style, and persona, continue to use them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring structured thinking and facilitation skills, while the user brings domain expertise and product vision
- ✅ Maintain collaborative continuation tone throughout

### Step-Specific Rules:

- 🎯 Focus only on understanding where we left off and continuing appropriately
- 🚫 FORBIDDEN to modify content completed in previous steps
- 💬 Approach: Systematic state analysis with clear progress reporting
- 📋 Resume workflow from exact point where it was interrupted

## EXECUTION PROTOCOLS:

- 🎯 Show your analysis of current state before taking any action
- 💾 Keep existing frontmatter `stepsCompleted` values
- 📖 Only load documents that were already tracked in `inputDocuments`
- 🚫 FORBIDDEN to discover new input documents during continuation

## CONTEXT BOUNDARIES:

- Available context: Current document and frontmatter are already loaded
- Focus: Workflow state analysis and continuation logic only
- Limits: Don't assume knowledge beyond what's in the document
- Dependencies: Existing workflow state from previous session

## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Analyze Current State

**State Assessment:**
Review the frontmatter to understand:

- `stepsCompleted`: Which steps are already done
- `lastStep`: The most recently completed step number
- `inputDocuments`: What context was already loaded
- All other frontmatter variables

### 2. Restore Context Documents

**Context Reloading:**

- For each document in `inputDocuments`, load the complete file
- This ensures you have full context for continuation
- Don't discover new documents - only reload what was previously processed
- Maintain the same context as when workflow was interrupted

### 3. Present Current Progress

**Progress Report to User:**
"Welcome back {{user_name}}! I'm resuming our product brief collaboration for {{project_name}}.

**Current Progress:**

- Steps completed: {stepsCompleted}
- Last worked on: Step {lastStep}
- Context documents available: {number of files in inputDocuments}

**Document Status:**

- Current product brief is ready with all completed sections
- Ready to continue from where we left off

Does this look right, or do you want to make any adjustments before we proceed?"

### 4. Determine Continuation Path

**Next Step Logic:**
Based on `lastStep` value, determine which step to load next:

- If `lastStep = 1` → Load `./step-02-vision.md`
- If `lastStep = 2` → Load `./step-03-users.md`
- If `lastStep = 3` → Load `./step-04-metrics.md`
- Continue this pattern for all steps
- If `lastStep = 6` → Workflow already complete

### 5. Handle Workflow Completion

**If workflow already complete (`lastStep = 6`):**
"Great news! It looks like we've already completed the product brief workflow for {{project_name}}.

The final document is ready at `{outputFile}` with all sections completed through step 6.

Would you like me to:

- Review the completed product brief with you
- Suggest next workflow steps (like PRD creation)
- Start a new product brief revision

What would be most helpful?"

### 6. Present MENU OPTIONS

**If workflow not complete:**
Display: "Ready to continue with Step {nextStepNumber}: {nextStepTitle}?

**Select an Option:** [C] Continue to Step {nextStepNumber}"

#### Menu Handling Logic:

- IF C: Read fully and follow the appropriate next step file based on `lastStep`
- IF any other comments or queries: respond and redisplay menu

#### EXECUTION RULES:

- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions about current progress

## CRITICAL STEP COMPLETION NOTE

ONLY WHEN [C continue option] is selected and [current state confirmed], will you then read fully and follow the appropriate next step file to resume the workflow.
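The `lastStep` routing in section 4 is effectively a lookup table. A minimal sketch follows; the file names are taken from this workflow's step files, while the function itself is illustrative rather than part of the spec:

```ts
// Maps the last completed step to the next step file to load.
const nextStepByLast: Record<number, string | null> = {
  1: './step-02-vision.md',
  2: './step-03-users.md',
  3: './step-04-metrics.md',
  4: './step-05-scope.md',
  5: './step-06-complete.md',
  6: null, // workflow already complete
};

function resolveNextStep(lastStep: number): string | null {
  const next = nextStepByLast[lastStep];
  if (next === undefined) throw new Error(`Unknown lastStep: ${lastStep}`);
  return next;
}

console.log(resolveNextStep(3)); // './step-04-metrics.md'
```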
---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- All previous input documents successfully reloaded
- Current workflow state accurately analyzed and presented
- User confirms understanding of progress before continuation
- Correct next step identified and prepared for loading
- Proper continuation path determined based on `lastStep`

### ❌ SYSTEM FAILURE:

- Discovering new input documents instead of reloading existing ones
- Modifying content from already completed steps
- Loading wrong next step based on `lastStep` value
- Proceeding without user confirmation of current state
- Not maintaining context consistency from previous session

**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

diff --git a/src/bmm/workflows/1-analysis/create-product-brief/steps/step-02-vision.md b/src/bmm/workflows/1-analysis/create-product-brief/steps/step-02-vision.md
new file mode 100644
index 00000000..f00e18fa
--- /dev/null
+++ b/src/bmm/workflows/1-analysis/create-product-brief/steps/step-02-vision.md
@@ -0,0 +1,199 @@
---
name: 'step-02-vision'
description: 'Discover and define the core product vision, problem statement, and unique value proposition'

# File References
nextStepFile: './step-03-users.md'
outputFile: '{planning_artifacts}/product-brief-{{project_name}}-{{date}}.md'

# Task References
advancedElicitationTask: '{project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml'
partyModeWorkflow: '{project-root}/_bmad/core/workflows/party-mode/workflow.md'
---

# Step 2: Product Vision Discovery

## STEP GOAL:

Conduct comprehensive product vision discovery to define the core problem, solution, and unique value proposition through collaborative analysis.
## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS speak your output in your agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a product-focused Business Analyst facilitator
- ✅ If you have already been given a name, communication_style, and persona, continue to use them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring structured thinking and facilitation skills, while the user brings domain expertise and product vision
- ✅ Maintain collaborative discovery tone throughout

### Step-Specific Rules:

- 🎯 Focus only on product vision, problem, and solution discovery
- 🚫 FORBIDDEN to generate vision without real user input and collaboration
- 💬 Approach: Systematic discovery from problem to solution
- 📋 COLLABORATIVE discovery, not assumption-based vision crafting

## EXECUTION PROTOCOLS:

- 🎯 Show your analysis before taking any action
- 💾 Generate vision content collaboratively with user
- 📖 Update frontmatter `stepsCompleted: [1, 2]` before loading next step
- 🚫 FORBIDDEN to proceed without user confirmation through menu

## CONTEXT BOUNDARIES:

- Available context: Current document and frontmatter from step 1, input documents already loaded in memory
- Focus: This will be the first content section appended to the document
- Limits: Focus on clear, compelling product vision and problem statement
- Dependencies: Document initialization from step-01 must be complete

## Sequence of Instructions (Do not deviate, skip, or optimize)

### 1. Begin Vision Discovery

**Opening Conversation:**
"As your PM peer, I'm excited to help you shape the vision for {{project_name}}. Let's start with the foundation.

**Tell me about the product you envision:**

- What core problem are you trying to solve?
- Who experiences this problem most acutely?
- What would success look like for the people you're helping?
- What excites you most about this solution?

Let's start with the problem space before we get into solutions."

### 2. Deep Problem Understanding

**Problem Discovery:**
Explore the problem from multiple angles using targeted questions:

- How do people currently solve this problem?
- What's frustrating about current solutions?
- What happens if this problem goes unsolved?
- Who feels this pain most intensely?

### 3. Current Solutions Analysis

**Competitive Landscape:**

- What solutions exist today?
- Where do they fall short?
- What gaps are they leaving open?
- Why haven't existing solutions solved this completely?

### 4. Solution Vision

**Collaborative Solution Crafting:**

- If we could solve this perfectly, what would that look like?
- What's the simplest way we could make a meaningful difference?
- What makes your approach different from what's out there?
- What would make users say 'this is exactly what I needed'?

### 5. Unique Differentiators

**Competitive Advantage:**

- What's your unfair advantage?
- What would be hard for competitors to copy?
- What insight or approach is uniquely yours?
- Why is now the right time for this solution?
### 6. Generate Executive Summary Content

**Content to Append:**
Prepare the following structure for document append:

```markdown
## Executive Summary

[Executive summary content based on conversation]

---

## Core Vision

### Problem Statement

[Problem statement content based on conversation]

### Problem Impact

[Problem impact content based on conversation]

### Why Existing Solutions Fall Short

[Analysis of existing solution gaps based on conversation]

### Proposed Solution

[Proposed solution description based on conversation]

### Key Differentiators

[Key differentiators based on conversation]
```

### 7. Present MENU OPTIONS

**Content Presentation:**
"I've drafted the executive summary and core vision based on our conversation. This captures the essence of {{project_name}} and what makes it special.

**Here's what I'll add to the document:**
[Show the complete markdown content from step 6]

**Select an Option:** [A] Advanced Elicitation [P] Party Mode [C] Continue"

#### Menu Handling Logic:

- IF A: Read fully and follow: {advancedElicitationTask} with current vision content to dive deeper and refine
- IF P: Read fully and follow: {partyModeWorkflow} to bring different perspectives to positioning and differentiation
- IF C: Save content to {outputFile}, update frontmatter with stepsCompleted: [1, 2], then read fully and follow: {nextStepFile}
- IF any other comments or queries: respond to the user, then [Redisplay Menu Options](#7-present-menu-options)

#### EXECUTION RULES:

- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- After other menu items execute, return to this menu with updated content
- User can chat or ask questions - always respond and then end with a display of the menu options again

## CRITICAL STEP COMPLETION NOTE

ONLY WHEN [C continue option] is selected and [vision content finalized and saved to document with frontmatter updated], will you then read fully and follow: `{nextStepFile}` to begin target user discovery.

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Clear problem statement that resonates with target users
- Compelling solution vision that addresses the core problem
- Unique differentiators that provide competitive advantage
- Executive summary that captures the product essence
- A/P/C menu presented and handled correctly with proper task execution
- Content properly appended to document when C selected
- Frontmatter updated with stepsCompleted: [1, 2]

### ❌ SYSTEM FAILURE:

- Accepting vague problem statements without pushing for specificity
- Creating solution vision without fully understanding the problem
- Missing unique differentiators or competitive insights
- Generating vision without real user input and collaboration
- Not presenting standard A/P/C menu after content generation
- Appending content without user selecting 'C'
- Not updating frontmatter properly

**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.
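Every content step from here on repeats the same A/P/C menu contract, so it is worth sketching that loop once. This is a compact illustration assuming a generic `ask` prompt helper and handler callbacks that stand in for the elicitation and party-mode workflows; all names are invented for the example:

```ts
type MenuResult = 'continue';

// Illustrative stand-ins for the real workflow actions.
interface MenuHandlers {
  advancedElicitation: () => Promise<void>; // refine current content
  partyMode: () => Promise<void>; // gather other perspectives
  save: () => Promise<void>; // append content, update frontmatter
}

async function runMenu(
  ask: (q: string) => Promise<string>,
  h: MenuHandlers,
): Promise<MenuResult> {
  // Loop until the user explicitly selects C; A and P return to the menu.
  for (;;) {
    const choice = (await ask('[A] Advanced Elicitation [P] Party Mode [C] Continue'))
      .trim()
      .toUpperCase();
    if (choice === 'A') await h.advancedElicitation();
    else if (choice === 'P') await h.partyMode();
    else if (choice === 'C') {
      await h.save();
      return 'continue'; // only now may the next step file be loaded
    }
    // Anything else: treat as conversation, then redisplay the menu.
  }
}
```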
diff --git a/src/bmm/workflows/1-analysis/create-product-brief/steps/step-03-users.md b/src/bmm/workflows/1-analysis/create-product-brief/steps/step-03-users.md
new file mode 100644
index 00000000..cba26641
--- /dev/null
+++ b/src/bmm/workflows/1-analysis/create-product-brief/steps/step-03-users.md
@@ -0,0 +1,202 @@
---
name: 'step-03-users'
description: 'Define target users with rich personas and map their key interactions with the product'

# File References
nextStepFile: './step-04-metrics.md'
outputFile: '{planning_artifacts}/product-brief-{{project_name}}-{{date}}.md'

# Task References
advancedElicitationTask: '{project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml'
partyModeWorkflow: '{project-root}/_bmad/core/workflows/party-mode/workflow.md'
---

# Step 3: Target Users Discovery

## STEP GOAL:

Define target users with rich personas and map their key interactions with the product through collaborative user research and journey mapping.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS speak your output in your agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a product-focused Business Analyst facilitator
- ✅ If you have already been given a name, communication_style, and persona, continue to use them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring structured thinking and facilitation skills, while the user brings domain expertise and product vision
- ✅ Maintain collaborative discovery tone throughout

### Step-Specific Rules:

- 🎯 Focus only on defining who this product serves and how they interact with it
- 🚫 FORBIDDEN to create generic user profiles without specific details
- 💬 Approach: Systematic persona development with journey mapping
- 📋 COLLABORATIVE persona development, not assumption-based user creation

## EXECUTION PROTOCOLS:

- 🎯 Show your analysis before taking any action
- 💾 Generate user personas and journeys collaboratively with user
- 📖 Update frontmatter `stepsCompleted: [1, 2, 3]` before loading next step
- 🚫 FORBIDDEN to proceed without user confirmation through menu

## CONTEXT BOUNDARIES:

- Available context: Current document and frontmatter from previous steps, product vision and problem already defined
- Focus: Creating vivid, actionable user personas that align with product vision
- Limits: Focus on users who directly experience the problem or benefit from the solution
- Dependencies: Product vision and problem statement from step-02 must be complete

## Sequence of Instructions (Do not deviate, skip, or optimize)

### 1. Begin User Discovery

**Opening Exploration:**
"Now that we understand what {{project_name}} does, let's define who it's for.

**User Discovery:**

- Who experiences the problem we're solving?
- Are there different types of users with different needs?
- Who gets the most value from this solution?
- Are there primary users and secondary users we should consider?

Let's start by identifying the main user groups."
### 2. Primary User Segment Development

**Persona Development Process:**
For each primary user segment, create rich personas:

**Name & Context:**

- Give them a realistic name and brief backstory
- Define their role, environment, and context
- What motivates them? What are their goals?

**Problem Experience:**

- How do they currently experience the problem?
- What workarounds are they using?
- What are the emotional and practical impacts?

**Success Vision:**

- What would success look like for them?
- What would make them say "this is exactly what I needed"?

**Primary User Questions:**

- "Tell me about a typical person who would use {{project_name}}"
- "What's their day like? Where does our product fit in?"
- "What are they trying to accomplish that's hard right now?"

### 3. Secondary User Segment Exploration

**Secondary User Considerations:**

- "Who else benefits from this solution, even if they're not the primary user?"
- "Are there admin, support, or oversight roles we should consider?"
- "Who influences the decision to adopt or purchase this product?"
- "Are there partner or stakeholder users who matter?"

### 4. User Journey Mapping

**Journey Elements:**
Map key interactions for each user segment:

- **Discovery:** How do they find out about the solution?
- **Onboarding:** What's their first experience like?
- **Core Usage:** How do they use the product day-to-day?
- **Success Moment:** When do they realize the value?
- **Long-term:** How does it become part of their routine?

**Journey Questions:**

- "Walk me through how [Persona Name] would discover and start using {{project_name}}"
- "What's their 'aha!' moment?"
- "How does this product change how they work or live?"

### 5. Generate Target Users Content

**Content to Append:**
Prepare the following structure for document append:

```markdown
## Target Users

### Primary Users

[Primary user segment content based on conversation]

### Secondary Users

[Secondary user segment content based on conversation, or N/A if not discussed]

### User Journey

[User journey content based on conversation, or N/A if not discussed]
```
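For teams that later consume the brief programmatically, the persona and journey structure in sections 2-4 maps naturally onto a small data shape. The following is one hypothetical encoding with invented sample values, not a required format:

```ts
// Hypothetical encoding of a persona and its journey from the brief.
interface Persona {
  name: string; // realistic name that anchors the backstory
  role: string;
  goals: string[];
  problemExperience: string;
  successVision: string;
}

interface JourneyStage {
  stage: 'discovery' | 'onboarding' | 'core-usage' | 'success-moment' | 'long-term';
  description: string;
}

const primary: Persona = {
  name: 'Maya',
  role: 'Freelance designer',
  goals: ['Deliver client work faster'],
  problemExperience: 'Loses hours reconciling feedback scattered across tools',
  successVision: 'All feedback lands in one actionable queue',
};

const journey: JourneyStage[] = [
  { stage: 'discovery', description: 'Finds the product via a peer recommendation' },
  { stage: 'success-moment', description: 'First project ships with zero missed comments' },
];

console.log(primary.name, journey.length);
```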
### 6. Present MENU OPTIONS

**Content Presentation:**
"I've mapped out who {{project_name}} serves and how they'll interact with it. This helps us ensure we're building something that real people will love to use.

**Here's what I'll add to the document:**
[Show the complete markdown content from step 5]

**Select an Option:** [A] Advanced Elicitation [P] Party Mode [C] Continue"

#### Menu Handling Logic:

- IF A: Read fully and follow: {advancedElicitationTask} with current user content to dive deeper into personas and journeys
- IF P: Read fully and follow: {partyModeWorkflow} to bring different perspectives to validate user understanding
- IF C: Save content to {outputFile}, update frontmatter with stepsCompleted: [1, 2, 3], then read fully and follow: {nextStepFile}
- IF any other comments or queries: respond to the user, then [Redisplay Menu Options](#6-present-menu-options)

#### EXECUTION RULES:

- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- After other menu items execute, return to this menu with updated content
- User can chat or ask questions - always respond and then end with a display of the menu options again

## CRITICAL STEP COMPLETION NOTE

ONLY WHEN [C continue option] is selected and [user personas finalized and saved to document with frontmatter updated], will you then read fully and follow: `{nextStepFile}` to begin success metrics definition.

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Rich, believable user personas with clear motivations
- Clear distinction between primary and secondary users
- User journeys that show key interaction points and value creation
- User segments that align with product vision and problem statement
- A/P/C menu presented and handled correctly with proper task execution
- Content properly appended to document when C selected
- Frontmatter updated with stepsCompleted: [1, 2, 3]

### ❌ SYSTEM FAILURE:

- Creating generic user profiles without specific details
- Missing key user segments that are important to success
- User journeys that don't show how the product creates value
- Not connecting user needs back to the problem statement
- Not presenting standard A/P/C menu after content generation
- Appending content without user selecting 'C'
- Not updating frontmatter properly

**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

diff --git a/src/bmm/workflows/1-analysis/create-product-brief/steps/step-04-metrics.md b/src/bmm/workflows/1-analysis/create-product-brief/steps/step-04-metrics.md
new file mode 100644
index 00000000..e6b297c3
--- /dev/null
+++ b/src/bmm/workflows/1-analysis/create-product-brief/steps/step-04-metrics.md
@@ -0,0 +1,205 @@
---
name: 'step-04-metrics'
description: 'Define comprehensive success metrics that include user success, business objectives, and key performance indicators'

# File References
nextStepFile: './step-05-scope.md'
outputFile: '{planning_artifacts}/product-brief-{{project_name}}-{{date}}.md'

# Task References
advancedElicitationTask: '{project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml'
partyModeWorkflow: '{project-root}/_bmad/core/workflows/party-mode/workflow.md'
---

# Step 4: Success Metrics Definition

## STEP GOAL:

Define comprehensive success metrics that include user success, business objectives, and key performance indicators through collaborative metric definition aligned with product vision and user value.
## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS speak your output in your agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a product-focused Business Analyst facilitator
- ✅ If you have already been given a name, communication_style, and persona, continue to use them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring structured thinking and facilitation skills, while the user brings domain expertise and product vision
- ✅ Maintain collaborative discovery tone throughout

### Step-Specific Rules:

- 🎯 Focus only on defining measurable success criteria and business objectives
- 🚫 FORBIDDEN to create vague metrics that can't be measured or tracked
- 💬 Approach: Systematic metric definition that connects user value to business success
- 📋 COLLABORATIVE metric definition that drives actionable decisions

## EXECUTION PROTOCOLS:

- 🎯 Show your analysis before taking any action
- 💾 Generate success metrics collaboratively with user
- 📖 Update frontmatter `stepsCompleted: [1, 2, 3, 4]` before loading next step
- 🚫 FORBIDDEN to proceed without user confirmation through menu

## CONTEXT BOUNDARIES:

- Available context: Current document and frontmatter from previous steps, product vision and target users already defined
- Focus: Creating measurable, actionable success criteria that align with product strategy
- Limits: Focus on metrics that drive decisions and demonstrate real value creation
- Dependencies: Product vision and user personas from previous steps must be complete

## Sequence of Instructions (Do not deviate, skip, or optimize)

### 1. Begin Success Metrics Discovery

**Opening Exploration:**
"Now that we know who {{project_name}} serves and what problem it solves, let's define what success looks like.

**Success Discovery:**

- How will we know we're succeeding for our users?
- What would make users say 'this was worth it'?
- What metrics show we're creating real value?

Let's start with the user perspective."

### 2. User Success Metrics

**User Success Questions:**
Define success from the user's perspective:

- "What outcome are users trying to achieve?"
- "How will they know the product is working for them?"
- "What's the moment where they realize this is solving their problem?"
- "What behaviors indicate users are getting value?"

**User Success Exploration:**
Guide from vague to specific metrics:

- "Users are happy" → "Users complete [key action] within [timeframe]"
- "Product is useful" → "Users return [frequency] and use [core feature]"
- Focus on outcomes and behaviors, not just satisfaction scores

### 3. Business Objectives

**Business Success Questions:**
Define business success metrics:

- "What does success look like for the business at 3 months? 12 months?"
- "Are we measuring revenue, user growth, engagement, something else?"
- "What business metrics would make you say 'this is working'?"
- "How does this product contribute to broader company goals?"
**Business Success Categories:**

- **Growth Metrics:** User acquisition, market penetration
- **Engagement Metrics:** Usage patterns, retention, satisfaction
- **Financial Metrics:** Revenue, profitability, cost efficiency
- **Strategic Metrics:** Market position, competitive advantage

### 4. Key Performance Indicators

**KPI Development Process:**
Define specific, measurable KPIs:

- Transform objectives into measurable indicators
- Ensure each KPI has a clear measurement method
- Define targets and timeframes where appropriate
- Include leading indicators that predict success

**KPI Examples:**

- User acquisition: "X new users per month"
- Engagement: "Y% of users complete core journey weekly"
- Business impact: "$Z in cost savings or revenue generation"

### 5. Connect Metrics to Strategy

**Strategic Alignment:**
Ensure metrics align with product vision and user needs:

- Connect each metric back to the product vision
- Ensure user success metrics drive business success
- Validate that metrics measure what truly matters
- Avoid vanity metrics that don't drive decisions

### 6. Generate Success Metrics Content

**Content to Append:**
Prepare the following structure for document append:

```markdown
## Success Metrics

[Success metrics content based on conversation]

### Business Objectives

[Business objectives content based on conversation, or N/A if not discussed]

### Key Performance Indicators

[Key performance indicators content based on conversation, or N/A if not discussed]
```

### 7. Present MENU OPTIONS

**Content Presentation:**
"I've defined success metrics that will help us track whether {{project_name}} is creating real value for users and achieving business objectives.

**Here's what I'll add to the document:**
[Show the complete markdown content from step 6]

**Select an Option:** [A] Advanced Elicitation [P] Party Mode [C] Continue"

#### Menu Handling Logic:

- IF A: Read fully and follow: {advancedElicitationTask} with current metrics content to dive deeper into success metric insights
- IF P: Read fully and follow: {partyModeWorkflow} to bring different perspectives to validate comprehensive metrics
- IF C: Save content to {outputFile}, update frontmatter with stepsCompleted: [1, 2, 3, 4], then read fully and follow: {nextStepFile}
- IF any other comments or queries: respond to the user, then [Redisplay Menu Options](#7-present-menu-options)

#### EXECUTION RULES:

- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- After other menu items execute, return to this menu with updated content
- User can chat or ask questions - always respond and then end with a display of the menu options again

## CRITICAL STEP COMPLETION NOTE

ONLY WHEN [C continue option] is selected and [success metrics finalized and saved to document with frontmatter updated], will you then read fully and follow: `{nextStepFile}` to begin MVP scope definition.
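To make the "specific, measurable" bar in section 4 concrete, here is one hypothetical shape a KPI could take; the type, field names, and sample values are invented for illustration:

```ts
// A KPI is only checkable if it names a measure, a target, and a window.
interface Kpi {
  name: string;
  measurement: string; // how the value is obtained
  target: number;
  unit: string;
  timeframeDays: number;
  leading: boolean; // leading indicators predict success early
}

const sampleKpis: Kpi[] = [
  {
    name: 'New users',
    measurement: 'signups recorded in analytics',
    target: 500,
    unit: 'users/month',
    timeframeDays: 30,
    leading: true,
  },
  {
    name: 'Core journey completion',
    measurement: 'weekly active users finishing the core flow',
    target: 40,
    unit: '%',
    timeframeDays: 7,
    leading: false,
  },
];

// A vague metric fails this bar: there is no target or timeframe to check against.
const isMeasurable = (k: Kpi) => Number.isFinite(k.target) && k.timeframeDays > 0;
console.log(sampleKpis.every(isMeasurable)); // true
```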
---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- User success metrics that focus on outcomes and behaviors
- Clear business objectives aligned with product strategy
- Specific, measurable KPIs with defined targets and timeframes
- Metrics that connect user value to business success
- A/P/C menu presented and handled correctly with proper task execution
- Content properly appended to document when C selected
- Frontmatter updated with stepsCompleted: [1, 2, 3, 4]

### ❌ SYSTEM FAILURE:

- Vague success metrics that can't be measured or tracked
- Business objectives disconnected from user success
- Too many metrics or missing critical success indicators
- Metrics that don't drive actionable decisions
- Not presenting standard A/P/C menu after content generation
- Appending content without user selecting 'C'
- Not updating frontmatter properly

**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

diff --git a/src/bmm/workflows/1-analysis/create-product-brief/steps/step-05-scope.md b/src/bmm/workflows/1-analysis/create-product-brief/steps/step-05-scope.md
new file mode 100644
index 00000000..0914b835
--- /dev/null
+++ b/src/bmm/workflows/1-analysis/create-product-brief/steps/step-05-scope.md
@@ -0,0 +1,219 @@
---
name: 'step-05-scope'
description: 'Define MVP scope with clear boundaries and outline future vision while managing scope creep'

# File References
nextStepFile: './step-06-complete.md'
outputFile: '{planning_artifacts}/product-brief-{{project_name}}-{{date}}.md'

# Task References
advancedElicitationTask: '{project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml'
partyModeWorkflow: '{project-root}/_bmad/core/workflows/party-mode/workflow.md'
---

# Step 5: MVP Scope Definition

## STEP GOAL:

Define MVP scope with clear boundaries and outline future vision through collaborative scope negotiation that balances ambition with realism.
+ +## MANDATORY EXECUTION RULES (READ FIRST): + +### Universal Rules: + +- 🛑 NEVER generate content without user input +- 📖 CRITICAL: Read the complete step file before taking any action +- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read +- 📋 YOU ARE A FACILITATOR, not a content generator +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +### Role Reinforcement: + +- ✅ You are a product-focused Business Analyst facilitator +- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role +- ✅ We engage in collaborative dialogue, not command-response +- ✅ You bring structured thinking and facilitation skills, while the user brings domain expertise and product vision +- ✅ Maintain collaborative discovery tone throughout + +### Step-Specific Rules: + +- 🎯 Focus only on defining minimum viable scope and future vision +- 🚫 FORBIDDEN to create MVP scope that's too large or includes non-essential features +- 💬 Approach: Systematic scope negotiation with clear boundary setting +- 📋 COLLABORATIVE scope definition that prevents scope creep + +## EXECUTION PROTOCOLS: + +- 🎯 Show your analysis before taking any action +- 💾 Generate MVP scope collaboratively with user +- 📖 Update frontmatter `stepsCompleted: [1, 2, 3, 4, 5]` before loading next step +- 🚫 FORBIDDEN to proceed without user confirmation through menu + +## CONTEXT BOUNDARIES: + +- Available context: Current document and frontmatter from previous steps, product vision, users, and success metrics already defined +- Focus: Defining what's essential for MVP vs. future enhancements +- Limits: Balance user needs with implementation feasibility +- Dependencies: Product vision, user personas, and success metrics from previous steps must be complete + +## Sequence of Instructions (Do not deviate, skip, or optimize) + +### 1. Begin Scope Definition + +**Opening Exploration:** +"Now that we understand what {{project_name}} does, who it serves, and how we'll measure success, let's define what we need to build first. + +**Scope Discovery:** + +- What's the absolute minimum we need to deliver to solve the core problem? +- What features would make users say 'this solves my problem'? +- How do we balance ambition with getting something valuable to users quickly? + +Let's start with the MVP mindset: what's the smallest version that creates real value?" + +### 2. MVP Core Features Definition + +**MVP Feature Questions:** +Define essential features for minimum viable product: + +- "What's the core functionality that must work?" +- "Which features directly address the main problem we're solving?" +- "What would users consider 'incomplete' if it was missing?" +- "What features create the 'aha!' moment we discussed earlier?" + +**MVP Criteria:** + +- **Solves Core Problem:** Addresses the main pain point effectively +- **User Value:** Creates meaningful outcome for target users +- **Feasible:** Achievable with available resources and timeline +- **Testable:** Allows learning and iteration based on user feedback + +### 3. Out of Scope Boundaries + +**Out of Scope Exploration:** +Define what explicitly won't be in MVP: + +- "What features would be nice to have but aren't essential?" +- "What functionality could wait for version 2.0?" +- "What are we intentionally saying 'no' to for now?" +- "How do we communicate these boundaries to stakeholders?" 
+
+**Boundary Setting:**
+
+- Clear communication about what's not included
+- Rationale for deferring certain features
+- Timeline considerations for future additions
+- Trade-off explanations for stakeholders
+
+### 4. MVP Success Criteria
+
+**Success Validation:**
+Define what makes the MVP successful:
+
+- "How will we know the MVP is successful?"
+- "What metrics will indicate we should proceed beyond MVP?"
+- "What user feedback signals validate our approach?"
+- "What's the decision point for scaling beyond MVP?"
+
+**Success Gates:**
+
+- User adoption metrics
+- Problem validation evidence
+- Technical feasibility confirmation
+- Business model validation
+
+### 5. Future Vision Exploration
+
+**Vision Questions:**
+Define the longer-term product vision:
+
+- "If this is wildly successful, what does it become in 2-3 years?"
+- "What capabilities would we add with more resources?"
+- "How does the MVP evolve into the full product vision?"
+- "What markets or user segments could we expand to?"
+
+**Future Features:**
+
+- Post-MVP enhancements that build on core functionality
+- Scale considerations and growth capabilities
+- Platform or ecosystem expansion opportunities
+- Advanced features that differentiate in the long term
+
+### 6. Generate MVP Scope Content
+
+**Content to Append:**
+Prepare the following structure for document append:
+
+```markdown
+## MVP Scope
+
+### Core Features
+
+[Core features content based on conversation]
+
+### Out of Scope for MVP
+
+[Out of scope content based on conversation, or N/A if not discussed]
+
+### MVP Success Criteria
+
+[MVP success criteria content based on conversation, or N/A if not discussed]
+
+### Future Vision
+
+[Future vision content based on conversation, or N/A if not discussed]
+```
+
+### 7. Present MENU OPTIONS
+
+**Content Presentation:**
+"I've defined the MVP scope for {{project_name}} that balances delivering real value with realistic boundaries. This gives us a clear path forward while keeping our options open for future growth.
+
+**Here's what I'll add to the document:**
+[Show the complete markdown content from step 6]
+
+**Select an Option:** [A] Advanced Elicitation [P] Party Mode [C] Continue"
+
+#### Menu Handling Logic:
+
+- IF A: Read fully and follow: {advancedElicitationTask} with current scope content to optimize scope definition
+- IF P: Read fully and follow: {partyModeWorkflow} to bring different perspectives to validate MVP scope
+- IF C: Save content to {outputFile} (see the resolved-path sketch after the completion note below), update frontmatter with stepsCompleted: [1, 2, 3, 4, 5], then read fully and follow: {nextStepFile}
+- IF Any other comments or queries: assist the user, then [Redisplay Menu Options](#7-present-menu-options)
+
+#### EXECUTION RULES:
+
+- ALWAYS halt and wait for user input after presenting menu
+- ONLY proceed to next step when user selects 'C'
+- After any other menu item executes, return to this menu with updated content
+- User can chat or ask questions - always respond, then display the menu options again
+
+## CRITICAL STEP COMPLETION NOTE
+
+ONLY WHEN [C continue option] is selected and [MVP scope finalized and saved to document with frontmatter updated], will you then read fully and follow: `{nextStepFile}` to complete the product brief workflow.
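+
+For orientation, here is a minimal sketch of how the `{outputFile}` reference from this step's frontmatter might resolve at save time; the concrete values are illustrative assumptions, not values defined by this workflow:
+
+```yaml
+# Hypothetical resolution of this step's outputFile reference
+outputFile: '{planning_artifacts}/product-brief-{{project_name}}-{{date}}.md'
+# Assuming planning_artifacts = docs/planning, project_name = acme-app,
+# and date = 2025-12-18, this resolves to:
+# docs/planning/product-brief-acme-app-2025-12-18.md
+```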
+ +--- + +## 🚨 SYSTEM SUCCESS/FAILURE METRICS + +### ✅ SUCCESS: + +- MVP features that solve the core problem effectively +- Clear out-of-scope boundaries that prevent scope creep +- Success criteria that validate MVP approach and inform go/no-go decisions +- Future vision that inspires while maintaining focus on MVP +- A/P/C menu presented and handled correctly with proper task execution +- Content properly appended to document when C selected +- Frontmatter updated with stepsCompleted: [1, 2, 3, 4, 5] + +### ❌ SYSTEM FAILURE: + +- MVP scope too large or includes non-essential features +- Missing clear boundaries leading to scope creep +- No success criteria to validate MVP approach +- Future vision disconnected from MVP foundation +- Not presenting standard A/P/C menu after content generation +- Appending content without user selecting 'C' +- Not updating frontmatter properly + +**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE. diff --git a/src/bmm/workflows/1-analysis/create-product-brief/steps/step-06-complete.md b/src/bmm/workflows/1-analysis/create-product-brief/steps/step-06-complete.md new file mode 100644 index 00000000..91c1ba66 --- /dev/null +++ b/src/bmm/workflows/1-analysis/create-product-brief/steps/step-06-complete.md @@ -0,0 +1,162 @@ +--- +name: 'step-06-complete' +description: 'Complete the product brief workflow, update status files, and suggest next steps for the project' + +# File References +outputFile: '{planning_artifacts}/product-brief-{{project_name}}-{{date}}.md' +--- + +# Step 6: Product Brief Completion + +## STEP GOAL: + +Complete the product brief workflow, update status files, and provide guidance on logical next steps for continued product development. 
+ +## MANDATORY EXECUTION RULES (READ FIRST): + +### Universal Rules: + +- 🛑 NEVER generate content without user input +- 📖 CRITICAL: Read the complete step file before taking any action +- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read +- 📋 YOU ARE A FACILITATOR, not a content generator +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +### Role Reinforcement: + +- ✅ You are a product-focused Business Analyst facilitator +- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role +- ✅ We engage in collaborative dialogue, not command-response +- ✅ You bring structured thinking and facilitation skills, while the user brings domain expertise and product vision +- ✅ Maintain collaborative completion tone throughout + +### Step-Specific Rules: + +- 🎯 Focus only on completion, next steps, and project guidance +- 🚫 FORBIDDEN to generate new content for the product brief +- 💬 Approach: Systematic completion with quality validation and next step recommendations +- 📋 FINALIZE document and update workflow status appropriately + +## EXECUTION PROTOCOLS: + +- 🎯 Show your analysis before taking any action +- 💾 Update the main workflow status file with completion information +- 📖 Suggest potential next workflow steps for the user +- 🚫 DO NOT load additional steps after this one (this is final) + +## CONTEXT BOUNDARIES: + +- Available context: Complete product brief document from all previous steps, workflow frontmatter shows all completed steps +- Focus: Completion validation, status updates, and next step guidance +- Limits: No new content generation, only completion and wrap-up activities +- Dependencies: All previous steps must be completed with content saved to document + +## Sequence of Instructions (Do not deviate, skip, or optimize) + +### 1. Announce Workflow Completion + +**Completion Announcement:** +"🎉 **Product Brief Complete, {{user_name}}!** + +I've successfully collaborated with you to create a comprehensive Product Brief for {{project_name}}. + +**What we've accomplished:** + +- ✅ Executive Summary with clear vision and problem statement +- ✅ Core Vision with solution definition and unique differentiators +- ✅ Target Users with rich personas and user journeys +- ✅ Success Metrics with measurable outcomes and business objectives +- ✅ MVP Scope with focused feature set and clear boundaries +- ✅ Future Vision that inspires while maintaining current focus + +**The complete Product Brief is now available at:** `{outputFile}` + +This brief serves as the foundation for all subsequent product development activities and strategic decisions." + +### 2. Document Quality Check + +**Completeness Validation:** +Perform final validation of the product brief: + +- Does the executive summary clearly communicate the vision and problem? +- Are target users well-defined with compelling personas? +- Do success metrics connect user value to business objectives? +- Is MVP scope focused and realistic? +- Does the brief provide clear direction for next steps? + +**Consistency Validation:** + +- Do all sections align with the core problem statement? +- Is user value consistently emphasized throughout? +- Are success criteria traceable to user needs and business goals? +- Does MVP scope align with the problem and solution? + +### 3. Suggest Next Steps + +**Recommended Next Workflow:** +Provide guidance on logical next workflows: + +1. 
`create-prd` - Create detailed Product Requirements Document
+ - Brief provides foundation for detailed requirements
+ - User personas inform journey mapping
+ - Success metrics become specific acceptance criteria
+ - MVP scope becomes detailed feature specifications
+
+**Other Potential Next Steps:**
+
+1. `create-ux-design` - UX research and design (can run parallel with PRD)
+2. `domain-research` - Deep market or domain research (if needed)
+
+**Strategic Considerations:**
+
+- The PRD workflow builds directly on this brief for detailed planning
+- Consider team capacity and immediate priorities
+- Use brief to validate concept before committing to detailed work
+- Brief can guide early technical feasibility discussions
+
+### 4. Congratulate the User
+
+"**Your Product Brief for {{project_name}} is now complete and ready for the next phase!**"
+
+Recap that the brief captures everything needed to guide subsequent product development:
+
+- Clear vision and problem definition
+- Deep understanding of target users
+- Measurable success criteria
+- Focused MVP scope with realistic boundaries
+- Inspiring long-term vision
+
+### 5. Hand Off to BMAD Help
+
+Product Brief complete. Read fully and follow: `_bmad/core/tasks/bmad-help.md` with argument `Validate PRD`.
+
+---
+
+## 🚨 SYSTEM SUCCESS/FAILURE METRICS
+
+### ✅ SUCCESS:
+
+- Product brief contains all essential sections with collaborative content
+- All collaborative content properly saved to document with proper frontmatter
+- Workflow status file updated with completion information and timestamp
+- Clear next step guidance provided to user with specific workflow recommendations
+- Document quality validation completed with completeness and consistency checks
+- User acknowledges completion and understands next available options
+- Workflow properly marked as complete in status tracking
+
+### ❌ SYSTEM FAILURE:
+
+- Not updating workflow status file with completion information
+- Missing clear next step guidance for user
+- Not confirming document completeness with user
+- Workflow not properly marked as complete in status tracking
+- User unclear about what happens next or available options
+- Document quality issues not identified or addressed
+
+**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.
+
+## FINAL WORKFLOW COMPLETION
+
+This product brief is now complete and serves as the strategic foundation for the entire product lifecycle. All subsequent design, architecture, and development work should trace back to the vision, user needs, and success criteria documented in this brief.
+
+**Congratulations on completing the Product Brief for {{project_name}}!** 🎉
diff --git a/src/bmm/workflows/1-analysis/create-product-brief/workflow.md b/src/bmm/workflows/1-analysis/create-product-brief/workflow.md
new file mode 100644
index 00000000..c17b1821
--- /dev/null
+++ b/src/bmm/workflows/1-analysis/create-product-brief/workflow.md
@@ -0,0 +1,58 @@
+---
+name: create-product-brief
+description: Create comprehensive product briefs through collaborative step-by-step discovery as a creative Business Analyst working with the user as peers.
+web_bundle: true
+---
+
+# Product Brief Workflow
+
+**Goal:** Create comprehensive product briefs through collaborative step-by-step discovery as a creative Business Analyst working with the user as peers.
+
+**Your Role:** In addition to your name, communication_style, and persona, you are also a product-focused Business Analyst collaborating with an expert peer. This is a partnership, not a client-vendor relationship. You bring structured thinking and facilitation skills, while the user brings domain expertise and product vision. Work together as equals.
+
+---
+
+## WORKFLOW ARCHITECTURE
+
+This uses **step-file architecture** for disciplined execution:
+
+### Core Principles
+
+- **Micro-file Design**: Each step is a self-contained instruction file that forms part of an overall workflow and must be followed exactly
+- **Just-In-Time Loading**: Only the current step file is in memory - never load future step files until told to do so
+- **Sequential Enforcement**: Sequence within the step files must be completed in order, no skipping or optimization allowed
+- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
+- **Append-Only Building**: Build documents by appending content as directed to the output file
+
+### Step Processing Rules
+
+1. **READ COMPLETELY**: Always read the entire step file before taking any action
+2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
+3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
+4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
+5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
+6. **LOAD NEXT**: When directed, read fully and follow the next step file
+
+### Critical Rules (NO EXCEPTIONS)
+
+- 🛑 **NEVER** load multiple step files simultaneously
+- 📖 **ALWAYS** read entire step file before execution
+- 🚫 **NEVER** skip steps or optimize the sequence
+- 💾 **ALWAYS** update frontmatter of output files when writing the final output for a specific step
+- 🎯 **ALWAYS** follow the exact instructions in the step file
+- ⏸️ **ALWAYS** halt at menus and wait for user input
+- 📋 **NEVER** create mental todo lists from future steps
+
+---
+
+## INITIALIZATION SEQUENCE
+
+### 1. Configuration Loading
+
+Load and read full config from {project-root}/_bmad/bmm/config.yaml and resolve (see the illustrative config sketch at the end of this file):
+
+- `project_name`, `output_folder`, `planning_artifacts`, `user_name`, `communication_language`, `document_output_language`, `user_skill_level`
+
+### 2. First Step EXECUTION
+
+Read fully and follow: `{project-root}/_bmad/bmm/workflows/1-analysis/create-product-brief/steps/step-01-init.md` to begin the workflow.
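+
+For orientation, a resolved config might look like the minimal sketch below. The key names are the ones listed above; every value is a hypothetical example, not a shipped default:
+
+```yaml
+# Illustrative _bmad/bmm/config.yaml contents (all values are assumptions)
+project_name: acme-app
+output_folder: docs
+planning_artifacts: docs/planning
+user_name: Alex
+communication_language: English
+document_output_language: English
+user_skill_level: intermediate
+```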
diff --git a/src/bmm/workflows/1-analysis/research/domain-steps/step-01-init.md b/src/bmm/workflows/1-analysis/research/domain-steps/step-01-init.md new file mode 100644 index 00000000..27d056b1 --- /dev/null +++ b/src/bmm/workflows/1-analysis/research/domain-steps/step-01-init.md @@ -0,0 +1,137 @@ +# Domain Research Step 1: Domain Research Scope Confirmation + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without user confirmation + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ FOCUS EXCLUSIVELY on confirming domain research scope and approach +- 📋 YOU ARE A DOMAIN RESEARCH PLANNER, not content generator +- 💬 ACKNOWLEDGE and CONFIRM understanding of domain research goals +- 🔍 This is SCOPE CONFIRMATION ONLY - no web research yet +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show your analysis before taking any action +- ⚠️ Present [C] continue option after scope confirmation +- 💾 ONLY proceed when user chooses C (Continue) +- 📖 Update frontmatter `stepsCompleted: [1]` before loading next step +- 🚫 FORBIDDEN to load next step until C is selected + +## CONTEXT BOUNDARIES: + +- Research type = "domain" is already set +- **Research topic = "{{research_topic}}"** - discovered from initial discussion +- **Research goals = "{{research_goals}}"** - captured from initial discussion +- Focus on industry/domain analysis with web research +- Web search is required to verify and supplement your knowledge with current facts + +## YOUR TASK: + +Confirm domain research scope and approach for **{{research_topic}}** with the user's goals in mind. + +## DOMAIN SCOPE CONFIRMATION: + +### 1. Begin Scope Confirmation + +Start with domain scope understanding: +"I understand you want to conduct **domain research** for **{{research_topic}}** with these goals: {{research_goals}} + +**Domain Research Scope:** + +- **Industry Analysis**: Industry structure, market dynamics, and competitive landscape +- **Regulatory Environment**: Compliance requirements, regulations, and standards +- **Technology Patterns**: Innovation trends, technology adoption, and digital transformation +- **Economic Factors**: Market size, growth trends, and economic impact +- **Supply Chain**: Value chain analysis and ecosystem relationships + +**Research Approach:** + +- All claims verified against current public sources +- Multi-source validation for critical domain claims +- Confidence levels for uncertain domain information +- Comprehensive domain coverage with industry-specific insights + +### 2. Scope Confirmation + +Present clear scope confirmation: +"**Domain Research Scope Confirmation:** + +For **{{research_topic}}**, I will research: + +✅ **Industry Analysis** - market structure, key players, competitive dynamics +✅ **Regulatory Requirements** - compliance standards, legal frameworks +✅ **Technology Trends** - innovation patterns, digital transformation +✅ **Economic Factors** - market size, growth projections, economic impact +✅ **Supply Chain Analysis** - value chain, ecosystem, partnerships + +**All claims verified against current public sources.** + +**Does this domain research scope and approach align with your goals?** +[C] Continue - Begin domain research with this scope + +### 3. 
Handle Continue Selection + +#### If 'C' (Continue): + +- Document scope confirmation in research file +- Update frontmatter: `stepsCompleted: [1]` +- Load: `./step-02-domain-analysis.md` + +## APPEND TO DOCUMENT: + +When user selects 'C', append scope confirmation: + +```markdown +## Domain Research Scope Confirmation + +**Research Topic:** {{research_topic}} +**Research Goals:** {{research_goals}} + +**Domain Research Scope:** + +- Industry Analysis - market structure, competitive landscape +- Regulatory Environment - compliance requirements, legal frameworks +- Technology Trends - innovation patterns, digital transformation +- Economic Factors - market size, growth projections +- Supply Chain Analysis - value chain, ecosystem relationships + +**Research Methodology:** + +- All claims verified against current public sources +- Multi-source validation for critical domain claims +- Confidence level framework for uncertain information +- Comprehensive domain coverage with industry-specific insights + +**Scope Confirmed:** {{date}} +``` + +## SUCCESS METRICS: + +✅ Domain research scope clearly confirmed with user +✅ All domain analysis areas identified and explained +✅ Research methodology emphasized +✅ [C] continue option presented and handled correctly +✅ Scope confirmation documented when user proceeds +✅ Proper routing to next domain research step + +## FAILURE MODES: + +❌ Not clearly confirming domain research scope with user +❌ Missing critical domain analysis areas +❌ Not explaining that web search is required for current facts +❌ Not presenting [C] continue option +❌ Proceeding without user scope confirmation +❌ Not routing to next domain research step + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## NEXT STEP: + +After user selects 'C', load `./step-02-domain-analysis.md` to begin industry analysis. + +Remember: This is SCOPE CONFIRMATION ONLY - no actual domain research yet, just confirming the research approach and scope! 
+
diff --git a/src/bmm/workflows/1-analysis/research/domain-steps/step-02-domain-analysis.md b/src/bmm/workflows/1-analysis/research/domain-steps/step-02-domain-analysis.md
new file mode 100644
index 00000000..bb4cbb63
--- /dev/null
+++ b/src/bmm/workflows/1-analysis/research/domain-steps/step-02-domain-analysis.md
@@ -0,0 +1,229 @@
+# Domain Research Step 2: Industry Analysis
+
+## MANDATORY EXECUTION RULES (READ FIRST):
+
+- 🛑 NEVER generate content without web search verification
+
+- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions
+- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding
+- ✅ Search the web to verify and supplement your knowledge with current facts
+- 📋 YOU ARE AN INDUSTRY ANALYST, not content generator
+- 💬 FOCUS on market size, growth, and industry dynamics
+- 🔍 WEB SEARCH REQUIRED - verify current facts against live sources
+- 📝 WRITE CONTENT IMMEDIATELY TO DOCUMENT
+- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`
+
+## EXECUTION PROTOCOLS:
+
+- 🎯 Show web search analysis before presenting findings
+- ⚠️ Present [C] continue option after industry analysis content generation
+- 📝 WRITE INDUSTRY ANALYSIS TO DOCUMENT IMMEDIATELY
+- 💾 ONLY proceed when user chooses C (Continue)
+- 📖 Update frontmatter `stepsCompleted: [1, 2]` before loading next step
+- 🚫 FORBIDDEN to load next step until C is selected
+
+## CONTEXT BOUNDARIES:
+
+- Current document and frontmatter from step-01 are available
+- **Research topic = "{{research_topic}}"** - established from initial discussion
+- **Research goals = "{{research_goals}}"** - established from initial discussion
+- Focus on market size, growth, and industry dynamics
+- Web search capabilities with source verification are enabled
+
+## YOUR TASK:
+
+Conduct industry analysis focusing on market size, growth, and industry dynamics. Search the web to verify and supplement current facts.
+
+## INDUSTRY ANALYSIS SEQUENCE:
+
+### 1. Begin Industry Analysis
+
+**UTILIZE SUBPROCESSES AND SUBAGENTS**: Use research subagents, subprocesses, or parallel processing if available to analyze the different industry areas simultaneously and thoroughly.
+
+Start with industry research approach:
+"Now I'll conduct **industry analysis** for **{{research_topic}}** to understand market dynamics.
+
+**Industry Analysis Focus:**
+
+- Market size and valuation metrics
+- Growth rates and market dynamics
+- Market segmentation and structure
+- Industry trends and evolution patterns
+- Economic impact and value creation
+
+**Let me search for current industry insights.**"
+
+### 2. Parallel Industry Research Execution
+
+**Execute multiple web searches simultaneously:**
+
+Search the web: "{{research_topic}} market size value"
+Search the web: "{{research_topic}} market growth rate dynamics"
+Search the web: "{{research_topic}} market segmentation structure"
+Search the web: "{{research_topic}} industry trends evolution"
+
+**Analysis approach:**
+
+- Look for recent market research reports and industry analyses
+- Search for authoritative sources (market research firms, industry associations)
+- Identify market size, growth rates, and segmentation data
+- Research industry trends and evolution patterns
+- Analyze economic impact and value creation metrics
+
+### 3. 
Analyze and Aggregate Results + +**Collect and analyze findings from all parallel searches:** + +"After executing comprehensive parallel web searches, let me analyze and aggregate industry findings: + +**Research Coverage:** + +- Market size and valuation analysis +- Growth rates and market dynamics +- Market segmentation and structure +- Industry trends and evolution patterns + +**Cross-Industry Analysis:** +[Identify patterns connecting market dynamics, segmentation, and trends] + +**Quality Assessment:** +[Overall confidence levels and research gaps identified]" + +### 4. Generate Industry Analysis Content + +**WRITE IMMEDIATELY TO DOCUMENT** + +Prepare industry analysis with web search citations: + +#### Content Structure: + +When saving to document, append these Level 2 and Level 3 sections: + +```markdown +## Industry Analysis + +### Market Size and Valuation + +[Market size analysis with source citations] +_Total Market Size: [Current market valuation]_ +_Growth Rate: [CAGR and market growth projections]_ +_Market Segments: [Size and value of key market segments]_ +_Economic Impact: [Economic contribution and value creation]_ +_Source: [URL]_ + +### Market Dynamics and Growth + +[Market dynamics analysis with source citations] +_Growth Drivers: [Key factors driving market growth]_ +_Growth Barriers: [Factors limiting market expansion]_ +_Cyclical Patterns: [Industry seasonality and cycles]_ +_Market Maturity: [Life cycle stage and development phase]_ +_Source: [URL]_ + +### Market Structure and Segmentation + +[Market structure analysis with source citations] +_Primary Segments: [Key market segments and their characteristics]_ +_Sub-segment Analysis: [Detailed breakdown of market sub-segments]_ +_Geographic Distribution: [Regional market variations and concentrations]_ +_Vertical Integration: [Supply chain and value chain structure]_ +_Source: [URL]_ + +### Industry Trends and Evolution + +[Industry trends analysis with source citations] +_Emerging Trends: [Current industry developments and transformations]_ +_Historical Evolution: [Industry development over recent years]_ +_Technology Integration: [How technology is changing the industry]_ +_Future Outlook: [Projected industry developments and changes]_ +_Source: [URL]_ + +### Competitive Dynamics + +[Competitive dynamics analysis with source citations] +_Market Concentration: [Level of market consolidation and competition]_ +_Competitive Intensity: [Degree of competition and rivalry]_ +_Barriers to Entry: [Obstacles for new market entrants]_ +_Innovation Pressure: [Rate of innovation and change]_ +_Source: [URL]_ +``` + +### 5. Present Analysis and Continue Option + +**Show analysis and present continue option:** + +"I've completed **industry analysis** for {{research_topic}}. + +**Key Industry Findings:** + +- Market size and valuation thoroughly analyzed +- Growth dynamics and market structure documented +- Industry trends and evolution patterns identified +- Competitive dynamics clearly mapped +- Multiple sources verified for critical insights + +**Ready to proceed to competitive landscape analysis?** +[C] Continue - Save this to document and proceed to competitive landscape + +### 6. Handle Continue Selection + +#### If 'C' (Continue): + +- **CONTENT ALREADY WRITTEN TO DOCUMENT** +- Update frontmatter: `stepsCompleted: [1, 2]` +- Load: `./step-03-competitive-landscape.md` + +## APPEND TO DOCUMENT: + +Content is already written to document when generated in step 4. No additional append needed. 
+ +## SUCCESS METRICS: + +✅ Market size and valuation thoroughly analyzed +✅ Growth dynamics and market structure documented +✅ Industry trends and evolution patterns identified +✅ Competitive dynamics clearly mapped +✅ Multiple sources verified for critical insights +✅ Content written immediately to document +✅ [C] continue option presented and handled correctly +✅ Proper routing to next step (competitive landscape) +✅ Research goals alignment maintained + +## FAILURE MODES: + +❌ Relying on training data instead of web search for current facts +❌ Missing critical market size or growth data +❌ Incomplete market structure analysis +❌ Not identifying key industry trends +❌ Not writing content immediately to document +❌ Not presenting [C] continue option after content generation +❌ Not routing to competitive landscape step + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## INDUSTRY RESEARCH PROTOCOLS: + +- Research market research reports and industry analyses +- Use authoritative sources (market research firms, industry associations) +- Analyze market size, growth rates, and segmentation data +- Study industry trends and evolution patterns +- Search the web to verify facts +- Present conflicting information when sources disagree +- Apply confidence levels appropriately + +## INDUSTRY ANALYSIS STANDARDS: + +- Always cite URLs for web search results +- Use authoritative industry research sources +- Note data currency and potential limitations +- Present multiple perspectives when sources conflict +- Apply confidence levels to uncertain data +- Focus on actionable industry insights + +## NEXT STEP: + +After user selects 'C', load `./step-03-competitive-landscape.md` to analyze competitive landscape, key players, and ecosystem analysis for {{research_topic}}. + +Remember: Always write research content to document immediately and search the web to verify facts! 
+
diff --git a/src/bmm/workflows/1-analysis/research/domain-steps/step-03-competitive-landscape.md b/src/bmm/workflows/1-analysis/research/domain-steps/step-03-competitive-landscape.md
new file mode 100644
index 00000000..0dc2de6e
--- /dev/null
+++ b/src/bmm/workflows/1-analysis/research/domain-steps/step-03-competitive-landscape.md
@@ -0,0 +1,238 @@
+# Domain Research Step 3: Competitive Landscape
+
+## MANDATORY EXECUTION RULES (READ FIRST):
+
+- 🛑 NEVER generate content without web search verification
+
+- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions
+- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding
+- ✅ Search the web to verify and supplement your knowledge with current facts
+- 📋 YOU ARE A COMPETITIVE ANALYST, not content generator
+- 💬 FOCUS on key players, market share, and competitive dynamics
+- 🔍 WEB SEARCH REQUIRED - verify current facts against live sources
+- 📝 WRITE CONTENT IMMEDIATELY TO DOCUMENT
+- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`
+
+## EXECUTION PROTOCOLS:
+
+- 🎯 Show web search analysis before presenting findings
+- ⚠️ Present [C] continue option after competitive analysis content generation
+- 📝 WRITE COMPETITIVE ANALYSIS TO DOCUMENT IMMEDIATELY
+- 💾 ONLY proceed when user chooses C (Continue)
+- 📖 Update frontmatter `stepsCompleted: [1, 2, 3]` before loading next step
+- 🚫 FORBIDDEN to load next step until C is selected
+
+## CONTEXT BOUNDARIES:
+
+- Current document and frontmatter from previous steps are available
+- **Research topic = "{{research_topic}}"** - established from initial discussion
+- **Research goals = "{{research_goals}}"** - established from initial discussion
+- Focus on key players, market share, and competitive dynamics
+- Web search capabilities with source verification are enabled
+
+## YOUR TASK:
+
+Conduct competitive landscape analysis focusing on key players, market share, and competitive dynamics. Search the web to verify and supplement current facts.
+
+## COMPETITIVE LANDSCAPE ANALYSIS SEQUENCE:
+
+### 1. Begin Competitive Landscape Analysis
+
+**UTILIZE SUBPROCESSES AND SUBAGENTS**: Use research subagents, subprocesses, or parallel processing if available to analyze the different competitive areas simultaneously and thoroughly.
+
+Start with competitive research approach:
+"Now I'll conduct **competitive landscape analysis** for **{{research_topic}}** to understand the competitive ecosystem.
+
+**Competitive Landscape Focus:**
+
+- Key players and market leaders
+- Market share and competitive positioning
+- Competitive strategies and differentiation
+- Business models and value propositions
+- Entry barriers and competitive dynamics
+
+**Let me search for current competitive insights.**"
+
+### 2. 
Parallel Competitive Research Execution + +**Execute multiple web searches simultaneously:** + +Search the web: "{{research_topic}} key players market leaders" +Search the web: "{{research_topic}} market share competitive landscape" +Search the web: "{{research_topic}} competitive strategies differentiation" +Search the web: "{{research_topic}} entry barriers competitive dynamics" + +**Analysis approach:** + +- Look for recent competitive intelligence reports and market analyses +- Search for company websites, annual reports, and investor presentations +- Research market share data and competitive positioning +- Analyze competitive strategies and differentiation approaches +- Study entry barriers and competitive dynamics + +### 3. Analyze and Aggregate Results + +**Collect and analyze findings from all parallel searches:** + +"After executing comprehensive parallel web searches, let me analyze and aggregate competitive findings: + +**Research Coverage:** + +- Key players and market leaders analysis +- Market share and competitive positioning assessment +- Competitive strategies and differentiation mapping +- Entry barriers and competitive dynamics evaluation + +**Cross-Competitive Analysis:** +[Identify patterns connecting players, strategies, and market dynamics] + +**Quality Assessment:** +[Overall confidence levels and research gaps identified]" + +### 4. Generate Competitive Landscape Content + +**WRITE IMMEDIATELY TO DOCUMENT** + +Prepare competitive landscape analysis with web search citations: + +#### Content Structure: + +When saving to document, append these Level 2 and Level 3 sections: + +```markdown +## Competitive Landscape + +### Key Players and Market Leaders + +[Key players analysis with source citations] +_Market Leaders: [Dominant players and their market positions]_ +_Major Competitors: [Significant competitors and their specialties]_ +_Emerging Players: [New entrants and innovative companies]_ +_Global vs Regional: [Geographic distribution of key players]_ +_Source: [URL]_ + +### Market Share and Competitive Positioning + +[Market share analysis with source citations] +_Market Share Distribution: [Current market share breakdown]_ +_Competitive Positioning: [How players position themselves in the market]_ +_Value Proposition Mapping: [Different value propositions across players]_ +_Customer Segments Served: [Different customer bases by competitor]_ +_Source: [URL]_ + +### Competitive Strategies and Differentiation + +[Competitive strategies analysis with source citations] +_Cost Leadership Strategies: [Players competing on price and efficiency]_ +_Differentiation Strategies: [Players competing on unique value]_ +_Focus/Niche Strategies: [Players targeting specific segments]_ +_Innovation Approaches: [How different players innovate]_ +_Source: [URL]_ + +### Business Models and Value Propositions + +[Business models analysis with source citations] +_Primary Business Models: [How competitors make money]_ +_Revenue Streams: [Different approaches to monetization]_ +_Value Chain Integration: [Vertical integration vs partnership models]_ +_Customer Relationship Models: [How competitors build customer loyalty]_ +_Source: [URL]_ + +### Competitive Dynamics and Entry Barriers + +[Competitive dynamics analysis with source citations] +_Barriers to Entry: [Obstacles facing new market entrants]_ +_Competitive Intensity: [Level of rivalry and competitive pressure]_ +_Market Consolidation Trends: [M&A activity and market concentration]_ +_Switching Costs: [Costs for customers to switch 
between providers]_ +_Source: [URL]_ + +### Ecosystem and Partnership Analysis + +[Ecosystem analysis with source citations] +_Supplier Relationships: [Key supplier partnerships and dependencies]_ +_Distribution Channels: [How competitors reach customers]_ +_Technology Partnerships: [Strategic technology alliances]_ +_Ecosystem Control: [Who controls key parts of the value chain]_ +_Source: [URL]_ +``` + +### 5. Present Analysis and Continue Option + +**Show analysis and present continue option:** + +"I've completed **competitive landscape analysis** for {{research_topic}}. + +**Key Competitive Findings:** + +- Key players and market leaders thoroughly identified +- Market share and competitive positioning clearly mapped +- Competitive strategies and differentiation analyzed +- Business models and value propositions documented +- Competitive dynamics and entry barriers evaluated + +**Ready to proceed to regulatory focus analysis?** +[C] Continue - Save this to document and proceed to regulatory focus + +### 6. Handle Continue Selection + +#### If 'C' (Continue): + +- **CONTENT ALREADY WRITTEN TO DOCUMENT** +- Update frontmatter: `stepsCompleted: [1, 2, 3]` +- Load: `./step-04-regulatory-focus.md` + +## APPEND TO DOCUMENT: + +Content is already written to document when generated in step 4. No additional append needed. + +## SUCCESS METRICS: + +✅ Key players and market leaders thoroughly identified +✅ Market share and competitive positioning clearly mapped +✅ Competitive strategies and differentiation analyzed +✅ Business models and value propositions documented +✅ Competitive dynamics and entry barriers evaluated +✅ Content written immediately to document +✅ [C] continue option presented and handled correctly +✅ Proper routing to next step (regulatory focus) +✅ Research goals alignment maintained + +## FAILURE MODES: + +❌ Relying on training data instead of web search for current facts +❌ Missing critical key players or market leaders +❌ Incomplete market share or positioning analysis +❌ Not identifying competitive strategies +❌ Not writing content immediately to document +❌ Not presenting [C] continue option after content generation +❌ Not routing to regulatory focus step + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## COMPETITIVE RESEARCH PROTOCOLS: + +- Research competitive intelligence reports and market analyses +- Use company websites, annual reports, and investor presentations +- Analyze market share data and competitive positioning +- Study competitive strategies and differentiation approaches +- Search the web to verify facts +- Present conflicting information when sources disagree +- Apply confidence levels appropriately + +## COMPETITIVE ANALYSIS STANDARDS: + +- Always cite URLs for web search results +- Use authoritative competitive intelligence sources +- Note data currency and potential limitations +- Present multiple perspectives when sources conflict +- Apply confidence levels to uncertain data +- Focus on actionable competitive insights + +## NEXT STEP: + +After user selects 'C', load `./step-04-regulatory-focus.md` to analyze regulatory requirements, compliance frameworks, and legal considerations for {{research_topic}}. 
+
+Remember: Always write research content to document immediately and search the web to verify facts!
diff --git a/src/bmm/workflows/1-analysis/research/domain-steps/step-04-regulatory-focus.md b/src/bmm/workflows/1-analysis/research/domain-steps/step-04-regulatory-focus.md
new file mode 100644
index 00000000..e98010c7
--- /dev/null
+++ b/src/bmm/workflows/1-analysis/research/domain-steps/step-04-regulatory-focus.md
@@ -0,0 +1,206 @@
+# Domain Research Step 4: Regulatory Focus
+
+## MANDATORY EXECUTION RULES (READ FIRST):
+
+- 🛑 NEVER generate content without web search verification
+
+- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions
+- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding
+- ✅ Search the web to verify and supplement your knowledge with current facts
+- 📋 YOU ARE A REGULATORY ANALYST, not content generator
+- 💬 FOCUS on compliance requirements and regulatory landscape
+- 🔍 WEB SEARCH REQUIRED - verify current facts against live sources
+- 📝 WRITE CONTENT IMMEDIATELY TO DOCUMENT
+- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`
+
+## EXECUTION PROTOCOLS:
+
+- 🎯 Show web search analysis before presenting findings
+- ⚠️ Present [C] continue option after regulatory content generation
+- 📝 WRITE REGULATORY ANALYSIS TO DOCUMENT IMMEDIATELY
+- 💾 ONLY proceed when user chooses C (Continue)
+- 📖 Update frontmatter `stepsCompleted: [1, 2, 3, 4]` before loading next step
+- 🚫 FORBIDDEN to load next step until C is selected
+
+## CONTEXT BOUNDARIES:
+
+- Current document and frontmatter from previous steps are available
+- **Research topic = "{{research_topic}}"** - established from initial discussion
+- **Research goals = "{{research_goals}}"** - established from initial discussion
+- Focus on regulatory and compliance requirements for the domain
+- Web search capabilities with source verification are enabled
+
+## YOUR TASK:
+
+Conduct focused regulatory and compliance analysis with emphasis on requirements that impact {{research_topic}}. Search the web to verify and supplement current facts.
+
+## REGULATORY FOCUS SEQUENCE:
+
+### 1. Begin Regulatory Analysis
+
+Start with regulatory research approach:
+"Now I'll focus on **regulatory and compliance requirements** that impact **{{research_topic}}**.
+
+**Regulatory Focus Areas:**
+
+- Specific regulations and compliance frameworks
+- Industry standards and best practices
+- Licensing and certification requirements
+- Data protection and privacy regulations
+- Environmental and safety requirements
+
+**Let me search for current regulatory requirements.**"
+
+### 2. Web Search for Specific Regulations
+
+Search for current regulatory information:
+Search the web: "{{research_topic}} regulations compliance requirements"
+
+**Regulatory focus:**
+
+- Specific regulations applicable to the domain
+- Compliance frameworks and standards
+- Recent regulatory changes or updates
+- Enforcement agencies and oversight bodies
+
+### 3. Web Search for Industry Standards
+
+Search for current industry standards:
+Search the web: "{{research_topic}} standards best practices"
+
+**Standards focus:**
+
+- Industry-specific technical standards
+- Best practices and guidelines
+- Certification requirements
+- Quality assurance frameworks
+
+### 4. 
Web Search for Data Privacy Requirements + +Search for current privacy regulations: +Search the web: "data privacy regulations {{research_topic}}" + +**Privacy focus:** + +- GDPR, CCPA, and other data protection laws +- Industry-specific privacy requirements +- Data governance and security standards +- User consent and data handling requirements + +### 5. Generate Regulatory Analysis Content + +Prepare regulatory content with source citations: + +#### Content Structure: + +When saving to document, append these Level 2 and Level 3 sections: + +```markdown +## Regulatory Requirements + +### Applicable Regulations + +[Specific regulations analysis with source citations] +_Source: [URL]_ + +### Industry Standards and Best Practices + +[Industry standards analysis with source citations] +_Source: [URL]_ + +### Compliance Frameworks + +[Compliance frameworks analysis with source citations] +_Source: [URL]_ + +### Data Protection and Privacy + +[Privacy requirements analysis with source citations] +_Source: [URL]_ + +### Licensing and Certification + +[Licensing requirements analysis with source citations] +_Source: [URL]_ + +### Implementation Considerations + +[Practical implementation considerations with source citations] +_Source: [URL]_ + +### Risk Assessment + +[Regulatory and compliance risk assessment] +``` + +### 6. Present Analysis and Continue Option + +Show the generated regulatory analysis and present continue option: +"I've completed **regulatory requirements analysis** for {{research_topic}}. + +**Key Regulatory Findings:** + +- Specific regulations and frameworks identified +- Industry standards and best practices mapped +- Compliance requirements clearly documented +- Implementation considerations provided +- Risk assessment completed + +**Ready to proceed to technical trends?** +[C] Continue - Save this to the document and move to technical trends + +### 7. Handle Continue Selection + +#### If 'C' (Continue): + +- **CONTENT ALREADY WRITTEN TO DOCUMENT** +- Update frontmatter: `stepsCompleted: [1, 2, 3, 4]` +- Load: `./step-05-technical-trends.md` + +## APPEND TO DOCUMENT: + +Content is already written to document when generated in step 5. No additional append needed. 
+ +## SUCCESS METRICS: + +✅ Applicable regulations identified with current citations +✅ Industry standards and best practices documented +✅ Compliance frameworks clearly mapped +✅ Data protection requirements analyzed +✅ Implementation considerations provided +✅ [C] continue option presented and handled correctly +✅ Content properly appended to document when C selected + +## FAILURE MODES: + +❌ Relying on training data instead of web search for current facts +❌ Missing critical regulatory requirements for the domain +❌ Not providing implementation considerations for compliance +❌ Not completing risk assessment for regulatory compliance +❌ Not presenting [C] continue option after content generation +❌ Appending content without user selecting 'C' + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## REGULATORY RESEARCH PROTOCOLS: + +- Search for specific regulations by name and number +- Identify regulatory bodies and enforcement agencies +- Research recent regulatory changes and updates +- Map industry standards to regulatory requirements +- Consider regional and jurisdictional differences + +## SOURCE VERIFICATION: + +- Always cite regulatory agency websites +- Use official government and industry association sources +- Note effective dates and implementation timelines +- Present compliance requirement levels and obligations + +## NEXT STEP: + +After user selects 'C' and content is saved to document, load `./step-05-technical-trends.md` to analyze technical trends and innovations in the domain. + +Remember: Search the web to verify regulatory facts and provide practical implementation considerations! 
diff --git a/src/bmm/workflows/1-analysis/research/domain-steps/step-05-technical-trends.md b/src/bmm/workflows/1-analysis/research/domain-steps/step-05-technical-trends.md new file mode 100644 index 00000000..55e834cd --- /dev/null +++ b/src/bmm/workflows/1-analysis/research/domain-steps/step-05-technical-trends.md @@ -0,0 +1,234 @@ +# Domain Research Step 5: Technical Trends + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without web search verification + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ Search the web to verify and supplement your knowledge with current facts +- 📋 YOU ARE A TECHNOLOGY ANALYST, not content generator +- 💬 FOCUS on emerging technologies and innovation patterns +- 🔍 WEB SEARCH REQUIRED - verify current facts against live sources +- 📝 WRITE CONTENT IMMEDIATELY TO DOCUMENT +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show web search analysis before presenting findings +- ⚠️ Present [C] continue option after technical trends content generation +- 📝 WRITE TECHNICAL TRENDS ANALYSIS TO DOCUMENT IMMEDIATELY +- 💾 ONLY proceed when user chooses C (Continue) +- 📖 Update frontmatter `stepsCompleted: [1, 2, 3, 4, 5]` before loading next step +- 🚫 FORBIDDEN to load next step until C is selected + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from previous steps are available +- **Research topic = "{{research_topic}}"** - established from initial discussion +- **Research goals = "{{research_goals}}"** - established from initial discussion +- Focus on emerging technologies and innovation patterns in the domain +- Web search capabilities with source verification are enabled + +## YOUR TASK: + +Conduct comprehensive technical trends analysis using current web data with emphasis on innovations and emerging technologies impacting {{research_topic}}. + +## TECHNICAL TRENDS SEQUENCE: + +### 1. Begin Technical Trends Analysis + +Start with technology research approach: +"Now I'll conduct **technical trends and emerging technologies** analysis for **{{research_topic}}** using current data. + +**Technical Trends Focus:** + +- Emerging technologies and innovations +- Digital transformation impacts +- Automation and efficiency improvements +- New business models enabled by technology +- Future technology projections and roadmaps + +**Let me search for current technology developments.**" + +### 2. Web Search for Emerging Technologies + +Search for current technology information: +Search the web: "{{research_topic}} emerging technologies innovations" + +**Technology focus:** + +- AI, machine learning, and automation impacts +- Digital transformation trends +- New technologies disrupting the industry +- Innovation patterns and breakthrough developments + +### 3. Web Search for Digital Transformation + +Search for current transformation trends: +Search the web: "{{research_topic}} digital transformation trends" + +**Transformation focus:** + +- Digital adoption trends and rates +- Business model evolution +- Customer experience innovations +- Operational efficiency improvements + +### 4. 
Web Search for Future Outlook
+
+Search for future projections:
+Search the web: "{{research_topic}} future outlook trends"
+
+**Future focus:**
+
+- Technology roadmaps and projections
+- Market evolution predictions
+- Innovation pipelines and R&D trends
+- Long-term industry transformation
+
+### 5. Generate Technical Trends Content
+
+**WRITE IMMEDIATELY TO DOCUMENT**
+
+Prepare technical analysis with source citations:
+
+#### Content Structure:
+
+When saving to document, append these Level 2 and Level 3 sections:
+
+```markdown
+## Technical Trends and Innovation
+
+### Emerging Technologies
+
+[Emerging technologies analysis with source citations]
+_Source: [URL]_
+
+### Digital Transformation
+
+[Digital transformation analysis with source citations]
+_Source: [URL]_
+
+### Innovation Patterns
+
+[Innovation patterns analysis with source citations]
+_Source: [URL]_
+
+### Future Outlook
+
+[Future outlook and projections with source citations]
+_Source: [URL]_
+
+### Implementation Opportunities
+
+[Implementation opportunity analysis with source citations]
+_Source: [URL]_
+
+### Challenges and Risks
+
+[Challenges and risks assessment with source citations]
+_Source: [URL]_
+
+## Recommendations
+
+### Technology Adoption Strategy
+
+[Technology adoption recommendations]
+
+### Innovation Roadmap
+
+[Innovation roadmap suggestions]
+
+### Risk Mitigation
+
+[Risk mitigation strategies]
+```
+
+### 6. Present Analysis and Continue Option
+
+Show the generated technical analysis and present the continue option:
+"I've completed **technical trends and innovation analysis** for {{research_topic}}.
+
+**Technical Highlights:**
+
+- Emerging technologies and innovations identified
+- Digital transformation trends mapped
+- Future outlook and projections analyzed
+- Implementation opportunities and challenges documented
+- Practical recommendations provided
+
+**Ready to proceed to research synthesis and recommendations?**
+[C] Continue - Save this to the document and proceed to synthesis
+
+### 7. Handle Continue Selection
+
+#### If 'C' (Continue):
+
+- **CONTENT ALREADY WRITTEN TO DOCUMENT**
+- Update frontmatter: `stepsCompleted: [1, 2, 3, 4, 5]`
+- Load: `./step-06-research-synthesis.md`
+
+## APPEND TO DOCUMENT:
+
+Content is already written to document when generated in step 5. No additional append needed.
+
+## SUCCESS METRICS:
+
+✅ Emerging technologies identified with current data
+✅ Digital transformation trends clearly documented
+✅ Future outlook and projections analyzed
+✅ Implementation opportunities and challenges mapped
+✅ Strategic recommendations provided
+✅ Content written immediately to document
+✅ [C] continue option presented and handled correctly
+✅ Proper routing to next step (research synthesis)
+✅ Research goals alignment maintained
+
+## FAILURE MODES:
+
+❌ Relying solely on training data without web verification for current facts
+❌ Missing critical emerging technologies in the domain
+❌ Not providing practical implementation recommendations
+❌ Not completing strategic recommendations
+❌ Not presenting [C] continue option after content generation
+❌ Appending content without user selecting 'C'
+
+❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions
+❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file
+❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols
+
+## TECHNICAL RESEARCH PROTOCOLS:
+
+- Search for cutting-edge technologies and innovations
+- Identify disruption patterns and game-changers
+- Research technology adoption timelines and barriers
+- Consider regional technology variations
+- Analyze competitive technological advantages
+
+## STEP COMPLETION:
+
+When 'C' is selected:
+
+- Technical trends analysis completed and written to document
+- Frontmatter updated with `stepsCompleted: [1, 2, 3, 4, 5]`
+- Research synthesis step loaded to produce the final document
+
+## NEXT STEP:
+
+After user selects 'C', load `./step-06-research-synthesis.md` to synthesize all findings into the final comprehensive research document.
+
+Great work - the technical trends analysis is complete! 
🎉 diff --git a/src/bmm/workflows/1-analysis/research/domain-steps/step-06-research-synthesis.md b/src/bmm/workflows/1-analysis/research/domain-steps/step-06-research-synthesis.md new file mode 100644 index 00000000..1c7db8c0 --- /dev/null +++ b/src/bmm/workflows/1-analysis/research/domain-steps/step-06-research-synthesis.md @@ -0,0 +1,443 @@ +# Domain Research Step 6: Research Synthesis and Completion + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without web search verification + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ Search the web to verify and supplement your knowledge with current facts +- 📋 YOU ARE A DOMAIN RESEARCH STRATEGIST, not content generator +- 💬 FOCUS on comprehensive synthesis and authoritative conclusions +- 🔍 WEB SEARCH REQUIRED - verify current facts against live sources +- 📄 PRODUCE COMPREHENSIVE DOCUMENT with narrative intro, TOC, and summary +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show web search analysis before presenting findings +- ⚠️ Present [C] complete option after synthesis content generation +- 💾 ONLY save when user chooses C (Complete) +- 📖 Update frontmatter `stepsCompleted: [1, 2, 3, 4, 5, 6]` before completing workflow +- 🚫 FORBIDDEN to complete workflow until C is selected +- 📚 GENERATE COMPLETE DOCUMENT STRUCTURE with intro, TOC, and summary + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from previous steps are available +- **Research topic = "{{research_topic}}"** - comprehensive domain analysis +- **Research goals = "{{research_goals}}"** - achieved through exhaustive research +- All domain research sections have been completed (analysis, regulatory, technical) +- Web search capabilities with source verification are enabled +- This is the final synthesis step producing the complete research document + +## YOUR TASK: + +Produce a comprehensive, authoritative research document on **{{research_topic}}** with compelling narrative introduction, detailed TOC, and executive summary based on exhaustive domain research. + +## COMPREHENSIVE DOCUMENT SYNTHESIS: + +### 1. Document Structure Planning + +**Complete Research Document Structure:** + +```markdown +# [Compelling Title]: Comprehensive {{research_topic}} Research + +## Executive Summary + +[Brief compelling overview of key findings and implications] + +## Table of Contents + +- Research Introduction and Methodology +- Industry Overview and Market Dynamics +- Technology Trends and Innovation Landscape +- Regulatory Framework and Compliance Requirements +- Competitive Landscape and Key Players +- Strategic Insights and Recommendations +- Implementation Considerations and Risk Assessment +- Future Outlook and Strategic Opportunities +- Research Methodology and Source Documentation +- Appendices and Additional Resources +``` + +### 2. 
Generate Compelling Narrative Introduction + +**Introduction Requirements:** + +- Hook reader with compelling opening about {{research_topic}} +- Establish research significance and timeliness +- Outline comprehensive research methodology +- Preview key findings and strategic implications +- Set professional, authoritative tone + +**Web Search for Introduction Context:** +Search the web: "{{research_topic}} significance importance" + +### 3. Synthesize All Research Sections + +**Section-by-Section Integration:** + +- Combine industry analysis from step-02 +- Integrate regulatory focus from step-03 +- Incorporate technical trends from step-04 +- Add cross-sectional insights and connections +- Ensure comprehensive coverage with no gaps + +### 4. Generate Complete Document Content + +#### Final Document Structure: + +```markdown +# [Compelling Title]: Comprehensive {{research_topic}} Domain Research + +## Executive Summary + +[2-3 paragraph compelling summary of the most critical findings and strategic implications for {{research_topic}} based on comprehensive current research] + +**Key Findings:** + +- [Most significant market dynamics] +- [Critical regulatory considerations] +- [Important technology trends] +- [Strategic implications] + +**Strategic Recommendations:** + +- [Top 3-5 actionable recommendations based on research] + +## Table of Contents + +1. Research Introduction and Methodology +2. {{research_topic}} Industry Overview and Market Dynamics +3. Technology Landscape and Innovation Trends +4. Regulatory Framework and Compliance Requirements +5. Competitive Landscape and Ecosystem Analysis +6. Strategic Insights and Domain Opportunities +7. Implementation Considerations and Risk Assessment +8. Future Outlook and Strategic Planning +9. Research Methodology and Source Verification +10. Appendices and Additional Resources + +## 1. Research Introduction and Methodology + +### Research Significance + +[Compelling narrative about why {{research_topic}} research is critical right now] +_Why this research matters now: [Strategic importance with current context]_ +_Source: [URL]_ + +### Research Methodology + +[Comprehensive description of research approach including:] + +- **Research Scope**: [Comprehensive coverage areas] +- **Data Sources**: [Authoritative sources and verification approach] +- **Analysis Framework**: [Structured analysis methodology] +- **Time Period**: [current focus and historical context] +- **Geographic Coverage**: [Regional/global scope] + +### Research Goals and Objectives + +**Original Goals:** {{research_goals}} + +**Achieved Objectives:** + +- [Goal 1 achievement with supporting evidence] +- [Goal 2 achievement with supporting evidence] +- [Additional insights discovered during research] + +## 2. {{research_topic}} Industry Overview and Market Dynamics + +### Market Size and Growth Projections + +[Comprehensive market analysis synthesized from step-02 with current data] +_Market Size: [Current market valuation]_ +_Growth Rate: [CAGR and projections]_ +_Market Drivers: [Key growth factors]_ +_Source: [URL]_ + +### Industry Structure and Value Chain + +[Complete industry structure analysis] +_Value Chain Components: [Detailed breakdown]_ +_Industry Segments: [Market segmentation analysis]_ +_Economic Impact: [Industry economic significance]_ +_Source: [URL]_ + +## 3. 
Technology Landscape and Innovation Trends + +### Current Technology Adoption + +[Technology trends analysis from step-04 with current context] +_Emerging Technologies: [Key technologies affecting {{research_topic}}]_ +_Adoption Patterns: [Technology adoption rates and patterns]_ +_Innovation Drivers: [Factors driving technology change]_ +_Source: [URL]_ + +### Digital Transformation Impact + +[Comprehensive analysis of technology's impact on {{research_topic}}] +_Transformation Trends: [Major digital transformation patterns]_ +_Disruption Opportunities: [Technology-driven opportunities]_ +_Future Technology Outlook: [Emerging technologies and timelines]_ +_Source: [URL]_ + +## 4. Regulatory Framework and Compliance Requirements + +### Current Regulatory Landscape + +[Regulatory analysis from step-03 with current updates] +_Key Regulations: [Critical regulatory requirements]_ +_Compliance Standards: [Industry standards and best practices]_ +_Recent Changes: [current regulatory updates and implications]_ +_Source: [URL]_ + +### Risk and Compliance Considerations + +[Comprehensive risk assessment] +_Compliance Risks: [Major regulatory and compliance risks]_ +_Risk Mitigation Strategies: [Approaches to manage regulatory risks]_ +_Future Regulatory Trends: [Anticipated regulatory developments]_ +_Source: [URL]_ + +## 5. Competitive Landscape and Ecosystem Analysis + +### Market Positioning and Key Players + +[Competitive analysis with current market positioning] +_Market Leaders: [Dominant players and strategies]_ +_Emerging Competitors: [New entrants and innovative approaches]_ +_Competitive Dynamics: [Market competition patterns and trends]_ +_Source: [URL]_ + +### Ecosystem and Partnership Landscape + +[Complete ecosystem analysis] +_Ecosystem Players: [Key stakeholders and relationships]_ +_Partnership Opportunities: [Strategic collaboration potential]_ +_Supply Chain Dynamics: [Supply chain structure and risks]_ +_Source: [URL]_ + +## 6. Strategic Insights and Domain Opportunities + +### Cross-Domain Synthesis + +[Strategic insights from integrating all research sections] +_Market-Technology Convergence: [How technology and market forces interact]_ +_Regulatory-Strategic Alignment: [How regulatory environment shapes strategy]_ +_Competitive Positioning Opportunities: [Strategic advantages based on research]_ +_Source: [URL]_ + +### Strategic Opportunities + +[High-value opportunities identified through comprehensive research] +_Market Opportunities: [Specific market entry or expansion opportunities]_ +_Technology Opportunities: [Technology adoption or innovation opportunities]_ +_Partnership Opportunities: [Strategic collaboration and partnership potential]_ +_Source: [URL]_ + +## 7. Implementation Considerations and Risk Assessment + +### Implementation Framework + +[Practical implementation guidance based on research findings] +_Implementation Timeline: [Recommended phased approach]_ +_Resource Requirements: [Key resources and capabilities needed]_ +_Success Factors: [Critical success factors for implementation]_ +_Source: [URL]_ + +### Risk Management and Mitigation + +[Comprehensive risk assessment and mitigation strategies] +_Implementation Risks: [Major risks and mitigation approaches]_ +_Market Risks: [Market-related risks and contingency plans]_ +_Technology Risks: [Technology adoption and implementation risks]_ +_Source: [URL]_ + +## 8. 
Future Outlook and Strategic Planning + +### Future Trends and Projections + +[Forward-looking analysis based on comprehensive research] +_Near-term Outlook: [1-2 year projections and implications]_ +_Medium-term Trends: [3-5 year expected developments]_ +_Long-term Vision: [5+ year strategic outlook for {{research_topic}}]_ +_Source: [URL]_ + +### Strategic Recommendations + +[Comprehensive strategic recommendations] +_Immediate Actions: [Priority actions for next 6 months]_ +_Strategic Initiatives: [Key strategic initiatives for 1-2 years]_ +_Long-term Strategy: [Strategic positioning for 3+ years]_ +_Source: [URL]_ + +## 9. Research Methodology and Source Verification + +### Comprehensive Source Documentation + +[Complete documentation of all research sources] +_Primary Sources: [Key authoritative sources used]_ +_Secondary Sources: [Supporting research and analysis]_ +_Web Search Queries: [Complete list of search queries used]_ + +### Research Quality Assurance + +[Quality assurance and validation approach] +_Source Verification: [All factual claims verified with multiple sources]_ +_Confidence Levels: [Confidence assessments for uncertain data]_ +_Limitations: [Research limitations and areas for further investigation]_ +_Methodology Transparency: [Complete transparency about research approach]_ + +## 10. Appendices and Additional Resources + +### Detailed Data Tables + +[Comprehensive data tables supporting research findings] +_Market Data Tables: [Detailed market size, growth, and segmentation data]_ +_Technology Adoption Data: [Detailed technology adoption and trend data]_ +_Regulatory Reference Tables: [Complete regulatory requirements and compliance data]_ + +### Additional Resources + +[Valuable resources for continued research and implementation] +_Industry Associations: [Key industry organizations and resources]_ +_Research Organizations: [Authoritative research institutions and reports]_ +_Government Resources: [Regulatory agencies and official resources]_ +_Professional Networks: [Industry communities and knowledge sources]_ + +--- + +## Research Conclusion + +### Summary of Key Findings + +[Comprehensive summary of the most important research findings] + +### Strategic Impact Assessment + +[Assessment of strategic implications for {{research_topic}}] + +### Next Steps Recommendations + +[Specific next steps for leveraging this research] + +--- + +**Research Completion Date:** {{date}} +**Research Period:** Comprehensive analysis +**Document Length:** As needed for comprehensive coverage +**Source Verification:** All facts cited with sources +**Confidence Level:** High - based on multiple authoritative sources + +_This comprehensive research document serves as an authoritative reference on {{research_topic}} and provides strategic insights for informed decision-making._ +``` + +### 5. 
Present Complete Document and Final Option
+
+**Document Completion Presentation:**
+
+"I've completed the **comprehensive research document synthesis** for **{{research_topic}}**, producing an authoritative research document with:
+
+**Document Features:**
+
+- **Compelling Narrative Introduction**: Engaging opening that establishes research significance
+- **Comprehensive Table of Contents**: Complete navigation structure for easy reference
+- **Exhaustive Research Coverage**: All aspects of {{research_topic}} thoroughly analyzed
+- **Executive Summary**: Key findings and strategic implications highlighted
+- **Strategic Recommendations**: Actionable insights based on comprehensive research
+- **Complete Source Citations**: Every factual claim verified with sources
+
+**Research Completeness:**
+
+- Industry analysis and market dynamics fully documented
+- Technology trends and innovation landscape comprehensively covered
+- Regulatory framework and compliance requirements detailed
+- Competitive landscape and ecosystem analysis complete
+- Strategic insights and implementation guidance provided
+
+**Document Standards Met:**
+
+- Exhaustive research with no critical gaps
+- Professional structure and compelling narrative
+- Length as needed for comprehensive coverage
+- Multiple independent sources for all claims
+- Proper citations throughout
+
+**Ready to complete this comprehensive research document?**
+[C] Complete Research - Save final comprehensive document"
+
+### 6. Handle Final Completion
+
+#### If 'C' (Complete Research):
+
+- Append the complete document to the research file
+- Update frontmatter: `stepsCompleted: [1, 2, 3, 4, 5, 6]`
+- Complete the domain research workflow
+- Provide final document delivery confirmation
+
+## APPEND TO DOCUMENT:
+
+When user selects 'C', append the complete comprehensive research document using the full structure above.
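+
+To make the citation convention concrete, a hedged sketch of how one appended subsection might read once filled in; the bracketed values and the URL below are illustrative placeholders, not real data or references:
+
+```markdown
+### Market Size and Growth Projections
+
+The global market for [hypothetical topic] reached [value] in [year], with growth concentrated in [segment]. Analysts project a CAGR of [X]% through [year], driven primarily by [driver].
+_Market Size: [value, year]_
+_Growth Rate: [CAGR projection]_
+_Source: https://example.com/market-report (placeholder URL)_
+```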
+
+## SUCCESS METRICS:
+
+✅ Compelling narrative introduction with research significance
+✅ Comprehensive table of contents with complete document structure
+✅ Exhaustive research coverage across all domain aspects
+✅ Executive summary with key findings and strategic implications
+✅ Strategic recommendations grounded in comprehensive research
+✅ Complete source verification with citations
+✅ Professional document structure and compelling narrative
+✅ [C] complete option presented and handled correctly
+✅ Domain research workflow completed with comprehensive document
+
+## FAILURE MODES:
+
+❌ Not producing compelling narrative introduction
+❌ Missing comprehensive table of contents
+❌ Incomplete research coverage across domain aspects
+❌ Not providing executive summary with key findings
+❌ Missing strategic recommendations based on research
+❌ Relying solely on training data without web verification for current facts
+❌ Producing document without professional structure
+❌ Not presenting completion option for final document
+
+❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions
+❌ **CRITICAL**: Completing with 'C' before fully reading and understanding this final step file
+❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols
+
+## COMPREHENSIVE DOCUMENT STANDARDS:
+
+This step ensures the final research document:
+
+- Serves as an authoritative reference on {{research_topic}}
+- Provides compelling narrative and professional structure
+- Includes comprehensive coverage with no gaps
+- Maintains rigorous source verification standards
+- Delivers strategic insights and actionable recommendations
+- Meets professional research document quality standards
+
+## DOMAIN RESEARCH WORKFLOW COMPLETION:
+
+When 'C' is selected:
+
+- All domain research steps completed (1-6)
+- Comprehensive domain research document generated
+- Professional document structure with intro, TOC, and summary
+- All sections appended with source citations
+- Domain research workflow status updated to complete
+- Final comprehensive research document delivered to user
+
+## FINAL DELIVERABLE:
+
+Complete authoritative research document on {{research_topic}} that:
+
+- Establishes professional credibility through comprehensive research
+- Provides strategic insights for informed decision-making
+- Serves as reference document for continued use
+- Maintains highest research quality standards
+
+Congratulations on completing comprehensive domain research!
🎉 diff --git a/src/bmm/workflows/1-analysis/research/market-steps/step-01-init.md b/src/bmm/workflows/1-analysis/research/market-steps/step-01-init.md new file mode 100644 index 00000000..a3772a9b --- /dev/null +++ b/src/bmm/workflows/1-analysis/research/market-steps/step-01-init.md @@ -0,0 +1,182 @@ +# Market Research Step 1: Market Research Initialization + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate research content in init step +- ✅ ALWAYS confirm understanding of user's research goals +- 📋 YOU ARE A MARKET RESEARCH FACILITATOR, not content generator +- 💬 FOCUS on clarifying scope and approach +- 🔍 NO WEB RESEARCH in init - that's for later steps +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete research +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Confirm research understanding before proceeding +- ⚠️ Present [C] continue option after scope clarification +- 💾 Write initial scope document immediately +- 📖 Update frontmatter `stepsCompleted: [1]` before loading next step +- 🚫 FORBIDDEN to load next step until C is selected + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from main workflow discovery are available +- Research type = "market" is already set +- **Research topic = "{{research_topic}}"** - discovered from initial discussion +- **Research goals = "{{research_goals}}"** - captured from initial discussion +- Focus on market research scope clarification +- Web search capabilities are enabled for later steps + +## YOUR TASK: + +Initialize market research by confirming understanding of {{research_topic}} and establishing clear research scope. + +## MARKET RESEARCH INITIALIZATION: + +### 1. Confirm Research Understanding + +**INITIALIZE - DO NOT RESEARCH YET** + +Start with research confirmation: +"I understand you want to conduct **market research** for **{{research_topic}}** with these goals: {{research_goals}} + +**My Understanding of Your Research Needs:** + +- **Research Topic**: {{research_topic}} +- **Research Goals**: {{research_goals}} +- **Research Type**: Market Research +- **Approach**: Comprehensive market analysis with source verification + +**Market Research Areas We'll Cover:** + +- Market size, growth dynamics, and trends +- Customer insights and behavior analysis +- Competitive landscape and positioning +- Strategic recommendations and implementation guidance + +**Does this accurately capture what you're looking for?**" + +### 2. Refine Research Scope + +Gather any clarifications needed: + +#### Scope Clarification Questions: + +- "Are there specific customer segments or aspects of {{research_topic}} we should prioritize?" +- "Should we focus on specific geographic regions or global market?" +- "Is this for market entry, expansion, product development, or other business purpose?" +- "Any competitors or market segments you specifically want us to analyze?" + +### 3. 
Document Initial Scope + +**WRITE IMMEDIATELY TO DOCUMENT** + +Write initial research scope to document: + +```markdown +# Market Research: {{research_topic}} + +## Research Initialization + +### Research Understanding Confirmed + +**Topic**: {{research_topic}} +**Goals**: {{research_goals}} +**Research Type**: Market Research +**Date**: {{date}} + +### Research Scope + +**Market Analysis Focus Areas:** + +- Market size, growth projections, and dynamics +- Customer segments, behavior patterns, and insights +- Competitive landscape and positioning analysis +- Strategic recommendations and implementation guidance + +**Research Methodology:** + +- Current web data with source verification +- Multiple independent sources for critical claims +- Confidence level assessment for uncertain data +- Comprehensive coverage with no critical gaps + +### Next Steps + +**Research Workflow:** + +1. ✅ Initialization and scope setting (current step) +2. Customer Insights and Behavior Analysis +3. Competitive Landscape Analysis +4. Strategic Synthesis and Recommendations + +**Research Status**: Scope confirmed, ready to proceed with detailed market analysis +``` + +### 4. Present Confirmation and Continue Option + +Show initial scope document and present continue option: +"I've documented our understanding and initial scope for **{{research_topic}}** market research. + +**What I've established:** + +- Research topic and goals confirmed +- Market analysis focus areas defined +- Research methodology verification +- Clear workflow progression + +**Document Status:** Initial scope written to research file for your review + +**Ready to begin detailed market research?** +[C] Continue - Confirm scope and proceed to customer insights analysis +[Modify] Suggest changes to research scope before proceeding + +### 5. Handle User Response + +#### If 'C' (Continue): + +- Update frontmatter: `stepsCompleted: [1]` +- Add confirmation note to document: "Scope confirmed by user on {{date}}" +- Load: `./step-02-customer-insights.md` + +#### If 'Modify': + +- Gather user changes to scope +- Update document with modifications +- Re-present updated scope for confirmation + +## SUCCESS METRICS: + +✅ Research topic and goals accurately understood +✅ Market research scope clearly defined +✅ Initial scope document written immediately +✅ User opportunity to review and modify scope +✅ [C] continue option presented and handled correctly +✅ Document properly updated with scope confirmation + +## FAILURE MODES: + +❌ Not confirming understanding of research topic and goals +❌ Generating research content instead of just scope clarification +❌ Not writing initial scope document to file +❌ Not providing opportunity for user to modify scope +❌ Proceeding to next step without user confirmation +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor research decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## INITIALIZATION PRINCIPLES: + +This step ensures: + +- Clear mutual understanding of research objectives +- Well-defined research scope and approach +- Immediate documentation for user review +- User control over research direction before detailed work begins + +## NEXT STEP: + +After user confirmation and scope finalization, load `./step-02-customer-insights.md` to begin detailed market research with customer insights analysis. 
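+
+As a minimal sketch of the 'C' handling described above, assuming standard YAML frontmatter; the `researchType` field and the date shown are hypothetical placeholders:
+
+```markdown
+---
+researchType: market
+stepsCompleted: [1]
+---
+
+Scope confirmed by user on 2025-01-15
+```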
+ +Remember: Init steps confirm understanding and scope, not generate research content! diff --git a/src/bmm/workflows/1-analysis/research/market-steps/step-02-customer-behavior.md b/src/bmm/workflows/1-analysis/research/market-steps/step-02-customer-behavior.md new file mode 100644 index 00000000..f707a0a3 --- /dev/null +++ b/src/bmm/workflows/1-analysis/research/market-steps/step-02-customer-behavior.md @@ -0,0 +1,237 @@ +# Market Research Step 2: Customer Behavior and Segments + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without web search verification +- ✅ Search the web to verify and supplement your knowledge with current facts +- 📋 YOU ARE A CUSTOMER BEHAVIOR ANALYST, not content generator +- 💬 FOCUS on customer behavior patterns and demographic analysis +- 🔍 WEB SEARCH REQUIRED - verify current facts against live sources +- 📝 WRITE CONTENT IMMEDIATELY TO DOCUMENT +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete research +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show web search analysis before presenting findings +- ⚠️ Present [C] continue option after customer behavior content generation +- 📝 WRITE CUSTOMER BEHAVIOR ANALYSIS TO DOCUMENT IMMEDIATELY +- 💾 ONLY proceed when user chooses C (Continue) +- 📖 Update frontmatter `stepsCompleted: [1, 2]` before loading next step +- 🚫 FORBIDDEN to load next step until C is selected + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from step-01 are available +- Focus on customer behavior patterns and demographic analysis +- Web search capabilities with source verification are enabled +- Previous step confirmed research scope and goals +- **Research topic = "{{research_topic}}"** - established from initial discussion +- **Research goals = "{{research_goals}}"** - established from initial discussion + +## YOUR TASK: + +Conduct customer behavior and segment analysis with emphasis on patterns and demographics. + +## CUSTOMER BEHAVIOR ANALYSIS SEQUENCE: + +### 1. Begin Customer Behavior Analysis + +**UTILIZE SUBPROCESSES AND SUBAGENTS**: Use research subagents, subprocesses or parallel processing if available to thoroughly analyze different customer behavior areas simultaneously and thoroughly. + +Start with customer behavior research approach: +"Now I'll conduct **customer behavior analysis** for **{{research_topic}}** to understand customer patterns. + +**Customer Behavior Focus:** + +- Customer behavior patterns and preferences +- Demographic profiles and segmentation +- Psychographic characteristics and values +- Behavior drivers and influences +- Customer interaction patterns and engagement + +**Let me search for current customer behavior insights.**" + +### 2. 
Parallel Customer Behavior Research Execution + +**Execute multiple web searches simultaneously:** + +Search the web: "{{research_topic}} customer behavior patterns" +Search the web: "{{research_topic}} customer demographics" +Search the web: "{{research_topic}} psychographic profiles" +Search the web: "{{research_topic}} customer behavior drivers" + +**Analysis approach:** + +- Look for customer behavior studies and research reports +- Search for demographic segmentation and analysis +- Research psychographic profiling and value systems +- Analyze behavior drivers and influencing factors +- Study customer interaction and engagement patterns + +### 3. Analyze and Aggregate Results + +**Collect and analyze findings from all parallel searches:** + +"After executing comprehensive parallel web searches, let me analyze and aggregate customer behavior findings: + +**Research Coverage:** + +- Customer behavior patterns and preferences +- Demographic profiles and segmentation +- Psychographic characteristics and values +- Behavior drivers and influences +- Customer interaction patterns and engagement + +**Cross-Behavior Analysis:** +[Identify patterns connecting demographics, psychographics, and behaviors] + +**Quality Assessment:** +[Overall confidence levels and research gaps identified]" + +### 4. Generate Customer Behavior Content + +**WRITE IMMEDIATELY TO DOCUMENT** + +Prepare customer behavior analysis with web search citations: + +#### Content Structure: + +When saving to document, append these Level 2 and Level 3 sections: + +```markdown +## Customer Behavior and Segments + +### Customer Behavior Patterns + +[Customer behavior patterns analysis with source citations] +_Behavior Drivers: [Key motivations and patterns from web search]_ +_Interaction Preferences: [Customer engagement and interaction patterns]_ +_Decision Habits: [How customers typically make decisions]_ +_Source: [URL]_ + +### Demographic Segmentation + +[Demographic analysis with source citations] +_Age Demographics: [Age groups and preferences]_ +_Income Levels: [Income segments and purchasing behavior]_ +_Geographic Distribution: [Regional/city differences]_ +_Education Levels: [Education impact on behavior]_ +_Source: [URL]_ + +### Psychographic Profiles + +[Psychographic analysis with source citations] +_Values and Beliefs: [Core values driving customer behavior]_ +_Lifestyle Preferences: [Lifestyle choices and behaviors]_ +_Attitudes and Opinions: [Customer attitudes toward products/services]_ +_Personality Traits: [Personality influences on behavior]_ +_Source: [URL]_ + +### Customer Segment Profiles + +[Detailed customer segment profiles with source citations] +_Segment 1: [Detailed profile including demographics, psychographics, behavior]_ +_Segment 2: [Detailed profile including demographics, psychographics, behavior]_ +_Segment 3: [Detailed profile including demographics, psychographics, behavior]_ +_Source: [URL]_ + +### Behavior Drivers and Influences + +[Behavior drivers analysis with source citations] +_Emotional Drivers: [Emotional factors influencing behavior]_ +_Rational Drivers: [Logical decision factors]_ +_Social Influences: [Social and peer influences]_ +_Economic Influences: [Economic factors affecting behavior]_ +_Source: [URL]_ + +### Customer Interaction Patterns + +[Customer interaction analysis with source citations] +_Research and Discovery: [How customers find and research options]_ +_Purchase Decision Process: [Steps in purchase decision making]_ +_Post-Purchase Behavior: [After-purchase 
engagement patterns]_ +_Loyalty and Retention: [Factors driving customer loyalty]_ +_Source: [URL]_ +``` + +### 5. Present Analysis and Continue Option + +**Show analysis and present continue option:** + +"I've completed **customer behavior analysis** for {{research_topic}}, focusing on customer patterns. + +**Key Customer Behavior Findings:** + +- Customer behavior patterns clearly identified with drivers +- Demographic segmentation thoroughly analyzed +- Psychographic profiles mapped and documented +- Customer interaction patterns captured +- Multiple sources verified for critical insights + +**Ready to proceed to customer pain points?** +[C] Continue - Save this to document and proceed to pain points analysis + +### 6. Handle Continue Selection + +#### If 'C' (Continue): + +- **CONTENT ALREADY WRITTEN TO DOCUMENT** +- Update frontmatter: `stepsCompleted: [1, 2]` +- Load: `./step-03-customer-pain-points.md` + +## APPEND TO DOCUMENT: + +Content is already written to document when generated in step 4. No additional append needed. + +## SUCCESS METRICS: + +✅ Customer behavior patterns identified with current citations +✅ Demographic segmentation thoroughly analyzed +✅ Psychographic profiles clearly documented +✅ Customer interaction patterns captured +✅ Multiple sources verified for critical insights +✅ Content written immediately to document +✅ [C] continue option presented and handled correctly +✅ Proper routing to next step (customer pain points) +✅ Research goals alignment maintained + +## FAILURE MODES: + +❌ Relying solely on training data without web verification for current facts + +❌ Missing critical customer behavior patterns +❌ Incomplete demographic segmentation analysis +❌ Missing psychographic profile documentation +❌ Not writing content immediately to document +❌ Not presenting [C] continue option after content generation +❌ Not routing to customer pain points analysis step +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor research decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## CUSTOMER BEHAVIOR RESEARCH PROTOCOLS: + +- Research customer behavior studies and market research +- Use demographic data from authoritative sources +- Research psychographic profiling and value systems +- Analyze customer interaction and engagement patterns +- Focus on current behavior data and trends +- Present conflicting information when sources disagree +- Apply confidence levels appropriately + +## BEHAVIOR ANALYSIS STANDARDS: + +- Always cite URLs for web search results +- Use authoritative customer research sources +- Note data currency and potential limitations +- Present multiple perspectives when sources conflict +- Apply confidence levels to uncertain data +- Focus on actionable customer insights + +## NEXT STEP: + +After user selects 'C', load `./step-03-customer-pain-points.md` to analyze customer pain points, challenges, and unmet needs for {{research_topic}}. + +Remember: Always write research content to document immediately and emphasize current customer data with rigorous source verification! 
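+
+For illustration, one filled-in entry under the segment structure above might read as follows; the profile details and URL are hypothetical placeholders, not sourced findings:
+
+```markdown
+### Customer Segment Profiles
+
+_Segment 1: Budget-conscious early adopters, ages 25-40, urban, who value quick setup over advanced features and discover products through peer recommendations (illustrative profile)_
+_Source: https://example.com/segment-study (placeholder URL)_
+```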
diff --git a/src/bmm/workflows/1-analysis/research/market-steps/step-02-customer-insights.md b/src/bmm/workflows/1-analysis/research/market-steps/step-02-customer-insights.md
new file mode 100644
index 00000000..c6d7ea32
--- /dev/null
+++ b/src/bmm/workflows/1-analysis/research/market-steps/step-02-customer-insights.md
@@ -0,0 +1,200 @@
+# Market Research Step 2: Customer Insights
+
+## MANDATORY EXECUTION RULES (READ FIRST):
+
+- 🛑 NEVER generate content without web search verification
+- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions
+- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding
+- ✅ Search the web to verify and supplement your knowledge with current facts
+- 📋 YOU ARE A CUSTOMER INSIGHTS ANALYST, not content generator
+- 💬 FOCUS on customer behavior and needs analysis
+- 🔍 WEB SEARCH REQUIRED - verify current facts against live sources
+- ✅ ALWAYS respond in your Agent communication style, using the configured `{communication_language}`
+
+## EXECUTION PROTOCOLS:
+
+- 🎯 Show web search analysis before presenting findings
+- ⚠️ Present [C] continue option after customer insights content generation
+- 💾 ONLY save when user chooses C (Continue)
+- 📖 Update frontmatter `stepsCompleted: [1, 2]` before loading next step
+- 🚫 FORBIDDEN to load next step until C is selected
+
+## CONTEXT BOUNDARIES:
+
+- Current document and frontmatter from step-01 are available
+- Focus on customer behavior and needs analysis
+- Web search capabilities with source verification are enabled
+- May need to search for current customer behavior trends
+
+## YOUR TASK:
+
+Conduct comprehensive customer insights analysis with emphasis on behavior patterns and needs.
+
+## CUSTOMER INSIGHTS SEQUENCE:
+
+### 1. Begin Customer Insights Analysis
+
+**UTILIZE SUBPROCESSES AND SUBAGENTS**: Use research subagents, subprocesses, or parallel processing if available to analyze the different customer areas simultaneously and thoroughly.
+
+Start with customer research approach:
+"Now I'll conduct **customer insights analysis** to understand customer behavior and needs.
+
+**Customer Insights Focus:**
+
+- Customer behavior patterns and preferences
+- Pain points and challenges
+- Decision-making processes
+- Customer journey mapping
+- Customer satisfaction drivers
+- Demographic and psychographic profiles
+
+**Let me search for current customer insights using parallel web searches for comprehensive coverage.**"
+
+### 2. Parallel Customer Research Execution
+
+**Execute multiple web searches simultaneously:**
+
+Search the web: "{{research_topic}} customer behavior patterns"
+Search the web: "{{research_topic}} customer pain points challenges"
+Search the web: "{{research_topic}} customer decision process"
+
+**Analysis approach:**
+
+- Look for customer behavior studies and surveys
+- Search for customer experience and interaction patterns
+- Research customer satisfaction methodologies
+- Note generational and cultural customer variations
+- Research customer pain points and frustrations
+- Analyze decision-making processes and criteria
+
+### 3. 
Analyze and Aggregate Results + +**Collect and analyze findings from all parallel searches:** + +"After executing comprehensive parallel web searches, let me analyze and aggregate the customer insights: + +**Research Coverage:** + +- Customer behavior patterns and preferences +- Pain points and challenges +- Decision-making processes and journey mapping + +**Cross-Customer Analysis:** +[Identify patterns connecting behavior, pain points, and decisions] + +**Quality Assessment:** +[Overall confidence levels and research gaps identified]" + +### 4. Generate Customer Insights Content + +Prepare customer analysis with web search citations: + +#### Content Structure: + +When saving to document, append these Level 2 and Level 3 sections: + +```markdown +## Customer Insights + +### Customer Behavior Patterns + +[Customer behavior analysis with source citations] +_Source: [URL]_ + +### Pain Points and Challenges + +[Pain points analysis with source citations] +_Source: [URL]_ + +### Decision-Making Processes + +[Decision-making analysis with source citations] +_Source: [URL]_ + +### Customer Journey Mapping + +[Customer journey analysis with source citations] +_Source: [URL]_ + +### Customer Satisfaction Drivers + +[Satisfaction drivers analysis with source citations] +_Source: [URL]_ + +### Demographic Profiles + +[Demographic profiles analysis with source citations] +_Source: [URL]_ + +### Psychographic Profiles + +[Psychographic profiles analysis with source citations] +_Source: [URL]_ +``` + +### 5. Present Analysis and Continue Option + +Show the generated customer insights and present continue option: +"I've completed the **customer insights analysis** for customer behavior and needs. + +**Key Customer Findings:** + +- Customer behavior patterns clearly identified +- Pain points and challenges thoroughly documented +- Decision-making processes mapped +- Customer journey insights captured +- Satisfaction and profile data analyzed + +**Ready to proceed to competitive analysis?** +[C] Continue - Save this to the document and proceed to competitive analysis + +### 6. Handle Continue Selection + +#### If 'C' (Continue): + +- Append the final content to the research document +- Update frontmatter: `stepsCompleted: [1, 2]` +- Load: `./step-05-competitive-analysis.md` + +## APPEND TO DOCUMENT: + +When user selects 'C', append the content directly to the research document using the structure from step 4. 
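+
+A minimal sketch of the append behavior when 'C' is selected, assuming a hypothetical research topic; the heading levels follow the structure from step 4, and the finding and URL are illustrative placeholders:
+
+```markdown
+## Customer Insights
+
+### Customer Behavior Patterns
+
+Customers in this market increasingly research options on mobile before purchasing in store (illustrative finding).
+_Source: https://example.com/behavior-survey (placeholder URL)_
+```
+
+The frontmatter is then updated to `stepsCompleted: [1, 2]` before loading the next step.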
+ +## SUCCESS METRICS: + +✅ Customer behavior patterns identified with current citations +✅ Pain points and challenges clearly documented +✅ Decision-making processes thoroughly analyzed +✅ Customer journey insights captured and mapped +✅ Customer satisfaction drivers identified +✅ [C] continue option presented and handled correctly +✅ Content properly appended to document when C selected + +## FAILURE MODES: + +❌ Relying solely on training data without web verification for current facts + +❌ Missing critical customer behavior patterns +❌ Not identifying key pain points and challenges +❌ Incomplete customer journey mapping +❌ Not presenting [C] continue option after content generation +❌ Appending content without user selecting 'C' + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## CUSTOMER RESEARCH PROTOCOLS: + +- Search for customer behavior studies and surveys +- Use market research firm and industry association sources +- Research customer experience and interaction patterns +- Note generational and cultural customer variations +- Research customer satisfaction methodologies + +## NEXT STEP: + +After user selects 'C' and content is saved to document, load `./step-05-competitive-analysis.md` to focus on competitive landscape analysis. + +Remember: Always emphasize current customer data and rigorous source verification! diff --git a/src/bmm/workflows/1-analysis/research/market-steps/step-03-customer-pain-points.md b/src/bmm/workflows/1-analysis/research/market-steps/step-03-customer-pain-points.md new file mode 100644 index 00000000..f4d2ae6d --- /dev/null +++ b/src/bmm/workflows/1-analysis/research/market-steps/step-03-customer-pain-points.md @@ -0,0 +1,249 @@ +# Market Research Step 3: Customer Pain Points and Needs + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without web search verification + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ Search the web to verify and supplement your knowledge with current facts +- 📋 YOU ARE A CUSTOMER NEEDS ANALYST, not content generator +- 💬 FOCUS on customer pain points, challenges, and unmet needs +- 🔍 WEB SEARCH REQUIRED - verify current facts against live sources +- 📝 WRITE CONTENT IMMEDIATELY TO DOCUMENT +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show web search analysis before presenting findings +- ⚠️ Present [C] continue option after pain points content generation +- 📝 WRITE CUSTOMER PAIN POINTS ANALYSIS TO DOCUMENT IMMEDIATELY +- 💾 ONLY proceed when user chooses C (Continue) +- 📖 Update frontmatter `stepsCompleted: [1, 2, 3]` before loading next step +- 🚫 FORBIDDEN to load next step until C is selected + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from previous steps are available +- Customer behavior analysis completed in previous step +- Focus on customer pain points, challenges, and unmet needs +- Web search capabilities with source verification are enabled +- **Research topic = "{{research_topic}}"** - established from initial discussion +- 
**Research goals = "{{research_goals}}"** - established from initial discussion + +## YOUR TASK: + +Conduct customer pain points and needs analysis with emphasis on challenges and frustrations. + +## CUSTOMER PAIN POINTS ANALYSIS SEQUENCE: + +### 1. Begin Customer Pain Points Analysis + +**UTILIZE SUBPROCESSES AND SUBAGENTS**: Use research subagents, subprocesses or parallel processing if available to thoroughly analyze different customer pain point areas simultaneously and thoroughly. + +Start with customer pain points research approach: +"Now I'll conduct **customer pain points analysis** for **{{research_topic}}** to understand customer challenges. + +**Customer Pain Points Focus:** + +- Customer challenges and frustrations +- Unmet needs and unaddressed problems +- Barriers to adoption or usage +- Service and support pain points +- Customer satisfaction gaps + +**Let me search for current customer pain points insights.**" + +### 2. Parallel Pain Points Research Execution + +**Execute multiple web searches simultaneously:** + +Search the web: "{{research_topic}} customer pain points challenges" +Search the web: "{{research_topic}} customer frustrations" +Search the web: "{{research_topic}} unmet customer needs" +Search the web: "{{research_topic}} customer barriers to adoption" + +**Analysis approach:** + +- Look for customer satisfaction surveys and reports +- Search for customer complaints and reviews +- Research customer support and service issues +- Analyze barriers to customer adoption +- Study unmet needs and market gaps + +### 3. Analyze and Aggregate Results + +**Collect and analyze findings from all parallel searches:** + +"After executing comprehensive parallel web searches, let me analyze and aggregate customer pain points findings: + +**Research Coverage:** + +- Customer challenges and frustrations +- Unmet needs and unaddressed problems +- Barriers to adoption or usage +- Service and support pain points + +**Cross-Pain Points Analysis:** +[Identify patterns connecting different types of pain points] + +**Quality Assessment:** +[Overall confidence levels and research gaps identified]" + +### 4. 
Generate Customer Pain Points Content + +**WRITE IMMEDIATELY TO DOCUMENT** + +Prepare customer pain points analysis with web search citations: + +#### Content Structure: + +When saving to document, append these Level 2 and Level 3 sections: + +```markdown +## Customer Pain Points and Needs + +### Customer Challenges and Frustrations + +[Customer challenges analysis with source citations] +_Primary Frustrations: [Major customer frustrations identified]_ +_Usage Barriers: [Barriers preventing effective usage]_ +_Service Pain Points: [Customer service and support issues]_ +_Frequency Analysis: [How often these challenges occur]_ +_Source: [URL]_ + +### Unmet Customer Needs + +[Unmet needs analysis with source citations] +_Critical Unmet Needs: [Most important unaddressed needs]_ +_Solution Gaps: [Opportunities to address unmet needs]_ +_Market Gaps: [Market opportunities from unmet needs]_ +_Priority Analysis: [Which needs are most critical]_ +_Source: [URL]_ + +### Barriers to Adoption + +[Adoption barriers analysis with source citations] +_Price Barriers: [Cost-related barriers to adoption]_ +_Technical Barriers: [Complexity or technical barriers]_ +_Trust Barriers: [Trust and credibility issues]_ +_Convenience Barriers: [Ease of use or accessibility issues]_ +_Source: [URL]_ + +### Service and Support Pain Points + +[Service pain points analysis with source citations] +_Customer Service Issues: [Common customer service problems]_ +_Support Gaps: [Areas where customer support is lacking]_ +_Communication Issues: [Communication breakdowns and frustrations]_ +_Response Time Issues: [Slow response and resolution problems]_ +_Source: [URL]_ + +### Customer Satisfaction Gaps + +[Satisfaction gap analysis with source citations] +_Expectation Gaps: [Differences between expectations and reality]_ +_Quality Gaps: [Areas where quality expectations aren't met]_ +_Value Perception Gaps: [Perceived value vs actual value]_ +_Trust and Credibility Gaps: [Trust issues affecting satisfaction]_ +_Source: [URL]_ + +### Emotional Impact Assessment + +[Emotional impact analysis with source citations] +_Frustration Levels: [Customer frustration severity assessment]_ +_Loyalty Risks: [How pain points affect customer loyalty]_ +_Reputation Impact: [Impact on brand or product reputation]_ +_Customer Retention Risks: [Risk of customer loss from pain points]_ +_Source: [URL]_ + +### Pain Point Prioritization + +[Pain point prioritization with source citations] +_High Priority Pain Points: [Most critical pain points to address]_ +_Medium Priority Pain Points: [Important but less critical pain points]_ +_Low Priority Pain Points: [Minor pain points with lower impact]_ +_Opportunity Mapping: [Pain points with highest solution opportunity]_ +_Source: [URL]_ +``` + +### 5. Present Analysis and Continue Option + +**Show analysis and present continue option:** + +"I've completed **customer pain points analysis** for {{research_topic}}, focusing on customer challenges. + +**Key Pain Points Findings:** + +- Customer challenges and frustrations thoroughly documented +- Unmet needs and solution gaps clearly identified +- Adoption barriers and service pain points analyzed +- Customer satisfaction gaps assessed +- Pain points prioritized by impact and opportunity + +**Ready to proceed to customer decision processes?** +[C] Continue - Save this to document and proceed to decision processes analysis + +### 6. 
Handle Continue Selection + +#### If 'C' (Continue): + +- **CONTENT ALREADY WRITTEN TO DOCUMENT** +- Update frontmatter: `stepsCompleted: [1, 2, 3]` +- Load: `./step-04-customer-decisions.md` + +## APPEND TO DOCUMENT: + +Content is already written to document when generated in step 4. No additional append needed. + +## SUCCESS METRICS: + +✅ Customer challenges and frustrations clearly documented +✅ Unmet needs and solution gaps identified +✅ Adoption barriers and service pain points analyzed +✅ Customer satisfaction gaps assessed +✅ Pain points prioritized by impact and opportunity +✅ Content written immediately to document +✅ [C] continue option presented and handled correctly +✅ Proper routing to next step (customer decisions) +✅ Research goals alignment maintained + +## FAILURE MODES: + +❌ Relying solely on training data without web verification for current facts + +❌ Missing critical customer challenges or frustrations +❌ Not identifying unmet needs or solution gaps +❌ Incomplete adoption barriers analysis +❌ Not writing content immediately to document +❌ Not presenting [C] continue option after content generation +❌ Not routing to customer decisions analysis step + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## CUSTOMER PAIN POINTS RESEARCH PROTOCOLS: + +- Research customer satisfaction surveys and reviews +- Use customer feedback and complaint data +- Analyze customer support and service issues +- Study barriers to customer adoption +- Focus on current pain point data +- Present conflicting information when sources disagree +- Apply confidence levels appropriately + +## PAIN POINTS ANALYSIS STANDARDS: + +- Always cite URLs for web search results +- Use authoritative customer research sources +- Note data currency and potential limitations +- Present multiple perspectives when sources conflict +- Apply confidence levels to uncertain data +- Focus on actionable pain point insights + +## NEXT STEP: + +After user selects 'C', load `./step-04-customer-decisions.md` to analyze customer decision processes, journey mapping, and decision factors for {{research_topic}}. + +Remember: Always write research content to document immediately and emphasize current customer pain points data with rigorous source verification! 
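+
+To show how the prioritization section above might read once populated, a hedged sketch with hypothetical pain points; none of these are sourced findings:
+
+```markdown
+### Pain Point Prioritization
+
+_High Priority Pain Points: confusing onboarding; unexpected fees at checkout_
+_Medium Priority Pain Points: limited third-party integrations_
+_Low Priority Pain Points: cosmetic interface complaints_
+_Opportunity Mapping: onboarding redesign offers the largest retention upside_
+_Source: https://example.com/csat-survey (placeholder URL)_
+```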
diff --git a/src/bmm/workflows/1-analysis/research/market-steps/step-04-customer-decisions.md b/src/bmm/workflows/1-analysis/research/market-steps/step-04-customer-decisions.md new file mode 100644 index 00000000..21544335 --- /dev/null +++ b/src/bmm/workflows/1-analysis/research/market-steps/step-04-customer-decisions.md @@ -0,0 +1,259 @@ +# Market Research Step 4: Customer Decisions and Journey + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without web search verification + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ Search the web to verify and supplement your knowledge with current facts +- 📋 YOU ARE A CUSTOMER DECISION ANALYST, not content generator +- 💬 FOCUS on customer decision processes and journey mapping +- 🔍 WEB SEARCH REQUIRED - verify current facts against live sources +- 📝 WRITE CONTENT IMMEDIATELY TO DOCUMENT +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show web search analysis before presenting findings +- ⚠️ Present [C] continue option after decision processes content generation +- 📝 WRITE CUSTOMER DECISIONS ANALYSIS TO DOCUMENT IMMEDIATELY +- 💾 ONLY proceed when user chooses C (Continue) +- 📖 Update frontmatter `stepsCompleted: [1, 2, 3, 4]` before loading next step +- 🚫 FORBIDDEN to load next step until C is selected + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from previous steps are available +- Customer behavior and pain points analysis completed in previous steps +- Focus on customer decision processes and journey mapping +- Web search capabilities with source verification are enabled +- **Research topic = "{{research_topic}}"** - established from initial discussion +- **Research goals = "{{research_goals}}"** - established from initial discussion + +## YOUR TASK: + +Conduct customer decision processes and journey analysis with emphasis on decision factors and journey mapping. + +## CUSTOMER DECISIONS ANALYSIS SEQUENCE: + +### 1. Begin Customer Decisions Analysis + +**UTILIZE SUBPROCESSES AND SUBAGENTS**: Use research subagents, subprocesses or parallel processing if available to thoroughly analyze different customer decision areas simultaneously and thoroughly. + +Start with customer decisions research approach: +"Now I'll conduct **customer decision processes analysis** for **{{research_topic}}** to understand customer decision-making. + +**Customer Decisions Focus:** + +- Customer decision-making processes +- Decision factors and criteria +- Customer journey mapping +- Purchase decision influencers +- Information gathering patterns + +**Let me search for current customer decision insights.**" + +### 2. 
Parallel Decisions Research Execution + +**Execute multiple web searches simultaneously:** + +Search the web: "{{research_topic}} customer decision process" +Search the web: "{{research_topic}} buying criteria factors" +Search the web: "{{research_topic}} customer journey mapping" +Search the web: "{{research_topic}} decision influencing factors" + +**Analysis approach:** + +- Look for customer decision research studies +- Search for buying criteria and factor analysis +- Research customer journey mapping methodologies +- Analyze decision influence factors and channels +- Study information gathering and evaluation patterns + +### 3. Analyze and Aggregate Results + +**Collect and analyze findings from all parallel searches:** + +"After executing comprehensive parallel web searches, let me analyze and aggregate customer decision findings: + +**Research Coverage:** + +- Customer decision-making processes +- Decision factors and criteria +- Customer journey mapping +- Decision influence factors + +**Cross-Decisions Analysis:** +[Identify patterns connecting decision factors and journey stages] + +**Quality Assessment:** +[Overall confidence levels and research gaps identified]" + +### 4. Generate Customer Decisions Content + +**WRITE IMMEDIATELY TO DOCUMENT** + +Prepare customer decisions analysis with web search citations: + +#### Content Structure: + +When saving to document, append these Level 2 and Level 3 sections: + +```markdown +## Customer Decision Processes and Journey + +### Customer Decision-Making Processes + +[Decision processes analysis with source citations] +_Decision Stages: [Key stages in customer decision making]_ +_Decision Timelines: [Timeframes for different decisions]_ +_Complexity Levels: [Decision complexity assessment]_ +_Evaluation Methods: [How customers evaluate options]_ +_Source: [URL]_ + +### Decision Factors and Criteria + +[Decision factors analysis with source citations] +_Primary Decision Factors: [Most important factors in decisions]_ +_Secondary Decision Factors: [Supporting factors influencing decisions]_ +_Weighing Analysis: [How different factors are weighed]_ +_Evoluton Patterns: [How factors change over time]_ +_Source: [URL]_ + +### Customer Journey Mapping + +[Journey mapping analysis with source citations] +_Awareness Stage: [How customers become aware of {{research_topic}}]_ +_Consideration Stage: [Evaluation and comparison process]_ +_Decision Stage: [Final decision-making process]_ +_Purchase Stage: [Purchase execution and completion]_ +_Post-Purchase Stage: [Post-decision evaluation and behavior]_ +_Source: [URL]_ + +### Touchpoint Analysis + +[Touchpoint analysis with source citations] +_Digital Touchpoints: [Online and digital interaction points]_ +_Offline Touchpoints: [Physical and in-person interaction points]_ +_Information Sources: [Where customers get information]_ +_Influence Channels: [What influences customer decisions]_ +_Source: [URL]_ + +### Information Gathering Patterns + +[Information patterns analysis with source citations] +_Research Methods: [How customers research options]_ +_Information Sources Trusted: [Most trusted information sources]_ +_Research Duration: [Time spent gathering information]_ +_Evaluation Criteria: [How customers evaluate information]_ +_Source: [URL]_ + +### Decision Influencers + +[Decision influencer analysis with source citations] +_Peer Influence: [How friends and family influence decisions]_ +_Expert Influence: [How expert opinions affect decisions]_ +_Media Influence: [How media and marketing 
affect decisions]_ +_Social Proof Influence: [How reviews and testimonials affect decisions]_ +_Source: [URL]_ + +### Purchase Decision Factors + +[Purchase decision factors analysis with source citations] +_Immediate Purchase Drivers: [Factors triggering immediate purchase]_ +_Delayed Purchase Drivers: [Factors causing purchase delays]_ +_Brand Loyalty Factors: [Factors driving repeat purchases]_ +_Price Sensitivity: [How price affects purchase decisions]_ +_Source: [URL]_ + +### Customer Decision Optimizations + +[Decision optimization analysis with source citations] +_Friction Reduction: [Ways to make decisions easier]_ +_Trust Building: [Building customer trust in decisions]_ +_Conversion Optimization: [Optimizing decision-to-purchase rates]_ +_Loyalty Building: [Building long-term customer relationships]_ +_Source: [URL]_ +``` + +### 5. Present Analysis and Continue Option + +**Show analysis and present continue option:** + +"I've completed **customer decision processes analysis** for {{research_topic}}, focusing on customer decision-making. + +**Key Decision Findings:** + +- Customer decision-making processes clearly mapped +- Decision factors and criteria thoroughly analyzed +- Customer journey mapping completed across all stages +- Decision influencers and touchpoints identified +- Information gathering patterns documented + +**Ready to proceed to competitive analysis?** +[C] Continue - Save this to document and proceed to competitive analysis + +### 6. Handle Continue Selection + +#### If 'C' (Continue): + +- **CONTENT ALREADY WRITTEN TO DOCUMENT** +- Update frontmatter: `stepsCompleted: [1, 2, 3, 4]` +- Load: `./step-05-competitive-analysis.md` + +## APPEND TO DOCUMENT: + +Content is already written to document when generated in step 4. No additional append needed. 
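+For illustration only, here is a minimal sketch of the research document's frontmatter after this step completes, assuming the field layout defined in `research.template.md` (the concrete topic and goals come from the initial discussion; all other fields are left unchanged):
+
+```yaml
+---
+# stepsCompleted is updated before step-05 is loaded
+stepsCompleted: [1, 2, 3, 4]
+workflowType: 'research'
+research_type: 'market'
+research_topic: '{{research_topic}}'
+research_goals: '{{research_goals}}'
+---
+```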
+ +## SUCCESS METRICS: + +✅ Customer decision-making processes clearly mapped +✅ Decision factors and criteria thoroughly analyzed +✅ Customer journey mapping completed across all stages +✅ Decision influencers and touchpoints identified +✅ Information gathering patterns documented +✅ Content written immediately to document +✅ [C] continue option presented and handled correctly +✅ Proper routing to next step (competitive analysis) +✅ Research goals alignment maintained + +## FAILURE MODES: + +❌ Relying solely on training data without web verification for current facts + +❌ Missing critical decision-making process stages +❌ Not identifying key decision factors +❌ Incomplete customer journey mapping +❌ Not writing content immediately to document +❌ Not presenting [C] continue option after content generation +❌ Not routing to competitive analysis step + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## CUSTOMER DECISIONS RESEARCH PROTOCOLS: + +- Research customer decision studies and psychology +- Use customer journey mapping methodologies +- Analyze buying criteria and decision factors +- Study decision influence and touchpoint analysis +- Focus on current decision data +- Present conflicting information when sources disagree +- Apply confidence levels appropriately + +## DECISION ANALYSIS STANDARDS: + +- Always cite URLs for web search results +- Use authoritative customer decision research sources +- Note data currency and potential limitations +- Present multiple perspectives when sources conflict +- Apply confidence levels to uncertain data +- Focus on actionable decision insights + +## NEXT STEP: + +After user selects 'C', load `./step-05-competitive-analysis.md` to analyze competitive landscape, market positioning, and competitive strategies for {{research_topic}}. + +Remember: Always write research content to document immediately and emphasize current customer decision data with rigorous source verification! 
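+As a minimal sketch of the continue gate this step describes — the workflow itself is executed by the agent, so the Python below is illustrative only and the helper names are hypothetical, not part of any real BMAD API:
+
+```python
+from pathlib import Path
+
+
+def read_entire_file(path: str) -> str:
+    # The rules above require reading the COMPLETE step file before acting on it.
+    return Path(path).read_text(encoding="utf-8")
+
+
+def handle_continue(frontmatter: dict, user_choice: str) -> str | None:
+    """Only load step-05 after the user explicitly selects 'C'."""
+    if user_choice != "C":
+        return None  # forbidden to proceed until C is selected
+    frontmatter["stepsCompleted"] = [1, 2, 3, 4]  # update before loading next step
+    return read_entire_file("./step-05-competitive-analysis.md")
+```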
diff --git a/src/bmm/workflows/1-analysis/research/market-steps/step-05-competitive-analysis.md b/src/bmm/workflows/1-analysis/research/market-steps/step-05-competitive-analysis.md new file mode 100644 index 00000000..d7387a4f --- /dev/null +++ b/src/bmm/workflows/1-analysis/research/market-steps/step-05-competitive-analysis.md @@ -0,0 +1,177 @@ +# Market Research Step 5: Competitive Analysis + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without web search verification + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ Search the web to verify and supplement your knowledge with current facts +- 📋 YOU ARE A COMPETITIVE ANALYST, not content generator +- 💬 FOCUS on competitive landscape and market positioning +- 🔍 WEB SEARCH REQUIRED - verify current facts against live sources +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show web search analysis before presenting findings +- ⚠️ Present [C] complete option after competitive analysis content generation +- 💾 ONLY save when user chooses C (Complete) +- 📖 Update frontmatter `stepsCompleted: [1, 2, 3, 4, 5]` before completing workflow +- 🚫 FORBIDDEN to complete workflow until C is selected + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from previous steps are available +- Focus on competitive landscape and market positioning analysis +- Web search capabilities with source verification are enabled +- May need to search for specific competitor information + +## YOUR TASK: + +Conduct comprehensive competitive analysis with emphasis on market positioning. + +## COMPETITIVE ANALYSIS SEQUENCE: + +### 1. Begin Competitive Analysis + +Start with competitive research approach: +"Now I'll conduct **competitive analysis** to understand the competitive landscape. + +**Competitive Analysis Focus:** + +- Key players and market share +- Competitive positioning strategies +- Strengths and weaknesses analysis +- Market differentiation opportunities +- Competitive threats and challenges + +**Let me search for current competitive information.**" + +### 2. Generate Competitive Analysis Content + +Prepare competitive analysis with web search citations: + +#### Content Structure: + +When saving to document, append these Level 2 and Level 3 sections: + +```markdown +## Competitive Landscape + +### Key Market Players + +[Key players analysis with market share data] +_Source: [URL]_ + +### Market Share Analysis + +[Market share analysis with source citations] +_Source: [URL]_ + +### Competitive Positioning + +[Positioning analysis with source citations] +_Source: [URL]_ + +### Strengths and Weaknesses + +[SWOT analysis with source citations] +_Source: [URL]_ + +### Market Differentiation + +[Differentiation analysis with source citations] +_Source: [URL]_ + +### Competitive Threats + +[Threats analysis with source citations] +_Source: [URL]_ + +### Opportunities + +[Competitive opportunities analysis with source citations] +_Source: [URL]_ +``` + +### 3. Present Analysis and Complete Option + +Show the generated competitive analysis and present complete option: +"I've completed the **competitive analysis** for the competitive landscape. 
+ +**Key Competitive Findings:** + +- Key market players and market share identified +- Competitive positioning strategies mapped +- Strengths and weaknesses thoroughly analyzed +- Market differentiation opportunities identified +- Competitive threats and challenges documented + +**Ready to complete the market research?** +[C] Complete Research - Save this analysis and proceed to final research synthesis + +### 4. Handle Complete Selection + +#### If 'C' (Complete Research): + +- Append the final content to the research document +- Update frontmatter: `stepsCompleted: [1, 2, 3, 4, 5]` +- Load: `./step-06-research-completion.md` to complete the market research workflow + +## APPEND TO DOCUMENT: + +When user selects 'C', append the content directly to the research document using the structure from step 2. + +## SUCCESS METRICS: + +✅ Key market players identified +✅ Market share analysis completed with source verification +✅ Competitive positioning strategies clearly mapped +✅ Strengths and weaknesses thoroughly analyzed +✅ Market differentiation opportunities identified +✅ [C] complete option presented and handled correctly +✅ Content properly appended to document when C selected +✅ Proper routing to the research completion step + +## FAILURE MODES: + +❌ Relying solely on training data without web verification for current facts + +❌ Missing key market players or market share data +❌ Incomplete competitive positioning analysis +❌ Not identifying market differentiation opportunities +❌ Not presenting completion option for research workflow +❌ Appending content without user selecting 'C' + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## COMPETITIVE RESEARCH PROTOCOLS: + +- Search for industry reports and competitive intelligence +- Use competitor company websites and annual reports +- Research market research firm competitive analyses +- Note competitive advantages and disadvantages +- Search for recent market developments and disruptions + +## MARKET RESEARCH COMPLETION: + +When 'C' is selected: + +- Competitive analysis sections appended with source citations +- Frontmatter updated to `stepsCompleted: [1, 2, 3, 4, 5]` +- `./step-06-research-completion.md` loaded for final document synthesis +- Final recommendations delivered in the completion step + +## NEXT STEPS: + +After the final synthesis in step-06, the user may: + +- Use market research to inform product development strategies +- Conduct additional competitive research on specific companies +- Combine market research with other research types for comprehensive insights + +Congratulations on completing the competitive analysis!
🎉 diff --git a/src/bmm/workflows/1-analysis/research/market-steps/step-06-research-completion.md b/src/bmm/workflows/1-analysis/research/market-steps/step-06-research-completion.md new file mode 100644 index 00000000..42d7d7d9 --- /dev/null +++ b/src/bmm/workflows/1-analysis/research/market-steps/step-06-research-completion.md @@ -0,0 +1,475 @@ +# Market Research Step 6: Research Completion + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without web search verification + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ Search the web to verify and supplement your knowledge with current facts +- 📋 YOU ARE A MARKET RESEARCH STRATEGIST, not content generator +- 💬 FOCUS on strategic recommendations and actionable insights +- 🔍 WEB SEARCH REQUIRED - verify current facts against live sources +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show web search analysis before presenting findings +- ⚠️ Present [C] complete option after completion content generation +- 💾 ONLY save when user chooses C (Complete) +- 📖 Update frontmatter `stepsCompleted: [1, 2, 3, 4, 5, 6]` before completing workflow +- 🚫 FORBIDDEN to complete workflow until C is selected +- 📚 GENERATE COMPLETE DOCUMENT STRUCTURE with intro, TOC, and summary + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from previous steps are available +- **Research topic = "{{research_topic}}"** - comprehensive market analysis +- **Research goals = "{{research_goals}}"** - achieved through exhaustive market research +- All market research sections have been completed (customer behavior, pain points, decisions, competitive analysis) +- Web search capabilities with source verification are enabled +- This is the final synthesis step producing the complete market research document + +## YOUR TASK: + +Produce a comprehensive, authoritative market research document on **{{research_topic}}** with a compelling narrative introduction, detailed TOC, and executive summary based on exhaustive market research. + +## MARKET RESEARCH COMPLETION SEQUENCE: + +### 1. Begin Strategic Synthesis + +Start with strategic synthesis approach: +"Now I'll complete our market research with **strategic synthesis and recommendations**. + +**Strategic Synthesis Focus:** + +- Integrated insights from market, customer, and competitive analysis +- Strategic recommendations based on research findings +- Market entry or expansion strategies +- Risk assessment and mitigation approaches +- Actionable next steps and implementation guidance + +**Let me search for current strategic insights and best practices.**" + +### 2. Web Search for Market Entry Strategies + +Search for current market strategies: +Search the web: "market entry strategies best practices" + +**Strategy focus:** + +- Market entry timing and approaches +- Go-to-market strategies and frameworks +- Market positioning and differentiation tactics +- Customer acquisition and growth strategies + +### 3. 
Web Search for Risk Assessment + +Search for current risk approaches: +Search the web: "market research risk assessment frameworks" + +**Risk focus:** + +- Market risks and uncertainty management +- Competitive threats and mitigation strategies +- Regulatory and compliance risks +- Economic and market volatility considerations + +### 4. Generate Complete Market Research Document + +Prepare comprehensive market research document with full structure: + +#### Complete Document Structure: + +```markdown +# [Compelling Title]: Comprehensive {{research_topic}} Market Research + +## Executive Summary + +[Brief compelling overview of key market findings and strategic implications] + +## Table of Contents + +- Market Research Introduction and Methodology +- {{research_topic}} Market Analysis and Dynamics +- Customer Insights and Behavior Analysis +- Competitive Landscape and Positioning +- Strategic Market Recommendations +- Market Entry and Growth Strategies +- Risk Assessment and Mitigation +- Implementation Roadmap and Success Metrics +- Future Market Outlook and Opportunities +- Market Research Methodology and Source Documentation +- Market Research Appendices and Additional Resources + +## 1. Market Research Introduction and Methodology + +### Market Research Significance + +**Compelling market narrative about why {{research_topic}} research is critical now** +_Market Importance: [Strategic market significance with up-to-date context]_ +_Business Impact: [Business implications of market research]_ +_Source: [URL]_ + +### Market Research Methodology + +[Comprehensive description of market research approach including:] + +- **Market Scope**: [Comprehensive market coverage areas] +- **Data Sources**: [Authoritative market sources and verification approach] +- **Analysis Framework**: [Structured market analysis methodology] +- **Time Period**: [current focus and market evolution context] +- **Geographic Coverage**: [Regional/global market scope] + +### Market Research Goals and Objectives + +**Original Market Goals:** {{research_goals}} + +**Achieved Market Objectives:** + +- [Market Goal 1 achievement with supporting evidence] +- [Market Goal 2 achievement with supporting evidence] +- [Additional market insights discovered during research] + +## 2. {{research_topic}} Market Analysis and Dynamics + +### Market Size and Growth Projections + +_[Comprehensive market analysis]_ +_Market Size: [Current market valuation and size]_ +_Growth Rate: [CAGR and market growth projections]_ +_Market Drivers: [Key factors driving market growth]_ +_Market Segments: [Detailed market segmentation analysis]_ +_Source: [URL]_ + +### Market Trends and Dynamics + +[Current market trends analysis] +_Emerging Trends: [Key market trends and their implications]_ +_Market Dynamics: [Forces shaping market evolution]_ +_Consumer Behavior Shifts: [Changes in customer behavior and preferences]_ +_Source: [URL]_ + +### Pricing and Business Model Analysis + +[Comprehensive pricing and business model analysis] +_Pricing Strategies: [Current pricing approaches and models]_ +_Business Model Evolution: [Emerging and successful business models]_ +_Value Proposition Analysis: [Customer value proposition assessment]_ +_Source: [URL]_ + +## 3. 
Customer Insights and Behavior Analysis + +### Customer Behavior Patterns + +[Customer insights analysis with current context] +_Behavior Patterns: [Key customer behavior trends and patterns]_ +_Customer Journey: [Complete customer journey mapping]_ +_Decision Factors: [Factors influencing customer decisions]_ +_Source: [URL]_ + +### Customer Pain Points and Needs + +[Comprehensive customer pain point analysis] +_Pain Points: [Key customer challenges and frustrations]_ +_Unmet Needs: [Unsolved customer needs and opportunities]_ +_Customer Expectations: [Current customer expectations and requirements]_ +_Source: [URL]_ + +### Customer Segmentation and Targeting + +[Detailed customer segmentation analysis] +_Customer Segments: [Detailed customer segment profiles]_ +_Target Market Analysis: [Most attractive customer segments]_ +_Segment-specific Strategies: [Tailored approaches for key segments]_ +_Source: [URL]_ + +## 4. Competitive Landscape and Positioning + +### Competitive Analysis + +[Comprehensive competitive analysis] +_Market Leaders: [Dominant competitors and their strategies]_ +_Emerging Competitors: [New entrants and innovative approaches]_ +_Competitive Advantages: [Key differentiators and competitive advantages]_ +_Source: [URL]_ + +### Market Positioning Strategies + +[Strategic positioning analysis] +_Positioning Opportunities: [Opportunities for market differentiation]_ +_Competitive Gaps: [Unserved market needs and opportunities]_ +_Positioning Framework: [Recommended positioning approach]_ +_Source: [URL]_ + +## 5. Strategic Market Recommendations + +### Market Opportunity Assessment + +[Strategic market opportunities analysis] +_High-Value Opportunities: [Most attractive market opportunities]_ +_Market Entry Timing: [Optimal timing for market entry or expansion]_ +_Growth Strategies: [Recommended approaches for market growth]_ +_Source: [URL]_ + +### Strategic Recommendations + +[Comprehensive strategic recommendations] +_Market Entry Strategy: [Recommended approach for market entry/expansion]_ +_Competitive Strategy: [Recommended competitive positioning and approach]_ +_Customer Acquisition Strategy: [Recommended customer acquisition approach]_ +_Source: [URL]_ + +## 6. Market Entry and Growth Strategies + +### Go-to-Market Strategy + +[Comprehensive go-to-market approach] +_Market Entry Approach: [Recommended market entry strategy and tactics]_ +_Channel Strategy: [Optimal channels for market reach and customer acquisition]_ +_Partnership Strategy: [Strategic partnership and collaboration opportunities]_ +_Source: [URL]_ + +### Growth and Scaling Strategy + +[Market growth and scaling analysis] +_Growth Phases: [Recommended phased approach to market growth]_ +_Scaling Considerations: [Key factors for successful market scaling]_ +_Expansion Opportunities: [Opportunities for geographic or segment expansion]_ +_Source: [URL]_ + +## 7. Risk Assessment and Mitigation + +### Market Risk Analysis + +[Comprehensive market risk assessment] +_Market Risks: [Key market-related risks and uncertainties]_ +_Competitive Risks: [Competitive threats and mitigation strategies]_ +_Regulatory Risks: [Regulatory and compliance considerations]_ +_Source: [URL]_ + +### Mitigation Strategies + +[Risk mitigation and contingency planning] +_Risk Mitigation Approaches: [Strategies for managing identified risks]_ +_Contingency Planning: [Backup plans and alternative approaches]_ +_Market Sensitivity Analysis: [Impact of market changes on strategy]_ +_Source: [URL]_ + +## 8. 
Implementation Roadmap and Success Metrics + +### Implementation Framework + +[Comprehensive implementation guidance] +_Implementation Timeline: [Recommended phased implementation approach]_ +_Required Resources: [Key resources and capabilities needed]_ +_Implementation Milestones: [Key milestones and success criteria]_ +_Source: [URL]_ + +### Success Metrics and KPIs + +[Comprehensive success measurement framework] +_Key Performance Indicators: [Critical metrics for measuring success]_ +_Monitoring and Reporting: [Approach for tracking and reporting progress]_ +_Success Criteria: [Clear criteria for determining success]_ +_Source: [URL]_ + +## 9. Future Market Outlook and Opportunities + +### Future Market Trends + +[Forward-looking market analysis] +_Near-term Market Evolution: [1-2 year market development expectations]_ +_Medium-term Market Trends: [3-5 year expected market developments]_ +_Long-term Market Vision: [5+ year market outlook for {{research_topic}}]_ +_Source: [URL]_ + +### Strategic Opportunities + +[Market opportunity analysis and recommendations] +_Emerging Opportunities: [New market opportunities and their potential]_ +_Innovation Opportunities: [Areas for market innovation and differentiation]_ +_Strategic Market Investments: [Recommended market investments and priorities]_ +_Source: [URL]_ + +## 10. Market Research Methodology and Source Documentation + +### Comprehensive Market Source Documentation + +[Complete documentation of all market research sources] +_Primary Market Sources: [Key authoritative market sources used]_ +_Secondary Market Sources: [Supporting market research and analysis]_ +_Market Web Search Queries: [Complete list of market search queries used]_ + +### Market Research Quality Assurance + +[Market research quality assurance and validation approach] +_Market Source Verification: [All market claims verified with multiple sources]_ +_Market Confidence Levels: [Confidence assessments for uncertain market data]_ +_Market Research Limitations: [Market research limitations and areas for further investigation]_ +_Methodology Transparency: [Complete transparency about market research approach]_ + +## 11. 
Market Research Appendices and Additional Resources + +### Detailed Market Data Tables + +[Comprehensive market data tables supporting research findings] +_Market Size Data: [Detailed market size and growth data tables]_ +_Customer Analysis Data: [Detailed customer behavior and segmentation data]_ +_Competitive Analysis Data: [Detailed competitor comparison and positioning data]_ + +### Market Resources and References + +[Valuable market resources for continued research and implementation] +_Market Research Reports: [Authoritative market research reports and publications]_ +_Industry Associations: [Key industry organizations and market resources]_ +_Market Analysis Tools: [Tools and resources for ongoing market analysis]_ + +--- + +## Market Research Conclusion + +### Summary of Key Market Findings + +[Comprehensive summary of the most important market research findings] + +### Strategic Market Impact Assessment + +[Assessment of market implications for {{research_topic}}] + +### Next Steps Market Recommendations + +[Specific next steps for leveraging this market research] + +--- + +**Market Research Completion Date:** {{date}} +**Research Period:** current comprehensive market analysis +**Document Length:** As needed for comprehensive market coverage +**Source Verification:** All market facts cited with current sources +**Market Confidence Level:** High - based on multiple authoritative market sources + +_This comprehensive market research document serves as an authoritative market reference on {{research_topic}} and provides strategic market insights for informed decision-making._ +``` + +### 5. Present Complete Market Research Document and Final Option + +**Market Research Document Completion Presentation:** + +"I've completed the **comprehensive market research document synthesis** for **{{research_topic}}**, producing an authoritative market research document with: + +**Document Features:** + +- **Compelling Market Introduction**: Engaging opening that establishes market research significance +- **Comprehensive Market TOC**: Complete navigation structure for market reference +- **Exhaustive Market Research Coverage**: All market aspects of {{research_topic}} thoroughly analyzed +- **Executive Market Summary**: Key market findings and strategic implications highlighted +- **Strategic Market Recommendations**: Actionable market insights based on comprehensive research +- **Complete Market Source Citations**: Every market claim verified with current sources + +**Market Research Completeness:** + +- Market analysis and dynamics fully documented +- Customer insights and behavior analysis comprehensively covered +- Competitive landscape and positioning detailed +- Strategic market recommendations and implementation guidance provided + +**Document Standards Met:** + +- Exhaustive market research with no critical gaps +- Professional market structure and compelling narrative +- As long as needed for comprehensive market coverage +- Multiple independent sources for all market claims +- current market data throughout with proper citations + +**Ready to complete this comprehensive market research document?** +[C] Complete Research - Save final comprehensive market research document + +### 6. 
Handle Complete Selection + +#### If 'C' (Complete Research): + +- Append the final content to the research document +- Update frontmatter: `stepsCompleted: [1, 2, 3, 4, 5, 6]` +- Complete the market research workflow + +## APPEND TO DOCUMENT: + +When user selects 'C', append the content directly to the research document using the structure from step 4. + +## SUCCESS METRICS: + +✅ Compelling market introduction with research significance +✅ Comprehensive market table of contents with complete document structure +✅ Exhaustive market research coverage across all market aspects +✅ Executive market summary with key findings and strategic implications +✅ Strategic market recommendations grounded in comprehensive research +✅ Complete market source verification with current citations +✅ Professional market document structure and compelling narrative +✅ [C] complete option presented and handled correctly +✅ Market research workflow completed with comprehensive document + +## FAILURE MODES: + +❌ Not producing compelling market introduction +❌ Missing comprehensive market table of contents +❌ Incomplete market research coverage across market aspects +❌ Not providing executive market summary with key findings +❌ Missing strategic market recommendations based on research +❌ Relying solely on training data without web verification for current facts +❌ Producing market document without professional structure +❌ Not presenting completion option for final market document + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## STRATEGIC RESEARCH PROTOCOLS: + +- Search for current market strategy frameworks and best practices +- Research successful market entry cases and approaches +- Identify risk management methodologies and frameworks +- Research implementation planning and execution strategies +- Consider market timing and readiness factors + +## COMPREHENSIVE MARKET DOCUMENT STANDARDS: + +This step ensures the final market research document: + +- Serves as an authoritative market reference on {{research_topic}} +- Provides strategic market insights for informed decision-making +- Includes comprehensive market coverage with no gaps +- Maintains rigorous market source verification standards +- Delivers strategic market insights and actionable recommendations +- Meets professional market research document quality standards + +## MARKET RESEARCH WORKFLOW COMPLETION: + +When 'C' is selected: + +- All market research steps completed (1-6) +- Comprehensive market research document generated +- Professional market document structure with intro, TOC, and summary +- All market sections appended with source citations +- Market research workflow status updated to complete +- Final comprehensive market research document delivered to user + +## FINAL MARKET DELIVERABLE: + +Complete authoritative market research document on {{research_topic}} that: + +- Establishes professional market credibility through comprehensive research +- Provides strategic market insights for informed decision-making +- Serves as market reference document for continued use +- Maintains highest market research quality standards with current verification + +## NEXT STEPS: + +Comprehensive market research workflow complete. 
User may: + +- Use market research document to inform business strategies and decisions +- Conduct additional market research on specific segments or opportunities +- Combine market research with other research types for comprehensive insights +- Move forward with implementation based on strategic market recommendations + +Congratulations on completing comprehensive market research with professional documentation! 🎉 diff --git a/src/bmm/workflows/1-analysis/research/research.template.md b/src/bmm/workflows/1-analysis/research/research.template.md new file mode 100644 index 00000000..1d995247 --- /dev/null +++ b/src/bmm/workflows/1-analysis/research/research.template.md @@ -0,0 +1,29 @@ +--- +stepsCompleted: [] +inputDocuments: [] +workflowType: 'research' +lastStep: 1 +research_type: '{{research_type}}' +research_topic: '{{research_topic}}' +research_goals: '{{research_goals}}' +user_name: '{{user_name}}' +date: '{{date}}' +web_research_enabled: true +source_verification: true +--- + +# Research Report: {{research_type}} + +**Date:** {{date}} +**Author:** {{user_name}} +**Research Type:** {{research_type}} + +--- + +## Research Overview + +[Research overview and methodology will be appended here] + +--- + + diff --git a/src/bmm/workflows/1-analysis/research/technical-steps/step-01-init.md b/src/bmm/workflows/1-analysis/research/technical-steps/step-01-init.md new file mode 100644 index 00000000..b286822d --- /dev/null +++ b/src/bmm/workflows/1-analysis/research/technical-steps/step-01-init.md @@ -0,0 +1,137 @@ +# Technical Research Step 1: Technical Research Scope Confirmation + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without user confirmation + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ FOCUS EXCLUSIVELY on confirming technical research scope and approach +- 📋 YOU ARE A TECHNICAL RESEARCH PLANNER, not content generator +- 💬 ACKNOWLEDGE and CONFIRM understanding of technical research goals +- 🔍 This is SCOPE CONFIRMATION ONLY - no web research yet +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show your analysis before taking any action +- ⚠️ Present [C] continue option after scope confirmation +- 💾 ONLY proceed when user chooses C (Continue) +- 📖 Update frontmatter `stepsCompleted: [1]` before loading next step +- 🚫 FORBIDDEN to load next step until C is selected + +## CONTEXT BOUNDARIES: + +- Research type = "technical" is already set +- **Research topic = "{{research_topic}}"** - discovered from initial discussion +- **Research goals = "{{research_goals}}"** - captured from initial discussion +- Focus on technical architecture and implementation research +- Web search is required to verify and supplement your knowledge with current facts + +## YOUR TASK: + +Confirm technical research scope and approach for **{{research_topic}}** with the user's goals in mind. + +## TECHNICAL SCOPE CONFIRMATION: + +### 1. 
Begin Scope Confirmation + +Start with technical scope understanding: +"I understand you want to conduct **technical research** for **{{research_topic}}** with these goals: {{research_goals}} + +**Technical Research Scope:** + +- **Architecture Analysis**: System design patterns, frameworks, and architectural decisions +- **Implementation Approaches**: Development methodologies, coding patterns, and best practices +- **Technology Stack**: Languages, frameworks, tools, and platforms relevant to {{research_topic}} +- **Integration Patterns**: APIs, communication protocols, and system interoperability +- **Performance Considerations**: Scalability, optimization, and performance patterns + +**Research Approach:** + +- Current web data with rigorous source verification +- Multi-source validation for critical technical claims +- Confidence levels for uncertain technical information +- Comprehensive technical coverage with architecture-specific insights + +### 2. Scope Confirmation + +Present clear scope confirmation: +"**Technical Research Scope Confirmation:** + +For **{{research_topic}}**, I will research: + +✅ **Architecture Analysis** - design patterns, frameworks, system architecture +✅ **Implementation Approaches** - development methodologies, coding patterns +✅ **Technology Stack** - languages, frameworks, tools, platforms +✅ **Integration Patterns** - APIs, protocols, interoperability +✅ **Performance Considerations** - scalability, optimization, patterns + +**All claims verified against current public sources.** + +**Does this technical research scope and approach align with your goals?** +[C] Continue - Begin technical research with this scope + +### 3. Handle Continue Selection + +#### If 'C' (Continue): + +- Document scope confirmation in research file +- Update frontmatter: `stepsCompleted: [1]` +- Load: `./step-02-technical-overview.md` + +## APPEND TO DOCUMENT: + +When user selects 'C', append scope confirmation: + +```markdown +## Technical Research Scope Confirmation + +**Research Topic:** {{research_topic}} +**Research Goals:** {{research_goals}} + +**Technical Research Scope:** + +- Architecture Analysis - design patterns, frameworks, system architecture +- Implementation Approaches - development methodologies, coding patterns +- Technology Stack - languages, frameworks, tools, platforms +- Integration Patterns - APIs, protocols, interoperability +- Performance Considerations - scalability, optimization, patterns + +**Research Methodology:** + +- Current web data with rigorous source verification +- Multi-source validation for critical technical claims +- Confidence level framework for uncertain information +- Comprehensive technical coverage with architecture-specific insights + +**Scope Confirmed:** {{date}} +``` + +## SUCCESS METRICS: + +✅ Technical research scope clearly confirmed with user +✅ All technical analysis areas identified and explained +✅ Research methodology emphasized +✅ [C] continue option presented and handled correctly +✅ Scope confirmation documented when user proceeds +✅ Proper routing to next technical research step + +## FAILURE MODES: + +❌ Not clearly confirming technical research scope with user +❌ Missing critical technical analysis areas +❌ Not explaining that web search is required for current facts +❌ Not presenting [C] continue option +❌ Proceeding without user scope confirmation +❌ Not routing to next technical research step + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: 
Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## NEXT STEP: + +After user selects 'C', load `./step-02-technical-overview.md` to begin technology stack analysis. + +Remember: This is SCOPE CONFIRMATION ONLY - no actual technical research yet, just confirming the research approach and scope! diff --git a/src/bmm/workflows/1-analysis/research/technical-steps/step-02-technical-overview.md b/src/bmm/workflows/1-analysis/research/technical-steps/step-02-technical-overview.md new file mode 100644 index 00000000..78151eb0 --- /dev/null +++ b/src/bmm/workflows/1-analysis/research/technical-steps/step-02-technical-overview.md @@ -0,0 +1,239 @@ +# Technical Research Step 2: Technology Stack Analysis + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without web search verification + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ Search the web to verify and supplement your knowledge with current facts +- 📋 YOU ARE A TECHNOLOGY STACK ANALYST, not content generator +- 💬 FOCUS on languages, frameworks, tools, and platforms +- 🔍 WEB SEARCH REQUIRED - verify current facts against live sources +- 📝 WRITE CONTENT IMMEDIATELY TO DOCUMENT +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show web search analysis before presenting findings +- ⚠️ Present [C] continue option after technology stack content generation +- 📝 WRITE TECHNOLOGY STACK ANALYSIS TO DOCUMENT IMMEDIATELY +- 💾 ONLY proceed when user chooses C (Continue) +- 📖 Update frontmatter `stepsCompleted: [1, 2]` before loading next step +- 🚫 FORBIDDEN to load next step until C is selected + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from step-01 are available +- **Research topic = "{{research_topic}}"** - established from initial discussion +- **Research goals = "{{research_goals}}"** - established from initial discussion +- Focus on languages, frameworks, tools, and platforms +- Web search capabilities with source verification are enabled + +## YOUR TASK: + +Conduct technology stack analysis focusing on languages, frameworks, tools, and platforms. Search the web to verify and supplement current facts. + +## TECHNOLOGY STACK ANALYSIS SEQUENCE: + +### 1. Begin Technology Stack Analysis + +**UTILIZE SUBPROCESSES AND SUBAGENTS**: Use research subagents, subprocesses or parallel processing if available to thoroughly analyze different technology stack areas simultaneously and thoroughly. + +Start with technology stack research approach: +"Now I'll conduct **technology stack analysis** for **{{research_topic}}** to understand the technology landscape. + +**Technology Stack Focus:** + +- Programming languages and their evolution +- Development frameworks and libraries +- Database and storage technologies +- Development tools and platforms +- Cloud infrastructure and deployment platforms + +**Let me search for current technology stack insights.**" + +### 2. 
Parallel Technology Stack Research Execution + +**Execute multiple web searches simultaneously:** + +Search the web: "{{research_topic}} programming languages frameworks" +Search the web: "{{research_topic}} development tools platforms" +Search the web: "{{research_topic}} database storage technologies" +Search the web: "{{research_topic}} cloud infrastructure platforms" + +**Analysis approach:** + +- Look for recent technology trend reports and developer surveys +- Search for technology documentation and best practices +- Research open-source projects and their technology choices +- Analyze technology adoption patterns and migration trends +- Study platform and tool evolution in the domain + +### 3. Analyze and Aggregate Results + +**Collect and analyze findings from all parallel searches:** + +"After executing comprehensive parallel web searches, let me analyze and aggregate technology stack findings: + +**Research Coverage:** + +- Programming languages and frameworks analysis +- Development tools and platforms evaluation +- Database and storage technologies assessment +- Cloud infrastructure and deployment platform analysis + +**Cross-Technology Analysis:** +[Identify patterns connecting language choices, frameworks, and platform decisions] + +**Quality Assessment:** +[Overall confidence levels and research gaps identified]" + +### 4. Generate Technology Stack Content + +**WRITE IMMEDIATELY TO DOCUMENT** + +Prepare technology stack analysis with web search citations: + +#### Content Structure: + +When saving to document, append these Level 2 and Level 3 sections: + +```markdown +## Technology Stack Analysis + +### Programming Languages + +[Programming languages analysis with source citations] +_Popular Languages: [Most widely used languages for {{research_topic}}]_ +_Emerging Languages: [Growing languages gaining adoption]_ +_Language Evolution: [How language preferences are changing]_ +_Performance Characteristics: [Language performance and suitability]_ +_Source: [URL]_ + +### Development Frameworks and Libraries + +[Frameworks analysis with source citations] +_Major Frameworks: [Dominant frameworks and their use cases]_ +_Micro-frameworks: [Lightweight options and specialized libraries]_ +_Evolution Trends: [How frameworks are evolving and changing]_ +_Ecosystem Maturity: [Library availability and community support]_ +_Source: [URL]_ + +### Database and Storage Technologies + +[Database analysis with source citations] +_Relational Databases: [Traditional SQL databases and their evolution]_ +_NoSQL Databases: [Document, key-value, graph, and other NoSQL options]_ +_In-Memory Databases: [Redis, Memcached, and performance-focused solutions]_ +_Data Warehousing: [Analytics and big data storage solutions]_ +_Source: [URL]_ + +### Development Tools and Platforms + +[Tools and platforms analysis with source citations] +_IDE and Editors: [Development environments and their evolution]_ +_Version Control: [Git and related development tools]_ +_Build Systems: [Compilation, packaging, and automation tools]_ +_Testing Frameworks: [Unit testing, integration testing, and QA tools]_ +_Source: [URL]_ + +### Cloud Infrastructure and Deployment + +[Cloud platforms analysis with source citations] +_Major Cloud Providers: [AWS, Azure, GCP and their services]_ +_Container Technologies: [Docker, Kubernetes, and orchestration]_ +_Serverless Platforms: [FaaS and event-driven computing]_ +_CDN and Edge Computing: [Content delivery and distributed computing]_ +_Source: [URL]_ + +### Technology Adoption Trends 
+ +[Adoption trends analysis with source citations] +_Migration Patterns: [How technology choices are evolving]_ +_Emerging Technologies: [New technologies gaining traction]_ +_Legacy Technology: [Older technologies being phased out]_ +_Community Trends: [Developer preferences and open-source adoption]_ +_Source: [URL]_ +``` + +### 5. Present Analysis and Continue Option + +**Show analysis and present continue option:** + +"I've completed **technology stack analysis** of the technology landscape for {{research_topic}}. + +**Key Technology Stack Findings:** + +- Programming languages and frameworks thoroughly analyzed +- Database and storage technologies evaluated +- Development tools and platforms documented +- Cloud infrastructure and deployment options mapped +- Technology adoption trends identified + +**Ready to proceed to integration patterns analysis?** +[C] Continue - Save this to document and proceed to integration patterns + +### 6. Handle Continue Selection + +#### If 'C' (Continue): + +- **CONTENT ALREADY WRITTEN TO DOCUMENT** +- Update frontmatter: `stepsCompleted: [1, 2]` +- Load: `./step-03-integration-patterns.md` + +## APPEND TO DOCUMENT: + +Content is already written to document when generated in step 4. No additional append needed. + +## SUCCESS METRICS: + +✅ Programming languages and frameworks thoroughly analyzed +✅ Database and storage technologies evaluated +✅ Development tools and platforms documented +✅ Cloud infrastructure and deployment options mapped +✅ Technology adoption trends identified +✅ Content written immediately to document +✅ [C] continue option presented and handled correctly +✅ Proper routing to next step (integration patterns) +✅ Research goals alignment maintained + +## FAILURE MODES: + +❌ Relying solely on training data without web verification for current facts + +❌ Missing critical programming languages or frameworks +❌ Incomplete database and storage technology analysis +❌ Not identifying development tools and platforms +❌ Not writing content immediately to document +❌ Not presenting [C] continue option after content generation +❌ Not routing to integration patterns step + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## TECHNOLOGY STACK RESEARCH PROTOCOLS: + +- Research technology trend reports and developer surveys +- Use technology documentation and best practices guides +- Analyze open-source projects and their technology choices +- Study technology adoption patterns and migration trends +- Focus on current technology data +- Present conflicting information when sources disagree +- Apply confidence levels appropriately + +## TECHNOLOGY STACK ANALYSIS STANDARDS: + +- Always cite URLs for web search results +- Use authoritative technology research sources +- Note data currency and potential limitations +- Present multiple perspectives when sources conflict +- Apply confidence levels to uncertain data +- Focus on actionable technology insights + +## NEXT STEP: + +After user selects 'C', load `./step-03-integration-patterns.md` to analyze APIs, communication protocols, and system interoperability for {{research_topic}}. + +Remember: Always write research content to document immediately and emphasize current technology data with rigorous source verification! 
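+To make the "execute multiple web searches simultaneously" instruction concrete, here is a minimal sketch of the fan-out, assuming a hypothetical `web_search` coroutine standing in for whatever search tool the executing agent actually exposes:
+
+```python
+import asyncio
+
+
+async def web_search(query: str) -> list[str]:
+    # Stand-in for the agent's real search tool; returns result snippets.
+    await asyncio.sleep(0)  # simulate network I/O
+    return [f"result for: {query}"]
+
+
+async def run_stack_research(research_topic: str) -> list[list[str]]:
+    """Dispatch the four technology-stack queries concurrently."""
+    queries = [
+        f"{research_topic} programming languages frameworks",
+        f"{research_topic} development tools platforms",
+        f"{research_topic} database storage technologies",
+        f"{research_topic} cloud infrastructure platforms",
+    ]
+    return await asyncio.gather(*(web_search(q) for q in queries))
+
+
+# Example: asyncio.run(run_stack_research("edge computing"))
+```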
diff --git a/src/bmm/workflows/1-analysis/research/technical-steps/step-03-integration-patterns.md b/src/bmm/workflows/1-analysis/research/technical-steps/step-03-integration-patterns.md new file mode 100644 index 00000000..68e2b70f --- /dev/null +++ b/src/bmm/workflows/1-analysis/research/technical-steps/step-03-integration-patterns.md @@ -0,0 +1,248 @@ +# Technical Research Step 3: Integration Patterns + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without web search verification + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ Search the web to verify and supplement your knowledge with current facts +- 📋 YOU ARE AN INTEGRATION ANALYST, not content generator +- 💬 FOCUS on APIs, protocols, and system interoperability +- 🔍 WEB SEARCH REQUIRED - verify current facts against live sources +- 📝 WRITE CONTENT IMMEDIATELY TO DOCUMENT +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show web search analysis before presenting findings +- ⚠️ Present [C] continue option after integration patterns content generation +- 📝 WRITE INTEGRATION PATTERNS ANALYSIS TO DOCUMENT IMMEDIATELY +- 💾 ONLY proceed when user chooses C (Continue) +- 📖 Update frontmatter `stepsCompleted: [1, 2, 3]` before loading next step +- 🚫 FORBIDDEN to load next step until C is selected + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from previous steps are available +- **Research topic = "{{research_topic}}"** - established from initial discussion +- **Research goals = "{{research_goals}}"** - established from initial discussion +- Focus on APIs, protocols, and system interoperability +- Web search capabilities with source verification are enabled + +## YOUR TASK: + +Conduct integration patterns analysis focusing on APIs, communication protocols, and system interoperability. Search the web to verify and supplement current facts. + +## INTEGRATION PATTERNS ANALYSIS SEQUENCE: + +### 1. Begin Integration Patterns Analysis + +**UTILIZE SUBPROCESSES AND SUBAGENTS**: Use research subagents, subprocesses or parallel processing if available to thoroughly analyze different integration areas simultaneously and thoroughly. + +Start with integration patterns research approach: +"Now I'll conduct **integration patterns analysis** for **{{research_topic}}** to understand system integration approaches. + +**Integration Patterns Focus:** + +- API design patterns and protocols +- Communication protocols and data formats +- System interoperability approaches +- Microservices integration patterns +- Event-driven architectures and messaging + +**Let me search for current integration patterns insights.**" + +### 2. 
Parallel Integration Patterns Research Execution + +**Execute multiple web searches simultaneously:** + +Search the web: "{{research_topic}} API design patterns protocols" +Search the web: "{{research_topic}} communication protocols data formats" +Search the web: "{{research_topic}} system interoperability integration" +Search the web: "{{research_topic}} microservices integration patterns" + +**Analysis approach:** + +- Look for recent API design guides and best practices +- Search for communication protocol documentation and standards +- Research integration platform and middleware solutions +- Analyze microservices architecture patterns and approaches +- Study event-driven systems and messaging patterns + +### 3. Analyze and Aggregate Results + +**Collect and analyze findings from all parallel searches:** + +"After executing comprehensive parallel web searches, let me analyze and aggregate integration patterns findings: + +**Research Coverage:** + +- API design patterns and protocols analysis +- Communication protocols and data formats evaluation +- System interoperability approaches assessment +- Microservices integration patterns documentation + +**Cross-Integration Analysis:** +[Identify patterns connecting API choices, communication protocols, and system design] + +**Quality Assessment:** +[Overall confidence levels and research gaps identified]" + +### 4. Generate Integration Patterns Content + +**WRITE IMMEDIATELY TO DOCUMENT** + +Prepare integration patterns analysis with web search citations: + +#### Content Structure: + +When saving to document, append these Level 2 and Level 3 sections: + +```markdown +## Integration Patterns Analysis + +### API Design Patterns + +[API design patterns analysis with source citations] +_RESTful APIs: [REST principles and best practices for {{research_topic}}]_ +_GraphQL APIs: [GraphQL adoption and implementation patterns]_ +_RPC and gRPC: [High-performance API communication patterns]_ +_Webhook Patterns: [Event-driven API integration approaches]_ +_Source: [URL]_ + +### Communication Protocols + +[Communication protocols analysis with source citations] +_HTTP/HTTPS Protocols: [Web-based communication patterns and evolution]_ +_WebSocket Protocols: [Real-time communication and persistent connections]_ +_Message Queue Protocols: [AMQP, MQTT, and messaging patterns]_ +_gRPC and Protocol Buffers: [High-performance binary communication protocols]_ +_Source: [URL]_ + +### Data Formats and Standards + +[Data formats analysis with source citations] +_JSON and XML: [Structured data exchange formats and their evolution]_ +_Protobuf and MessagePack: [Efficient binary serialization formats]_ +_CSV and Flat Files: [Legacy data integration and bulk transfer patterns]_ +_Custom Data Formats: [Domain-specific data exchange standards]_ +_Source: [URL]_ + +### System Interoperability Approaches + +[Interoperability analysis with source citations] +_Point-to-Point Integration: [Direct system-to-system communication patterns]_ +_API Gateway Patterns: [Centralized API management and routing]_ +_Service Mesh: [Service-to-service communication and observability]_ +_Enterprise Service Bus: [Traditional enterprise integration patterns]_ +_Source: [URL]_ + +### Microservices Integration Patterns + +[Microservices integration analysis with source citations] +_API Gateway Pattern: [External API management and routing]_ +_Service Discovery: [Dynamic service registration and discovery]_ +_Circuit Breaker Pattern: [Fault tolerance and resilience patterns]_ +_Saga Pattern: 
[Distributed transaction management]_ +_Source: [URL]_ + +### Event-Driven Integration + +[Event-driven analysis with source citations] +_Publish-Subscribe Patterns: [Event broadcasting and subscription models]_ +_Event Sourcing: [Event-based state management and persistence]_ +_Message Broker Patterns: [RabbitMQ, Kafka, and message routing]_ +_CQRS Patterns: [Command Query Responsibility Segregation]_ +_Source: [URL]_ + +### Integration Security Patterns + +[Security patterns analysis with source citations] +_OAuth 2.0 and JWT: [API authentication and authorization patterns]_ +_API Key Management: [Secure API access and key rotation]_ +_Mutual TLS: [Certificate-based service authentication]_ +_Data Encryption: [Secure data transmission and storage]_ +_Source: [URL]_ +``` + +### 5. Present Analysis and Continue Option + +**Show analysis and present continue option:** + +"I've completed **integration patterns analysis** of system integration approaches for {{research_topic}}. + +**Key Integration Patterns Findings:** + +- API design patterns and protocols thoroughly analyzed +- Communication protocols and data formats evaluated +- System interoperability approaches documented +- Microservices integration patterns mapped +- Event-driven integration strategies identified + +**Ready to proceed to architectural patterns analysis?** +[C] Continue - Save this to document and proceed to architectural patterns + +### 6. Handle Continue Selection + +#### If 'C' (Continue): + +- **CONTENT ALREADY WRITTEN TO DOCUMENT** +- Update frontmatter: `stepsCompleted: [1, 2, 3]` +- Load: `./step-04-architectural-patterns.md` + +## APPEND TO DOCUMENT: + +Content is already written to document when generated in step 4. No additional append needed. + +## SUCCESS METRICS: + +✅ API design patterns and protocols thoroughly analyzed +✅ Communication protocols and data formats evaluated +✅ System interoperability approaches documented +✅ Microservices integration patterns mapped +✅ Event-driven integration strategies identified +✅ Content written immediately to document +✅ [C] continue option presented and handled correctly +✅ Proper routing to next step (architectural patterns) +✅ Research goals alignment maintained + +## FAILURE MODES: + +❌ Relying solely on training data without web verification for current facts + +❌ Missing critical API design patterns or protocols +❌ Incomplete communication protocols analysis +❌ Not identifying system interoperability approaches +❌ Not writing content immediately to document +❌ Not presenting [C] continue option after content generation +❌ Not routing to architectural patterns step + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## INTEGRATION PATTERNS RESEARCH PROTOCOLS: + +- Research API design guides and best practices documentation +- Use communication protocol specifications and standards +- Analyze integration platform and middleware solutions +- Study microservices architecture patterns and case studies +- Focus on current integration data +- Present conflicting information when sources disagree +- Apply confidence levels appropriately + +## INTEGRATION PATTERNS ANALYSIS STANDARDS: + +- Always cite URLs for web search results +- Use authoritative integration research sources +- Note data currency and potential 
limitations +- Present multiple perspectives when sources conflict +- Apply confidence levels to uncertain data +- Focus on actionable integration insights + +## NEXT STEP: + +After user selects 'C', load `./step-04-architectural-patterns.md` to analyze architectural patterns, design decisions, and system structures for {{research_topic}}. + +Remember: Always write research content to document immediately and emphasize current integration data with rigorous source verification! diff --git a/src/bmm/workflows/1-analysis/research/technical-steps/step-04-architectural-patterns.md b/src/bmm/workflows/1-analysis/research/technical-steps/step-04-architectural-patterns.md new file mode 100644 index 00000000..426cc662 --- /dev/null +++ b/src/bmm/workflows/1-analysis/research/technical-steps/step-04-architectural-patterns.md @@ -0,0 +1,202 @@ +# Technical Research Step 4: Architectural Patterns + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without web search verification + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ Search the web to verify and supplement your knowledge with current facts +- 📋 YOU ARE A SYSTEMS ARCHITECT, not content generator +- 💬 FOCUS on architectural patterns and design decisions +- 🔍 WEB SEARCH REQUIRED - verify current facts against live sources +- 📝 WRITE CONTENT IMMEDIATELY TO DOCUMENT +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show web search analysis before presenting findings +- ⚠️ Present [C] continue option after architectural patterns content generation +- 📝 WRITE ARCHITECTURAL PATTERNS ANALYSIS TO DOCUMENT IMMEDIATELY +- 💾 ONLY proceed when user chooses C (Continue) +- 📖 Update frontmatter `stepsCompleted: [1, 2, 3, 4]` before loading next step +- 🚫 FORBIDDEN to load next step until C is selected + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from previous steps are available +- **Research topic = "{{research_topic}}"** - established from initial discussion +- **Research goals = "{{research_goals}}"** - established from initial discussion +- Focus on architectural patterns and design decisions +- Web search capabilities with source verification are enabled + +## YOUR TASK: + +Conduct comprehensive architectural patterns analysis with emphasis on design decisions and implementation approaches for {{research_topic}}. + +## ARCHITECTURAL PATTERNS SEQUENCE: + +### 1. Begin Architectural Patterns Analysis + +Start with architectural research approach: +"Now I'll focus on **architectural patterns and design decisions** to identify effective architecture approaches for **{{research_topic}}**. + +**Architectural Patterns Focus:** + +- System architecture patterns and their trade-offs +- Design principles and best practices +- Scalability and maintainability considerations +- Integration and communication patterns +- Security and performance architectural considerations + +**Let me search for current architectural patterns and approaches.**" + +### 2. 
Web Search for System Architecture Patterns + +Search for current architecture patterns: +Search the web: "system architecture patterns best practices" + +**Architecture focus:** + +- Microservices, monolithic, and serverless patterns +- Event-driven and reactive architectures +- Domain-driven design patterns +- Cloud-native and edge architecture patterns + +### 3. Web Search for Design Principles + +Search for current design principles: +Search the web: "software design principles patterns" + +**Design focus:** + +- SOLID principles and their application +- Clean architecture and hexagonal architecture +- API design and GraphQL vs REST patterns +- Database design and data architecture patterns + +### 4. Web Search for Scalability Patterns + +Search for current scalability approaches: +Search the web: "scalability architecture patterns" + +**Scalability focus:** + +- Horizontal vs vertical scaling patterns +- Load balancing and caching strategies +- Distributed systems and consensus patterns +- Performance optimization techniques + +### 5. Generate Architectural Patterns Content + +Prepare architectural analysis with web search citations: + +#### Content Structure: + +When saving to document, append these Level 2 and Level 3 sections: + +```markdown +## Architectural Patterns and Design + +### System Architecture Patterns + +[System architecture patterns analysis with source citations] +_Source: [URL]_ + +### Design Principles and Best Practices + +[Design principles analysis with source citations] +_Source: [URL]_ + +### Scalability and Performance Patterns + +[Scalability patterns analysis with source citations] +_Source: [URL]_ + +### Integration and Communication Patterns + +[Integration patterns analysis with source citations] +_Source: [URL]_ + +### Security Architecture Patterns + +[Security patterns analysis with source citations] +_Source: [URL]_ + +### Data Architecture Patterns + +[Data architecture analysis with source citations] +_Source: [URL]_ + +### Deployment and Operations Architecture + +[Deployment architecture analysis with source citations] +_Source: [URL]_ +``` + +### 6. Present Analysis and Continue Option + +Show the generated architectural patterns and present continue option: +"I've completed the **architectural patterns analysis** for {{research_topic}}. + +**Key Architectural Findings:** + +- System architecture patterns and trade-offs clearly mapped +- Design principles and best practices thoroughly documented +- Scalability and performance patterns identified +- Integration and communication patterns analyzed +- Security and data architecture considerations captured + +**Ready to proceed to implementation research?** +[C] Continue - Save this to the document and move to implementation research" + +### 7. Handle Continue Selection + +#### If 'C' (Continue): + +- Append the final content to the research document +- Update frontmatter: `stepsCompleted: [1, 2, 3, 4]` +- Load: `./step-05-implementation-research.md` + +## APPEND TO DOCUMENT: + +When user selects 'C', append the content directly to the research document using the structure from step 5.
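To make the frontmatter protocol concrete, here is a minimal sketch of the research document's frontmatter after this step completes. Only `stepsCompleted` is mandated by these step files; the remaining fields are assumptions about state an implementation might also track, not part of the workflow contract.

```yaml
# Hypothetical frontmatter state after step 4 (architectural patterns).
# Only stepsCompleted is required by this workflow; lastStep, research_type,
# and research_topic are illustrative assumptions.
stepsCompleted: [1, 2, 3, 4] # updated by this step before loading step 5
lastStep: 4 # assumed convenience field for continuation logic
research_type: technical # assumed; set during research type routing
research_topic: "example topic from the initial discussion" # placeholder
```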
+ +## SUCCESS METRICS: + +✅ System architecture patterns identified with current citations +✅ Design principles clearly documented and analyzed +✅ Scalability and performance patterns thoroughly mapped +✅ Integration and communication patterns captured +✅ Security and data architecture considerations analyzed +✅ [C] continue option presented and handled correctly +✅ Content properly appended to document when C selected +✅ Proper routing to implementation research step + +## FAILURE MODES: + +❌ Relying solely on training data without web verification for current facts + +❌ Missing critical system architecture patterns +❌ Not analyzing design trade-offs and considerations +❌ Incomplete scalability or performance patterns analysis +❌ Not presenting [C] continue option after content generation +❌ Appending content without user selecting 'C' + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## ARCHITECTURAL RESEARCH PROTOCOLS: + +- Search for architecture documentation and pattern catalogs +- Use architectural conference proceedings and case studies +- Research successful system architectures and their evolution +- Note architectural decision records (ADRs) and rationales +- Research architecture assessment and evaluation frameworks + +## NEXT STEP: + +After user selects 'C' and content is saved to document, load `./step-05-implementation-research.md` to focus on implementation approaches and technology adoption. + +Remember: Always emphasize current architectural data and rigorous source verification! diff --git a/src/bmm/workflows/1-analysis/research/technical-steps/step-05-implementation-research.md b/src/bmm/workflows/1-analysis/research/technical-steps/step-05-implementation-research.md new file mode 100644 index 00000000..7117d525 --- /dev/null +++ b/src/bmm/workflows/1-analysis/research/technical-steps/step-05-implementation-research.md @@ -0,0 +1,239 @@ +# Technical Research Step 5: Implementation Research + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without web search verification + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ Search the web to verify and supplement your knowledge with current facts +- 📋 YOU ARE AN IMPLEMENTATION ENGINEER, not content generator +- 💬 FOCUS on implementation approaches and technology adoption +- 🔍 WEB SEARCH REQUIRED - verify current facts against live sources +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show web search analysis before presenting findings +- ⚠️ Present [C] complete option after implementation research content generation +- 💾 ONLY save when user chooses C (Complete) +- 📖 Update frontmatter `stepsCompleted: [1, 2, 3, 4, 5]` before completing workflow +- 🚫 FORBIDDEN to complete workflow until C is selected + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from previous steps are available +- Focus on implementation approaches and technology adoption strategies +- Web search capabilities with source verification are enabled +- This is the final step in the
technical research workflow + +## YOUR TASK: + +Conduct comprehensive implementation research with emphasis on practical implementation approaches and technology adoption. + +## IMPLEMENTATION RESEARCH SEQUENCE: + +### 1. Begin Implementation Research + +Start with implementation research approach: +"Now I'll complete our technical research with **implementation approaches and technology adoption** analysis. + +**Implementation Research Focus:** + +- Technology adoption strategies and migration patterns +- Development workflows and tooling ecosystems +- Testing, deployment, and operational practices +- Team organization and skill requirements +- Cost optimization and resource management + +**Let me search for current implementation and adoption strategies.**" + +### 2. Web Search for Technology Adoption + +Search for current adoption strategies: +Search the web: "technology adoption strategies migration" + +**Adoption focus:** + +- Technology migration patterns and approaches +- Gradual adoption vs big bang strategies +- Legacy system modernization approaches +- Vendor evaluation and selection criteria + +### 3. Web Search for Development Workflows + +Search for current development practices: +Search the web: "software development workflows tooling" + +**Workflow focus:** + +- CI/CD pipelines and automation tools +- Code quality and review processes +- Testing strategies and frameworks +- Collaboration and communication tools + +### 4. Web Search for Operational Excellence + +Search for current operational practices: +Search the web: "DevOps operations best practices" + +**Operations focus:** + +- Monitoring and observability practices +- Incident response and disaster recovery +- Infrastructure as code and automation +- Security operations and compliance automation + +### 5. Generate Implementation Research Content + +Prepare implementation analysis with web search citations: + +#### Content Structure: + +When saving to document, append these Level 2 and Level 3 sections: + +```markdown +## Implementation Approaches and Technology Adoption + +### Technology Adoption Strategies + +[Technology adoption analysis with source citations] +_Source: [URL]_ + +### Development Workflows and Tooling + +[Development workflows analysis with source citations] +_Source: [URL]_ + +### Testing and Quality Assurance + +[Testing approaches analysis with source citations] +_Source: [URL]_ + +### Deployment and Operations Practices + +[Deployment practices analysis with source citations] +_Source: [URL]_ + +### Team Organization and Skills + +[Team organization analysis with source citations] +_Source: [URL]_ + +### Cost Optimization and Resource Management + +[Cost optimization analysis with source citations] +_Source: [URL]_ + +### Risk Assessment and Mitigation + +[Risk mitigation analysis with source citations] +_Source: [URL]_ + +## Technical Research Recommendations + +### Implementation Roadmap + +[Implementation roadmap recommendations] + +### Technology Stack Recommendations + +[Technology stack suggestions] + +### Skill Development Requirements + +[Skill development recommendations] + +### Success Metrics and KPIs + +[Success measurement framework] +``` + +### 6. Present Analysis and Complete Option + +Show the generated implementation research and present complete option: +"I've completed the **implementation research and technology adoption** analysis, finalizing our comprehensive technical research. 
+ +**Implementation Highlights:** + +- Technology adoption strategies and migration patterns documented +- Development workflows and tooling ecosystems analyzed +- Testing, deployment, and operational practices mapped +- Team organization and skill requirements identified +- Cost optimization and resource management strategies provided + +**This completes our technical research covering:** + +- Technical overview and landscape analysis +- Architectural patterns and design decisions +- Implementation approaches and technology adoption +- Practical recommendations and implementation roadmap + +**Ready to complete the technical research report?** +[C] Complete Research - Save final document and conclude" + +### 7. Handle Complete Selection + +#### If 'C' (Complete Research): + +- Append the final content to the research document +- Update frontmatter: `stepsCompleted: [1, 2, 3, 4, 5]` +- Complete the technical research workflow + +## APPEND TO DOCUMENT: + +When user selects 'C', append the content directly to the research document using the structure from step 5. + +## SUCCESS METRICS: + +✅ Technology adoption strategies identified with current citations +✅ Development workflows and tooling thoroughly analyzed +✅ Testing and deployment practices clearly documented +✅ Team organization and skill requirements mapped +✅ Cost optimization and risk mitigation strategies provided +✅ [C] complete option presented and handled correctly +✅ Content properly appended to document when C selected +✅ Technical research workflow completed successfully + +## FAILURE MODES: + +❌ Relying solely on training data without web verification for current facts + +❌ Missing critical technology adoption strategies +❌ Not providing practical implementation guidance +❌ Incomplete development workflows or operational practices analysis +❌ Not presenting completion option for research workflow +❌ Appending content without user selecting 'C' + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## IMPLEMENTATION RESEARCH PROTOCOLS: + +- Search for implementation case studies and success stories +- Research technology migration patterns and lessons learned +- Identify common implementation challenges and solutions +- Research development tooling ecosystem evaluations +- Analyze operational excellence frameworks and maturity models + +## TECHNICAL RESEARCH WORKFLOW COMPLETION: + +When 'C' is selected: + +- All technical research steps completed +- Comprehensive technical research document generated +- All sections appended with source citations +- Technical research workflow status updated +- Final implementation recommendations provided to user + +## NEXT STEPS: + +Technical research workflow complete. User may: + +- Use technical research to inform architecture decisions +- Conduct additional research on specific technologies +- Combine technical research with other research types for comprehensive insights +- Move forward with implementation based on technical insights + +Congratulations on completing comprehensive technical research!
🎉 diff --git a/src/bmm/workflows/1-analysis/research/technical-steps/step-06-research-synthesis.md b/src/bmm/workflows/1-analysis/research/technical-steps/step-06-research-synthesis.md new file mode 100644 index 00000000..7dc28a2d --- /dev/null +++ b/src/bmm/workflows/1-analysis/research/technical-steps/step-06-research-synthesis.md @@ -0,0 +1,486 @@ +# Technical Research Step 6: Technical Synthesis and Completion + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without web search verification + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ Search the web to verify and supplement your knowledge with current facts +- 📋 YOU ARE A TECHNICAL RESEARCH STRATEGIST, not content generator +- 💬 FOCUS on comprehensive technical synthesis and authoritative conclusions +- 🔍 WEB SEARCH REQUIRED - verify current facts against live sources +- 📄 PRODUCE COMPREHENSIVE DOCUMENT with narrative intro, TOC, and summary +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show web search analysis before presenting findings +- ⚠️ Present [C] complete option after synthesis content generation +- 💾 ONLY save when user chooses C (Complete) +- 📖 Update frontmatter `stepsCompleted: [1, 2, 3, 4, 5, 6]` before completing workflow +- 🚫 FORBIDDEN to complete workflow until C is selected +- 📚 GENERATE COMPLETE DOCUMENT STRUCTURE with intro, TOC, and summary + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from previous steps are available +- **Research topic = "{{research_topic}}"** - comprehensive technical analysis +- **Research goals = "{{research_goals}}"** - achieved through exhaustive technical research +- All technical research sections have been completed (overview, architecture, implementation) +- Web search capabilities with source verification are enabled +- This is the final synthesis step producing the complete technical research document + +## YOUR TASK: + +Produce a comprehensive, authoritative technical research document on **{{research_topic}}** with compelling narrative introduction, detailed TOC, and executive summary based on exhaustive technical research. + +## COMPREHENSIVE TECHNICAL DOCUMENT SYNTHESIS: + +### 1. Technical Document Structure Planning + +**Complete Technical Research Document Structure:** + +```markdown +# [Compelling Technical Title]: Comprehensive {{research_topic}} Technical Research + +## Executive Summary + +[Brief compelling overview of key technical findings and strategic implications] + +## Table of Contents + +- Technical Research Introduction and Methodology +- Technical Landscape and Architecture Analysis +- Implementation Approaches and Best Practices +- Technology Stack Evolution and Trends +- Integration and Interoperability Patterns +- Performance and Scalability Analysis +- Security and Compliance Considerations +- Strategic Technical Recommendations +- Implementation Roadmap and Risk Assessment +- Future Technical Outlook and Innovation Opportunities +- Technical Research Methodology and Source Documentation +- Technical Appendices and Reference Materials +``` + +### 2. 
Generate Compelling Technical Introduction + +**Technical Introduction Requirements:** + +- Hook reader with compelling technical opening about {{research_topic}} +- Establish technical research significance and current relevance +- Outline comprehensive technical research methodology +- Preview key technical findings and strategic implications +- Set authoritative, technical expert tone + +**Web Search for Technical Introduction Context:** +Search the web: "{{research_topic}} technical significance importance" + +### 3. Synthesize All Technical Research Sections + +**Technical Section-by-Section Integration:** + +- Combine technical overview from step-02 +- Integrate architectural patterns from step-03 +- Incorporate implementation research from step-04 +- Add cross-technical insights and connections +- Ensure comprehensive technical coverage with no gaps + +### 4. Generate Complete Technical Document Content + +#### Final Technical Document Structure: + +```markdown +# [Compelling Title]: Comprehensive {{research_topic}} Technical Research + +## Executive Summary + +[2-3 paragraph compelling summary of the most critical technical findings and strategic implications for {{research_topic}} based on comprehensive current technical research] + +**Key Technical Findings:** + +- [Most significant architectural insights] +- [Critical implementation considerations] +- [Important technology trends] +- [Strategic technical implications] + +**Technical Recommendations:** + +- [Top 3-5 actionable technical recommendations based on research] + +## Table of Contents + +1. Technical Research Introduction and Methodology +2. {{research_topic}} Technical Landscape and Architecture Analysis +3. Implementation Approaches and Best Practices +4. Technology Stack Evolution and Current Trends +5. Integration and Interoperability Patterns +6. Performance and Scalability Analysis +7. Security and Compliance Considerations +8. Strategic Technical Recommendations +9. Implementation Roadmap and Risk Assessment +10. Future Technical Outlook and Innovation Opportunities +11. Technical Research Methodology and Source Verification +12. Technical Appendices and Reference Materials + +## 1. Technical Research Introduction and Methodology + +### Technical Research Significance + +[Compelling technical narrative about why {{research_topic}} research is critical right now] +_Technical Importance: [Strategic technical significance with current context]_ +_Business Impact: [Business implications of technical research]_ +_Source: [URL]_ + +### Technical Research Methodology + +[Comprehensive description of technical research approach including:] + +- **Technical Scope**: [Comprehensive technical coverage areas] +- **Data Sources**: [Authoritative technical sources and verification approach] +- **Analysis Framework**: [Structured technical analysis methodology] +- **Time Period**: [current focus and technical evolution context] +- **Technical Depth**: [Level of technical detail and analysis] + +### Technical Research Goals and Objectives + +**Original Technical Goals:** {{research_goals}} + +**Achieved Technical Objectives:** + +- [Technical Goal 1 achievement with supporting evidence] +- [Technical Goal 2 achievement with supporting evidence] +- [Additional technical insights discovered during research] + +## 2. 
{{research_topic}} Technical Landscape and Architecture Analysis + +### Current Technical Architecture Patterns + +[Comprehensive architectural analysis synthesized from step-03 with current context] +_Dominant Patterns: [Current architectural approaches]_ +_Architectural Evolution: [Historical and current evolution patterns]_ +_Architectural Trade-offs: [Key architectural decisions and implications]_ +_Source: [URL]_ + +### System Design Principles and Best Practices + +[Complete system design analysis] +_Design Principles: [Core principles guiding {{research_topic}} implementations]_ +_Best Practice Patterns: [Industry-standard approaches and methodologies]_ +_Architectural Quality Attributes: [Performance, scalability, maintainability considerations]_ +_Source: [URL]_ + +## 3. Implementation Approaches and Best Practices + +### Current Implementation Methodologies + +[Implementation analysis from step-04 with current context] +_Development Approaches: [Current development methodologies and approaches]_ +_Code Organization Patterns: [Structural patterns and organization strategies]_ +_Quality Assurance Practices: [Testing, validation, and quality approaches]_ +_Deployment Strategies: [Current deployment and operations practices]_ +_Source: [URL]_ + +### Implementation Framework and Tooling + +[Comprehensive implementation framework analysis] +_Development Frameworks: [Popular frameworks and their characteristics]_ +_Tool Ecosystem: [Development tools and platform considerations]_ +_Build and Deployment Systems: [CI/CD and automation approaches]_ +_Source: [URL]_ + +## 4. Technology Stack Evolution and Current Trends + +### Current Technology Stack Landscape + +[Technology stack analysis from step-02 with current updates] +_Programming Languages: [Current language trends and adoption patterns]_ +_Frameworks and Libraries: [Popular frameworks and their use cases]_ +_Database and Storage Technologies: [Current data storage and management trends]_ +_API and Communication Technologies: [Integration and communication patterns]_ +_Source: [URL]_ + +### Technology Adoption Patterns + +[Comprehensive technology adoption analysis] +_Adoption Trends: [Technology adoption rates and patterns]_ +_Migration Patterns: [Technology migration and evolution trends]_ +_Emerging Technologies: [New technologies and their potential impact]_ +_Source: [URL]_ + +## 5. Integration and Interoperability Patterns + +### Current Integration Approaches + +[Integration patterns analysis with current context] +_API Design Patterns: [Current API design and implementation patterns]_ +_Service Integration: [Microservices and service integration approaches]_ +_Data Integration: [Data exchange and integration patterns]_ +_Source: [URL]_ + +### Interoperability Standards and Protocols + +[Comprehensive interoperability analysis] +_Standards Compliance: [Industry standards and compliance requirements]_ +_Protocol Selection: [Communication protocols and selection criteria]_ +_Integration Challenges: [Common integration challenges and solutions]_ +_Source: [URL]_ + +## 6. 
Performance and Scalability Analysis + +### Performance Characteristics and Optimization + +[Performance analysis based on research findings] +_Performance Benchmarks: [Current performance characteristics and benchmarks]_ +_Optimization Strategies: [Performance optimization approaches and techniques]_ +_Monitoring and Measurement: [Performance monitoring and measurement practices]_ +_Source: [URL]_ + +### Scalability Patterns and Approaches + +[Comprehensive scalability analysis] +_Scalability Patterns: [Architectural and design patterns for scalability]_ +_Capacity Planning: [Capacity planning and resource management approaches]_ +_Elasticity and Auto-scaling: [Dynamic scaling approaches and implementations]_ +_Source: [URL]_ + +## 7. Security and Compliance Considerations + +### Security Best Practices and Frameworks + +[Security analysis with current context] +_Security Frameworks: [Current security frameworks and best practices]_ +_Threat Landscape: [Current security threats and mitigation approaches]_ +_Secure Development Practices: [Secure coding and development lifecycle]_ +_Source: [URL]_ + +### Compliance and Regulatory Considerations + +[Comprehensive compliance analysis] +_Industry Standards: [Relevant industry standards and compliance requirements]_ +_Regulatory Compliance: [Legal and regulatory considerations for {{research_topic}}]_ +_Audit and Governance: [Technical audit and governance practices]_ +_Source: [URL]_ + +## 8. Strategic Technical Recommendations + +### Technical Strategy and Decision Framework + +[Strategic technical recommendations based on comprehensive research] +_Architecture Recommendations: [Recommended architectural approaches and patterns]_ +_Technology Selection: [Recommended technology stack and selection criteria]_ +_Implementation Strategy: [Recommended implementation approaches and methodologies]_ +_Source: [URL]_ + +### Competitive Technical Advantage + +[Analysis of technical competitive positioning] +_Technology Differentiation: [Technical approaches that provide competitive advantage]_ +_Innovation Opportunities: [Areas for technical innovation and differentiation]_ +_Strategic Technology Investments: [Recommended technology investments and priorities]_ +_Source: [URL]_ + +## 9. Implementation Roadmap and Risk Assessment + +### Technical Implementation Framework + +[Comprehensive implementation guidance based on research findings] +_Implementation Phases: [Recommended phased implementation approach]_ +_Technology Migration Strategy: [Approach for technology adoption and migration]_ +_Resource Planning: [Technical resources and capabilities planning]_ +_Source: [URL]_ + +### Technical Risk Management + +[Comprehensive technical risk assessment] +_Technical Risks: [Major technical risks and mitigation strategies]_ +_Implementation Risks: [Risks associated with implementation and deployment]_ +_Business Impact Risks: [Technical risks and their business implications]_ +_Source: [URL]_ + +## 10. 
Future Technical Outlook and Innovation Opportunities + +### Emerging Technology Trends + +[Forward-looking technical analysis based on comprehensive research] +_Near-term Technical Evolution: [1-2 year technical development expectations]_ +_Medium-term Technology Trends: [3-5 year expected technical developments]_ +_Long-term Technical Vision: [5+ year technical outlook for {{research_topic}}]_ +_Source: [URL]_ + +### Innovation and Research Opportunities + +[Technical innovation analysis and recommendations] +_Research Opportunities: [Areas for technical research and innovation]_ +_Emerging Technology Adoption: [Potential new technologies and adoption timelines]_ +_Innovation Framework: [Approach for fostering technical innovation]_ +_Source: [URL]_ + +## 11. Technical Research Methodology and Source Verification + +### Comprehensive Technical Source Documentation + +[Complete documentation of all technical research sources] +_Primary Technical Sources: [Key authoritative technical sources used]_ +_Secondary Technical Sources: [Supporting technical research and analysis]_ +_Technical Web Search Queries: [Complete list of technical search queries used]_ + +### Technical Research Quality Assurance + +[Technical quality assurance and validation approach] +_Technical Source Verification: [All technical claims verified with multiple sources]_ +_Technical Confidence Levels: [Confidence assessments for uncertain technical data]_ +_Technical Limitations: [Technical research limitations and areas for further investigation]_ +_Methodology Transparency: [Complete transparency about technical research approach]_ + +## 12. Technical Appendices and Reference Materials + +### Detailed Technical Data Tables + +[Comprehensive technical data tables supporting research findings] +_Architectural Pattern Tables: [Detailed architectural pattern comparisons]_ +_Technology Stack Analysis: [Detailed technology evaluation and comparison data]_ +_Performance Benchmark Data: [Comprehensive performance measurement data]_ + +### Technical Resources and References + +[Valuable technical resources for continued research and implementation] +_Technical Standards: [Relevant technical standards and specifications]_ +_Open Source Projects: [Key open source projects and communities]_ +_Research Papers and Publications: [Academic and industry research sources]_ +_Technical Communities: [Professional networks and technical communities]_ + +--- + +## Technical Research Conclusion + +### Summary of Key Technical Findings + +[Comprehensive summary of the most important technical research findings] + +### Strategic Technical Impact Assessment + +[Assessment of technical implications for {{research_topic}}] + +### Next Steps Technical Recommendations + +[Specific next steps for leveraging this technical research] + +--- + +**Technical Research Completion Date:** {{date}} +**Research Period:** current comprehensive technical analysis +**Document Length:** As needed for comprehensive technical coverage +**Source Verification:** All technical facts cited with current sources +**Technical Confidence Level:** High - based on multiple authoritative technical sources + +_This comprehensive technical research document serves as an authoritative technical reference on {{research_topic}} and provides strategic technical insights for informed decision-making and implementation._ +``` + +### 5. 
Present Complete Technical Document and Final Option + +**Technical Document Completion Presentation:** + +"I've completed the **comprehensive technical research document synthesis** for **{{research_topic}}**, producing an authoritative technical research document with: + +**Technical Document Features:** + +- **Compelling Technical Introduction**: Engaging technical opening that establishes research significance +- **Comprehensive Technical TOC**: Complete navigation structure for technical reference +- **Exhaustive Technical Research Coverage**: All technical aspects of {{research_topic}} thoroughly analyzed +- **Executive Technical Summary**: Key technical findings and strategic implications highlighted +- **Strategic Technical Recommendations**: Actionable technical insights based on comprehensive research +- **Complete Technical Source Citations**: Every technical claim verified with current sources + +**Technical Research Completeness:** + +- Technical landscape and architecture analysis fully documented +- Implementation approaches and best practices comprehensively covered +- Technology stack evolution and trends detailed +- Integration, performance, and security analysis complete +- Strategic technical insights and implementation guidance provided + +**Technical Document Standards Met:** + +- Exhaustive technical research with no critical gaps +- Professional technical structure and compelling narrative +- Document length as needed for comprehensive technical coverage +- Multiple independent technical sources for all claims +- Current technical data throughout with proper citations + +**Ready to complete this comprehensive technical research document?** +[C] Complete Research - Save final comprehensive technical document" + +### 6. Handle Final Technical Completion + +#### If 'C' (Complete Research): + +- Append the complete technical document to the research file +- Update frontmatter: `stepsCompleted: [1, 2, 3, 4, 5, 6]` +- Complete the technical research workflow +- Provide final technical document delivery confirmation + +## APPEND TO DOCUMENT: + +When user selects 'C', append the complete comprehensive technical research document using the full structure above.
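For reference, a minimal sketch of the final frontmatter once synthesis completes, assuming the same conventions used throughout these steps. The `status` and `completedDate` fields are assumptions echoing the completion language below, not fields named by this workflow.

```yaml
# Hypothetical final frontmatter when the synthesis step completes.
stepsCompleted: [1, 2, 3, 4, 5, 6] # set by this step per the protocol above
status: complete # assumption - mirrors "workflow status updated to complete"
completedDate: "{{date}}" # assumption - the system-generated date value
```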
+ +## SUCCESS METRICS: + +✅ Compelling technical introduction with research significance +✅ Comprehensive technical table of contents with complete document structure +✅ Exhaustive technical research coverage across all technical aspects +✅ Executive technical summary with key findings and strategic implications +✅ Strategic technical recommendations grounded in comprehensive research +✅ Complete technical source verification with current citations +✅ Professional technical document structure and compelling narrative +✅ [C] complete option presented and handled correctly +✅ Technical research workflow completed with comprehensive document + +## FAILURE MODES: + +❌ Not producing compelling technical introduction +❌ Missing comprehensive technical table of contents +❌ Incomplete technical research coverage across technical aspects +❌ Not providing executive technical summary with key findings +❌ Missing strategic technical recommendations based on research +❌ Relying solely on training data without web verification for current facts +❌ Producing technical document without professional structure +❌ Not presenting completion option for final technical document + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## COMPREHENSIVE TECHNICAL DOCUMENT STANDARDS: + +This step ensures the final technical research document: + +- Serves as an authoritative technical reference on {{research_topic}} +- Provides strategic technical insights for informed decision-making +- Includes comprehensive technical coverage with no gaps +- Maintains rigorous technical source verification standards +- Delivers strategic technical insights and actionable recommendations +- Meets professional technical research document quality standards + +## TECHNICAL RESEARCH WORKFLOW COMPLETION: + +When 'C' is selected: + +- All technical research steps completed (1-6) +- Comprehensive technical research document generated +- Professional technical document structure with intro, TOC, and summary +- All technical sections appended with source citations +- Technical research workflow status updated to complete +- Final comprehensive technical research document delivered to user + +## FINAL TECHNICAL DELIVERABLE: + +Complete authoritative technical research document on {{research_topic}} that: + +- Establishes technical credibility through comprehensive research +- Provides strategic technical insights for informed decision-making +- Serves as technical reference document for continued use +- Maintains highest technical research quality standards with current verification + +Congratulations on completing comprehensive technical research with professional documentation! 🎉 diff --git a/src/bmm/workflows/1-analysis/research/workflow.md b/src/bmm/workflows/1-analysis/research/workflow.md new file mode 100644 index 00000000..64f62bef --- /dev/null +++ b/src/bmm/workflows/1-analysis/research/workflow.md @@ -0,0 +1,173 @@ +--- +name: research +description: Conduct comprehensive research across multiple domains using current web data and verified sources - Market, Technical, Domain and other research types.
+web_bundle: true +--- + +# Research Workflow + +**Goal:** Conduct comprehensive, exhaustive research across multiple domains using current web data and verified sources to produce complete research documents with compelling narratives and proper citations. + +**Document Standards:** + +- **Comprehensive Coverage**: Exhaustive research with no critical gaps +- **Source Verification**: Every factual claim backed by web sources with URL citations +- **Document Length**: As long as needed to fully cover the research topic +- **Professional Structure**: Compelling narrative introduction, detailed TOC, and comprehensive summary +- **Authoritative Sources**: Multiple independent sources for all critical claims + +**Your Role:** You are a research facilitator and web data analyst working with an expert partner. This is a collaboration where you bring research methodology and web search capabilities, while your partner brings domain knowledge and research direction. + +**Final Deliverable**: A complete research document that serves as an authoritative reference on the research topic with: + +- Compelling narrative introduction +- Comprehensive table of contents +- Detailed research sections with proper citations +- Executive summary and conclusions + +## WORKFLOW ARCHITECTURE + +This uses **micro-file architecture** with **routing-based discovery**: + +- Each research type has its own step folder +- Step 01 discovers research type and routes to appropriate sub-workflow +- Sequential progression within each research type +- Document state tracked in output frontmatter + +## INITIALIZATION + +### Configuration Loading + +Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve: + +- `project_name`, `output_folder`, `planning_artifacts`, `user_name` +- `communication_language`, `document_output_language`, `user_skill_level` +- `date` as a system-generated value + +### Paths + +- `installed_path` = `{project-root}/_bmad/bmm/workflows/1-analysis/research` +- `template_path` = `{installed_path}/research.template.md` +- `default_output_file` = `{planning_artifacts}/research/{{research_type}}-{{research_topic}}-research-{{date}}.md` (dynamic based on research type) + +## PREREQUISITE + +**⛔ Web search required.** If unavailable, abort and tell the user. + +## RESEARCH BEHAVIOR + +### Web Research Standards + +- **Current Data Only**: Search the web to verify and supplement your knowledge with current facts +- **Source Verification**: Require citations for all factual claims +- **Anti-Hallucination Protocol**: Never present information without verified sources +- **Multiple Sources**: Require at least 2 independent sources for critical claims +- **Conflict Resolution**: Present conflicting views and note discrepancies +- **Confidence Levels**: Flag uncertain data with [High/Medium/Low Confidence] + +### Source Quality Standards + +- **Distinguish Clearly**: Facts (from sources) vs Analysis (interpretation) vs Speculation +- **URL Citation**: Always include source URLs when presenting web search data +- **Critical Claims**: Market size, growth rates, competitive data need verification +- **Fact Checking**: Apply fact-checking to critical data points + +## Implementation Instructions + +Execute research type discovery and routing: + +### Research Type Discovery + +**Your Role:** You are a research facilitator and web data analyst working with an expert partner.
This is a collaboration where you bring research methodology and web search capabilities, while your partner brings domain knowledge and research direction. + +**Research Standards:** + +- **Anti-Hallucination Protocol**: Never present information without verified sources +- **Current Data Only**: Search the web to verify and supplement your knowledge with current facts +- **Source Citation**: Always include URLs for factual claims from web searches +- **Multiple Sources**: Require 2+ independent sources for critical claims +- **Conflict Resolution**: Present conflicting views and note discrepancies +- **Confidence Levels**: Flag uncertain data with [High/Medium/Low Confidence] + +### Collaborative Research Discovery + +"Welcome {{user_name}}! I'm excited to work with you as your research partner. I bring web research capabilities with rigorous source verification, while you bring the domain expertise and research direction. + +**Let me help you clarify what you'd like to research.** + +**First, tell me: What specific topic, problem, or area do you want to research?** + +For example: + +- 'The electric vehicle market in Europe' +- 'Cloud migration strategies for healthcare' +- 'AI implementation in financial services' +- 'Sustainable packaging regulations' +- 'Or anything else you have in mind...' + +### Topic Exploration and Clarification + +Based on the user's initial topic, explore and refine the research scope: + +#### Topic Clarification Questions: + +1. **Core Topic**: "What exactly about [topic] are you most interested in?" +2. **Research Goals**: "What do you hope to achieve with this research?" +3. **Scope**: "Should we focus broadly or dive deep into specific aspects?" +4. **Timeline**: "Are you looking at current state, historical context, or future trends?" +5. **Application**: "How will you use this research? (product development, strategy, academic, etc.)" + +#### Context Building: + +- **Initial Input**: User provides topic or research interest +- **Collaborative Refinement**: Work together to clarify scope and objectives +- **Goal Alignment**: Ensure research direction matches user needs +- **Research Boundaries**: Establish clear focus areas and deliverables + +### Research Type Identification + +After understanding the research topic and goals, identify the most appropriate research approach: + +**Research Type Options:** + +1. **Market Research** - Market size, growth, competition, customer insights + _Best for: Understanding market dynamics, customer behavior, competitive landscape_ + +2. **Domain Research** - Industry analysis, regulations, technology trends in specific domain + _Best for: Understanding industry context, regulatory environment, ecosystem_ + +3. **Technical Research** - Technology evaluation, architecture decisions, implementation approaches + _Best for: Technical feasibility, technology selection, implementation strategies_ + +**Recommendation**: Based on [topic] and [goals], I recommend [suggested research type] because [specific rationale]. + +**What type of research would work best for your needs?** + +### Research Type Routing + +Based on user selection, route to appropriate sub-workflow with the discovered topic using the following IF block sets of instructions. 
YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +#### If Market Research: + +- Set `research_type = "market"` +- Set `research_topic = [discovered topic from discussion]` +- Create the starter output file: `{planning_artifacts}/research/market-{{research_topic}}-research-{{date}}.md` with exact copy of the ./research.template.md contents +- Load: `./market-steps/step-01-init.md` with topic context + +#### If Domain Research: + +- Set `research_type = "domain"` +- Set `research_topic = [discovered topic from discussion]` +- Create the starter output file: `{planning_artifacts}/research/domain-{{research_topic}}-research-{{date}}.md` with exact copy of the ./research.template.md contents +- Load: `./domain-steps/step-01-init.md` with topic context + +#### If Technical Research: + +- Set `research_type = "technical"` +- Set `research_topic = [discovered topic from discussion]` +- Create the starter output file: `{planning_artifacts}/research/technical-{{research_topic}}-research-{{date}}.md` with exact copy of the ./research.template.md contents +- Load: `./technical-steps/step-01-init.md` with topic context + +**Important**: The discovered topic from the collaborative discussion should be passed to the research initialization steps, so they don't need to ask "What do you want to research?" again - they can focus on refining the scope for their specific research type. + +**Note:** All research workflows require web search for current data and source verification. diff --git a/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-01-init.md b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-01-init.md new file mode 100644 index 00000000..62969baf --- /dev/null +++ b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-01-init.md @@ -0,0 +1,135 @@ +# Step 1: UX Design Workflow Initialization + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without user input + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ ALWAYS treat this as collaborative discovery between UX facilitator and stakeholder +- 📋 YOU ARE A UX FACILITATOR, not a content generator +- 💬 FOCUS on initialization and setup only - don't look ahead to future steps +- 🚪 DETECT existing workflow state and handle continuation properly +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show your analysis before taking any action +- 💾 Initialize document and update frontmatter +- 📖 Set up frontmatter `stepsCompleted: [1]` before loading next step +- 🚫 FORBIDDEN to load next step until setup is complete + +## CONTEXT BOUNDARIES: + +- Variables from workflow.md are available in memory +- Previous context = what's in output document + frontmatter +- Don't assume knowledge from other steps +- Input document discovery happens in this step + +## YOUR TASK: + +Initialize the UX design workflow by detecting continuation state and setting up the design specification document. + +## INITIALIZATION SEQUENCE: + +### 1. 
Check for Existing Workflow + +First, check if the output document already exists: + +- Look for file at `{planning_artifacts}/*ux-design-specification*.md` +- If exists, read the complete file including frontmatter +- If not exists, this is a fresh workflow + +### 2. Handle Continuation (If Document Exists) + +If the document exists and has frontmatter with `stepsCompleted`: + +- **STOP here** and load `./step-01b-continue.md` immediately +- Do not proceed with any initialization tasks +- Let step-01b handle the continuation logic + +### 3. Fresh Workflow Setup (If No Document) + +If no document exists or no `stepsCompleted` in frontmatter: + +#### A. Input Document Discovery + +Discover and load context documents using smart discovery. Documents can be in the following locations: +- {planning_artifacts}/** +- {output_folder}/** +- {product_knowledge}/** +- docs/** + +When searching, a document can be a single markdown file or a sharded folder with an index and multiple files. For example, if `*foo*.md` is not found, also search for a folder `*foo*/` containing an `index.md` (which indicates sharded content). + +Try to discover the following: +- Product Brief (`*brief*.md`) +- PRD (`*prd*.md`) +- Project Documentation (multiple documents may exist in the `{product_knowledge}` or `docs` folders) +- Project Context (`**/project-context.md`) + +Confirm what you have found with the user, along with asking if the user wants to provide anything else. Only after this confirmation will you proceed to follow the loading rules. + +**Loading Rules:** + +- Load ALL discovered files completely that the user confirmed or provided (no offset/limit) +- If a project context document exists, bias the rest of this workflow toward whatever it marks as relevant +- For sharded folders, load ALL files to get the complete picture, using the index first to gauge the relevance of each document +- When available, index.md is a guide to what's relevant +- Track all successfully loaded files in frontmatter `inputDocuments` array + +#### B. Create Initial Document + +Copy the template from `{installed_path}/ux-design-template.md` to `{planning_artifacts}/ux-design-specification.md`. +Initialize the frontmatter in the new document. + +#### C. Complete Initialization and Report + +Complete setup and report to user: + +**Document Setup:** + +- Created: `{planning_artifacts}/ux-design-specification.md` from template +- Initialized frontmatter with workflow state + +**Input Documents Discovered:** +Report what was found: +"Welcome {{user_name}}! I've set up your UX design workspace for {{project_name}}. + +**Documents Found:** + +- PRD: {number of PRD files loaded or "None found"} +- Product brief: {number of brief files loaded or "None found"} +- Other context: {number of other files loaded or "None found"} + +**Files loaded:** {list of specific file names or "No additional documents found"} + +Do you have any other documents you'd like me to include, or shall we continue to the next step? + +[C] Continue to UX discovery" + +## NEXT STEP: + +After user selects [C] to continue, ensure the file `{planning_artifacts}/ux-design-specification.md` has been created and saved, and then load `./step-02-discovery.md` to begin the UX discovery phase. + +Remember: Do NOT proceed to step-02 until output file has been updated and user explicitly selects [C] to continue!
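To make the setup concrete, here is a minimal sketch of the frontmatter this step might initialize. `stepsCompleted`, `lastStep`, and `inputDocuments` are the fields these steps name; the document paths are placeholders, not real project files.

```yaml
# Hypothetical initial frontmatter for ux-design-specification.md.
stepsCompleted: [1]
lastStep: 1 # read by step-01b to resume an interrupted workflow
inputDocuments: # placeholder paths - actual discovery results will vary
  - "{planning_artifacts}/product-brief.md"
  - "{planning_artifacts}/prd.md"
```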
+ +## SUCCESS METRICS: + +✅ Existing workflow detected and handed off to step-01b correctly +✅ Fresh workflow initialized with template and frontmatter +✅ Input documents discovered and loaded, including sharded folders +✅ All discovered files tracked in frontmatter `inputDocuments` +✅ User confirmed document setup and can proceed + +## FAILURE MODES: + +❌ Proceeding with fresh initialization when existing workflow exists +❌ Not updating frontmatter with discovered input documents +❌ Creating document without proper template +❌ Not checking for sharded folders when a whole file is not found +❌ Not reporting what documents were found to user + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols diff --git a/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-01b-continue.md b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-01b-continue.md new file mode 100644 index 00000000..3d0f647e --- /dev/null +++ b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-01b-continue.md @@ -0,0 +1,127 @@ +# Step 1B: UX Design Workflow Continuation + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without user input + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ ALWAYS treat this as collaborative discovery between UX facilitator and stakeholder +- 📋 YOU ARE A UX FACILITATOR, not a content generator +- 💬 FOCUS on understanding where we left off and continuing appropriately +- 🚪 RESUME workflow from exact point where it was interrupted +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show your analysis of current state before taking action +- 💾 Keep existing frontmatter `stepsCompleted` values +- 📖 Only load documents that were already tracked in `inputDocuments` +- 🚫 FORBIDDEN to modify content completed in previous steps + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter are already loaded +- Previous context = complete document + existing frontmatter +- Input documents listed in frontmatter were already processed +- Last completed step = `lastStep` value from frontmatter + +## YOUR TASK: + +Resume the UX design workflow from where it was left off, ensuring smooth continuation. + +## CONTINUATION SEQUENCE: + +### 1. Analyze Current State + +Review the frontmatter to understand: + +- `stepsCompleted`: Which steps are already done +- `lastStep`: The most recently completed step number +- `inputDocuments`: What context was already loaded +- All other frontmatter variables + +### 2. Load All Input Documents + +Reload the context documents listed in `inputDocuments`: + +- For each document in `inputDocuments`, load the complete file +- This ensures you have full context for continuation +- Don't discover new documents - only reload what was previously processed + +### 3. Summarize Current Progress + +Welcome the user back and provide context: +"Welcome back {{user_name}}! I'm resuming our UX design collaboration for {{project_name}}.
+ +**Current Progress:** + +- Steps completed: {stepsCompleted} +- Last worked on: Step {lastStep} +- Context documents available: {count of inputDocuments} files + +**Document Status:** + +- Current UX design document is ready with all completed sections +- Ready to continue from where we left off + +Does this look right, or do you want to make any adjustments before we proceed?" + +### 4. Determine Next Step + +Based on `lastStep` value, determine which step to load next: + +- If `lastStep = 1` → Load `./step-02-discovery.md` +- If `lastStep = 2` → Load `./step-03-core-experience.md` +- If `lastStep = 3` → Load `./step-04-emotional-response.md` +- Continue this pattern for all steps +- If `lastStep` indicates final step → Workflow already complete + +### 5. Present Continuation Options + +After presenting current progress, ask: +"Ready to continue with Step {nextStepNumber}: {nextStepTitle}? + +[C] Continue to Step {nextStepNumber}" + +## SUCCESS METRICS: + +✅ All previous input documents successfully reloaded +✅ Current workflow state accurately analyzed and presented +✅ User confirms understanding of progress +✅ Correct next step identified and prepared for loading + +## FAILURE MODES: + +❌ Discovering new input documents instead of reloading existing ones +❌ Modifying content from already completed steps +❌ Loading wrong next step based on `lastStep` value +❌ Proceeding without user confirmation of current state + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## WORKFLOW ALREADY COMPLETE? + +If `lastStep` indicates the final step is completed: +"Great news! It looks like we've already completed the UX design workflow for {{project_name}}. + +The final UX design specification is ready at {planning_artifacts}/ux-design-specification.md with all sections completed through step {finalStepNumber}. + +The complete UX design includes visual foundations, user flows, and design specifications ready for implementation. + +Would you like me to: + +- Review the completed UX design specification with you +- Suggest next workflow steps (like wireframe generation or architecture) +- Start a new UX design revision + +What would be most helpful?" + +## NEXT STEP: + +After user confirms they're ready to continue, load the appropriate next step file based on the `lastStep` value from frontmatter. + +Remember: Do NOT load the next step until user explicitly selects [C] to continue!
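The `lastStep`-to-next-step routing in section 4 can also be read as a simple lookup table, sketched below. Filenames past step-04 are assumptions extrapolated from the pattern; the step files themselves remain the authoritative list.

```yaml
# Illustrative routing table derived from section 4 - not a file that
# exists in the workflow; entries beyond step-04 are assumed.
next_step_by_last_step:
  1: ./step-02-discovery.md
  2: ./step-03-core-experience.md
  3: ./step-04-emotional-response.md
```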
diff --git a/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-02-discovery.md b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-02-discovery.md new file mode 100644 index 00000000..7ab275a8 --- /dev/null +++ b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-02-discovery.md @@ -0,0 +1,190 @@ +# Step 2: Project Understanding + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without user input + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ ALWAYS treat this as collaborative discovery between UX facilitator and stakeholder +- 📋 YOU ARE A UX FACILITATOR, not a content generator +- 💬 FOCUS on understanding project context and user needs +- 🎯 COLLABORATIVE discovery, not assumption-based design +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show your analysis before taking any action +- ⚠️ Present A/P/C menu after generating project understanding content +- 💾 ONLY save when user chooses C (Continue) +- 📖 Update output file frontmatter, adding this step to the end of the list of stepsCompleted. +- 🚫 FORBIDDEN to load next step until C is selected + +## COLLABORATION MENUS (A/P/C): + +This step will generate content and present choices: + +- **A (Advanced Elicitation)**: Use discovery protocols to develop deeper project insights +- **P (Party Mode)**: Bring multiple perspectives to understand project context +- **C (Continue)**: Save the content to the document and proceed to next step + +## PROTOCOL INTEGRATION: + +- When 'A' selected: Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml +- When 'P' selected: Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md +- PROTOCOLS always return to this step's A/P/C menu +- User accepts/rejects protocol changes before proceeding + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from step 1 are available +- Input documents (PRD, briefs, epics) already loaded are in memory +- No additional data files needed for this step +- Focus on project and user understanding + +## YOUR TASK: + +Understand the project context, target users, and what makes this product special from a UX perspective. + +## PROJECT DISCOVERY SEQUENCE: + +### 1. Review Loaded Context + +Start by analyzing what we know from the loaded documents: +"Based on the project documentation we have loaded, let me confirm what I'm understanding about {{project_name}}. + +**From the documents:** +{summary of key insights from loaded PRD, briefs, and other context documents} + +**Target Users:** +{summary of user information from loaded documents} + +**Key Features/Goals:** +{summary of main features and goals from loaded documents} + +Does this match your understanding? Are there any corrections or additions you'd like to make?" + +### 2. Fill Context Gaps (If no documents or gaps exist) + +If no documents were loaded or key information is missing: +"Since we don't have complete documentation, let's start with the essentials: + +**What are you building?** (Describe your product in 1-2 sentences) + +**Who is this for?** (Describe your ideal user or target audience) + +**What makes this special or different?** (What's the unique value proposition?) 
+
+**What's the main thing users will do with this?** (Core user action or goal)"
+
+### 3. Explore User Context Deeper
+
+Dive into user understanding:
+"Let me understand your users better to inform the UX design:
+
+**User Context Questions:**
+
+- What problem are users trying to solve?
+- What frustrates them with current solutions?
+- What would make them say 'this is exactly what I needed'?
+- How tech-savvy are your target users?
+- What devices will they use most?
+- When/where will they use this product?"
+
+### 4. Identify UX Design Challenges
+
+Surface the key UX challenges to address:
+"From what we've discussed, I'm seeing some key UX design considerations:
+
+**Design Challenges:**
+
+- [Identify 2-3 key UX challenges based on project type and user needs]
+- [Note any platform-specific considerations]
+- [Highlight any complex user flows or interactions]
+
+**Design Opportunities:**
+
+- [Identify 2-3 areas where great UX could create competitive advantage]
+- [Note any opportunities for innovative UX patterns]
+
+Does this capture the key UX considerations we need to address?"
+
+### 5. Generate Project Understanding Content
+
+Prepare the content to append to the document:
+
+#### Content Structure:
+
+When saving to document, append these Level 2 and Level 3 sections:
+
+```markdown
+## Executive Summary
+
+### Project Vision
+
+[Project vision summary based on conversation]
+
+### Target Users
+
+[Target user descriptions based on conversation]
+
+### Key Design Challenges
+
+[Key UX challenges identified based on conversation]
+
+### Design Opportunities
+
+[Design opportunities identified based on conversation]
+```
+
+### 6. Present Content and Menu
+
+Show the generated project understanding content and present choices:
+"I've documented our understanding of {{project_name}} from a UX perspective. This will guide all our design decisions moving forward.
+
+**Here's what I'll add to the document:**
+
+[Show the complete markdown content from step 5]
+
+**What would you like to do?**
+[A] Advanced Elicitation - Let's refine our project understanding
+[P] Party Mode - Bring different perspectives on the project context
+[C] Continue - Save this to the document and move to core experience definition"
+
+### 7. Handle Menu Selection
+
+#### If 'A' (Advanced Elicitation):
+
+- Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml with the current project understanding content
+- Process the enhanced project insights that come back
+- Ask user: "Accept these improvements to the project understanding? (y/n)"
+- If yes: Update content with improvements, then return to A/P/C menu
+- If no: Keep original content, then return to A/P/C menu
+
+#### If 'P' (Party Mode):
+
+- Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md with the current project understanding
+- Process the collaborative project insights that come back
+- Ask user: "Accept these changes to the project understanding? (y/n)"
+- If yes: Update content with improvements, then return to A/P/C menu
+- If no: Keep original content, then return to A/P/C menu
+
+#### If 'C' (Continue):
+
+- Append the final content to `{planning_artifacts}/ux-design-specification.md`
+- Update frontmatter: `stepsCompleted: [1, 2]`
+- Load `./step-03-core-experience.md`
+
+## APPEND TO DOCUMENT:
+
+When user selects 'C', append the content directly to the document. Only after the content is saved to the document, read fully and follow: `./step-03-core-experience.md`.
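+
+As an illustration of the save-on-'C' cycle that this and every later step repeats, here is a sketch of the frontmatter after this step - the array gains one entry per completed step, and `lastStep` is assumed to track the newest entry for step-01's continuation routing:
+
+```yaml
+# Illustrative frontmatter after step 2 saves
+stepsCompleted: [1, 2] # was [1] before 'C' was selected
+lastStep: 2 # assumed field; step-01 routes from it to ./step-03-core-experience.md
+```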
+ +## SUCCESS METRICS: + +✅ All available context documents reviewed and synthesized +✅ Project vision clearly articulated +✅ Target users well understood +✅ Key UX challenges identified +✅ Design opportunities surfaced +✅ A/P/C menu presented and handled correctly +✅ Content properly appended to document when C selected + +## FAILURE MODES: + +❌ Not reviewing loaded context documents thoroughly +❌ Making assumptions about users without asking +❌ Missing key UX challenges that will impact design +❌ Not identifying design opportunities +❌ Generating generic content without real project insight +❌ Not presenting A/P/C menu after content generation +❌ Appending content without user selecting 'C' + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## NEXT STEP: + +Remember: Do NOT proceed to step-03 until user explicitly selects 'C' from the menu and content is saved! diff --git a/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-03-core-experience.md b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-03-core-experience.md new file mode 100644 index 00000000..c64c8423 --- /dev/null +++ b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-03-core-experience.md @@ -0,0 +1,216 @@ +# Step 3: Core Experience Definition + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without user input + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ ALWAYS treat this as collaborative discovery between UX facilitator and stakeholder +- 📋 YOU ARE A UX FACILITATOR, not a content generator +- 💬 FOCUS on defining the core user experience and platform +- 🎯 COLLABORATIVE discovery, not assumption-based design +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show your analysis before taking any action +- ⚠️ Present A/P/C menu after generating core experience content +- 💾 ONLY save when user chooses C (Continue) +- 📖 Update output file frontmatter, adding this step to the end of the list of stepsCompleted. 
+- 🚫 FORBIDDEN to load next step until C is selected + +## COLLABORATION MENUS (A/P/C): + +This step will generate content and present choices: + +- **A (Advanced Elicitation)**: Use discovery protocols to develop deeper experience insights +- **P (Party Mode)**: Bring multiple perspectives to define optimal user experience +- **C (Continue)**: Save the content to the document and proceed to next step + +## PROTOCOL INTEGRATION: + +- When 'A' selected: Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml +- When 'P' selected: Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md +- PROTOCOLS always return to this step's A/P/C menu +- User accepts/rejects protocol changes before proceeding + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from previous steps are available +- Project understanding from step 2 informs this step +- No additional data files needed for this step +- Focus on core experience and platform decisions + +## YOUR TASK: + +Define the core user experience, platform requirements, and what makes the interaction effortless. + +## CORE EXPERIENCE DISCOVERY SEQUENCE: + +### 1. Define Core User Action + +Start by identifying the most important user interaction: +"Now let's dig into the heart of the user experience for {{project_name}}. + +**Core Experience Questions:** + +- What's the ONE thing users will do most frequently? +- What user action is absolutely critical to get right? +- What should be completely effortless for users? +- If we nail one interaction, everything else follows - what is it? + +Think about the core loop or primary action that defines your product's value." + +### 2. Explore Platform Requirements + +Determine where and how users will interact: +"Let's define the platform context for {{project_name}}: + +**Platform Questions:** + +- Web, mobile app, desktop, or multiple platforms? +- Will this be primarily touch-based or mouse/keyboard? +- Any specific platform requirements or constraints? +- Do we need to consider offline functionality? +- Any device-specific capabilities we should leverage?" + +### 3. Identify Effortless Interactions + +Surface what should feel magical or completely seamless: +"**Effortless Experience Design:** + +- What user actions should feel completely natural and require zero thought? +- Where do users currently struggle with similar products? +- What interaction, if made effortless, would create delight? +- What should happen automatically without user intervention? +- Where can we eliminate steps that competitors require?" + +### 4. Define Critical Success Moments + +Identify the moments that determine success or failure: +"**Critical Success Moments:** + +- What's the moment where users realize 'this is better'? +- When does the user feel successful or accomplished? +- What interaction, if failed, would ruin the experience? +- What are the make-or-break user flows? +- Where does first-time user success happen?" + +### 5. Synthesize Experience Principles + +Extract guiding principles from the conversation: +"Based on our discussion, I'm hearing these core experience principles for {{project_name}}: + +**Experience Principles:** + +- [Principle 1 based on core action focus] +- [Principle 2 based on effortless interactions] +- [Principle 3 based on platform considerations] +- [Principle 4 based on critical success moments] + +These principles will guide all our UX decisions. Do these capture what's most important?" + +### 6. 
Generate Core Experience Content + +Prepare the content to append to the document: + +#### Content Structure: + +When saving to document, append these Level 2 and Level 3 sections: + +```markdown +## Core User Experience + +### Defining Experience + +[Core experience definition based on conversation] + +### Platform Strategy + +[Platform requirements and decisions based on conversation] + +### Effortless Interactions + +[Effortless interaction areas identified based on conversation] + +### Critical Success Moments + +[Critical success moments defined based on conversation] + +### Experience Principles + +[Guiding principles for UX decisions based on conversation] +``` + +### 7. Present Content and Menu + +Show the generated core experience content and present choices: +"I've defined the core user experience for {{project_name}} based on our conversation. This establishes the foundation for all our UX design decisions. + +**Here's what I'll add to the document:** + +[Show the complete markdown content from step 6] + +**What would you like to do?** +[A] Advanced Elicitation - Let's refine the core experience definition +[P] Party Mode - Bring different perspectives on the user experience +[C] Continue - Save this to the document and move to emotional response definition" + +### 8. Handle Menu Selection + +#### If 'A' (Advanced Elicitation): + +- Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml with the current core experience content +- Process the enhanced experience insights that come back +- Ask user: "Accept these improvements to the core experience definition? (y/n)" +- If yes: Update content with improvements, then return to A/P/C menu +- If no: Keep original content, then return to A/P/C menu + +#### If 'P' (Party Mode): + +- Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md with the current core experience definition +- Process the collaborative experience improvements that come back +- Ask user: "Accept these changes to the core experience definition? (y/n)" +- If yes: Update content with improvements, then return to A/P/C menu +- If no: Keep original content, then return to A/P/C menu + +#### If 'C' (Continue): + +- Append the final content to `{planning_artifacts}/ux-design-specification.md` +- Update frontmatter: append step to end of stepsCompleted array +- Load `./step-04-emotional-response.md` + +## APPEND TO DOCUMENT: + +When user selects 'C', append the content directly to the document using the structure from step 6. 
+ +## SUCCESS METRICS: + +✅ Core user action clearly identified and defined +✅ Platform requirements thoroughly explored +✅ Effortless interaction areas identified +✅ Critical success moments mapped out +✅ Experience principles established as guiding framework +✅ A/P/C menu presented and handled correctly +✅ Content properly appended to document when C selected + +## FAILURE MODES: + +❌ Missing the core user action that defines the product +❌ Not properly considering platform requirements +❌ Overlooking what should be effortless for users +❌ Not identifying critical make-or-break interactions +❌ Experience principles too generic or not actionable +❌ Not presenting A/P/C menu after content generation +❌ Appending content without user selecting 'C' + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## NEXT STEP: + +After user selects 'C' and content is saved to document, load `./step-04-emotional-response.md` to define desired emotional responses. + +Remember: Do NOT proceed to step-04 until user explicitly selects 'C' from the A/P/C menu and content is saved! diff --git a/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-04-emotional-response.md b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-04-emotional-response.md new file mode 100644 index 00000000..247a61e2 --- /dev/null +++ b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-04-emotional-response.md @@ -0,0 +1,219 @@ +# Step 4: Desired Emotional Response + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without user input + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ ALWAYS treat this as collaborative discovery between UX facilitator and stakeholder +- 📋 YOU ARE A UX FACILITATOR, not a content generator +- 💬 FOCUS on defining desired emotional responses and user feelings +- 🎯 COLLABORATIVE discovery, not assumption-based design +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show your analysis before taking any action +- ⚠️ Present A/P/C menu after generating emotional response content +- 💾 ONLY save when user chooses C (Continue) +- 📖 Update output file frontmatter, adding this step to the end of the list of stepsCompleted. 
+- 🚫 FORBIDDEN to load next step until C is selected + +## COLLABORATION MENUS (A/P/C): + +This step will generate content and present choices: + +- **A (Advanced Elicitation)**: Use discovery protocols to develop deeper emotional insights +- **P (Party Mode)**: Bring multiple perspectives to define optimal emotional responses +- **C (Continue)**: Save the content to the document and proceed to next step + +## PROTOCOL INTEGRATION: + +- When 'A' selected: Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml +- When 'P' selected: Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md +- PROTOCOLS always return to this step's A/P/C menu +- User accepts/rejects protocol changes before proceeding + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from previous steps are available +- Core experience definition from step 3 informs emotional response +- No additional data files needed for this step +- Focus on user feelings and emotional design goals + +## YOUR TASK: + +Define the desired emotional responses users should feel when using the product. + +## EMOTIONAL RESPONSE DISCOVERY SEQUENCE: + +### 1. Explore Core Emotional Goals + +Start by understanding the emotional objectives: +"Now let's think about how {{project_name}} should make users feel. + +**Emotional Response Questions:** + +- What should users FEEL when using this product? +- What emotion would make them tell a friend about this? +- How should users feel after accomplishing their primary goal? +- What feeling differentiates this from competitors? + +Common emotional goals: Empowered and in control? Delighted and surprised? Efficient and productive? Creative and inspired? Calm and focused? Connected and engaged?" + +### 2. Identify Emotional Journey Mapping + +Explore feelings at different stages: +"**Emotional Journey Considerations:** + +- How should users feel when they first discover the product? +- What emotion during the core experience/action? +- How should they feel after completing their task? +- What if something goes wrong - what emotional response do we want? +- How should they feel when returning to use it again?" + +### 3. Define Micro-Emotions + +Surface subtle but important emotional states: +"**Micro-Emotions to Consider:** + +- Confidence vs. Confusion +- Trust vs. Skepticism +- Excitement vs. Anxiety +- Accomplishment vs. Frustration +- Delight vs. Satisfaction +- Belonging vs. Isolation + +Which of these emotional states are most critical for your product's success?" + +### 4. Connect Emotions to UX Decisions + +Link feelings to design implications: +"**Design Implications:** + +- If we want users to feel [emotional state], what UX choices support this? +- What interactions might create negative emotions we want to avoid? +- Where can we add moments of delight or surprise? +- How do we build trust and confidence through design? + +**Emotion-Design Connections:** + +- [Emotion 1] → [UX design approach] +- [Emotion 2] → [UX design approach] +- [Emotion 3] → [UX design approach]" + +### 5. Validate Emotional Goals + +Check if emotional goals align with product vision: +"Let me make sure I understand the emotional vision for {{project_name}}: + +**Primary Emotional Goal:** [Summarize main emotional response] +**Secondary Feelings:** [List supporting emotional states] +**Emotions to Avoid:** [List negative emotions to prevent] + +Does this capture the emotional experience you want to create? Any adjustments needed?" + +### 6. 
Generate Emotional Response Content + +Prepare the content to append to the document: + +#### Content Structure: + +When saving to document, append these Level 2 and Level 3 sections: + +```markdown +## Desired Emotional Response + +### Primary Emotional Goals + +[Primary emotional goals based on conversation] + +### Emotional Journey Mapping + +[Emotional journey mapping based on conversation] + +### Micro-Emotions + +[Micro-emotions identified based on conversation] + +### Design Implications + +[UX design implications for emotional responses based on conversation] + +### Emotional Design Principles + +[Guiding principles for emotional design based on conversation] +``` + +### 7. Present Content and Menu + +Show the generated emotional response content and present choices: +"I've defined the desired emotional responses for {{project_name}}. These emotional goals will guide our design decisions to create the right user experience. + +**Here's what I'll add to the document:** + +[Show the complete markdown content from step 6] + +**What would you like to do?** +[A] Advanced Elicitation - Let's refine the emotional response definition +[P] Party Mode - Bring different perspectives on user emotional needs +[C] Continue - Save this to the document and move to inspiration analysis" + +### 8. Handle Menu Selection + +#### If 'A' (Advanced Elicitation): + +- Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml with the current emotional response content +- Process the enhanced emotional insights that come back +- Ask user: "Accept these improvements to the emotional response definition? (y/n)" +- If yes: Update content with improvements, then return to A/P/C menu +- If no: Keep original content, then return to A/P/C menu + +#### If 'P' (Party Mode): + +- Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md with the current emotional response definition +- Process the collaborative emotional insights that come back +- Ask user: "Accept these changes to the emotional response definition? (y/n)" +- If yes: Update content with improvements, then return to A/P/C menu +- If no: Keep original content, then return to A/P/C menu + +#### If 'C' (Continue): + +- Append the final content to `{planning_artifacts}/ux-design-specification.md` +- Update frontmatter: append step to end of stepsCompleted array +- Load `./step-05-inspiration.md` + +## APPEND TO DOCUMENT: + +When user selects 'C', append the content directly to the document using the structure from step 6. 
+ +## SUCCESS METRICS: + +✅ Primary emotional goals clearly defined +✅ Emotional journey mapped across user experience +✅ Micro-emotions identified and addressed +✅ Design implications connected to emotional responses +✅ Emotional design principles established +✅ A/P/C menu presented and handled correctly +✅ Content properly appended to document when C selected + +## FAILURE MODES: + +❌ Missing core emotional goals or being too generic +❌ Not considering emotional journey across different stages +❌ Overlooking micro-emotions that impact user satisfaction +❌ Not connecting emotional goals to specific UX design choices +❌ Emotional principles too vague or not actionable +❌ Not presenting A/P/C menu after content generation +❌ Appending content without user selecting 'C' + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## NEXT STEP: + +After user selects 'C' and content is saved to document, load `./step-05-inspiration.md` to analyze UX patterns from inspiring products. + +Remember: Do NOT proceed to step-05 until user explicitly selects 'C' from the A/P/C menu and content is saved! diff --git a/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-05-inspiration.md b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-05-inspiration.md new file mode 100644 index 00000000..87fe5603 --- /dev/null +++ b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-05-inspiration.md @@ -0,0 +1,234 @@ +# Step 5: UX Pattern Analysis & Inspiration + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without user input + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ ALWAYS treat this as collaborative discovery between UX facilitator and stakeholder +- 📋 YOU ARE A UX FACILITATOR, not a content generator +- 💬 FOCUS on analyzing existing UX patterns and extracting inspiration +- 🎯 COLLABORATIVE discovery, not assumption-based design +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show your analysis before taking any action +- ⚠️ Present A/P/C menu after generating inspiration analysis content +- 💾 ONLY save when user chooses C (Continue) +- 📖 Update output file frontmatter, adding this step to the end of the list of stepsCompleted. 
+- 🚫 FORBIDDEN to load next step until C is selected
+
+## COLLABORATION MENUS (A/P/C):
+
+This step will generate content and present choices:
+
+- **A (Advanced Elicitation)**: Use discovery protocols to develop deeper pattern insights
+- **P (Party Mode)**: Bring multiple perspectives to analyze UX patterns
+- **C (Continue)**: Save the content to the document and proceed to next step
+
+## PROTOCOL INTEGRATION:
+
+- When 'A' selected: Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml
+- When 'P' selected: Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md
+- PROTOCOLS always return to this step's A/P/C menu
+- User accepts/rejects protocol changes before proceeding
+
+## CONTEXT BOUNDARIES:
+
+- Current document and frontmatter from previous steps are available
+- Emotional response goals from step 4 inform pattern analysis
+- No additional data files needed for this step
+- Focus on analyzing existing UX patterns and extracting lessons
+
+## YOUR TASK:
+
+Analyze inspiring products and UX patterns to inform design decisions for the current project.
+
+## INSPIRATION ANALYSIS SEQUENCE:
+
+### 1. Identify User's Favorite Apps
+
+Start by gathering inspiration sources:
+"Let's learn from products your users already love and use regularly.
+
+**Inspiration Questions:**
+
+- Name 2-3 apps your target users already love and USE frequently
+- For each one, what do they do well from a UX perspective?
+- What makes the experience compelling or delightful?
+- What keeps users coming back to these apps?
+
+Think about apps in your category or even unrelated products that have great UX."
+
+### 2. Analyze UX Patterns and Principles
+
+Break down what makes these apps successful:
+"For each inspiring app, let's analyze their UX success:
+
+**For [App Name]:**
+
+- What core problem does it solve elegantly?
+- What makes the onboarding experience effective?
+- How do they handle navigation and information hierarchy?
+- What are their most innovative or delightful interactions?
+- What visual design choices support the user experience?
+- How do they handle errors or edge cases?"
+
+### 3. Extract Transferable Patterns
+
+Identify patterns that could apply to your project:
+"**Transferable UX Patterns:**
+Looking across these inspiring apps, I see patterns we could adapt:
+
+**Navigation Patterns:**
+
+- [Pattern 1] - could work for your [specific use case]
+- [Pattern 2] - might solve your [specific challenge]
+
+**Interaction Patterns:**
+
+- [Pattern 1] - excellent for [your user goal]
+- [Pattern 2] - addresses [your user pain point]
+
+**Visual Patterns:**
+
+- [Pattern 1] - supports your [emotional goal]
+- [Pattern 2] - aligns with your [platform requirements]
+
+Which of these patterns resonate most with your product?"
+
+### 4. Identify Anti-Patterns to Avoid
+
+Surface what not to do based on analysis:
+"**UX Anti-Patterns to Avoid:**
+From analyzing both successes and failures in your space, here are patterns to avoid:
+
+- [Anti-pattern 1] - users find this confusing/frustrating
+- [Anti-pattern 2] - this creates unnecessary friction
+- [Anti-pattern 3] - doesn't align with your [emotional goals]
+
+Learning from others' mistakes is as important as learning from their successes."
+
+### 5.
Define Design Inspiration Strategy + +Create a clear strategy for using this inspiration: +"**Design Inspiration Strategy:** + +**What to Adopt:** + +- [Specific pattern] - because it supports [your core experience] +- [Specific pattern] - because it aligns with [user needs] + +**What to Adapt:** + +- [Specific pattern] - modify for [your unique requirements] +- [Specific pattern] - simplify for [your user skill level] + +**What to Avoid:** + +- [Specific anti-pattern] - conflicts with [your goals] +- [Specific anti-pattern] - doesn't fit [your platform] + +This strategy will guide our design decisions while keeping {{project_name}} unique." + +### 6. Generate Inspiration Analysis Content + +Prepare the content to append to the document: + +#### Content Structure: + +When saving to document, append these Level 2 and Level 3 sections: + +```markdown +## UX Pattern Analysis & Inspiration + +### Inspiring Products Analysis + +[Analysis of inspiring products based on conversation] + +### Transferable UX Patterns + +[Transferable patterns identified based on conversation] + +### Anti-Patterns to Avoid + +[Anti-patterns to avoid based on conversation] + +### Design Inspiration Strategy + +[Strategy for using inspiration based on conversation] +``` + +### 7. Present Content and Menu + +Show the generated inspiration analysis content and present choices: +"I've analyzed inspiring UX patterns and products to inform our design strategy for {{project_name}}. This gives us a solid foundation of proven patterns to build upon. + +**Here's what I'll add to the document:** + +[Show the complete markdown content from step 6] + +**What would you like to do?** +[A] Advanced Elicitation - Let's deepen our UX pattern analysis +[P] Party Mode - Bring different perspectives on inspiration sources +[C] Continue - Save this to the document and move to design system choice" + +### 8. Handle Menu Selection + +#### If 'A' (Advanced Elicitation): + +- Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml with the current inspiration analysis content +- Process the enhanced pattern insights that come back +- Ask user: "Accept these improvements to the inspiration analysis? (y/n)" +- If yes: Update content with improvements, then return to A/P/C menu +- If no: Keep original content, then return to A/P/C menu + +#### If 'P' (Party Mode): + +- Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md with the current inspiration analysis +- Process the collaborative pattern insights that come back +- Ask user: "Accept these changes to the inspiration analysis? (y/n)" +- If yes: Update content with improvements, then return to A/P/C menu +- If no: Keep original content, then return to A/P/C menu + +#### If 'C' (Continue): + +- Append the final content to `{planning_artifacts}/ux-design-specification.md` +- Update frontmatter: append step to end of stepsCompleted array +- Read fully and follow: `./step-06-design-system.md` + +## APPEND TO DOCUMENT: + +When user selects 'C', append the content directly to the document using the structure from step 6. 
+ +## SUCCESS METRICS: + +✅ Inspiring products identified and analyzed thoroughly +✅ UX patterns extracted and categorized effectively +✅ Transferable patterns identified for current project +✅ Anti-patterns identified to avoid common mistakes +✅ Clear design inspiration strategy established +✅ A/P/C menu presented and handled correctly +✅ Content properly appended to document when C selected + +## FAILURE MODES: + +❌ Not getting specific examples of inspiring products +❌ Surface-level analysis without deep pattern extraction +❌ Missing opportunities for pattern adaptation +❌ Not identifying relevant anti-patterns to avoid +❌ Strategy too generic or not actionable +❌ Not presenting A/P/C menu after content generation +❌ Appending content without user selecting 'C' + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## NEXT STEP: + +After user selects 'C' and content is saved to document, load `./step-06-design-system.md` to choose the appropriate design system approach. + +Remember: Do NOT proceed to step-06 until user explicitly selects 'C' from the A/P/C menu and content is saved! diff --git a/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-06-design-system.md b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-06-design-system.md new file mode 100644 index 00000000..70d566ad --- /dev/null +++ b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-06-design-system.md @@ -0,0 +1,252 @@ +# Step 6: Design System Choice + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without user input + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ ALWAYS treat this as collaborative discovery between UX facilitator and stakeholder +- 📋 YOU ARE A UX FACILITATOR, not a content generator +- 💬 FOCUS on choosing appropriate design system approach +- 🎯 COLLABORATIVE decision-making, not recommendation-only +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show your analysis before taking any action +- ⚠️ Present A/P/C menu after generating design system decision content +- 💾 ONLY save when user chooses C (Continue) +- 📖 Update output file frontmatter, adding this step to the end of the list of stepsCompleted. 
+- 🚫 FORBIDDEN to load next step until C is selected + +## COLLABORATION MENUS (A/P/C): + +This step will generate content and present choices: + +- **A (Advanced Elicitation)**: Use discovery protocols to develop deeper design system insights +- **P (Party Mode)**: Bring multiple perspectives to evaluate design system options +- **C (Continue)**: Save the content to the document and proceed to next step + +## PROTOCOL INTEGRATION: + +- When 'A' selected: Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml +- When 'P' selected: Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md +- PROTOCOLS always return to this step's A/P/C menu +- User accepts/rejects protocol changes before proceeding + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from previous steps are available +- Platform requirements from step 3 inform design system choice +- Inspiration patterns from step 5 guide design system selection +- Focus on choosing foundation for consistent design + +## YOUR TASK: + +Choose appropriate design system approach based on project requirements and constraints. + +## DESIGN SYSTEM CHOICE SEQUENCE: + +### 1. Present Design System Options + +Educate about design system approaches: +"For {{project_name}}, we need to choose a design system foundation. Think of design systems like LEGO blocks for UI - they provide proven components and patterns, ensuring consistency and speeding development. + +**Design System Approaches:** + +**1. Custom Design System** + +- Complete visual uniqueness +- Full control over every component +- Higher initial investment +- Perfect for established brands with unique needs + +**2. Established System (Material Design, Ant Design, etc.)** + +- Fast development with proven patterns +- Great defaults and accessibility built-in +- Less visual differentiation +- Ideal for startups or internal tools + +**3. Themeable System (MUI, Chakra UI, Tailwind UI)** + +- Customizable with strong foundation +- Brand flexibility with proven components +- Moderate learning curve +- Good balance of speed and uniqueness + +Which direction feels right for your project?" + +### 2. Analyze Project Requirements + +Guide decision based on project context: +"**Let's consider your specific needs:** + +**Based on our previous conversations:** + +- Platform: [platform from step 3] +- Timeline: [inferred from user conversation] +- Team Size: [inferred from user conversation] +- Brand Requirements: [inferred from user conversation] +- Technical Constraints: [inferred from user conversation] + +**Decision Factors:** + +- Need for speed vs. need for uniqueness +- Brand guidelines or existing visual identity +- Team's design expertise +- Long-term maintenance considerations +- Integration requirements with existing systems" + +### 3. Explore Specific Design System Options + +Dive deeper into relevant options: +"**Recommended Options Based on Your Needs:** + +**For [Your Platform Type]:** + +- [Option 1] - [Key benefit] - [Best for scenario] +- [Option 2] - [Key benefit] - [Best for scenario] +- [Option 3] - [Key benefit] - [Best for scenario] + +**Considerations:** + +- Component library size and quality +- Documentation and community support +- Customization capabilities +- Accessibility compliance +- Performance characteristics +- Learning curve for your team" + +### 4. Facilitate Decision Process + +Help user make informed choice: +"**Decision Framework:** + +1. What's most important: Speed, uniqueness, or balance? 
+2. How much design expertise does your team have?
+3. Are there existing brand guidelines to follow?
+4. What's your timeline and budget?
+5. What are your long-term maintenance needs?
+
+Let's evaluate options based on your answers to these questions."
+
+### 5. Finalize Design System Choice
+
+Confirm and document the decision:
+"Based on our analysis, I recommend [Design System Choice] for {{project_name}}.
+
+**Rationale:**
+
+- [Reason 1 based on project needs]
+- [Reason 2 based on constraints]
+- [Reason 3 based on team considerations]
+
+**Next Steps:**
+
+- We'll customize this system to match your brand and needs
+- Define a component strategy for any custom components needed
+- Establish design tokens and patterns
+
+Does this design system choice feel right to you?"
+
+### 6. Generate Design System Content
+
+Prepare the content to append to the document:
+
+#### Content Structure:
+
+When saving to document, append these Level 2 and Level 3 sections:
+
+```markdown
+## Design System Foundation
+
+### Design System Choice
+
+[Design system choice based on conversation]
+
+### Rationale for Selection
+
+[Rationale for design system selection based on conversation]
+
+### Implementation Approach
+
+[Implementation approach based on chosen system]
+
+### Customization Strategy
+
+[Customization strategy based on project needs]
+```
+
+### 7. Present Content and Menu
+
+Show the generated design system content and present choices:
+"I've documented our design system choice for {{project_name}}. This foundation will ensure consistency and speed up development.
+
+**Here's what I'll add to the document:**
+
+[Show the complete markdown content from step 6]
+
+**What would you like to do?**
+[A] Advanced Elicitation - Let's refine our design system decision
+[P] Party Mode - Bring technical perspectives on design systems
+[C] Continue - Save this to the document and move to defining experience"
+
+### 8. Handle Menu Selection
+
+#### If 'A' (Advanced Elicitation):
+
+- Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml with the current design system content
+- Process the enhanced design system insights that come back
+- Ask user: "Accept these improvements to the design system decision? (y/n)"
+- If yes: Update content with improvements, then return to A/P/C menu
+- If no: Keep original content, then return to A/P/C menu
+
+#### If 'P' (Party Mode):
+
+- Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md with the current design system choice
+- Process the collaborative design system insights that come back
+- Ask user: "Accept these changes to the design system decision? (y/n)"
+- If yes: Update content with improvements, then return to A/P/C menu
+- If no: Keep original content, then return to A/P/C menu
+
+#### If 'C' (Continue):
+
+- Append the final content to `{planning_artifacts}/ux-design-specification.md`
+- Update frontmatter: append step to end of stepsCompleted array
+- Load `./step-07-defining-experience.md`
+
+## APPEND TO DOCUMENT:
+
+When user selects 'C', append the content directly to the document using the structure from step 6.
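+
+As an illustration only, the decision this step captures could be summarized in a small record like the sketch below - the field names and the example library are hypothetical, not recommendations:
+
+```yaml
+# Hypothetical decision record (names and values illustrative)
+designSystemChoice:
+  approach: themeable # custom | established | themeable
+  library: Chakra UI # example only
+  decisionFactors:
+    speedVsUniqueness: balance
+    brandGuidelines: none # a theme will be generated in step 8
+    teamDesignExpertise: moderate
+  nextSteps:
+    - define a component strategy for custom components
+    - establish design tokens and patterns
+```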
+ +## SUCCESS METRICS: + +✅ Design system options clearly presented and explained +✅ Decision framework applied to project requirements +✅ Specific design system chosen with clear rationale +✅ Implementation approach planned +✅ Customization strategy defined +✅ A/P/C menu presented and handled correctly +✅ Content properly appended to document when C selected + +## FAILURE MODES: + +❌ Not explaining design system concepts clearly +❌ Rushing to recommendation without understanding requirements +❌ Not considering technical constraints or team capabilities +❌ Choosing design system without clear rationale +❌ Not planning implementation approach +❌ Not presenting A/P/C menu after content generation +❌ Appending content without user selecting 'C' + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## NEXT STEP: + +After user selects 'C' and content is saved to document, load `./step-07-defining-experience.md` to define the core user interaction. + +Remember: Do NOT proceed to step-07 until user explicitly selects 'C' from the A/P/C menu and content is saved! diff --git a/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-07-defining-experience.md b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-07-defining-experience.md new file mode 100644 index 00000000..7e904b94 --- /dev/null +++ b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-07-defining-experience.md @@ -0,0 +1,254 @@ +# Step 7: Defining Core Experience + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without user input + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ ALWAYS treat this as collaborative discovery between UX facilitator and stakeholder +- 📋 YOU ARE A UX FACILITATOR, not a content generator +- 💬 FOCUS on defining the core interaction that defines the product +- 🎯 COLLABORATIVE discovery, not assumption-based design +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show your analysis before taking any action +- ⚠️ Present A/P/C menu after generating defining experience content +- 💾 ONLY save when user chooses C (Continue) +- 📖 Update output file frontmatter, adding this step to the end of the list of stepsCompleted. 
+- 🚫 FORBIDDEN to load next step until C is selected + +## COLLABORATION MENUS (A/P/C): + +This step will generate content and present choices: + +- **A (Advanced Elicitation)**: Use discovery protocols to develop deeper experience insights +- **P (Party Mode)**: Bring multiple perspectives to define optimal core experience +- **C (Continue)**: Save the content to the document and proceed to next step + +## PROTOCOL INTEGRATION: + +- When 'A' selected: Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml +- When 'P' selected: Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md +- PROTOCOLS always return to this step's A/P/C menu +- User accepts/rejects protocol changes before proceeding + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from previous steps are available +- Core experience from step 3 provides foundation +- Design system choice from step 6 informs implementation +- Focus on the defining interaction that makes the product special + +## YOUR TASK: + +Define the core interaction that, if nailed, makes everything else follow in the user experience. + +## DEFINING EXPERIENCE SEQUENCE: + +### 1. Identify the Defining Experience + +Focus on the core interaction: +"Every successful product has a defining experience - the core interaction that, if we nail it, everything else follows. + +**Think about these famous examples:** + +- Tinder: "Swipe to match with people" +- Snapchat: "Share photos that disappear" +- Instagram: "Share perfect moments with filters" +- Spotify: "Discover and play any song instantly" + +**For {{project_name}}:** +What's the core action that users will describe to their friends? +What's the interaction that makes users feel successful? +If we get ONE thing perfectly right, what should it be?" + +### 2. Explore the User's Mental Model + +Understand how users think about the core task: +"**User Mental Model Questions:** + +- How do users currently solve this problem? +- What mental model do they bring to this task? +- What's their expectation for how this should work? +- Where are they likely to get confused or frustrated? + +**Current Solutions:** + +- What do users love/hate about existing approaches? +- What shortcuts or workarounds do they use? +- What makes existing solutions feel magical or terrible?" + +### 3. Define Success Criteria for Core Experience + +Establish what makes the core interaction successful: +"**Core Experience Success Criteria:** + +- What makes users say 'this just works'? +- When do they feel smart or accomplished? +- What feedback tells them they're doing it right? +- How fast should it feel? +- What should happen automatically? + +**Success Indicators:** + +- [Success indicator 1] +- [Success indicator 2] +- [Success indicator 3]" + +### 4. Identify Novel vs. Established Patterns + +Determine if we need to innovate or can use proven patterns: +"**Pattern Analysis:** +Looking at your core experience, does this: + +- Use established UX patterns that users already understand? +- Require novel interaction design that needs user education? +- Combine familiar patterns in innovative ways? + +**If Novel:** + +- What makes this different from existing approaches? +- How will we teach users this new pattern? +- What familiar metaphors can we use? + +**If Established:** + +- Which proven patterns should we adopt? +- How can we innovate within familiar patterns? +- What's our unique twist on established interactions?" + +### 5. 
Define Experience Mechanics
+
+Break down the core interaction into details:
+"**Core Experience Mechanics:**
+Let's design the step-by-step flow for [defining experience]:
+
+**1. Initiation:**
+
+- How does the user start this action?
+- What triggers or invites them to begin?
+
+**2. Interaction:**
+
+- What does the user actually do?
+- What controls or inputs do they use?
+- How does the system respond?
+
+**3. Feedback:**
+
+- What tells users they're succeeding?
+- How do they know when it's working?
+- What happens if they make a mistake?
+
+**4. Completion:**
+
+- How do users know they're done?
+- What's the successful outcome?
+- What's next?"
+
+### 6. Generate Defining Experience Content
+
+Prepare the content to append to the document:
+
+#### Content Structure:
+
+When saving to document, append these Level 2 and Level 3 sections:
+
+```markdown
+## 2. Core User Experience
+
+### 2.1 Defining Experience
+
+[Defining experience description based on conversation]
+
+### 2.2 User Mental Model
+
+[User mental model analysis based on conversation]
+
+### 2.3 Success Criteria
+
+[Success criteria for core experience based on conversation]
+
+### 2.4 Novel UX Patterns
+
+[Novel UX patterns analysis based on conversation]
+
+### 2.5 Experience Mechanics
+
+[Detailed mechanics for core experience based on conversation]
+```
+
+### 7. Present Content and Menu
+
+Show the generated defining experience content and present choices:
+"I've defined the core experience for {{project_name}} - the interaction that will make users love this product.
+
+**Here's what I'll add to the document:**
+
+[Show the complete markdown content from step 6]
+
+**What would you like to do?**
+[A] Advanced Elicitation - Let's refine the core experience definition
+[P] Party Mode - Bring different perspectives on the defining interaction
+[C] Continue - Save this to the document and move to visual foundation"
+
+### 8. Handle Menu Selection
+
+#### If 'A' (Advanced Elicitation):
+
+- Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml with the current defining experience content
+- Process the enhanced experience insights that come back
+- Ask user: "Accept these improvements to the defining experience? (y/n)"
+- If yes: Update content with improvements, then return to A/P/C menu
+- If no: Keep original content, then return to A/P/C menu
+
+#### If 'P' (Party Mode):
+
+- Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md with the current defining experience
+- Process the collaborative experience insights that come back
+- Ask user: "Accept these changes to the defining experience? (y/n)"
+- If yes: Update content with improvements, then return to A/P/C menu
+- If no: Keep original content, then return to A/P/C menu
+
+#### If 'C' (Continue):
+
+- Append the final content to `{planning_artifacts}/ux-design-specification.md`
+- Update frontmatter: append step to end of stepsCompleted array
+- Load `./step-08-visual-foundation.md`
+
+## APPEND TO DOCUMENT:
+
+When user selects 'C', append the content directly to the document using the structure from step 6.
+
+## SUCCESS METRICS:
+
+✅ Defining experience clearly articulated
+✅ User mental model thoroughly analyzed
+✅ Success criteria established for core interaction
+✅ Novel vs.
established patterns properly evaluated +✅ Experience mechanics designed in detail +✅ A/P/C menu presented and handled correctly +✅ Content properly appended to document when C selected + +## FAILURE MODES: + +❌ Not identifying the true core interaction +❌ Missing user's mental model and expectations +❌ Not establishing clear success criteria +❌ Not properly evaluating novel vs. established patterns +❌ Experience mechanics too vague or incomplete +❌ Not presenting A/P/C menu after content generation +❌ Appending content without user selecting 'C' + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## NEXT STEP: + +After user selects 'C' and content is saved to document, load `./step-08-visual-foundation.md` to establish visual design foundation. + +Remember: Do NOT proceed to step-08 until user explicitly selects 'C' from the A/P/C menu and content is saved! diff --git a/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-08-visual-foundation.md b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-08-visual-foundation.md new file mode 100644 index 00000000..bd764a60 --- /dev/null +++ b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-08-visual-foundation.md @@ -0,0 +1,224 @@ +# Step 8: Visual Foundation + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without user input + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ ALWAYS treat this as collaborative discovery between UX facilitator and stakeholder +- 📋 YOU ARE A UX FACILITATOR, not a content generator +- 💬 FOCUS on establishing visual design foundation (colors, typography, spacing) +- 🎯 COLLABORATIVE discovery, not assumption-based design +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show your analysis before taking any action +- ⚠️ Present A/P/C menu after generating visual foundation content +- 💾 ONLY save when user chooses C (Continue) +- 📖 Update output file frontmatter, adding this step to the end of the list of stepsCompleted. 
+- 🚫 FORBIDDEN to load next step until C is selected + +## COLLABORATION MENUS (A/P/C): + +This step will generate content and present choices: + +- **A (Advanced Elicitation)**: Use discovery protocols to develop deeper visual insights +- **P (Party Mode)**: Bring multiple perspectives to define visual foundation +- **C (Continue)**: Save the content to the document and proceed to next step + +## PROTOCOL INTEGRATION: + +- When 'A' selected: Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml +- When 'P' selected: Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md +- PROTOCOLS always return to this step's A/P/C menu +- User accepts/rejects protocol changes before proceeding + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from previous steps are available +- Design system choice from step 6 provides component foundation +- Emotional response goals from step 4 inform visual decisions +- Focus on colors, typography, spacing, and layout foundation + +## YOUR TASK: + +Establish the visual design foundation including color themes, typography, and spacing systems. + +## VISUAL FOUNDATION SEQUENCE: + +### 1. Brand Guidelines Assessment + +Check for existing brand requirements: +"Do you have existing brand guidelines or a specific color palette I should follow? (y/n) + +If yes, I'll extract and document your brand colors and create semantic color mappings. +If no, I'll generate theme options based on your project's personality and emotional goals from our earlier discussion." + +### 2. Generate Color Theme Options (If no brand guidelines) + +Create visual exploration opportunities: +"If no existing brand guidelines, I'll create a color theme visualizer to help you explore options. + +🎨 I can generate comprehensive HTML color theme visualizers with multiple theme options, complete UI examples, and the ability to see how colors work in real interface contexts. + +This will help you make an informed decision about the visual direction for {{project_name}}." + +### 3. Define Typography System + +Establish the typographic foundation: +"**Typography Questions:** + +- What should the overall tone feel like? (Professional, friendly, modern, classic?) +- How much text content will users read? (Headings only? Long-form content?) +- Any accessibility requirements for font sizes or contrast? +- Any brand fonts we must use? + +**Typography Strategy:** + +- Choose primary and secondary typefaces +- Establish type scale (h1, h2, h3, body, etc.) +- Define line heights and spacing relationships +- Consider readability and accessibility" + +### 4. Establish Spacing and Layout Foundation + +Define the structural foundation: +"**Spacing and Layout Foundation:** + +- How should the overall layout feel? (Dense and efficient? Airy and spacious?) +- What spacing unit should we use? (4px, 8px, 12px base?) +- How much white space should be between elements? +- Should we use a grid system? If so, what column structure? + +**Layout Principles:** + +- [Layout principle 1 based on product type] +- [Layout principle 2 based on user needs] +- [Layout principle 3 based on platform requirements]" + +### 5. Create Visual Foundation Strategy + +Synthesize all visual decisions: +"**Visual Foundation Strategy:** + +**Color System:** + +- [Color strategy based on brand guidelines or generated themes] +- Semantic color mapping (primary, secondary, success, warning, error, etc.) 
+- Accessibility compliance (contrast ratios)
+
+**Typography System:**
+
+- [Typography strategy based on content needs and tone]
+- Type scale and hierarchy
+- Font pairing rationale
+
+**Spacing & Layout:**
+
+- [Spacing strategy based on content density and platform]
+- Grid system approach
+- Component spacing relationships
+
+This foundation will ensure consistency across all our design decisions."
+
+### 6. Generate Visual Foundation Content
+
+Prepare the content to append to the document:
+
+#### Content Structure:
+
+When saving to document, append these Level 2 and Level 3 sections:
+
+```markdown
+## Visual Design Foundation
+
+### Color System
+
+[Color system strategy based on conversation]
+
+### Typography System
+
+[Typography system strategy based on conversation]
+
+### Spacing & Layout Foundation
+
+[Spacing and layout foundation based on conversation]
+
+### Accessibility Considerations
+
+[Accessibility considerations based on conversation]
+```
+
+### 7. Present Content and Menu
+
+Show the generated visual foundation content and present choices:
+"I've established the visual design foundation for {{project_name}}. This provides the building blocks for consistent, beautiful design.
+
+**Here's what I'll add to the document:**
+
+[Show the complete markdown content from step 6]
+
+**What would you like to do?**
+[A] Advanced Elicitation - Let's refine our visual foundation
+[P] Party Mode - Bring design perspectives on visual choices
+[C] Continue - Save this to the document and move to design directions"
+
+### 8. Handle Menu Selection
+
+#### If 'A' (Advanced Elicitation):
+
+- Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml with the current visual foundation content
+- Process the enhanced visual insights that come back
+- Ask user: "Accept these improvements to the visual foundation? (y/n)"
+- If yes: Update content with improvements, then return to A/P/C menu
+- If no: Keep original content, then return to A/P/C menu
+
+#### If 'P' (Party Mode):
+
+- Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md with the current visual foundation
+- Process the collaborative visual insights that come back
+- Ask user: "Accept these changes to the visual foundation? (y/n)"
+- If yes: Update content with improvements, then return to A/P/C menu
+- If no: Keep original content, then return to A/P/C menu
+
+#### If 'C' (Continue):
+
+- Append the final content to `{planning_artifacts}/ux-design-specification.md`
+- Update frontmatter: append step to end of stepsCompleted array
+- Load `./step-09-design-directions.md`
+
+## APPEND TO DOCUMENT:
+
+When user selects 'C', append the content directly to the document using the structure from step 6.
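+
+To make the foundation concrete, here is a minimal design-token sketch assuming an 8px spacing base and the semantic color roles discussed above - every value is illustrative, not a recommendation:
+
+```yaml
+# Illustrative design tokens (all values are placeholders)
+color:
+  primary: "#2563EB"
+  success: "#16A34A"
+  warning: "#D97706"
+  error: "#DC2626"
+  text: "#1E293B" # target at least 4.5:1 contrast on the background color
+typography:
+  family:
+    heading: Inter # example pairing only
+    body: Inter
+  scale: [32, 24, 20, 16, 14] # h1, h2, h3, body, caption (px)
+  lineHeight: 1.5
+spacing:
+  base: 8 # px; spacing values are multiples of this unit
+  scale: [4, 8, 16, 24, 32, 48]
+```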
+ +## SUCCESS METRICS: + +✅ Brand guidelines assessed and incorporated if available +✅ Color system established with accessibility consideration +✅ Typography system defined with appropriate hierarchy +✅ Spacing and layout foundation created +✅ Visual foundation strategy documented +✅ A/P/C menu presented and handled correctly +✅ Content properly appended to document when C selected + +## FAILURE MODES: + +❌ Not checking for existing brand guidelines first +❌ Color palette not aligned with emotional goals +❌ Typography not suitable for content type or readability needs +❌ Spacing system not appropriate for content density +❌ Missing accessibility considerations +❌ Not presenting A/P/C menu after content generation +❌ Appending content without user selecting 'C' + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## NEXT STEP: + +After user selects 'C' and content is saved to document, load `./step-09-design-directions.md` to generate design direction mockups. + +Remember: Do NOT proceed to step-09 until user explicitly selects 'C' from the A/P/C menu and content is saved! diff --git a/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-09-design-directions.md b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-09-design-directions.md new file mode 100644 index 00000000..a50ed503 --- /dev/null +++ b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-09-design-directions.md @@ -0,0 +1,224 @@ +# Step 9: Design Direction Mockups + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without user input + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ ALWAYS treat this as collaborative discovery between UX facilitator and stakeholder +- 📋 YOU ARE A UX FACILITATOR, not a content generator +- 💬 FOCUS on generating and evaluating design direction variations +- 🎯 COLLABORATIVE exploration, not assumption-based design +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show your analysis before taking any action +- ⚠️ Present A/P/C menu after generating design direction content +- 💾 Generate HTML visualizer for design directions +- 📖 Update output file frontmatter, adding this step to the end of the list of stepsCompleted. 
+- 🚫 FORBIDDEN to load next step until C is selected + +## COLLABORATION MENUS (A/P/C): + +This step will generate content and present choices: + +- **A (Advanced Elicitation)**: Use discovery protocols to develop deeper design insights +- **P (Party Mode)**: Bring multiple perspectives to evaluate design directions +- **C (Continue)**: Save the content to the document and proceed to next step + +## PROTOCOL INTEGRATION: + +- When 'A' selected: Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml +- When 'P' selected: Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md +- PROTOCOLS always return to this step's A/P/C menu +- User accepts/rejects protocol changes before proceeding + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from previous steps are available +- Visual foundation from step 8 provides design tokens +- Core experience from step 7 informs layout and interaction design +- Focus on exploring different visual design directions + +## YOUR TASK: + +Generate comprehensive design direction mockups showing different visual approaches for the product. + +## DESIGN DIRECTIONS SEQUENCE: + +### 1. Generate Design Direction Variations + +Create diverse visual explorations: +"I'll generate 6-8 different design direction variations exploring: + +- Different layout approaches and information hierarchy +- Various interaction patterns and visual weights +- Alternative color applications from our foundation +- Different density and spacing approaches +- Various navigation and component arrangements + +Each mockup will show a complete vision for {{project_name}} with all our design decisions applied." + +### 2. Create HTML Design Direction Showcase + +Generate interactive visual exploration: +"🎨 Design Direction Mockups Generated! + +I'm creating a comprehensive HTML design direction showcase at `{planning_artifacts}/ux-design-directions.html` + +**What you'll see:** + +- 6-8 full-screen mockup variations +- Interactive states and hover effects +- Side-by-side comparison tools +- Complete UI examples with real content +- Responsive behavior demonstrations + +Each mockup represents a complete visual direction for your app's look and feel." + +### 3. Present Design Exploration Framework + +Guide evaluation criteria: +"As you explore the design directions, look for: + +✅ **Layout Intuitiveness** - Which information hierarchy matches your priorities? +✅ **Interaction Style** - Which interaction style fits your core experience? +✅ **Visual Weight** - Which visual density feels right for your brand? +✅ **Navigation Approach** - Which navigation pattern matches user expectations? +✅ **Component Usage** - How well do the components support your user journeys? +✅ **Brand Alignment** - Which direction best supports your emotional goals? + +Take your time exploring - this is a crucial decision that will guide all our design work!" + +### 4. Facilitate Design Direction Selection + +Help user choose or combine elements: +"After exploring all the design directions: + +**Which approach resonates most with you?** + +- Pick a favorite direction as-is +- Combine elements from multiple directions +- Request modifications to any direction +- Use one direction as a base and iterate + +**Tell me:** + +- Which layout feels most intuitive for your users? +- Which visual weight matches your brand personality? +- Which interaction style supports your core experience? 
+- Are there elements from different directions you'd like to combine?"
+
+### 5. Document Design Direction Decision
+
+Capture the chosen approach:
+"Based on your exploration, here is my understanding of your design direction preference:
+
+**Chosen Direction:** [Direction number or combination]
+**Key Elements:** [Specific elements you liked]
+**Modifications Needed:** [Any changes requested]
+**Rationale:** [Why this direction works for your product]
+
+This will become our design foundation moving forward. Are we ready to lock this in, or do you want to explore variations?"
+
+### 6. Generate Design Direction Content
+
+Prepare the content to append to the document:
+
+#### Content Structure:
+
+When saving to document, append these Level 2 and Level 3 sections (a hypothetical filled-in example appears at the end of this step):
+
+```markdown
+## Design Direction Decision
+
+### Design Directions Explored
+
+[Summary of design directions explored based on conversation]
+
+### Chosen Direction
+
+[Chosen design direction based on conversation]
+
+### Design Rationale
+
+[Rationale for design direction choice based on conversation]
+
+### Implementation Approach
+
+[Implementation approach based on chosen direction]
+```
+
+### 7. Present Content and Menu
+
+Show the generated design direction content and present choices:
+"I've documented our design direction decision for {{project_name}}. This visual approach will guide all our detailed design work.
+
+**Here's what I'll add to the document:**
+
+[Show the complete markdown content from step 6]
+
+**What would you like to do?**
+[A] Advanced Elicitation - Let's refine our design direction
+[P] Party Mode - Bring different perspectives on visual choices
+[C] Continue - Save this to the document and move to user journey flows"
+
+### 8. Handle Menu Selection
+
+#### If 'A' (Advanced Elicitation):
+
+- Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml with the current design direction content
+- Process the enhanced design insights that come back
+- Ask user: "Accept these improvements to the design direction? (y/n)"
+- If yes: Update content with improvements, then return to A/P/C menu
+- If no: Keep original content, then return to A/P/C menu
+
+#### If 'P' (Party Mode):
+
+- Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md with the current design direction
+- Process the collaborative design insights that come back
+- Ask user: "Accept these changes to the design direction? (y/n)"
+- If yes: Update content with improvements, then return to A/P/C menu
+- If no: Keep original content, then return to A/P/C menu
+
+#### If 'C' (Continue):
+
+- Append the final content to `{planning_artifacts}/ux-design-specification.md`
+- Update frontmatter: append step to end of stepsCompleted array
+- Load `./step-10-user-journeys.md`
+
+## APPEND TO DOCUMENT:
+
+When user selects 'C', append the content directly to the document using the structure from step 6.
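+
+As a hypothetical filled-in example of the structure above (the direction names and rationale are invented for illustration, not outputs of this workflow):
+
+```markdown
+## Design Direction Decision
+
+<!-- Illustrative example; all specifics are hypothetical -->
+
+### Design Directions Explored
+
+Eight directions ranging from dense dashboard layouts to airy, card-based views.
+
+### Chosen Direction
+
+Direction 3 ("Calm Dashboard"), combined with the card grid from Direction 5.
+
+### Design Rationale
+
+The low-density layout matches the calm, focused emotional goals defined earlier, while the card grid keeps key tasks visible at a glance.
+
+### Implementation Approach
+
+Use Direction 3 as the base layout, port the card grid treatment from Direction 5, and validate both against the visual foundation tokens.
+```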
+ +## SUCCESS METRICS: + +✅ Multiple design direction variations generated +✅ HTML showcase created with interactive elements +✅ Design evaluation criteria clearly established +✅ User able to explore and compare directions effectively +✅ Design direction decision made with clear rationale +✅ A/P/C menu presented and handled correctly +✅ Content properly appended to document when C selected + +## FAILURE MODES: + +❌ Not creating enough variation in design directions +❌ Design directions not aligned with established foundation +❌ Missing interactive elements in HTML showcase +❌ Not providing clear evaluation criteria +❌ Rushing decision without thorough exploration +❌ Not presenting A/P/C menu after content generation +❌ Appending content without user selecting 'C' + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## NEXT STEP: + +After user selects 'C' and content is saved to document, load `./step-10-user-journeys.md` to design user journey flows. + +Remember: Do NOT proceed to step-10 until user explicitly selects 'C' from the A/P/C menu and content is saved! diff --git a/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-10-user-journeys.md b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-10-user-journeys.md new file mode 100644 index 00000000..985577f0 --- /dev/null +++ b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-10-user-journeys.md @@ -0,0 +1,241 @@ +# Step 10: User Journey Flows + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without user input + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ ALWAYS treat this as collaborative discovery between UX facilitator and stakeholder +- 📋 YOU ARE A UX FACILITATOR, not a content generator +- 💬 FOCUS on designing user flows and journey interactions +- 🎯 COLLABORATIVE flow design, not assumption-based layouts +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show your analysis before taking any action +- ⚠️ Present A/P/C menu after generating user journey content +- 💾 ONLY save when user chooses C (Continue) +- 📖 Update output file frontmatter, adding this step to the end of the list of stepsCompleted. 
+- 🚫 FORBIDDEN to load next step until C is selected + +## COLLABORATION MENUS (A/P/C): + +This step will generate content and present choices: + +- **A (Advanced Elicitation)**: Use discovery protocols to develop deeper journey insights +- **P (Party Mode)**: Bring multiple perspectives to design user flows +- **C (Continue)**: Save the content to the document and proceed to next step + +## PROTOCOL INTEGRATION: + +- When 'A' selected: Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml +- When 'P' selected: Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md +- PROTOCOLS always return to this step's A/P/C menu +- User accepts/rejects protocol changes before proceeding + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from previous steps are available +- Design direction from step 9 informs flow layout and visual design +- Core experience from step 7 defines key journey interactions +- Focus on designing detailed user flows with Mermaid diagrams + +## YOUR TASK: + +Design detailed user journey flows for critical user interactions. + +## USER JOURNEY FLOWS SEQUENCE: + +### 1. Load PRD User Journeys as Foundation + +Start with user journeys already defined in the PRD: +"Great! Since we have the PRD available, let's build on the user journeys already documented there. + +**Existing User Journeys from PRD:** +I've already loaded these user journeys from your PRD: +[Journey narratives from PRD input documents] + +These journeys tell us **who** users are and **why** they take certain actions. Now we need to design **how** those journeys work in detail. + +**Critical Journeys to Design Flows For:** +Looking at the PRD journeys, I need to design detailed interaction flows for: + +- [Critical journey 1 identified from PRD narratives] +- [Critical journey 2 identified from PRD narratives] +- [Critical journey 3 identified from PRD narratives] + +The PRD gave us the stories - now we design the mechanics!" + +### 2. Design Each Journey Flow + +For each critical journey, design detailed flow: + +**For [Journey Name]:** +"Let's design the flow for users accomplishing [journey goal]. + +**Flow Design Questions:** + +- How do users start this journey? (entry point) +- What information do they need at each step? +- What decisions do they need to make? +- How do they know they're progressing successfully? +- What does success look like for this journey? +- Where might they get confused or stuck? +- How do they recover from errors?" + +### 3. Create Flow Diagrams + +Visualize each journey with Mermaid diagrams: +"I'll create detailed flow diagrams for each journey showing: + +**[Journey Name] Flow:** + +- Entry points and triggers +- Decision points and branches +- Success and failure paths +- Error recovery mechanisms +- Progressive disclosure of information + +Each diagram will map the complete user experience from start to finish." + +### 4. Optimize for Efficiency and Delight + +Refine flows for optimal user experience: +"**Flow Optimization:** +For each journey, let's ensure we're: + +- Minimizing steps to value (getting users to success quickly) +- Reducing cognitive load at each decision point +- Providing clear feedback and progress indicators +- Creating moments of delight or accomplishment +- Handling edge cases and error recovery gracefully + +**Specific Optimizations:** + +- [Optimization 1 for journey efficiency] +- [Optimization 2 for user delight] +- [Optimization 3 for error handling]" + +### 5. 
Document Journey Patterns
+
+Extract reusable patterns across journeys:
+"**Journey Patterns:**
+Across these flows, I'm seeing some common patterns we can standardize:
+
+**Navigation Patterns:**
+
+- [Navigation pattern 1]
+- [Navigation pattern 2]
+
+**Decision Patterns:**
+
+- [Decision pattern 1]
+- [Decision pattern 2]
+
+**Feedback Patterns:**
+
+- [Feedback pattern 1]
+- [Feedback pattern 2]
+
+These patterns will ensure consistency across all user experiences."
+
+### 6. Generate User Journey Content
+
+Prepare the content to append to the document:
+
+#### Content Structure:
+
+When saving to document, append these Level 2 and Level 3 sections (an illustrative Mermaid flow sketch appears at the end of this step):
+
+```markdown
+## User Journey Flows
+
+### [Journey 1 Name]
+
+[Journey 1 description and Mermaid diagram]
+
+### [Journey 2 Name]
+
+[Journey 2 description and Mermaid diagram]
+
+### Journey Patterns
+
+[Journey patterns identified based on conversation]
+
+### Flow Optimization Principles
+
+[Flow optimization principles based on conversation]
+```
+
+### 7. Present Content and Menu
+
+Show the generated user journey content and present choices:
+"I've designed detailed user journey flows for {{project_name}}. These flows will guide the detailed design of each user interaction.
+
+**Here's what I'll add to the document:**
+
+[Show the complete markdown content from step 6]
+
+**What would you like to do?**
+[A] Advanced Elicitation - Let's refine our user journey designs
+[P] Party Mode - Bring different perspectives on user flows
+[C] Continue - Save this to the document and move to component strategy"
+
+### 8. Handle Menu Selection
+
+#### If 'A' (Advanced Elicitation):
+
+- Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml with the current user journey content
+- Process the enhanced journey insights that come back
+- Ask user: "Accept these improvements to the user journeys? (y/n)"
+- If yes: Update content with improvements, then return to A/P/C menu
+- If no: Keep original content, then return to A/P/C menu
+
+#### If 'P' (Party Mode):
+
+- Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md with the current user journeys
+- Process the collaborative journey insights that come back
+- Ask user: "Accept these changes to the user journeys? (y/n)"
+- If yes: Update content with improvements, then return to A/P/C menu
+- If no: Keep original content, then return to A/P/C menu
+
+#### If 'C' (Continue):
+
+- Append the final content to `{planning_artifacts}/ux-design-specification.md`
+- Update frontmatter: append step to end of stepsCompleted array
+- Load `./step-11-component-strategy.md`
+
+## APPEND TO DOCUMENT:
+
+When user selects 'C', append the content directly to the document using the structure from step 6.
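+
+The flow diagrams called for above are well suited to Mermaid flowcharts. As an illustrative sketch only (the journey, screens, and labels are hypothetical, not drawn from any PRD), a sign-up journey might be diagrammed like this:
+
+```mermaid
+%% Hypothetical sign-up journey; replace nodes with the journey's real steps
+flowchart TD
+    entry([Open app]) --> auth{Signed in?}
+    auth -- Yes --> home[Home screen]
+    auth -- No --> form[Sign-up form]
+    form --> valid{Input valid?}
+    valid -- No --> form
+    valid -- Yes --> welcome["Welcome / first-run setup"]
+    welcome --> home
+```
+
+Keeping one diagram per journey, with explicit error-recovery loops like the validation branch above, makes gaps in a flow easy to spot during review.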
+ +## SUCCESS METRICS: + +✅ Critical user journeys identified and designed +✅ Detailed flow diagrams created for each journey +✅ Flows optimized for efficiency and user delight +✅ Common journey patterns extracted and documented +✅ A/P/C menu presented and handled correctly +✅ Content properly appended to document when C selected + +## FAILURE MODES: + +❌ Not identifying all critical user journeys +❌ Flows too complex or not optimized for user success +❌ Missing error recovery paths +❌ Not extracting reusable patterns across journeys +❌ Flow diagrams unclear or incomplete +❌ Not presenting A/P/C menu after content generation +❌ Appending content without user selecting 'C' + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## NEXT STEP: + +After user selects 'C' and content is saved to document, load `./step-11-component-strategy.md` to define component library strategy. + +Remember: Do NOT proceed to step-11 until user explicitly selects 'C' from the A/P/C menu and content is saved! diff --git a/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-11-component-strategy.md b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-11-component-strategy.md new file mode 100644 index 00000000..deef19b7 --- /dev/null +++ b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-11-component-strategy.md @@ -0,0 +1,248 @@ +# Step 11: Component Strategy + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without user input + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ ALWAYS treat this as collaborative discovery between UX facilitator and stakeholder +- 📋 YOU ARE A UX FACILITATOR, not a content generator +- 💬 FOCUS on defining component library strategy and custom components +- 🎯 COLLABORATIVE component planning, not assumption-based design +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show your analysis before taking any action +- ⚠️ Present A/P/C menu after generating component strategy content +- 💾 ONLY save when user chooses C (Continue) +- 📖 Update output file frontmatter, adding this step to the end of the list of stepsCompleted. 
+- 🚫 FORBIDDEN to load next step until C is selected + +## COLLABORATION MENUS (A/P/C): + +This step will generate content and present choices: + +- **A (Advanced Elicitation)**: Use discovery protocols to develop deeper component insights +- **P (Party Mode)**: Bring multiple perspectives to define component strategy +- **C (Continue)**: Save the content to the document and proceed to next step + +## PROTOCOL INTEGRATION: + +- When 'A' selected: Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml +- When 'P' selected: Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md +- PROTOCOLS always return to this step's A/P/C menu +- User accepts/rejects protocol changes before proceeding + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from previous steps are available +- Design system choice from step 6 determines available components +- User journeys from step 10 identify component needs +- Focus on defining custom components and implementation strategy + +## YOUR TASK: + +Define component library strategy and design custom components not covered by the design system. + +## COMPONENT STRATEGY SEQUENCE: + +### 1. Analyze Design System Coverage + +Review what components are available vs. needed: +"Based on our chosen design system [design system from step 6], let's identify what components are already available and what we need to create custom. + +**Available from Design System:** +[List of components available in chosen design system] + +**Components Needed for {{project_name}}:** +Looking at our user journeys and design direction, we need: + +- [Component need 1 from journey analysis] +- [Component need 2 from design requirements] +- [Component need 3 from core experience] + +**Gap Analysis:** + +- [Gap 1 - needed but not available] +- [Gap 2 - needed but not available]" + +### 2. Design Custom Components + +For each custom component needed, design thoroughly: + +**For each custom component:** +"**[Component Name] Design:** + +**Purpose:** What does this component do for users? +**Content:** What information or data does it display? +**Actions:** What can users do with this component? +**States:** What different states does it have? (default, hover, active, disabled, error, etc.) +**Variants:** Are there different sizes or styles needed? +**Accessibility:** What ARIA labels and keyboard support needed? + +Let's walk through each custom component systematically." + +### 3. Document Component Specifications + +Create detailed specifications for each component: + +**Component Specification Template:** + +```markdown +### [Component Name] + +**Purpose:** [Clear purpose statement] +**Usage:** [When and how to use] +**Anatomy:** [Visual breakdown of parts] +**States:** [All possible states with descriptions] +**Variants:** [Different sizes/styles if applicable] +**Accessibility:** [ARIA labels, keyboard navigation] +**Content Guidelines:** [What content works best] +**Interaction Behavior:** [How users interact] +``` + +### 4. 
Define Component Strategy + +Establish overall component library approach: +"**Component Strategy:** + +**Foundation Components:** (from design system) + +- [Foundation component 1] +- [Foundation component 2] + +**Custom Components:** (designed in this step) + +- [Custom component 1 with rationale] +- [Custom component 2 with rationale] + +**Implementation Approach:** + +- Build custom components using design system tokens +- Ensure consistency with established patterns +- Follow accessibility best practices +- Create reusable patterns for common use cases" + +### 5. Plan Implementation Roadmap + +Define how and when to build components: +"**Implementation Roadmap:** + +**Phase 1 - Core Components:** + +- [Component 1] - needed for [critical flow] +- [Component 2] - needed for [critical flow] + +**Phase 2 - Supporting Components:** + +- [Component 3] - enhances [user experience] +- [Component 4] - supports [design pattern] + +**Phase 3 - Enhancement Components:** + +- [Component 5] - optimizes [user journey] +- [Component 6] - adds [special feature] + +This roadmap helps prioritize development based on user journey criticality." + +### 6. Generate Component Strategy Content + +Prepare the content to append to the document: + +#### Content Structure: + +When saving to document, append these Level 2 and Level 3 sections: + +```markdown +## Component Strategy + +### Design System Components + +[Analysis of available design system components based on conversation] + +### Custom Components + +[Custom component specifications based on conversation] + +### Component Implementation Strategy + +[Component implementation strategy based on conversation] + +### Implementation Roadmap + +[Implementation roadmap based on conversation] +``` + +### 7. Present Content and Menu + +Show the generated component strategy content and present choices: +"I've defined the component strategy for {{project_name}}. This balances using proven design system components with custom components for your unique needs. + +**Here's what I'll add to the document:** + +[Show the complete markdown content from step 6] + +**What would you like to do?** +[A] Advanced Elicitation - Let's refine our component strategy +[P] Party Mode - Bring technical perspectives on component design +[C] Continue - Save this to the document and move to UX patterns + +### 8. Handle Menu Selection + +#### If 'A' (Advanced Elicitation): + +- Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml with the current component strategy content +- Process the enhanced component insights that come back +- Ask user: "Accept these improvements to the component strategy? (y/n)" +- If yes: Update content with improvements, then return to A/P/C menu +- If no: Keep original content, then return to A/P/C menu + +#### If 'P' (Party Mode): + +- Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md with the current component strategy +- Process the collaborative component insights that come back +- Ask user: "Accept these changes to the component strategy? 
(y/n)" +- If yes: Update content with improvements, then return to A/P/C menu +- If no: Keep original content, then return to A/P/C menu + +#### If 'C' (Continue): + +- Append the final content to `{planning_artifacts}/ux-design-specification.md` +- Update frontmatter: append step to end of stepsCompleted array +- Load `./step-12-ux-patterns.md` + +## APPEND TO DOCUMENT: + +When user selects 'C', append the content directly to the document using the structure from step 6. + +## SUCCESS METRICS: + +✅ Design system coverage properly analyzed +✅ All custom components thoroughly specified +✅ Component strategy clearly defined +✅ Implementation roadmap prioritized by user need +✅ Accessibility considered for all components +✅ A/P/C menu presented and handled correctly +✅ Content properly appended to document when C selected + +## FAILURE MODES: + +❌ Not analyzing design system coverage properly +❌ Custom components not thoroughly specified +❌ Missing accessibility considerations +❌ Component strategy not aligned with user journeys +❌ Implementation roadmap not prioritized effectively +❌ Not presenting A/P/C menu after content generation +❌ Appending content without user selecting 'C' + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## NEXT STEP: + +After user selects 'C' and content is saved to document, load `./step-12-ux-patterns.md` to define UX consistency patterns. + +Remember: Do NOT proceed to step-12 until user explicitly selects 'C' from the A/P/C menu and content is saved! diff --git a/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-12-ux-patterns.md b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-12-ux-patterns.md new file mode 100644 index 00000000..4708b52a --- /dev/null +++ b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-12-ux-patterns.md @@ -0,0 +1,237 @@ +# Step 12: UX Consistency Patterns + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without user input + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ ALWAYS treat this as collaborative discovery between UX facilitator and stakeholder +- 📋 YOU ARE A UX FACILITATOR, not a content generator +- 💬 FOCUS on establishing consistency patterns for common UX situations +- 🎯 COLLABORATIVE pattern definition, not assumption-based design +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show your analysis before taking any action +- ⚠️ Present A/P/C menu after generating UX patterns content +- 💾 ONLY save when user chooses C (Continue) +- 📖 Update output file frontmatter, adding this step to the end of the list of stepsCompleted. 
+- 🚫 FORBIDDEN to load next step until C is selected + +## COLLABORATION MENUS (A/P/C): + +This step will generate content and present choices: + +- **A (Advanced Elicitation)**: Use discovery protocols to develop deeper pattern insights +- **P (Party Mode)**: Bring multiple perspectives to define UX patterns +- **C (Continue)**: Save the content to the document and proceed to next step + +## PROTOCOL INTEGRATION: + +- When 'A' selected: Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml +- When 'P' selected: Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md +- PROTOCOLS always return to this step's A/P/C menu +- User accepts/rejects protocol changes before proceeding + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from previous steps are available +- Component strategy from step 11 informs pattern decisions +- User journeys from step 10 identify common pattern needs +- Focus on consistency patterns for common UX situations + +## YOUR TASK: + +Establish UX consistency patterns for common situations like buttons, forms, navigation, and feedback. + +## UX PATTERNS SEQUENCE: + +### 1. Identify Pattern Categories + +Determine which patterns need definition for your product: +"Let's establish consistency patterns for how {{project_name}} behaves in common situations. + +**Pattern Categories to Define:** + +- Button hierarchy and actions +- Feedback patterns (success, error, warning, info) +- Form patterns and validation +- Navigation patterns +- Modal and overlay patterns +- Empty states and loading states +- Search and filtering patterns + +Which categories are most critical for your product? We can go through each thoroughly or focus on the most important ones." + +### 2. Define Critical Patterns First + +Focus on patterns most relevant to your product: + +**For [Critical Pattern Category]:** +"**[Pattern Type] Patterns:** +What should users see/do when they need to [pattern action]? + +**Considerations:** + +- Visual hierarchy (primary vs. secondary actions) +- Feedback mechanisms +- Error recovery +- Accessibility requirements +- Mobile vs. desktop considerations + +**Examples:** + +- [Example 1 for this pattern type] +- [Example 2 for this pattern type] + +How should {{project_name}} handle [pattern type] interactions?" + +### 3. Establish Pattern Guidelines + +Document specific design decisions: + +**Pattern Guidelines Template:** + +```markdown +### [Pattern Type] + +**When to Use:** [Clear usage guidelines] +**Visual Design:** [How it should look] +**Behavior:** [How it should interact] +**Accessibility:** [A11y requirements] +**Mobile Considerations:** [Mobile-specific needs] +**Variants:** [Different states or styles if applicable] +``` + +### 4. Design System Integration + +Ensure patterns work with chosen design system: +"**Integration with [Design System]:** + +- How do these patterns complement our design system components? +- What customizations are needed? +- How do we maintain consistency while meeting unique needs? + +**Custom Pattern Rules:** + +- [Custom rule 1] +- [Custom rule 2] +- [Custom rule 3]" + +### 5. Create Pattern Documentation + +Generate comprehensive pattern library: + +**Pattern Library Structure:** + +- Clear usage guidelines for each pattern +- Visual examples and specifications +- Implementation notes for developers +- Accessibility checklists +- Mobile-first considerations + +### 6. 
Generate UX Patterns Content
+
+Prepare the content to append to the document:
+
+#### Content Structure:
+
+When saving to document, append these Level 2 and Level 3 sections (a hypothetical filled-in pattern example appears at the end of this step):
+
+```markdown
+## UX Consistency Patterns
+
+### Button Hierarchy
+
+[Button hierarchy patterns based on conversation]
+
+### Feedback Patterns
+
+[Feedback patterns based on conversation]
+
+### Form Patterns
+
+[Form patterns based on conversation]
+
+### Navigation Patterns
+
+[Navigation patterns based on conversation]
+
+### Additional Patterns
+
+[Additional patterns based on conversation]
+```
+
+### 7. Present Content and Menu
+
+Show the generated UX patterns content and present choices:
+"I've established UX consistency patterns for {{project_name}}. These patterns ensure users have a consistent, predictable experience across all interactions.
+
+**Here's what I'll add to the document:**
+
+[Show the complete markdown content from step 6]
+
+**What would you like to do?**
+[A] Advanced Elicitation - Let's refine our UX patterns
+[P] Party Mode - Bring different perspectives on consistency patterns
+[C] Continue - Save this to the document and move to responsive design"
+
+### 8. Handle Menu Selection
+
+#### If 'A' (Advanced Elicitation):
+
+- Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml with the current UX patterns content
+- Process the enhanced pattern insights that come back
+- Ask user: "Accept these improvements to the UX patterns? (y/n)"
+- If yes: Update content with improvements, then return to A/P/C menu
+- If no: Keep original content, then return to A/P/C menu
+
+#### If 'P' (Party Mode):
+
+- Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md with the current UX patterns
+- Process the collaborative pattern insights that come back
+- Ask user: "Accept these changes to the UX patterns? (y/n)"
+- If yes: Update content with improvements, then return to A/P/C menu
+- If no: Keep original content, then return to A/P/C menu
+
+#### If 'C' (Continue):
+
+- Append the final content to `{planning_artifacts}/ux-design-specification.md`
+- Update frontmatter: append step to end of stepsCompleted array
+- Load `./step-13-responsive-accessibility.md`
+
+## APPEND TO DOCUMENT:
+
+When user selects 'C', append the content directly to the document using the structure from step 6.
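+
+As a hypothetical filled-in pattern using this step's template (every value below is an illustrative assumption, not a decision this workflow has made):
+
+```markdown
+### Button Hierarchy
+
+<!-- Illustrative example; adjust to the project's actual decisions -->
+
+**When to Use:** One primary action per view; secondary actions support it; destructive actions require confirmation.
+**Visual Design:** Primary uses the solid brand color from the visual foundation; secondary is outlined; destructive uses the semantic error color.
+**Behavior:** Buttons disable while an action is in flight, with a loading indicator replacing the label.
+**Accessibility:** Minimum 44x44px touch target, 4.5:1 label contrast, visible focus ring.
+**Mobile Considerations:** Primary buttons expand to full width on small screens.
+**Variants:** Small, medium, and large sizes sharing the same spacing scale.
+```
+
+Concrete values like these make a pattern testable: a reviewer can check any screen against the rule instead of interpreting a vague guideline.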
+ +## SUCCESS METRICS: + +✅ Critical pattern categories identified and prioritized +✅ Consistency patterns clearly defined and documented +✅ Patterns integrated with chosen design system +✅ Accessibility considerations included for all patterns +✅ Mobile-first approach incorporated +✅ A/P/C menu presented and handled correctly +✅ Content properly appended to document when C selected + +## FAILURE MODES: + +❌ Not identifying the most critical pattern categories +❌ Patterns too generic or not actionable +❌ Missing accessibility considerations +❌ Patterns not aligned with design system +❌ Not considering mobile differences +❌ Not presenting A/P/C menu after content generation +❌ Appending content without user selecting 'C' + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## NEXT STEP: + +After user selects 'C' and content is saved to document, load `./step-13-responsive-accessibility.md` to define responsive design and accessibility strategy. + +Remember: Do NOT proceed to step-13 until user explicitly selects 'C' from the A/P/C menu and content is saved! diff --git a/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-13-responsive-accessibility.md b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-13-responsive-accessibility.md new file mode 100644 index 00000000..80b81d4c --- /dev/null +++ b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-13-responsive-accessibility.md @@ -0,0 +1,264 @@ +# Step 13: Responsive Design & Accessibility + +## MANDATORY EXECUTION RULES (READ FIRST): + +- 🛑 NEVER generate content without user input + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- ✅ ALWAYS treat this as collaborative discovery between UX facilitator and stakeholder +- 📋 YOU ARE A UX FACILITATOR, not a content generator +- 💬 FOCUS on responsive design strategy and accessibility compliance +- 🎯 COLLABORATIVE strategy definition, not assumption-based design +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show your analysis before taking any action +- ⚠️ Present A/P/C menu after generating responsive/accessibility content +- 💾 ONLY save when user chooses C (Continue) +- 📖 Update output file frontmatter, adding this step to the end of the list of stepsCompleted. 
+- 🚫 FORBIDDEN to load next step until C is selected + +## COLLABORATION MENUS (A/P/C): + +This step will generate content and present choices: + +- **A (Advanced Elicitation)**: Use discovery protocols to develop deeper responsive/accessibility insights +- **P (Party Mode)**: Bring multiple perspectives to define responsive/accessibility strategy +- **C (Continue)**: Save the content to the document and proceed to final step + +## PROTOCOL INTEGRATION: + +- When 'A' selected: Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml +- When 'P' selected: Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md +- PROTOCOLS always return to this step's A/P/C menu +- User accepts/rejects protocol changes before proceeding + +## CONTEXT BOUNDARIES: + +- Current document and frontmatter from previous steps are available +- Platform requirements from step 3 inform responsive design +- Design direction from step 9 influences responsive layout choices +- Focus on cross-device adaptation and accessibility compliance + +## YOUR TASK: + +Define responsive design strategy and accessibility requirements for the product. + +## RESPONSIVE & ACCESSIBILITY SEQUENCE: + +### 1. Define Responsive Strategy + +Establish how the design adapts across devices: +"Let's define how {{project_name}} adapts across different screen sizes and devices. + +**Responsive Design Questions:** + +**Desktop Strategy:** + +- How should we use extra screen real estate? +- Multi-column layouts, side navigation, or content density? +- What desktop-specific features can we include? + +**Tablet Strategy:** + +- Should we use simplified layouts or touch-optimized interfaces? +- How do gestures and touch interactions work on tablets? +- What's the optimal information density for tablet screens? + +**Mobile Strategy:** + +- Bottom navigation or hamburger menu? +- How do layouts collapse on small screens? +- What's the most critical information to show mobile-first?" + +### 2. Establish Breakpoint Strategy + +Define when and how layouts change: +"**Breakpoint Strategy:** +We need to define screen size breakpoints where layouts adapt. + +**Common Breakpoints:** + +- Mobile: 320px - 767px +- Tablet: 768px - 1023px +- Desktop: 1024px+ + +**For {{project_name}}, should we:** + +- Use standard breakpoints or custom ones? +- Focus on mobile-first or desktop-first design? +- Have specific breakpoints for your key use cases?" + +### 3. Design Accessibility Strategy + +Define accessibility requirements and compliance level: +"**Accessibility Strategy:** +What level of WCAG compliance does {{project_name}} need? + +**WCAG Levels:** + +- **Level A (Basic)** - Essential accessibility for legal compliance +- **Level AA (Recommended)** - Industry standard for good UX +- **Level AAA (Highest)** - Exceptional accessibility (rarely needed) + +**Based on your product:** + +- [Recommendation based on user base, legal requirements, etc.] + +**Key Accessibility Considerations:** + +- Color contrast ratios (4.5:1 for normal text) +- Keyboard navigation support +- Screen reader compatibility +- Touch target sizes (minimum 44x44px) +- Focus indicators and skip links" + +### 4. 
Define Testing Strategy + +Plan how to ensure responsive design and accessibility: +"**Testing Strategy:** + +**Responsive Testing:** + +- Device testing on actual phones/tablets +- Browser testing across Chrome, Firefox, Safari, Edge +- Real device network performance testing + +**Accessibility Testing:** + +- Automated accessibility testing tools +- Screen reader testing (VoiceOver, NVDA, JAWS) +- Keyboard-only navigation testing +- Color blindness simulation testing + +**User Testing:** + +- Include users with disabilities in testing +- Test with diverse assistive technologies +- Validate with actual target devices" + +### 5. Document Implementation Guidelines + +Create specific guidelines for developers: +"**Implementation Guidelines:** + +**Responsive Development:** + +- Use relative units (rem, %, vw, vh) over fixed pixels +- Implement mobile-first media queries +- Test touch targets and gesture areas +- Optimize images and assets for different devices + +**Accessibility Development:** + +- Semantic HTML structure +- ARIA labels and roles +- Keyboard navigation implementation +- Focus management and skip links +- High contrast mode support" + +### 6. Generate Responsive & Accessibility Content + +Prepare the content to append to the document: + +#### Content Structure: + +When saving to document, append these Level 2 and Level 3 sections: + +```markdown +## Responsive Design & Accessibility + +### Responsive Strategy + +[Responsive strategy based on conversation] + +### Breakpoint Strategy + +[Breakpoint strategy based on conversation] + +### Accessibility Strategy + +[Accessibility strategy based on conversation] + +### Testing Strategy + +[Testing strategy based on conversation] + +### Implementation Guidelines + +[Implementation guidelines based on conversation] +``` + +### 7. Present Content and Menu + +Show the generated responsive and accessibility content and present choices: +"I've defined the responsive design and accessibility strategy for {{project_name}}. This ensures your product works beautifully across all devices and is accessible to all users. + +**Here's what I'll add to the document:** + +[Show the complete markdown content from step 6] + +**What would you like to do?** +[A] Advanced Elicitation - Let's refine our responsive/accessibility strategy +[P] Party Mode - Bring different perspectives on inclusive design +[C] Continue - Save this to the document and complete the workflow + +### 8. Handle Menu Selection + +#### If 'A' (Advanced Elicitation): + +- Read fully and follow: {project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml with the current responsive/accessibility content +- Process the enhanced insights that come back +- Ask user: "Accept these improvements to the responsive/accessibility strategy? (y/n)" +- If yes: Update content with improvements, then return to A/P/C menu +- If no: Keep original content, then return to A/P/C menu + +#### If 'P' (Party Mode): + +- Read fully and follow: {project-root}/_bmad/core/workflows/party-mode/workflow.md with the current responsive/accessibility strategy +- Process the collaborative insights that come back +- Ask user: "Accept these changes to the responsive/accessibility strategy? 
(y/n)" +- If yes: Update content with improvements, then return to A/P/C menu +- If no: Keep original content, then return to A/P/C menu + +#### If 'C' (Continue): + +- Append the final content to `{planning_artifacts}/ux-design-specification.md` +- Update frontmatter: append step to end of stepsCompleted array +- Load `./step-14-complete.md` + +## APPEND TO DOCUMENT: + +When user selects 'C', append the content directly to the document using the structure from step 6. + +## SUCCESS METRICS: + +✅ Responsive strategy clearly defined for all device types +✅ Appropriate breakpoint strategy established +✅ Accessibility requirements determined and documented +✅ Comprehensive testing strategy planned +✅ Implementation guidelines provided for development team +✅ A/P/C menu presented and handled correctly +✅ Content properly appended to document when C selected + +## FAILURE MODES: + +❌ Not considering all device types and screen sizes +❌ Accessibility requirements not properly researched +❌ Testing strategy not comprehensive enough +❌ Implementation guidelines too generic or unclear +❌ Not addressing specific accessibility challenges for your product +❌ Not presenting A/P/C menu after content generation +❌ Appending content without user selecting 'C' + +❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions +❌ **CRITICAL**: Proceeding with 'C' without fully reading and understanding the next step file +❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols + +## NEXT STEP: + +After user selects 'C' and content is saved to document, load `./step-14-complete.md` to finalize the UX design workflow. + +Remember: Do NOT proceed to step-14 until user explicitly selects 'C' from the A/P/C menu and content is saved! diff --git a/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-14-complete.md b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-14-complete.md new file mode 100644 index 00000000..fe784788 --- /dev/null +++ b/src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-14-complete.md @@ -0,0 +1,171 @@ +# Step 14: Workflow Completion + +## MANDATORY EXECUTION RULES (READ FIRST): + +- ✅ THIS IS A FINAL STEP - Workflow completion required + +- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete decisions +- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding +- 🛑 NO content generation - this is a wrap-up step +- 📋 FINALIZE document and update workflow status +- 💬 FOCUS on completion, validation, and next steps +- 🎯 UPDATE workflow status files with completion information +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## EXECUTION PROTOCOLS: + +- 🎯 Show your analysis before taking any action +- 💾 Update the main workflow status file with completion information +- 📖 Suggest potential next workflow steps for the user +- 🚫 DO NOT load additional steps after this one + +## TERMINATION STEP PROTOCOLS: + +- This is a FINAL step - workflow completion required +- 📖 Update output file frontmatter, adding this step to the end of the list of stepsCompleted to indicate all is finished.. 
+- Output completion summary and next step guidance
+- Update the main workflow status file with finalized document
+- Suggest potential next workflow steps for the user
+- Mark workflow as complete in status tracking
+
+## CONTEXT BOUNDARIES:
+
+- Complete UX design specification is available from all previous steps
+- Workflow frontmatter shows all completed steps
+- All collaborative content has been generated and saved
+- Focus on completion, validation, and next steps
+
+## YOUR TASK:
+
+Complete the UX design workflow, update status files, and suggest next steps for the project.
+
+## WORKFLOW COMPLETION SEQUENCE:
+
+### 1. Announce Workflow Completion
+
+Inform user that the UX design is complete:
+"🎉 **UX Design Complete, {{user_name}}!**
+
+I've successfully collaborated with you to create a comprehensive UX design specification for {{project_name}}.
+
+**What we've accomplished:**
+
+- ✅ Project understanding and user insights
+- ✅ Core experience and emotional response definition
+- ✅ UX pattern analysis and inspiration
+- ✅ Design system choice and implementation strategy
+- ✅ Core interaction definition and experience mechanics
+- ✅ Visual design foundation (colors, typography, spacing)
+- ✅ Design direction mockups and visual explorations
+- ✅ User journey flows and interaction design
+- ✅ Component strategy and custom component specifications
+- ✅ UX consistency patterns for common interactions
+- ✅ Responsive design and accessibility strategy
+
+**The complete UX design specification is now available at:** `{planning_artifacts}/ux-design-specification.md`
+
+**Supporting Visual Assets:**
+
+- Color themes visualizer: `{planning_artifacts}/ux-color-themes.html`
+- Design directions mockups: `{planning_artifacts}/ux-design-directions.html`
+
+This specification is now ready to guide visual design, implementation, and development."
+
+### 2. Workflow Status Update
+
+Update the main workflow status file (a sketch of this update appears at the end of this sequence):
+
+- Load `{status_file}` from workflow configuration (if exists)
+- Update workflow_status["create-ux-design"] = "{default_output_file}"
+- Save file, preserving all comments and structure
+- Mark current timestamp as completion time
+
+### 3. Suggest Next Steps
+
+UX Design complete. Read fully and follow: `_bmad/core/tasks/bmad-help.md` with argument `Create UX`.
+
+### 4. Final Completion Confirmation
+
+Congratulate the user on the UX design specification you completed together.
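+
+As a sketch of the status update in step 2 of the sequence above (the key layout of `{status_file}` is an assumption here; preserve the real file's structure and comments when saving):
+
+```yaml
+# {status_file} after completion; the timestamp comment is illustrative
+workflow_status:
+  create-ux-design: '{planning_artifacts}/ux-design-specification.md' # completed 2026-01-15
+```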
+
+## SUCCESS METRICS:
+
+✅ UX design specification contains all required sections
+✅ All collaborative content properly saved to document
+✅ Workflow status file updated with completion information
+✅ Clear next step guidance provided to user
+✅ Document quality validation completed
+✅ User acknowledges completion and understands next options
+
+## FAILURE MODES:
+
+❌ Not updating workflow status file with completion information
+❌ Missing clear next step guidance for user
+❌ Not confirming document completeness with user
+❌ Workflow not properly marked as complete in status tracking
+❌ User unclear about what happens next
+
+❌ **CRITICAL**: Reading only partial step file - leads to incomplete understanding and poor decisions
+❌ **CRITICAL**: Loading additional step files after this final step - the workflow ends here
+❌ **CRITICAL**: Making decisions without complete understanding of step requirements and protocols
+
+## WORKFLOW COMPLETION CHECKLIST:
+
+### Design Specification Complete:
+
+- [ ] Executive summary and project understanding
+- [ ] Core experience and emotional response definition
+- [ ] UX pattern analysis and inspiration
+- [ ] Design system choice and strategy
+- [ ] Core interaction mechanics definition
+- [ ] Visual design foundation (colors, typography, spacing)
+- [ ] Design direction decisions and mockups
+- [ ] User journey flows and interaction design
+- [ ] Component strategy and specifications
+- [ ] UX consistency patterns documentation
+- [ ] Responsive design and accessibility strategy
+
+### Process Complete:
+
+- [ ] All steps completed with user confirmation
+- [ ] All content saved to specification document
+- [ ] Frontmatter properly updated with all steps
+- [ ] Workflow status file updated with completion
+- [ ] Next steps clearly communicated
+
+## NEXT STEPS GUIDANCE:
+
+**Immediate Options:**
+
+1. **Wireframe Generation** - Create low-fidelity layouts based on UX spec
+2. **Interactive Prototype** - Build clickable prototypes for testing
+3. **Solution Architecture** - Technical design with UX context
+4. **Figma Visual Design** - High-fidelity UI implementation
+5. **Epic Creation** - Break down UX requirements for development
+
+**Recommended Sequence:**
+For design-focused teams: Wireframes → Prototypes → Figma Design → Development
+For technical teams: Architecture → Epic Creation → Development
+
+Consider team capacity, timeline, and whether user validation is needed before implementation.
+
+## WORKFLOW FINALIZATION:
+
+- Set `lastStep = 14` in document frontmatter
+- Update workflow status file with completion timestamp
+- Provide completion summary to user
+- Do NOT load any additional steps
+
+## FINAL REMINDER:
+
+This UX design workflow is now complete. The specification serves as the foundation for all visual and development work. All design decisions, patterns, and requirements are documented to ensure consistent, accessible, and user-centered implementation.
+
+**Congratulations on completing the UX Design Specification for {{project_name}}!** 🎉
+
+**Core Deliverables:**
+
+- ✅ UX Design Specification: `{planning_artifacts}/ux-design-specification.md`
+- ✅ Color Themes Visualizer: `{planning_artifacts}/ux-color-themes.html`
+- ✅ Design Directions: `{planning_artifacts}/ux-design-directions.html`
diff --git a/src/bmm/workflows/2-plan-workflows/create-ux-design/ux-design-template.md b/src/bmm/workflows/2-plan-workflows/create-ux-design/ux-design-template.md
new file mode 100644
index 00000000..aeed9dc5
--- /dev/null
+++ b/src/bmm/workflows/2-plan-workflows/create-ux-design/ux-design-template.md
@@ -0,0 +1,13 @@
+---
+stepsCompleted: []
+inputDocuments: []
+---
+
+# UX Design Specification {{project_name}}
+
+**Author:** {{user_name}}
+**Date:** {{date}}
+
+---
+
+
diff --git a/src/bmm/workflows/2-plan-workflows/create-ux-design/workflow.md b/src/bmm/workflows/2-plan-workflows/create-ux-design/workflow.md
new file mode 100644
index 00000000..d74cb487
--- /dev/null
+++ b/src/bmm/workflows/2-plan-workflows/create-ux-design/workflow.md
@@ -0,0 +1,43 @@
+---
+name: create-ux-design
+description: Work with a peer UX Design expert to plan your application's UX patterns, look and feel.
+web_bundle: true
+---
+
+# Create UX Design Workflow
+
+**Goal:** Create comprehensive UX design specifications through collaborative visual exploration and informed decision-making, where you act as a UX facilitator working with a product stakeholder.
+
+---
+
+## WORKFLOW ARCHITECTURE
+
+This uses **micro-file architecture** for disciplined execution:
+
+- Each step is a self-contained file with embedded rules
+- Sequential progression with user control at each step
+- Document state tracked in frontmatter
+- Append-only document building through conversation
+
+---
+
+## INITIALIZATION
+
+### Configuration Loading
+
+Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve (an illustrative sketch appears at the end of this file):
+
+- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
+- `communication_language`, `document_output_language`, `user_skill_level`
+- `date` as system-generated current datetime
+
+### Paths
+
+- `installed_path` = `{project-root}/_bmad/bmm/workflows/2-plan-workflows/create-ux-design`
+- `template_path` = `{installed_path}/ux-design-template.md`
+- `default_output_file` = `{planning_artifacts}/ux-design-specification.md`
+
+## EXECUTION
+
+- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
+- Read fully and follow: `steps/step-01-init.md` to begin the UX design workflow.
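+
+As an illustrative sketch of the configuration this file resolves (only the keys come from the list above; every value is an assumption about one possible project):
+
+```yaml
+# Hypothetical _bmad/bmm/config.yaml contents
+project_name: Acme Tasks
+user_name: Jordan
+output_folder: _bmad-output
+planning_artifacts: _bmad-output/planning-artifacts
+communication_language: English
+document_output_language: English
+user_skill_level: intermediate
+# `date` is not read from config; it is generated at runtime
+```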
diff --git a/src/bmm/workflows/4-implementation/code-review/checklist.md b/src/bmm/workflows/4-implementation/code-review/checklist.md new file mode 100644 index 00000000..f213a6b9 --- /dev/null +++ b/src/bmm/workflows/4-implementation/code-review/checklist.md @@ -0,0 +1,23 @@ +# Senior Developer Review - Validation Checklist + +- [ ] Story file loaded from `{{story_path}}` +- [ ] Story Status verified as reviewable (review) +- [ ] Epic and Story IDs resolved ({{epic_num}}.{{story_num}}) +- [ ] Story Context located or warning recorded +- [ ] Epic Tech Spec located or warning recorded +- [ ] Architecture/standards docs loaded (as available) +- [ ] Tech stack detected and documented +- [ ] MCP doc search performed (or web fallback) and references captured +- [ ] Acceptance Criteria cross-checked against implementation +- [ ] File List reviewed and validated for completeness +- [ ] Tests identified and mapped to ACs; gaps noted +- [ ] Code quality review performed on changed files +- [ ] Security review performed on changed files and dependencies +- [ ] Outcome decided (Approve/Changes Requested/Blocked) +- [ ] Review notes appended under "Senior Developer Review (AI)" +- [ ] Change Log updated with review entry +- [ ] Status updated according to settings (if enabled) +- [ ] Sprint status synced (if sprint tracking enabled) +- [ ] Story saved successfully + +_Reviewer: {{user_name}} on {{date}}_ diff --git a/src/bmm/workflows/4-implementation/code-review/instructions.xml b/src/bmm/workflows/4-implementation/code-review/instructions.xml new file mode 100644 index 00000000..e5649559 --- /dev/null +++ b/src/bmm/workflows/4-implementation/code-review/instructions.xml @@ -0,0 +1,227 @@ + + The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml + You MUST have already loaded and processed: {installed_path}/workflow.yaml + Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level} + Generate all documents in {document_output_language} + + 🔥 YOU ARE AN ADVERSARIAL CODE REVIEWER - Find what's wrong or missing! 🔥 + Your purpose: Validate story file claims against actual implementation + Challenge everything: Are tasks marked [x] actually done? Are ACs really implemented? + Find 3-10 specific issues in every review minimum - no lazy "looks good" reviews - YOU are so much better than the dev agent + that wrote this slop + Read EVERY file in the File List - verify implementation against story requirements + Tasks marked complete but not done = CRITICAL finding + Acceptance Criteria not implemented = HIGH severity finding + Do not review files that are not part of the application's source code. Always exclude the _bmad/ and _bmad-output/ folders from the review. 
Always exclude IDE and CLI configuration folders like .cursor/ and .windsurf/ and .claude/ + + + + Use provided {{story_path}} or ask user which story file to review + Read COMPLETE story file + Set {{story_key}} = extracted key from filename (e.g., "1-2-user-authentication.md" → "1-2-user-authentication") or story + metadata + Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Agent Record → File List, Change Log + + + Check if git repository detected in current directory + + Run `git status --porcelain` to find uncommitted changes + Run `git diff --name-only` to see modified files + Run `git diff --cached --name-only` to see staged files + Compile list of actually changed files from git output + + + + Compare story's Dev Agent Record → File List with actual git changes + Note discrepancies: + - Files in git but not in story File List + - Files in story File List but no git changes + - Missing documentation of what was actually changed + + + + Load {project_context} for coding standards (if exists) + + + + Extract ALL Acceptance Criteria from story + Extract ALL Tasks/Subtasks with completion status ([x] vs [ ]) + From Dev Agent Record → File List, compile list of claimed changes + + Create review plan: + 1. **AC Validation**: Verify each AC is actually implemented + 2. **Task Audit**: Verify each [x] task is really done + 3. **Code Quality**: Security, performance, maintainability + 4. **Test Quality**: Real tests vs placeholder bullshit + + + + + VALIDATE EVERY CLAIM - Check git reality vs story claims + + + Review git vs story File List discrepancies: + 1. **Files changed but not in story File List** → MEDIUM finding (incomplete documentation) + 2. **Story lists files but no git changes** → HIGH finding (false claims) + 3. **Uncommitted changes not documented** → MEDIUM finding (transparency issue) + + + + Create comprehensive review file list from story File List and git changes + + + For EACH Acceptance Criterion: + 1. Read the AC requirement + 2. Search implementation files for evidence + 3. Determine: IMPLEMENTED, PARTIAL, or MISSING + 4. If MISSING/PARTIAL → HIGH SEVERITY finding + + + + For EACH task marked [x]: + 1. Read the task description + 2. Search files for evidence it was actually done + 3. **CRITICAL**: If marked [x] but NOT DONE → CRITICAL finding + 4. Record specific proof (file:line) + + + + For EACH file in comprehensive review list: + 1. **Security**: Look for injection risks, missing validation, auth issues + 2. **Performance**: N+1 queries, inefficient loops, missing caching + 3. **Error Handling**: Missing try/catch, poor error messages + 4. **Code Quality**: Complex functions, magic numbers, poor naming + 5. **Test Quality**: Are tests real assertions or placeholders? + + + + NOT LOOKING HARD ENOUGH - Find more problems! 
+ Re-examine code for: + - Edge cases and null handling + - Architecture violations + - Documentation gaps + - Integration issues + - Dependency problems + - Git commit message quality (if applicable) + + Find at least 3 more specific, actionable issues + + + + + Categorize findings: HIGH (must fix), MEDIUM (should fix), LOW (nice to fix) + Set {{fixed_count}} = 0 + Set {{action_count}} = 0 + + **🔥 CODE REVIEW FINDINGS, {user_name}!** + + **Story:** {{story_file}} + **Git vs Story Discrepancies:** {{git_discrepancy_count}} found + **Issues Found:** {{high_count}} High, {{medium_count}} Medium, {{low_count}} Low + + ## 🔴 CRITICAL ISSUES + - Tasks marked [x] but not actually implemented + - Acceptance Criteria not implemented + - Story claims files changed but no git evidence + - Security vulnerabilities + + ## 🟡 MEDIUM ISSUES + - Files changed but not documented in story File List + - Uncommitted changes not tracked + - Performance problems + - Poor test coverage/quality + - Code maintainability issues + + ## 🟢 LOW ISSUES + - Code style improvements + - Documentation gaps + - Git commit message quality + + + What should I do with these issues? + + 1. **Fix them automatically** - I'll update the code and tests + 2. **Create action items** - Add to story Tasks/Subtasks for later + 3. **Show me details** - Deep dive into specific issues + + Choose [1], [2], or specify which issue to examine: + + + Fix all HIGH and MEDIUM issues in the code + Add/update tests as needed + Update File List in story if files changed + Update story Dev Agent Record with fixes applied + Set {{fixed_count}} = number of HIGH and MEDIUM issues fixed + Set {{action_count}} = 0 + + + + Add "Review Follow-ups (AI)" subsection to Tasks/Subtasks + For each issue: `- [ ] [AI-Review][Severity] Description [file:line]` + Set {{action_count}} = number of action items created + Set {{fixed_count}} = 0 + + + + Show detailed explanation with code examples + Return to fix decision + + + + + + + Set {{new_status}} = "done" + Update story Status field to "done" + + + Set {{new_status}} = "in-progress" + Update story Status field to "in-progress" + + Save story file + + + + Set {{current_sprint_status}} = "enabled" + + + Set {{current_sprint_status}} = "no-sprint-tracking" + + + + + Load the FULL file: {sprint_status} + Find development_status key matching {{story_key}} + + + Update development_status[{{story_key}}] = "done" + Save file, preserving ALL comments and structure + ✅ Sprint status synced: {{story_key}} → done + + + + Update development_status[{{story_key}}] = "in-progress" + Save file, preserving ALL comments and structure + 🔄 Sprint status synced: {{story_key}} → in-progress + + + + ⚠️ Story file updated, but sprint-status sync failed: {{story_key}} not found in sprint-status.yaml + + + + + ℹ️ Story status updated (no sprint tracking configured) + + + **✅ Review Complete!** + + **Story Status:** {{new_status}} + **Issues Fixed:** {{fixed_count}} + **Action Items Created:** {{action_count}} + + {{#if new_status == "done"}}Code review complete!{{else}}Address the action items and continue development.{{/if}} + + + + \ No newline at end of file diff --git a/src/bmm/workflows/4-implementation/code-review/workflow.yaml b/src/bmm/workflows/4-implementation/code-review/workflow.yaml new file mode 100644 index 00000000..9e66b932 --- /dev/null +++ b/src/bmm/workflows/4-implementation/code-review/workflow.yaml @@ -0,0 +1,51 @@ +# Review Story Workflow +name: code-review +description: "Perform an ADVERSARIAL Senior Developer code 
review that finds 3-10 specific problems in every story. Challenges everything: code quality, test coverage, architecture compliance, security, performance. NEVER accepts `looks good` - must find a minimum number of issues and can auto-fix with user approval." author: "BMad" + +# Critical variables from config +config_source: "{project-root}/_bmad/bmm/config.yaml" +user_name: "{config_source}:user_name" +communication_language: "{config_source}:communication_language" +user_skill_level: "{config_source}:user_skill_level" +document_output_language: "{config_source}:document_output_language" +date: system-generated +planning_artifacts: "{config_source}:planning_artifacts" +implementation_artifacts: "{config_source}:implementation_artifacts" +output_folder: "{implementation_artifacts}" +sprint_status: "{implementation_artifacts}/sprint-status.yaml" + +# Workflow components +installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/code-review" +instructions: "{installed_path}/instructions.xml" +validation: "{installed_path}/checklist.md" +template: false + +variables: + # Project context + project_context: "**/project-context.md" + story_dir: "{implementation_artifacts}" + +# Smart input file references - handles both whole docs and sharded docs +# Priority: Whole document first, then sharded version +# Strategy: SELECTIVE LOAD - only load the specific epic needed for this story review +input_file_patterns: + architecture: + description: "System architecture for review context" + whole: "{planning_artifacts}/*architecture*.md" + sharded: "{planning_artifacts}/*architecture*/*.md" + load_strategy: "FULL_LOAD" + ux_design: + description: "UX design specification (if UI review)" + whole: "{planning_artifacts}/*ux*.md" + sharded: "{planning_artifacts}/*ux*/*.md" + load_strategy: "FULL_LOAD" + epics: + description: "Epic containing story being reviewed" + whole: "{planning_artifacts}/*epic*.md" + sharded_index: "{planning_artifacts}/*epic*/index.md" + sharded_single: "{planning_artifacts}/*epic*/epic-{{epic_num}}.md" + load_strategy: "SELECTIVE_LOAD" + +standalone: true +web_bundle: false diff --git a/src/bmm/workflows/4-implementation/correct-course/checklist.md b/src/bmm/workflows/4-implementation/correct-course/checklist.md new file mode 100644 index 00000000..f13ab9be --- /dev/null +++ b/src/bmm/workflows/4-implementation/correct-course/checklist.md @@ -0,0 +1,288 @@ +# Change Navigation Checklist + +This checklist is executed as part of: {project-root}/_bmad/bmm/workflows/4-implementation/correct-course/workflow.yaml +Work through each section systematically with the user, recording findings and impacts + + + +
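Before the checklist sections begin, it is worth making the `input_file_patterns` convention from the code-review `workflow.yaml` above concrete. Below is a sketch of how the whole-first, sharded-fallback resolution might play out on disk; the glob patterns come from that file, while the matched file names and the `resolved_inputs` structure are illustrative assumptions:

```yaml
# Hypothetical resolution under {planning_artifacts} = "planning":
# architecture: planning/acme-architecture.md exists, so the "whole" pattern
#   "*architecture*.md" wins and the file is loaded in full (FULL_LOAD).
# epics: no whole "*epic*.md" doc exists, so the sharded form is used, and
#   SELECTIVE_LOAD pulls only the epic needed for the story under review.
resolved_inputs:                                # illustrative, not a real schema
  architecture: planning/acme-architecture.md   # matched "whole" pattern
  epics: planning/acme-epics/epic-2.md          # matched "sharded_single" pattern
```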
+ + +Identify the triggering story that revealed this issue +Document story ID and brief description +[ ] Done / [ ] N/A / [ ] Action-needed + + + +Define the core problem precisely +Categorize issue type: + - Technical limitation discovered during implementation + - New requirement emerged from stakeholders + - Misunderstanding of original requirements + - Strategic pivot or market change + - Failed approach requiring different solution +Write clear problem statement +[ ] Done / [ ] N/A / [ ] Action-needed + + + +Assess initial impact and gather supporting evidence +Collect concrete examples, error messages, stakeholder feedback, or technical constraints +Document evidence for later reference +[ ] Done / [ ] N/A / [ ] Action-needed + + + +HALT: "Cannot proceed without understanding what caused the need for change" +HALT: "Need concrete evidence or examples of the issue before analyzing impact" + + +
+ +
+ + +Evaluate current epic containing the trigger story +Can this epic still be completed as originally planned? +If no, what modifications are needed? +[ ] Done / [ ] N/A / [ ] Action-needed + + + +Determine required epic-level changes +Check each scenario: + - Modify existing epic scope or acceptance criteria + - Add new epic to address the issue + - Remove or defer epic that's no longer viable + - Completely redefine epic based on new understanding +Document specific epic changes needed +[ ] Done / [ ] N/A / [ ] Action-needed + + + +Review all remaining planned epics for required changes +Check each future epic for impact +Identify dependencies that may be affected +[ ] Done / [ ] N/A / [ ] Action-needed + + + +Check if issue invalidates future epics or necessitates new ones +Does this change make any planned epics obsolete? +Are new epics needed to address gaps created by this change? +[ ] Done / [ ] N/A / [ ] Action-needed + + + +Consider if epic order or priority should change +Should epics be resequenced based on this issue? +Do priorities need adjustment? +[ ] Done / [ ] N/A / [ ] Action-needed + + +
+ +
+ + +Check PRD for conflicts +Does issue conflict with core PRD goals or objectives? +Do requirements need modification, addition, or removal? +Is the defined MVP still achievable or does scope need adjustment? +[ ] Done / [ ] N/A / [ ] Action-needed + + + +Review Architecture document for conflicts +Check each area for impact: + - System components and their interactions + - Architectural patterns and design decisions + - Technology stack choices + - Data models and schemas + - API designs and contracts + - Integration points +Document specific architecture sections requiring updates +[ ] Done / [ ] N/A / [ ] Action-needed + + + +Examine UI/UX specifications for conflicts +Check for impact on: + - User interface components + - User flows and journeys + - Wireframes or mockups + - Interaction patterns + - Accessibility considerations +Note specific UI/UX sections needing revision +[ ] Done / [ ] N/A / [ ] Action-needed + + + +Consider impact on other artifacts +Review additional artifacts for impact: + - Deployment scripts + - Infrastructure as Code (IaC) + - Monitoring and observability setup + - Testing strategies + - Documentation + - CI/CD pipelines +Document any secondary artifacts requiring updates +[ ] Done / [ ] N/A / [ ] Action-needed + + +
+ +
+ + +Evaluate Option 1: Direct Adjustment +Can the issue be addressed by modifying existing stories? +Can new stories be added within the current epic structure? +Would this approach maintain project timeline and scope? +Effort estimate: [High/Medium/Low] +Risk level: [High/Medium/Low] +[ ] Viable / [ ] Not viable + + + +Evaluate Option 2: Potential Rollback +Would reverting recently completed stories simplify addressing this issue? +Which stories would need to be rolled back? +Is the rollback effort justified by the simplification gained? +Effort estimate: [High/Medium/Low] +Risk level: [High/Medium/Low] +[ ] Viable / [ ] Not viable + + + +Evaluate Option 3: PRD MVP Review +Is the original PRD MVP still achievable with this issue? +Does MVP scope need to be reduced or redefined? +Do core goals need modification based on new constraints? +What would be deferred to post-MVP if scope is reduced? +Effort estimate: [High/Medium/Low] +Risk level: [High/Medium/Low] +[ ] Viable / [ ] Not viable + + + +Select recommended path forward +Based on analysis of all options, choose the best path +Provide clear rationale considering: + - Implementation effort and timeline impact + - Technical risk and complexity + - Impact on team morale and momentum + - Long-term sustainability and maintainability + - Stakeholder expectations and business value +Selected approach: [Option 1 / Option 2 / Option 3 / Hybrid] +Justification: [Document reasoning] +[ ] Done / [ ] N/A / [ ] Action-needed + + +
+ +
+ + +Create identified issue summary +Write clear, concise problem statement +Include context about discovery and impact +[ ] Done / [ ] N/A / [ ] Action-needed + + + +Document epic impact and artifact adjustment needs +Summarize findings from Epic Impact Assessment (Section 2) +Summarize findings from Artifact Conflict Analysis (Section 3) +Be specific about what changes are needed and why +[ ] Done / [ ] N/A / [ ] Action-needed + + + +Present recommended path forward with rationale +Include selected approach from Section 4 +Provide complete justification for recommendation +Address trade-offs and alternatives considered +[ ] Done / [ ] N/A / [ ] Action-needed + + + +Define PRD MVP impact and high-level action plan +State clearly if MVP is affected +Outline major action items needed for implementation +Identify dependencies and sequencing +[ ] Done / [ ] N/A / [ ] Action-needed + + + +Establish agent handoff plan +Identify which roles/agents will execute the changes: + - Development team (for implementation) + - Product Owner / Scrum Master (for backlog changes) + - Product Manager / Architect (for strategic changes) +Define responsibilities for each role +[ ] Done / [ ] N/A / [ ] Action-needed + + +
+ +
+ + +Review checklist completion +Verify all applicable sections have been addressed +Confirm all [Action-needed] items have been documented +Ensure analysis is comprehensive and actionable +[ ] Done / [ ] N/A / [ ] Action-needed + + + +Verify Sprint Change Proposal accuracy +Review complete proposal for consistency and clarity +Ensure all recommendations are well-supported by analysis +Check that proposal is actionable and specific +[ ] Done / [ ] N/A / [ ] Action-needed + + + +Obtain explicit user approval +Present complete proposal to user +Get clear yes/no approval for proceeding +Document approval and any conditions +[ ] Done / [ ] N/A / [ ] Action-needed + + + +Update sprint-status.yaml to reflect approved epic changes +If epics were added: Add new epic entries with status 'backlog' +If epics were removed: Remove corresponding entries +If epics were renumbered: Update epic IDs and story references +If stories were added/removed: Update story entries within affected epics +[ ] Done / [ ] N/A / [ ] Action-needed + + + +Confirm next steps and handoff plan +Review handoff responsibilities with user +Ensure all stakeholders understand their roles +Confirm timeline and success criteria +[ ] Done / [ ] N/A / [ ] Action-needed + + + +HALT: "Cannot proceed to proposal without complete impact analysis" +HALT: "Must have explicit approval before implementing changes" +HALT: "Must clearly define who will execute the proposed changes" + + +
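To ground the sprint-status update described above, here is a minimal sketch of the edit for one approved change (one story dropped from scope, one epic added). The `development_status` key patterns and status names follow the conventions these workflows reference; the epic and story names are hypothetical:

```yaml
# Before the approved change:
development_status:
  epic-3: in-progress
  3-1-export-api: done
  3-2-export-ui: ready-for-dev
  epic-3-retrospective: backlog
---
# After: story 3-2 removed from scope; new Epic 4 appended with status 'backlog':
development_status:
  epic-3: in-progress
  3-1-export-api: done
  epic-3-retrospective: backlog
  epic-4: backlog
  4-1-data-migration: backlog
  epic-4-retrospective: backlog
```

Note that the surrounding workflows re-read this file in full and save it preserving all comments and structure, so edits like this should be targeted appends and removals, not rewrites.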
+ +
+ + +This checklist is for SIGNIFICANT changes affecting project direction +Work interactively with user - they make final decisions +Be factual, not blame-oriented when analyzing issues +Handle changes professionally as opportunities to improve the project +Maintain conversation context throughout - this is collaborative work + diff --git a/src/bmm/workflows/4-implementation/correct-course/instructions.md b/src/bmm/workflows/4-implementation/correct-course/instructions.md new file mode 100644 index 00000000..430239a6 --- /dev/null +++ b/src/bmm/workflows/4-implementation/correct-course/instructions.md @@ -0,0 +1,206 @@ +# Correct Course - Sprint Change Management Instructions + +The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml +You MUST have already loaded and processed: {project-root}/_bmad/bmm/workflows/4-implementation/correct-course/workflow.yaml +Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level} +Generate all documents in {document_output_language} + +DOCUMENT OUTPUT: Updated epics, stories, or PRD sections. Clear, actionable changes. User skill level ({user_skill_level}) affects conversation style ONLY, not document updates. + + + + + Confirm change trigger and gather user description of the issue + Ask: "What specific issue or change has been identified that requires navigation?" + Verify access to required project documents: + - PRD (Product Requirements Document) + - Current Epics and Stories + - Architecture documentation + - UI/UX specifications + Ask user for mode preference: + - **Incremental** (recommended): Refine each edit collaboratively + - **Batch**: Present all changes at once for review + Store mode selection for use throughout workflow + +HALT: "Cannot navigate change without clear understanding of the triggering issue. Please provide specific details about what needs to change and why." + +HALT: "Need access to project documents (PRD, Epics, Architecture, UI/UX) to assess change impact. Please ensure these documents are accessible." + + + + + After discovery, these content variables are available: {prd_content}, {epics_content}, {architecture_content}, {ux_design_content}, {tech_spec_content}, {document_project_content} + + + + Read fully and follow the systematic analysis from: {checklist} + Work through each checklist section interactively with the user + Record status for each checklist item: + - [x] Done - Item completed successfully + - [N/A] Skip - Item not applicable to this change + - [!] 
Action-needed - Item requires attention or follow-up + Maintain running notes of findings and impacts discovered + Present checklist progress after each major section + +Identify blocking issues and work with user to resolve before continuing + + + +Based on checklist findings, create explicit edit proposals for each identified artifact + +For Story changes: + +- Show old → new text format +- Include story ID and section being modified +- Provide rationale for each change +- Example format: + + ``` + Story: [STORY-123] User Authentication + Section: Acceptance Criteria + + OLD: + - User can log in with email/password + + NEW: + - User can log in with email/password + - User can enable 2FA via authenticator app + + Rationale: Security requirement identified during implementation + ``` + +For PRD modifications: + +- Specify exact sections to update +- Show current content and proposed changes +- Explain impact on MVP scope and requirements + +For Architecture changes: + +- Identify affected components, patterns, or technology choices +- Describe diagram updates needed +- Note any ripple effects on other components + +For UI/UX specification updates: + +- Reference specific screens or components +- Show wireframe or flow changes needed +- Connect changes to user experience impact + + + Present each edit proposal individually + Review and refine this change? Options: Approve [a], Edit [e], Skip [s] + Iterate on each proposal based on user feedback + + +Collect all edit proposals and present together at end of step + + + + +Compile comprehensive Sprint Change Proposal document with following sections: + +Section 1: Issue Summary + +- Clear problem statement describing what triggered the change +- Context about when/how the issue was discovered +- Evidence or examples demonstrating the issue + +Section 2: Impact Analysis + +- Epic Impact: Which epics are affected and how +- Story Impact: Current and future stories requiring changes +- Artifact Conflicts: PRD, Architecture, UI/UX documents needing updates +- Technical Impact: Code, infrastructure, or deployment implications + +Section 3: Recommended Approach + +- Present chosen path forward from checklist evaluation: + - Direct Adjustment: Modify/add stories within existing plan + - Potential Rollback: Revert completed work to simplify resolution + - MVP Review: Reduce scope or modify goals +- Provide clear rationale for recommendation +- Include effort estimate, risk assessment, and timeline impact + +Section 4: Detailed Change Proposals + +- Include all refined edit proposals from Step 3 +- Group by artifact type (Stories, PRD, Architecture, UI/UX) +- Ensure each change includes before/after and justification + +Section 5: Implementation Handoff + +- Categorize change scope: + - Minor: Direct implementation by dev team + - Moderate: Backlog reorganization needed (PO/SM) + - Major: Fundamental replan required (PM/Architect) +- Specify handoff recipients and their responsibilities +- Define success criteria for implementation + +Present complete Sprint Change Proposal to user +Write Sprint Change Proposal document to {default_output_file} +Review complete proposal. Continue [c] or Edit [e]? + + + +Get explicit user approval for complete proposal +Do you approve this Sprint Change Proposal for implementation? 
(yes/no/revise) + + + Gather specific feedback on what needs adjustment + Return to appropriate step to address concerns + If changes needed to edit proposals + If changes needed to overall proposal structure + + + + + Finalize Sprint Change Proposal document + Determine change scope classification: + +- **Minor**: Can be implemented directly by development team +- **Moderate**: Requires backlog reorganization and PO/SM coordination +- **Major**: Needs fundamental replan with PM/Architect involvement + +Provide appropriate handoff based on scope: + + + + + Route to: Development team for direct implementation + Deliverables: Finalized edit proposals and implementation tasks + + + + Route to: Product Owner / Scrum Master agents + Deliverables: Sprint Change Proposal + backlog reorganization plan + + + + Route to: Product Manager / Solution Architect + Deliverables: Complete Sprint Change Proposal + escalation notice + +Confirm handoff completion and next steps with user +Document handoff in workflow execution log + + + + + +Summarize workflow execution: + - Issue addressed: {{change_trigger}} + - Change scope: {{scope_classification}} + - Artifacts modified: {{list_of_artifacts}} + - Routed to: {{handoff_recipients}} + +Confirm all deliverables produced: + +- Sprint Change Proposal document +- Specific edit proposals with before/after +- Implementation handoff plan + +Report workflow completion to user with personalized message: "✅ Correct Course workflow complete, {user_name}!" +Remind user of success criteria and next steps for implementation team + + + diff --git a/src/bmm/workflows/4-implementation/correct-course/workflow.yaml b/src/bmm/workflows/4-implementation/correct-course/workflow.yaml new file mode 100644 index 00000000..70813514 --- /dev/null +++ b/src/bmm/workflows/4-implementation/correct-course/workflow.yaml @@ -0,0 +1,60 @@ +# Correct Course - Sprint Change Management Workflow +name: "correct-course" +description: "Navigate significant changes during sprint execution by analyzing impact, proposing solutions, and routing for implementation" +author: "BMad Method" + +config_source: "{project-root}/_bmad/bmm/config.yaml" +user_name: "{config_source}:user_name" +communication_language: "{config_source}:communication_language" +user_skill_level: "{config_source}:user_skill_level" +document_output_language: "{config_source}:document_output_language" +date: system-generated +implementation_artifacts: "{config_source}:implementation_artifacts" +planning_artifacts: "{config_source}:planning_artifacts" +project_knowledge: "{config_source}:project_knowledge" +output_folder: "{implementation_artifacts}" +sprint_status: "{implementation_artifacts}/sprint-status.yaml" + +# Smart input file references - handles both whole docs and sharded docs +# Priority: Whole document first, then sharded version +# Strategy: Load project context for impact analysis +input_file_patterns: + prd: + description: "Product requirements for impact analysis" + whole: "{planning_artifacts}/*prd*.md" + sharded: "{planning_artifacts}/*prd*/*.md" + load_strategy: "FULL_LOAD" + epics: + description: "All epics to analyze change impact" + whole: "{planning_artifacts}/*epic*.md" + sharded: "{planning_artifacts}/*epic*/*.md" + load_strategy: "FULL_LOAD" + architecture: + description: "System architecture and decisions" + whole: "{planning_artifacts}/*architecture*.md" + sharded: "{planning_artifacts}/*architecture*/*.md" + load_strategy: "FULL_LOAD" + ux_design: + description: "UX design specification (if UI impacts)" + 
whole: "{planning_artifacts}/*ux*.md" + sharded: "{planning_artifacts}/*ux*/*.md" + load_strategy: "FULL_LOAD" + tech_spec: + description: "Technical specification" + whole: "{planning_artifacts}/*tech-spec*.md" + load_strategy: "FULL_LOAD" + document_project: + description: "Brownfield project documentation (optional)" + sharded: "{project_knowledge}/index.md" + load_strategy: "INDEX_GUIDED" + +installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/correct-course" +template: false +instructions: "{installed_path}/instructions.md" +validation: "{installed_path}/checklist.md" +checklist: "{installed_path}/checklist.md" +default_output_file: "{planning_artifacts}/sprint-change-proposal-{date}.md" + +standalone: true + +web_bundle: false diff --git a/src/bmm/workflows/4-implementation/dev-story/checklist.md b/src/bmm/workflows/4-implementation/dev-story/checklist.md new file mode 100644 index 00000000..86d6e9be --- /dev/null +++ b/src/bmm/workflows/4-implementation/dev-story/checklist.md @@ -0,0 +1,80 @@ +--- +title: 'Enhanced Dev Story Definition of Done Checklist' +validation-target: 'Story markdown ({{story_path}})' +validation-criticality: 'HIGHEST' +required-inputs: + - 'Story markdown file with enhanced Dev Notes containing comprehensive implementation context' + - 'Completed Tasks/Subtasks section with all items marked [x]' + - 'Updated File List section with all changed files' + - 'Updated Dev Agent Record with implementation notes' +optional-inputs: + - 'Test results output' + - 'CI logs' + - 'Linting reports' +validation-rules: + - 'Only permitted story sections modified: Tasks/Subtasks checkboxes, Dev Agent Record, File List, Change Log, Status' + - 'All implementation requirements from story Dev Notes must be satisfied' + - 'Definition of Done checklist must pass completely' + - 'Enhanced story context must contain sufficient technical guidance' +--- + +# 🎯 Enhanced Definition of Done Checklist + +**Critical validation:** Story is truly ready for review only when ALL items below are satisfied + +## 📋 Context & Requirements Validation + +- [ ] **Story Context Completeness:** Dev Notes contains ALL necessary technical requirements, architecture patterns, and implementation guidance +- [ ] **Architecture Compliance:** Implementation follows all architectural requirements specified in Dev Notes +- [ ] **Technical Specifications:** All technical specifications (libraries, frameworks, versions) from Dev Notes are implemented correctly +- [ ] **Previous Story Learnings:** Previous story insights incorporated (if applicable) and build upon appropriately + +## ✅ Implementation Completion + +- [ ] **All Tasks Complete:** Every task and subtask marked complete with [x] +- [ ] **Acceptance Criteria Satisfaction:** Implementation satisfies EVERY Acceptance Criterion in the story +- [ ] **No Ambiguous Implementation:** Clear, unambiguous implementation that meets story requirements +- [ ] **Edge Cases Handled:** Error conditions and edge cases appropriately addressed +- [ ] **Dependencies Within Scope:** Only uses dependencies specified in story or project-context.md + +## 🧪 Testing & Quality Assurance + +- [ ] **Unit Tests:** Unit tests added/updated for ALL core functionality introduced/changed by this story +- [ ] **Integration Tests:** Integration tests added/updated for component interactions when story requirements demand them +- [ ] **End-to-End Tests:** End-to-end tests created for critical user flows when story requirements specify them +- [ ] **Test Coverage:** Tests 
cover acceptance criteria and edge cases from story Dev Notes +- [ ] **Regression Prevention:** ALL existing tests pass (no regressions introduced) +- [ ] **Code Quality:** Linting and static checks pass when configured in project +- [ ] **Test Framework Compliance:** Tests use project's testing frameworks and patterns from Dev Notes + +## 📝 Documentation & Tracking + +- [ ] **File List Complete:** File List includes EVERY new, modified, or deleted file (paths relative to repo root) +- [ ] **Dev Agent Record Updated:** Contains relevant Implementation Notes and/or Debug Log for this work +- [ ] **Change Log Updated:** Change Log includes clear summary of what changed and why +- [ ] **Review Follow-ups:** All review follow-up tasks (marked [AI-Review]) completed and corresponding review items marked resolved (if applicable) +- [ ] **Story Structure Compliance:** Only permitted sections of story file were modified + +## 🔚 Final Status Verification + +- [ ] **Story Status Updated:** Story Status set to "review" +- [ ] **Sprint Status Updated:** Sprint status updated to "review" (when sprint tracking is used) +- [ ] **Quality Gates Passed:** All quality checks and validations completed successfully +- [ ] **No HALT Conditions:** No blocking issues or incomplete work remaining +- [ ] **User Communication Ready:** Implementation summary prepared for user review + +## 🎯 Final Validation Output + +``` +Definition of Done: {{PASS/FAIL}} + +✅ **Story Ready for Review:** {{story_key}} +📊 **Completion Score:** {{completed_items}}/{{total_items}} items passed +🔍 **Quality Gates:** {{quality_gates_status}} +📋 **Test Results:** {{test_results_summary}} +📝 **Documentation:** {{documentation_status}} +``` + +**If FAIL:** List specific failures and required actions before story can be marked Ready for Review + +**If PASS:** Story is fully ready for code review and production consideration diff --git a/src/bmm/workflows/4-implementation/dev-story/instructions.xml b/src/bmm/workflows/4-implementation/dev-story/instructions.xml new file mode 100644 index 00000000..4fb70efe --- /dev/null +++ b/src/bmm/workflows/4-implementation/dev-story/instructions.xml @@ -0,0 +1,410 @@ + + The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml + You MUST have already loaded and processed: {installed_path}/workflow.yaml + Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level} + Generate all documents in {document_output_language} + Only modify the story file in these areas: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List, + Change Log, and Status + Execute ALL steps in exact order; do NOT skip steps + Absolutely DO NOT stop because of "milestones", "significant progress", or "session boundaries". Continue in a single execution + until the story is COMPLETE (all ACs satisfied and all tasks/subtasks checked) UNLESS a HALT condition is triggered or the USER gives + other instruction. + Do NOT schedule a "next session" or request review pauses unless a HALT condition applies. Only Step 6 decides completion. + User skill level ({user_skill_level}) affects conversation style ONLY, not code updates. 
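Because the steps below repeatedly read and update `sprint-status.yaml`, a minimal sketch of its shape may help. The key patterns (`epic-X`, `X-Y-name`, `epic-X-retrospective`) and the status values are the ones referenced throughout these instructions; the story names themselves are hypothetical:

```yaml
# sprint-status.yaml (sketch) - Step 1 scans development_status from top to
# bottom and selects the FIRST number-number-name key whose status is
# "ready-for-dev"; epic-X and epic-X-retrospective keys are skipped.
development_status:
  epic-1: done
  1-1-project-scaffolding: done
  1-2-user-authentication: done
  epic-1-retrospective: done
  epic-2: in-progress
  2-1-password-reset: review
  2-2-profile-page: ready-for-dev   # <- Step 1 would select this story
  epic-2-retrospective: backlog
```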
+ + + + Use {{story_path}} directly + Read COMPLETE story file + Extract story_key from filename or metadata + + + + + + MUST read COMPLETE sprint-status.yaml file from start to end to preserve order + Load the FULL file: {{sprint_status}} + Read ALL lines from beginning to end - do not skip any content + Parse the development_status section completely to understand story order + + Find the FIRST story (by reading in order from top to bottom) where: + - Key matches pattern: number-number-name (e.g., "1-2-user-auth") + - NOT an epic key (epic-X) or retrospective (epic-X-retrospective) + - Status value equals "ready-for-dev" + + + + 📋 No ready-for-dev stories found in sprint-status.yaml + + **Current Sprint Status:** {{sprint_status_summary}} + + **What would you like to do?** + 1. Run `create-story` to create next story from epics with comprehensive context + 2. Run `*validate-create-story` to improve existing stories before development (recommended quality check) + 3. Specify a particular story file to develop (provide full path) + 4. Check {{sprint_status}} file to see current sprint status + + 💡 **Tip:** Stories in `ready-for-dev` may not have been validated. Consider running `validate-create-story` first for a quality + check. + + Choose option [1], [2], [3], or [4], or specify story file path: + + + HALT - Run create-story to create next story + + + + HALT - Run validate-create-story to improve existing stories + + + + Provide the story file path to develop: + Store user-provided story path as {{story_path}} + + + + + Loading {{sprint_status}} for detailed status review... + Display detailed sprint status analysis + HALT - User can review sprint status and provide story path + + + + Store user-provided story path as {{story_path}} + + + + + + + + Search {story_dir} for stories directly + Find stories with "ready-for-dev" status in files + Look for story files matching pattern: *-*-*.md + Read each candidate story file to check Status section + + + 📋 No ready-for-dev stories found + + **Available Options:** + 1. Run `create-story` to create next story from epics with comprehensive context + 2. Run `*validate-create-story` to improve existing stories + 3. Specify which story to develop + + What would you like to do? Choose option [1], [2], or [3]: + + + HALT - Run create-story to create next story + + + + HALT - Run validate-create-story to improve existing stories + + + + It's unclear what story you want developed. 
Please provide the full path to the story file: + Store user-provided story path as {{story_path}} + Continue with provided story file + + + + + Use discovered story file and extract story_key + + + + Store the found story_key (e.g., "1-2-user-authentication") for later status updates + Find matching story file in {story_dir} using story_key pattern: {{story_key}}.md + Read COMPLETE story file from discovered path + + + + Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Dev Agent Record, File List, Change Log, Status + + Load comprehensive context from story file's Dev Notes section + Extract developer guidance from Dev Notes: architecture requirements, previous learnings, technical specifications + Use enhanced story context to inform implementation decisions and approaches + + Identify first incomplete task (unchecked [ ]) in Tasks/Subtasks + + + Completion sequence + + HALT: "Cannot develop story without access to story file" + ASK user to clarify or HALT + + + + Load all available context to inform implementation + + Load {project_context} for coding standards and project-wide patterns (if exists) + Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Dev Agent Record, File List, Change Log, Status + Load comprehensive context from story file's Dev Notes section + Extract developer guidance from Dev Notes: architecture requirements, previous learnings, technical specifications + Use enhanced story context to inform implementation decisions and approaches + ✅ **Context Loaded** + Story and project context available for implementation + + + + + Determine if this is a fresh start or continuation after code review + + Check if "Senior Developer Review (AI)" section exists in the story file + Check if "Review Follow-ups (AI)" subsection exists under Tasks/Subtasks + + + Set review_continuation = true + Extract from "Senior Developer Review (AI)" section: + - Review outcome (Approve/Changes Requested/Blocked) + - Review date + - Total action items with checkboxes (count checked vs unchecked) + - Severity breakdown (High/Med/Low counts) + + Count unchecked [ ] review follow-up tasks in "Review Follow-ups (AI)" subsection + Store list of unchecked review items as {{pending_review_items}} + + ⏯️ **Resuming Story After Code Review** ({{review_date}}) + + **Review Outcome:** {{review_outcome}} + **Action Items:** {{unchecked_review_count}} remaining to address + **Priorities:** {{high_count}} High, {{med_count}} Medium, {{low_count}} Low + + **Strategy:** Will prioritize review follow-up tasks (marked [AI-Review]) before continuing with regular tasks. + + + + + Set review_continuation = false + Set {{pending_review_items}} = empty + + 🚀 **Starting Fresh Implementation** + + Story: {{story_key}} + Story Status: {{current_status}} + First incomplete task: {{first_task_description}} + + + + + + + Load the FULL file: {{sprint_status}} + Read all development_status entries to find {{story_key}} + Get current status value for development_status[{{story_key}}] + + + Update the story in the sprint status report to = "in-progress" + 🚀 Starting work on story {{story_key}} + Status updated: ready-for-dev → in-progress + + + + + ⏯️ Resuming work on story {{story_key}} + Story is already marked in-progress + + + + + ⚠️ Unexpected story status: {{current_status}} + Expected ready-for-dev or in-progress. Continuing anyway... 
+ + + + Store {{current_sprint_status}} for later use + + + + ℹ️ No sprint status file exists - story progress will be tracked in story file only + Set {{current_sprint_status}} = "no-sprint-tracking" + + + + + FOLLOW THE STORY FILE TASKS/SUBTASKS SEQUENCE EXACTLY AS WRITTEN - NO DEVIATION + + Review the current task/subtask from the story file - this is your authoritative implementation guide + Plan implementation following red-green-refactor cycle + + + Write FAILING tests first for the task/subtask functionality + Confirm tests fail before implementation - this validates test correctness + + + Implement MINIMAL code to make tests pass + Run tests to confirm they now pass + Handle error conditions and edge cases as specified in task/subtask + + + Improve code structure while keeping tests green + Ensure code follows architecture patterns and coding standards from Dev Notes + + Document technical approach and decisions in Dev Agent Record → Implementation Plan + + HALT: "Additional dependencies need user approval" + HALT and request guidance + HALT: "Cannot proceed without necessary configuration files" + + NEVER implement anything not mapped to a specific task/subtask in the story file + NEVER proceed to next task until current task/subtask is complete AND tests pass + Execute continuously without pausing until all tasks/subtasks are complete or explicit HALT condition + Do NOT propose to pause for review until Step 9 completion gates are satisfied + + + + Create unit tests for business logic and core functionality introduced/changed by the task + Add integration tests for component interactions specified in story requirements + Include end-to-end tests for critical user flows when story requirements demand them + Cover edge cases and error handling scenarios identified in story Dev Notes + + + + Determine how to run tests for this repo (infer test framework from project structure) + Run all existing tests to ensure no regressions + Run the new tests to verify implementation correctness + Run linting and code quality checks if configured in project + Validate implementation meets ALL story acceptance criteria; enforce quantitative thresholds explicitly + STOP and fix before continuing - identify breaking changes immediately + STOP and fix before continuing - ensure implementation correctness + + + + NEVER mark a task complete unless ALL conditions are met - NO LYING OR CHEATING + + + Verify ALL tests for this task/subtask ACTUALLY EXIST and PASS 100% + Confirm implementation matches EXACTLY what the task/subtask specifies - no extra features + Validate that ALL acceptance criteria related to this task are satisfied + Run full test suite to ensure NO regressions introduced + + + + Extract review item details (severity, description, related AC/file) + Add to resolution tracking list: {{resolved_review_items}} + + + Mark task checkbox [x] in "Tasks/Subtasks → Review Follow-ups (AI)" section + + + Find matching action item in "Senior Developer Review (AI) → Action Items" section by matching description + Mark that action item checkbox [x] as resolved + + Add to Dev Agent Record → Completion Notes: "✅ Resolved review finding [{{severity}}]: {{description}}" + + + + + ONLY THEN mark the task (and subtasks) checkbox with [x] + Update File List section with ALL new, modified, or deleted files (paths relative to repo root) + Add completion notes to Dev Agent Record summarizing what was ACTUALLY implemented and tested + + + + DO NOT mark task complete - fix issues first + HALT if unable to fix 
validation failures + + + + Count total resolved review items in this session + Add Change Log entry: "Addressed code review findings - {{resolved_count}} items resolved (Date: {{date}})" + + + Save the story file + Determine if more incomplete tasks remain + + Next task + + + Completion + + + + + Verify ALL tasks and subtasks are marked [x] (re-scan the story document now) + Run the full regression suite (do not skip) + Confirm File List includes every changed file + Execute enhanced definition-of-done validation + Update the story Status to: "review" + + + Validate definition-of-done checklist with essential requirements: + - All tasks/subtasks marked complete with [x] + - Implementation satisfies every Acceptance Criterion + - Unit tests for core functionality added/updated + - Integration tests for component interactions added when required + - End-to-end tests for critical flows added when story demands them + - All tests pass (no regressions, new tests successful) + - Code quality checks pass (linting, static analysis if configured) + - File List includes every new/modified/deleted file (relative paths) + - Dev Agent Record contains implementation notes + - Change Log includes summary of changes + - Only permitted story sections were modified + + + + + Load the FULL file: {sprint_status} + Find development_status key matching {{story_key}} + Verify current status is "in-progress" (expected previous state) + Update development_status[{{story_key}}] = "review" + Save file, preserving ALL comments and structure including STATUS DEFINITIONS + ✅ Story status updated to "review" in sprint-status.yaml + + + + ℹ️ Story status updated to "review" in story file (no sprint tracking configured) + + + + ⚠️ Story file updated, but sprint-status update failed: {{story_key}} not found + + Story status is set to "review" in file, but sprint-status.yaml may be out of sync. + + + + + HALT - Complete remaining tasks before marking ready for review + HALT - Fix regression issues before completing + HALT - Update File List with all changed files + HALT - Address DoD failures before completing + + + + Execute the enhanced definition-of-done checklist using the validation framework + Prepare a concise summary in Dev Agent Record → Completion Notes + + Communicate to {user_name} that story implementation is complete and ready for review + Summarize key accomplishments: story ID, story key, title, key changes made, tests added, files modified + Provide the story file path and current status (now "review") + + Based on {user_skill_level}, ask if user needs any explanations about: + - What was implemented and how it works + - Why certain technical decisions were made + - How to test or verify the changes + - Any patterns, libraries, or approaches used + - Anything else they'd like clarified + + + + Provide clear, contextual explanations tailored to {user_skill_level} + Use examples and references to specific code when helpful + + + Once explanations are complete (or user indicates no questions), suggest logical next steps + Recommended next steps (flexible based on project setup): + - Review the implemented story and test the changes + - Verify all acceptance criteria are met + - Ensure deployment readiness if applicable + - Run `code-review` workflow for peer review + - Optional: Run TEA `*automate` to expand guardrail tests + + + 💡 **Tip:** For best results, run `code-review` using a **different** LLM than the one that implemented this story. 
+ + Suggest checking {sprint_status} to see project progress + + Remain flexible - allow user to choose their own path or ask for other assistance + + + diff --git a/src/modules/bmgd/workflows/4-production/super-dev-story/workflow.yaml b/src/bmm/workflows/4-implementation/dev-story/workflow.yaml similarity index 62% rename from src/modules/bmgd/workflows/4-production/super-dev-story/workflow.yaml rename to src/bmm/workflows/4-implementation/dev-story/workflow.yaml index 9130b874..d5824ee1 100644 --- a/src/modules/bmgd/workflows/4-production/super-dev-story/workflow.yaml +++ b/src/bmm/workflows/4-implementation/dev-story/workflow.yaml @@ -1,9 +1,9 @@ -name: super-dev-story -description: "Enhanced story development with post-implementation validation and automated code review - ensures stories are truly complete before marking done" +name: dev-story +description: "Execute a story by implementing tasks/subtasks, writing tests, validating, and updating the story file per acceptance criteria" author: "BMad" # Critical variables from config -config_source: "{project-root}/_bmad/bmgd/config.yaml" +config_source: "{project-root}/_bmad/bmm/config.yaml" output_folder: "{config_source}:output_folder" user_name: "{config_source}:user_name" communication_language: "{config_source}:communication_language" @@ -13,7 +13,7 @@ story_dir: "{config_source}:implementation_artifacts" date: system-generated # Workflow components -installed_path: "{project-root}/_bmad/bmgd/workflows/4-production/super-dev-story" +installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/dev-story" instructions: "{installed_path}/instructions.xml" validation: "{installed_path}/checklist.md" @@ -22,13 +22,6 @@ implementation_artifacts: "{config_source}:implementation_artifacts" sprint_status: "{implementation_artifacts}/sprint-status.yaml" project_context: "**/project-context.md" -# Super-dev specific settings -super_dev_settings: - post_dev_gap_analysis: true - auto_code_review: true - fail_on_critical_issues: true - max_fix_iterations: 3 - standalone: true web_bundle: false diff --git a/src/bmm/workflows/4-implementation/retrospective/instructions.md b/src/bmm/workflows/4-implementation/retrospective/instructions.md new file mode 100644 index 00000000..01750312 --- /dev/null +++ b/src/bmm/workflows/4-implementation/retrospective/instructions.md @@ -0,0 +1,1443 @@ +# Retrospective - Epic Completion Review Instructions + +The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml +You MUST have already loaded and processed: {project-root}/_bmad/bmm/workflows/4-implementation/retrospective/workflow.yaml +Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level} +Generate all documents in {document_output_language} +⚠️ ABSOLUTELY NO TIME ESTIMATES - NEVER mention hours, days, weeks, months, or ANY time-based predictions. AI has fundamentally changed development speed - what once took teams weeks/months can now be done by one person in hours. DO NOT give ANY time estimates whatsoever. + + + DOCUMENT OUTPUT: Retrospective analysis. Concise insights, lessons learned, action items. User skill level ({user_skill_level}) affects conversation style ONLY, not retrospective content. 
+ +FACILITATION NOTES: + +- Scrum Master facilitates this retrospective +- Psychological safety is paramount - NO BLAME +- Focus on systems, processes, and learning +- Everyone contributes with specific examples preferred +- Action items must be achievable with clear ownership +- Two-part format: (1) Epic Review + (2) Next Epic Preparation + +PARTY MODE PROTOCOL: + +- ALL agent dialogue MUST use format: "Name (Role): dialogue" +- Example: Bob (Scrum Master): "Let's begin..." +- Example: {user_name} (Project Lead): [User responds] +- Create natural back-and-forth with user actively participating +- Show disagreements, diverse perspectives, authentic team dynamics + + + + + + +Explain to {user_name} the epic discovery process using natural dialogue + + +Bob (Scrum Master): "Welcome to the retrospective, {user_name}. Let me help you identify which epic we just completed. I'll check sprint-status first, but you're the ultimate authority on what we're reviewing today." + + +PRIORITY 1: Check {sprint_status_file} first + +Load the FULL file: {sprint_status_file} +Read ALL development_status entries +Find the highest epic number with at least one story marked "done" +Extract epic number from keys like "epic-X-retrospective" or story keys like "X-Y-story-name" +Set {{detected_epic}} = highest epic number found with completed stories + + + Present finding to user with context + + +Bob (Scrum Master): "Based on {sprint_status_file}, it looks like Epic {{detected_epic}} was recently completed. Is that the epic you want to review today, {user_name}?" + + +WAIT for {user_name} to confirm or correct + + + Set {{epic_number}} = {{detected_epic}} + + + + Set {{epic_number}} = user-provided number + +Bob (Scrum Master): "Got it, we're reviewing Epic {{epic_number}}. Let me gather that information." + + + + + + PRIORITY 2: Ask user directly + + +Bob (Scrum Master): "I'm having trouble detecting the completed epic from {sprint_status_file}. {user_name}, which epic number did you just complete?" + + +WAIT for {user_name} to provide epic number +Set {{epic_number}} = user-provided number + + + + PRIORITY 3: Fallback to stories folder + +Scan {story_directory} for highest numbered story files +Extract epic numbers from story filenames (pattern: epic-X-Y-story-name.md) +Set {{detected_epic}} = highest epic number found + + +Bob (Scrum Master): "I found stories for Epic {{detected_epic}} in the stories folder. Is that the epic we're reviewing, {user_name}?" + + +WAIT for {user_name} to confirm or correct +Set {{epic_number}} = confirmed number + + +Once {{epic_number}} is determined, verify epic completion status + +Find all stories for epic {{epic_number}} in {sprint_status_file}: + +- Look for keys starting with "{{epic_number}}-" (e.g., "1-1-", "1-2-", etc.) +- Exclude epic key itself ("epic-{{epic_number}}") +- Exclude retrospective key ("epic-{{epic_number}}-retrospective") + + +Count total stories found for this epic +Count stories with status = "done" +Collect list of pending story keys (status != "done") +Determine if complete: true if all stories are done, false otherwise + + + +Alice (Product Owner): "Wait, Bob - I'm seeing that Epic {{epic_number}} isn't actually complete yet." + +Bob (Scrum Master): "Let me check... you're right, Alice." 
+ +**Epic Status:** + +- Total Stories: {{total_stories}} +- Completed (Done): {{done_stories}} +- Pending: {{pending_count}} + +**Pending Stories:** +{{pending_story_list}} + +Bob (Scrum Master): "{user_name}, we typically run retrospectives after all stories are done. What would you like to do?" + +**Options:** + +1. Complete remaining stories before running retrospective (recommended) +2. Continue with partial retrospective (not ideal, but possible) +3. Run sprint-planning to refresh story tracking + + +Continue with incomplete epic? (yes/no) + + + +Bob (Scrum Master): "Smart call, {user_name}. Let's finish those stories first and then have a proper retrospective." + + HALT + + +Set {{partial_retrospective}} = true + +Charlie (Senior Dev): "Just so everyone knows, this partial retro might miss some important lessons from those pending stories." + +Bob (Scrum Master): "Good point, Charlie. {user_name}, we'll document what we can now, but we may want to revisit after everything's done." + + + + + +Alice (Product Owner): "Excellent! All {{done_stories}} stories are marked done." + +Bob (Scrum Master): "Perfect. Epic {{epic_number}} is complete and ready for retrospective, {user_name}." + + + + + + + + After discovery, these content variables are available: {epics_content} (selective load for this epic), {architecture_content}, {prd_content}, {document_project_content} + + + + + +Bob (Scrum Master): "Before we start the team discussion, let me review all the story records to surface key themes. This'll help us have a richer conversation." + +Charlie (Senior Dev): "Good idea - those dev notes always have gold in them." + + +For each story in epic {{epic_number}}, read the complete story file from {story_directory}/{{epic_number}}-{{story_num}}-\*.md + +Extract and analyze from each story: + +**Dev Notes and Struggles:** + +- Look for sections like "## Dev Notes", "## Implementation Notes", "## Challenges", "## Development Log" +- Identify where developers struggled or made mistakes +- Note unexpected complexity or gotchas discovered +- Record technical decisions that didn't work out as planned +- Track where estimates were way off (too high or too low) + +**Review Feedback Patterns:** + +- Look for "## Review", "## Code Review", "## SM Review", "## Scrum Master Review" sections +- Identify recurring feedback themes across stories +- Note which types of issues came up repeatedly +- Track quality concerns or architectural misalignments +- Document praise or exemplary work called out in reviews + +**Lessons Learned:** + +- Look for "## Lessons Learned", "## Retrospective Notes", "## Takeaways" sections within stories +- Extract explicit lessons documented during development +- Identify "aha moments" or breakthroughs +- Note what would be done differently +- Track successful experiments or approaches + +**Technical Debt Incurred:** + +- Look for "## Technical Debt", "## TODO", "## Known Issues", "## Future Work" sections +- Document shortcuts taken and why +- Track debt items that affect next epic +- Note severity and priority of debt items + +**Testing and Quality Insights:** + +- Look for "## Testing", "## QA Notes", "## Test Results" sections +- Note testing challenges or surprises +- Track bug patterns or regression issues +- Document test coverage gaps + +Synthesize patterns across all stories: + +**Common Struggles:** + +- Identify issues that appeared in 2+ stories (e.g., "3 out of 5 stories had API authentication issues") +- Note areas where team consistently struggled +- Track where 
complexity was underestimated + +**Recurring Review Feedback:** + +- Identify feedback themes (e.g., "Error handling was flagged in every review") +- Note quality patterns (positive and negative) +- Track areas where team improved over the course of epic + +**Breakthrough Moments:** + +- Document key discoveries (e.g., "Story 3 discovered the caching pattern we used for rest of epic") +- Note when team velocity improved dramatically +- Track innovative solutions worth repeating + +**Velocity Patterns:** + +- Calculate average completion time per story +- Note velocity trends (e.g., "First 2 stories took 3x longer than estimated") +- Identify which types of stories went faster/slower + +**Team Collaboration Highlights:** + +- Note moments of excellent collaboration mentioned in stories +- Track where pair programming or mob programming was effective +- Document effective problem-solving sessions + +Store this synthesis - these patterns will drive the retrospective discussion + + +Bob (Scrum Master): "Okay, I've reviewed all {{total_stories}} story records. I found some really interesting patterns we should discuss." + +Dana (QA Engineer): "I'm curious what you found, Bob. I noticed some things in my testing too." + +Bob (Scrum Master): "We'll get to all of it. But first, let me load the previous epic's retro to see if we learned from last time." + + + + + + +Calculate previous epic number: {{prev_epic_num}} = {{epic_number}} - 1 + + + Search for previous retrospective using pattern: {retrospectives_folder}/epic-{{prev_epic_num}}-retro-*.md + + + +Bob (Scrum Master): "I found our retrospective from Epic {{prev_epic_num}}. Let me see what we committed to back then..." + + + Read the complete previous retrospective file + + Extract key elements: + - **Action items committed**: What did the team agree to improve? + - **Lessons learned**: What insights were captured? + - **Process improvements**: What changes were agreed upon? + - **Technical debt flagged**: What debt was documented? + - **Team agreements**: What commitments were made? + - **Preparation tasks**: What was needed for this epic? + + Cross-reference with current epic execution: + + **Action Item Follow-Through:** + - For each action item from Epic {{prev_epic_num}} retro, check if it was completed + - Look for evidence in current epic's story records + - Mark each action item: ✅ Completed, ⏳ In Progress, ❌ Not Addressed + + **Lessons Applied:** + - For each lesson from Epic {{prev_epic_num}}, check if team applied it in Epic {{epic_number}} + - Look for evidence in dev notes, review feedback, or outcomes + - Document successes and missed opportunities + + **Process Improvements Effectiveness:** + - For each process change agreed to in Epic {{prev_epic_num}}, assess if it helped + - Did the change improve velocity, quality, or team satisfaction? + - Should we keep, modify, or abandon the change? + + **Technical Debt Status:** + - For each debt item from Epic {{prev_epic_num}}, check if it was addressed + - Did unaddressed debt cause problems in Epic {{epic_number}}? + - Did the debt grow or shrink? 
+
+ Prepare "continuity insights" for the retrospective discussion
+
+ Identify wins where previous lessons were applied successfully:
+ - Document specific examples of applied learnings
+ - Note positive impact on Epic {{epic_number}} outcomes
+ - Celebrate team growth and improvement
+
+ Identify missed opportunities where previous lessons were ignored:
+ - Document where team repeated previous mistakes
+ - Note impact of not applying lessons (without blame)
+ - Explore barriers that prevented application
+
+
+
+Bob (Scrum Master): "Interesting... in Epic {{prev_epic_num}}'s retro, we committed to {{action_count}} action items."
+
+Alice (Product Owner): "How'd we do on those, Bob?"
+
+Bob (Scrum Master): "We completed {{completed_count}}, made progress on {{in_progress_count}}, but didn't address {{not_addressed_count}}."
+
+Charlie (Senior Dev): _looking concerned_ "Which ones didn't we address?"
+
+Bob (Scrum Master): "We'll discuss that in the retro. Some of them might explain challenges we had this epic."
+
+Elena (Junior Dev): "That's... actually pretty insightful."
+
+Bob (Scrum Master): "That's why we track this stuff. Pattern recognition helps us improve."
+
+
+
+
+
+Bob (Scrum Master): "I don't see a retrospective for Epic {{prev_epic_num}}. Either we skipped it, or this is your first retro."
+
+Alice (Product Owner): "Probably our first one. Good time to start the habit!"
+
+Set {{first_retrospective}} = true
+
+
+
+
+
+Bob (Scrum Master): "This is Epic 1, so naturally there's no previous retro to reference. We're starting fresh!"
+
+Charlie (Senior Dev): "First epic, first retro. Let's make it count."
+
+Set {{first_retrospective}} = true
+
+
+
+
+
+
+Calculate next epic number: {{next_epic_num}} = {{epic_number}} + 1
+
+
+Bob (Scrum Master): "Before we dive into the discussion, let me take a quick look at Epic {{next_epic_num}} to understand what's coming."
+
+Alice (Product Owner): "Good thinking - helps us connect what we learned to what we're about to do."
+
+
+Attempt to load next epic using selective loading strategy:
+
+**Try sharded first (more specific):**
+Check if file exists: {planning_artifacts}/*epic*/epic-{{next_epic_num}}.md
+
+
+ Load {planning_artifacts}/*epic*/epic-{{next_epic_num}}.md
+ Set {{next_epic_source}} = "sharded"
+
+
+**Fallback to whole document:**
+
+Check if file exists: {planning_artifacts}/*epic*.md
+
+
+ Load entire epics document
+ Extract Epic {{next_epic_num}} section
+ Set {{next_epic_source}} = "whole"
+
+
+
+
+ Analyze next epic for:
+ - Epic title and objectives
+ - Planned stories and complexity estimates
+ - Dependencies on Epic {{epic_number}} work
+ - New technical requirements or capabilities needed
+ - Potential risks or unknowns
+ - Business goals and success criteria
+
+Identify dependencies on completed work:
+
+- What components from Epic {{epic_number}} does Epic {{next_epic_num}} rely on?
+- Are all prerequisites complete and stable?
+- Any incomplete work that creates blocking dependencies?
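+
+As a concrete illustration of the selective loading strategy above, a minimal Python sketch; treating "## Epic N:" headings as the section delimiters in a whole epics document is an assumption about the file format, not a guarantee:
+
+```python
+import glob
+import re
+
+def load_next_epic(planning_artifacts: str, next_epic_num: int) -> tuple[str, str] | None:
+    """Return (epic_content, source): sharded file first, whole document as fallback."""
+    sharded = sorted(glob.glob(f"{planning_artifacts}/*epic*/epic-{next_epic_num}.md"))
+    if sharded:
+        with open(sharded[0], encoding="utf-8") as f:
+            return f.read(), "sharded"
+    for whole in sorted(glob.glob(f"{planning_artifacts}/*epic*.md")):
+        with open(whole, encoding="utf-8") as f:
+            text = f.read()
+        section = re.search(
+            rf"^## Epic {next_epic_num}:.*?(?=^## Epic |\Z)",
+            text,
+            re.MULTILINE | re.DOTALL,
+        )
+        if section:
+            return section.group(0), "whole"
+    return None  # next epic is not defined yet
+```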
+
+Note potential gaps or preparation needed:
+
+- Technical setup required (infrastructure, tools, libraries)
+- Knowledge gaps to fill (research, training, spikes)
+- Refactoring needed before starting next epic
+- Documentation or specifications to create
+
+Check for technical prerequisites:
+
+- APIs or integrations that must be ready
+- Data migrations or schema changes needed
+- Testing infrastructure requirements
+- Deployment or environment setup
+
+
+Bob (Scrum Master): "Alright, I've reviewed Epic {{next_epic_num}}: '{{next_epic_title}}'"
+
+Alice (Product Owner): "What are we looking at?"
+
+Bob (Scrum Master): "{{next_epic_story_count}} stories planned, building on the {{dependency_description}} from Epic {{epic_number}}."
+
+Charlie (Senior Dev): "Dependencies concern me. Did we finish everything we need for that?"
+
+Bob (Scrum Master): "Good question - that's exactly what we need to explore in this retro."
+
+
+Set {{next_epic_exists}} = true
+
+
+
+
+Bob (Scrum Master): "Hmm, I don't see Epic {{next_epic_num}} defined yet."
+
+Alice (Product Owner): "We might be at the end of the roadmap, or we haven't planned that far ahead yet."
+
+Bob (Scrum Master): "No problem. We'll still do a thorough retro on Epic {{epic_number}}. The lessons will be valuable whenever we plan the next work."
+
+
+Set {{next_epic_exists}} = false
+
+
+
+
+
+
+Load agent configurations from {agent_manifest}
+Identify which agents participated in Epic {{epic_number}} based on story records
+Ensure key roles present: Product Owner, Scrum Master (facilitating), Devs, Testing/QA, Architect
+
+
+Bob (Scrum Master): "Alright team, everyone's here. Let me set the stage for our retrospective."
+
+═══════════════════════════════════════════════════════════
+🔄 TEAM RETROSPECTIVE - Epic {{epic_number}}: {{epic_title}}
+═══════════════════════════════════════════════════════════
+
+Bob (Scrum Master): "Here's what we accomplished together."
+
+**EPIC {{epic_number}} SUMMARY:**
+
+Delivery Metrics:
+
+- Completed: {{completed_stories}}/{{total_stories}} stories ({{completion_percentage}}%)
+- Velocity: {{actual_points}} story points{{#if planned_points}} (planned: {{planned_points}}){{/if}}
+- Duration: {{actual_sprints}} sprints{{#if planned_sprints}} (planned: {{planned_sprints}}){{/if}}
+- Average velocity: {{points_per_sprint}} points/sprint
+
+Quality and Technical:
+
+- Blockers encountered: {{blocker_count}}
+- Technical debt items: {{debt_count}}
+- Test coverage: {{coverage_info}}
+- Production incidents: {{incident_count}}
+
+Business Outcomes:
+
+- Goals achieved: {{goals_met}}/{{total_goals}}
+- Success criteria: {{criteria_status}}
+- Stakeholder feedback: {{feedback_summary}}
+
+Alice (Product Owner): "Those numbers tell a good story. {{completion_percentage}}% completion is {{#if completion_percentage >= 90}}excellent{{else}}something we should discuss{{/if}}."
+
+Charlie (Senior Dev): "I'm more interested in that technical debt number - {{debt_count}} items is {{#if debt_count > 10}}concerning{{else}}manageable{{/if}}."
+
+Dana (QA Engineer): "{{incident_count}} production incidents - {{#if incident_count == 0}}clean epic!{{else}}we should talk about those{{/if}}."
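+
+As a rough sketch of where the delivery metrics above could come from, the completion figures can be derived directly from sprint-status.yaml (using PyYAML). Point, sprint, and incident figures would come from story records and stakeholder input, so they are left out here:
+
+```python
+import yaml
+
+def epic_delivery_metrics(status_file: str, epic_number: int) -> dict:
+    """Count a single epic's stories by status in sprint-status.yaml."""
+    with open(status_file, encoding="utf-8") as f:
+        status = yaml.safe_load(f)["development_status"]
+    # Story keys look like "1-1-user-authentication"; epic and
+    # retrospective keys start with "epic-" and are excluded by the prefix.
+    prefix = f"{epic_number}-"
+    stories = {k: v for k, v in status.items() if k.startswith(prefix)}
+    done = sum(1 for v in stories.values() if v == "done")
+    total = len(stories)
+    return {
+        "total_stories": total,
+        "done_stories": done,
+        "completion_percentage": round(100 * done / total) if total else 0,
+    }
+```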
+ +{{#if next_epic_exists}} +═══════════════════════════════════════════════════════════ +**NEXT EPIC PREVIEW:** Epic {{next_epic_num}}: {{next_epic_title}} +═══════════════════════════════════════════════════════════ + +Dependencies on Epic {{epic_number}}: +{{list_dependencies}} + +Preparation Needed: +{{list_preparation_gaps}} + +Technical Prerequisites: +{{list_technical_prereqs}} + +Bob (Scrum Master): "And here's what's coming next. Epic {{next_epic_num}} builds on what we just finished." + +Elena (Junior Dev): "Wow, that's a lot of dependencies on our work." + +Charlie (Senior Dev): "Which means we better make sure Epic {{epic_number}} is actually solid before moving on." +{{/if}} + +═══════════════════════════════════════════════════════════ + +Bob (Scrum Master): "Team assembled for this retrospective:" + +{{list_participating_agents}} + +Bob (Scrum Master): "{user_name}, you're joining us as Project Lead. Your perspective is crucial here." + +{user_name} (Project Lead): [Participating in the retrospective] + +Bob (Scrum Master): "Our focus today:" + +1. Learning from Epic {{epic_number}} execution + {{#if next_epic_exists}}2. Preparing for Epic {{next_epic_num}} success{{/if}} + +Bob (Scrum Master): "Ground rules: psychological safety first. No blame, no judgment. We focus on systems and processes, not individuals. Everyone's voice matters. Specific examples are better than generalizations." + +Alice (Product Owner): "And everything shared here stays in this room - unless we decide together to escalate something." + +Bob (Scrum Master): "Exactly. {user_name}, any questions before we dive in?" + + +WAIT for {user_name} to respond or indicate readiness + + + + + + +Bob (Scrum Master): "Let's start with the good stuff. What went well in Epic {{epic_number}}?" + +Bob (Scrum Master): _pauses, creating space_ + +Alice (Product Owner): "I'll start. The user authentication flow we delivered exceeded my expectations. The UX is smooth, and early user feedback has been really positive." + +Charlie (Senior Dev): "I'll add to that - the caching strategy we implemented in Story {{breakthrough_story_num}} was a game-changer. We cut API calls by 60% and it set the pattern for the rest of the epic." + +Dana (QA Engineer): "From my side, testing went smoother than usual. The dev team's documentation was way better this epic - actually usable test plans!" + +Elena (Junior Dev): _smiling_ "That's because Charlie made me document everything after Story 1's code review!" + +Charlie (Senior Dev): _laughing_ "Tough love pays off." + + +Bob (Scrum Master) naturally turns to {user_name} to engage them in the discussion + + +Bob (Scrum Master): "{user_name}, what stood out to you as going well in this epic?" + + +WAIT for {user_name} to respond - this is a KEY USER INTERACTION moment + +After {user_name} responds, have 1-2 team members react to or build on what {user_name} shared + + +Alice (Product Owner): [Responds naturally to what {user_name} said, either agreeing, adding context, or offering a different perspective] + +Charlie (Senior Dev): [Builds on the discussion, perhaps adding technical details or connecting to specific stories] + + +Continue facilitating natural dialogue, periodically bringing {user_name} back into the conversation + +After covering successes, guide the transition to challenges with care + + +Bob (Scrum Master): "Okay, we've celebrated some real wins. Now let's talk about challenges - where did we struggle? What slowed us down?" 
+ +Bob (Scrum Master): _creates safe space with tone and pacing_ + +Elena (Junior Dev): _hesitates_ "Well... I really struggled with the database migrations in Story {{difficult_story_num}}. The documentation wasn't clear, and I had to redo it three times. Lost almost a full sprint on that story alone." + +Charlie (Senior Dev): _defensive_ "Hold on - I wrote those migration docs, and they were perfectly clear. The issue was that the requirements kept changing mid-story!" + +Alice (Product Owner): _frustrated_ "That's not fair, Charlie. We only clarified requirements once, and that was because the technical team didn't ask the right questions during planning!" + +Charlie (Senior Dev): _heat rising_ "We asked plenty of questions! You said the schema was finalized, then two days into development you wanted to add three new fields!" + +Bob (Scrum Master): _intervening calmly_ "Let's take a breath here. This is exactly the kind of thing we need to unpack." + +Bob (Scrum Master): "Elena, you spent almost a full sprint on Story {{difficult_story_num}}. Charlie, you're saying requirements changed. Alice, you feel the right questions weren't asked up front." + +Bob (Scrum Master): "{user_name}, you have visibility across the whole project. What's your take on this situation?" + + +WAIT for {user_name} to respond and help facilitate the conflict resolution + +Use {user_name}'s response to guide the discussion toward systemic understanding rather than blame + + +Bob (Scrum Master): [Synthesizes {user_name}'s input with what the team shared] "So it sounds like the core issue was {{root_cause_based_on_discussion}}, not any individual person's fault." + +Elena (Junior Dev): "That makes sense. If we'd had {{preventive_measure}}, I probably could have avoided those redos." + +Charlie (Senior Dev): _softening_ "Yeah, and I could have been clearer about assumptions in the docs. Sorry for getting defensive, Alice." + +Alice (Product Owner): "I appreciate that. I could've been more proactive about flagging the schema additions earlier, too." + +Bob (Scrum Master): "This is good. We're identifying systemic improvements, not assigning blame." + + +Continue the discussion, weaving in patterns discovered from the deep story analysis (Step 2) + + +Bob (Scrum Master): "Speaking of patterns, I noticed something when reviewing all the story records..." + +Bob (Scrum Master): "{{pattern_1_description}} - this showed up in {{pattern_1_count}} out of {{total_stories}} stories." + +Dana (QA Engineer): "Oh wow, I didn't realize it was that widespread." + +Bob (Scrum Master): "Yeah. And there's more - {{pattern_2_description}} came up in almost every code review." + +Charlie (Senior Dev): "That's... actually embarrassing. We should've caught that pattern earlier." + +Bob (Scrum Master): "No shame, Charlie. Now we know, and we can improve. {user_name}, did you notice these patterns during the epic?" + + +WAIT for {user_name} to share their observations + +Continue the retrospective discussion, creating moments where: + +- Team members ask {user_name} questions directly +- {user_name}'s input shifts the discussion direction +- Disagreements arise naturally and get resolved +- Quieter team members are invited to contribute +- Specific stories are referenced with real examples +- Emotions are authentic (frustration, pride, concern, hope) + + + +Bob (Scrum Master): "Before we move on, I want to circle back to Epic {{prev_epic_num}}'s retrospective." + +Bob (Scrum Master): "We made some commitments in that retro. 
Let's see how we did." + +Bob (Scrum Master): "Action item 1: {{prev_action_1}}. Status: {{prev_action_1_status}}" + +Alice (Product Owner): {{#if prev_action_1_status == "completed"}}"We nailed that one!"{{else}}"We... didn't do that one."{{/if}} + +Charlie (Senior Dev): {{#if prev_action_1_status == "completed"}}"And it helped! I noticed {{evidence_of_impact}}"{{else}}"Yeah, and I think that's why we had {{consequence_of_not_doing_it}} this epic."{{/if}} + +Bob (Scrum Master): "Action item 2: {{prev_action_2}}. Status: {{prev_action_2_status}}" + +Dana (QA Engineer): {{#if prev_action_2_status == "completed"}}"This one made testing so much easier this time."{{else}}"If we'd done this, I think testing would've gone faster."{{/if}} + +Bob (Scrum Master): "{user_name}, looking at what we committed to last time and what we actually did - what's your reaction?" + + +WAIT for {user_name} to respond + +Use the previous retro follow-through as a learning moment about commitment and accountability + + + +Bob (Scrum Master): "Alright, we've covered a lot of ground. Let me summarize what I'm hearing..." + +Bob (Scrum Master): "**Successes:**" +{{list_success_themes}} + +Bob (Scrum Master): "**Challenges:**" +{{list_challenge_themes}} + +Bob (Scrum Master): "**Key Insights:**" +{{list_insight_themes}} + +Bob (Scrum Master): "Does that capture it? Anyone have something important we missed?" + + +Allow team members to add any final thoughts on the epic review +Ensure {user_name} has opportunity to add their perspective + + + + + + + +Bob (Scrum Master): "Normally we'd discuss preparing for the next epic, but since Epic {{next_epic_num}} isn't defined yet, let's skip to action items." + + Skip to Step 8 + + + +Bob (Scrum Master): "Now let's shift gears. Epic {{next_epic_num}} is coming up: '{{next_epic_title}}'" + +Bob (Scrum Master): "The question is: are we ready? What do we need to prepare?" + +Alice (Product Owner): "From my perspective, we need to make sure {{dependency_concern_1}} from Epic {{epic_number}} is solid before we start building on it." + +Charlie (Senior Dev): _concerned_ "I'm worried about {{technical_concern_1}}. We have {{technical_debt_item}} from this epic that'll blow up if we don't address it before Epic {{next_epic_num}}." + +Dana (QA Engineer): "And I need {{testing_infrastructure_need}} in place, or we're going to have the same testing bottleneck we had in Story {{bottleneck_story_num}}." + +Elena (Junior Dev): "I'm less worried about infrastructure and more about knowledge. I don't understand {{knowledge_gap}} well enough to work on Epic {{next_epic_num}}'s stories." + +Bob (Scrum Master): "{user_name}, the team is surfacing some real concerns here. What's your sense of our readiness?" + + +WAIT for {user_name} to share their assessment + +Use {user_name}'s input to guide deeper exploration of preparation needs + + +Alice (Product Owner): [Reacts to what {user_name} said] "I agree with {user_name} about {{point_of_agreement}}, but I'm still worried about {{lingering_concern}}." + +Charlie (Senior Dev): "Here's what I think we need technically before Epic {{next_epic_num}} can start..." + +Charlie (Senior Dev): "1. {{tech_prep_item_1}} - estimated {{hours_1}} hours" +Charlie (Senior Dev): "2. {{tech_prep_item_2}} - estimated {{hours_2}} hours" +Charlie (Senior Dev): "3. {{tech_prep_item_3}} - estimated {{hours_3}} hours" + +Elena (Junior Dev): "That's like {{total_hours}} hours! That's a full sprint of prep work!" + +Charlie (Senior Dev): "Exactly. 
We can't just jump into Epic {{next_epic_num}} on Monday." + +Alice (Product Owner): _frustrated_ "But we have stakeholder pressure to keep shipping features. They're not going to be happy about a 'prep sprint.'" + +Bob (Scrum Master): "Let's think about this differently. What happens if we DON'T do this prep work?" + +Dana (QA Engineer): "We'll hit blockers in the middle of Epic {{next_epic_num}}, velocity will tank, and we'll ship late anyway." + +Charlie (Senior Dev): "Worse - we'll ship something built on top of {{technical_concern_1}}, and it'll be fragile." + +Bob (Scrum Master): "{user_name}, you're balancing stakeholder pressure against technical reality. How do you want to handle this?" + + +WAIT for {user_name} to provide direction on preparation approach + +Create space for debate and disagreement about priorities + + +Alice (Product Owner): [Potentially disagrees with {user_name}'s approach] "I hear what you're saying, {user_name}, but from a business perspective, {{business_concern}}." + +Charlie (Senior Dev): [Potentially supports or challenges Alice's point] "The business perspective is valid, but {{technical_counter_argument}}." + +Bob (Scrum Master): "We have healthy tension here between business needs and technical reality. That's good - it means we're being honest." + +Bob (Scrum Master): "Let's explore a middle ground. Charlie, which of your prep items are absolutely critical vs. nice-to-have?" + +Charlie (Senior Dev): "{{critical_prep_item_1}} and {{critical_prep_item_2}} are non-negotiable. {{nice_to_have_prep_item}} can wait." + +Alice (Product Owner): "And can any of the critical prep happen in parallel with starting Epic {{next_epic_num}}?" + +Charlie (Senior Dev): _thinking_ "Maybe. If we tackle {{first_critical_item}} before the epic starts, we could do {{second_critical_item}} during the first sprint." + +Dana (QA Engineer): "But that means Story 1 of Epic {{next_epic_num}} can't depend on {{second_critical_item}}." + +Alice (Product Owner): _looking at epic plan_ "Actually, Stories 1 and 2 are about {{independent_work}}, so they don't depend on it. We could make that work." + +Bob (Scrum Master): "{user_name}, the team is finding a workable compromise here. Does this approach make sense to you?" + + +WAIT for {user_name} to validate or adjust the preparation strategy + +Continue working through preparation needs across all dimensions: + +- Dependencies on Epic {{epic_number}} work +- Technical setup and infrastructure +- Knowledge gaps and research needs +- Documentation or specification work +- Testing infrastructure +- Refactoring or debt reduction +- External dependencies (APIs, integrations, etc.) + +For each preparation area, facilitate team discussion that: + +- Identifies specific needs with concrete examples +- Estimates effort realistically based on Epic {{epic_number}} experience +- Assigns ownership to specific agents +- Determines criticality and timing +- Surfaces risks of NOT doing the preparation +- Explores parallel work opportunities +- Brings {user_name} in for key decisions + + +Bob (Scrum Master): "I'm hearing a clear picture of what we need before Epic {{next_epic_num}}. Let me summarize..." 
+ +**CRITICAL PREPARATION (Must complete before epic starts):** +{{list_critical_prep_items_with_owners_and_estimates}} + +**PARALLEL PREPARATION (Can happen during early stories):** +{{list_parallel_prep_items_with_owners_and_estimates}} + +**NICE-TO-HAVE PREPARATION (Would help but not blocking):** +{{list_nice_to_have_prep_items}} + +Bob (Scrum Master): "Total critical prep effort: {{critical_hours}} hours ({{critical_days}} days)" + +Alice (Product Owner): "That's manageable. We can communicate that to stakeholders." + +Bob (Scrum Master): "{user_name}, does this preparation plan work for you?" + + +WAIT for {user_name} final validation of preparation plan + + + + + + +Bob (Scrum Master): "Let's capture concrete action items from everything we've discussed." + +Bob (Scrum Master): "I want specific, achievable actions with clear owners. Not vague aspirations." + + +Synthesize themes from Epic {{epic_number}} review discussion into actionable improvements + +Create specific action items with: + +- Clear description of the action +- Assigned owner (specific agent or role) +- Timeline or deadline +- Success criteria (how we'll know it's done) +- Category (process, technical, documentation, team, etc.) + +Ensure action items are SMART: + +- Specific: Clear and unambiguous +- Measurable: Can verify completion +- Achievable: Realistic given constraints +- Relevant: Addresses real issues from retro +- Time-bound: Has clear deadline + + +Bob (Scrum Master): "Based on our discussion, here are the action items I'm proposing..." + +═══════════════════════════════════════════════════════════ +📝 EPIC {{epic_number}} ACTION ITEMS: +═══════════════════════════════════════════════════════════ + +**Process Improvements:** + +1. {{action_item_1}} + Owner: {{agent_1}} + Deadline: {{timeline_1}} + Success criteria: {{criteria_1}} + +2. {{action_item_2}} + Owner: {{agent_2}} + Deadline: {{timeline_2}} + Success criteria: {{criteria_2}} + +Charlie (Senior Dev): "I can own action item 1, but {{timeline_1}} is tight. Can we push it to {{alternative_timeline}}?" + +Bob (Scrum Master): "What do others think? Does that timing still work?" + +Alice (Product Owner): "{{alternative_timeline}} works for me, as long as it's done before Epic {{next_epic_num}} starts." + +Bob (Scrum Master): "Agreed. Updated to {{alternative_timeline}}." + +**Technical Debt:** + +1. {{debt_item_1}} + Owner: {{agent_3}} + Priority: {{priority_1}} + Estimated effort: {{effort_1}} + +2. {{debt_item_2}} + Owner: {{agent_4}} + Priority: {{priority_2}} + Estimated effort: {{effort_2}} + +Dana (QA Engineer): "For debt item 1, can we prioritize that as high? It caused testing issues in three different stories." + +Charlie (Senior Dev): "I marked it medium because {{reasoning}}, but I hear your point." + +Bob (Scrum Master): "{user_name}, this is a priority call. Testing impact vs. {{reasoning}} - how do you want to prioritize it?" + + +WAIT for {user_name} to help resolve priority discussions + + +**Documentation:** +1. {{doc_need_1}} + Owner: {{agent_5}} + Deadline: {{timeline_3}} + +2. {{doc_need_2}} + Owner: {{agent_6}} + Deadline: {{timeline_4}} + +**Team Agreements:** + +- {{agreement_1}} +- {{agreement_2}} +- {{agreement_3}} + +Bob (Scrum Master): "These agreements are how we're committing to work differently going forward." + +Elena (Junior Dev): "I like agreement 2 - that would've saved me on Story {{difficult_story_num}}." 
+ +═══════════════════════════════════════════════════════════ +🚀 EPIC {{next_epic_num}} PREPARATION TASKS: +═══════════════════════════════════════════════════════════ + +**Technical Setup:** +[ ] {{setup_task_1}} +Owner: {{owner_1}} +Estimated: {{est_1}} + +[ ] {{setup_task_2}} +Owner: {{owner_2}} +Estimated: {{est_2}} + +**Knowledge Development:** +[ ] {{research_task_1}} +Owner: {{owner_3}} +Estimated: {{est_3}} + +**Cleanup/Refactoring:** +[ ] {{refactor_task_1}} +Owner: {{owner_4}} +Estimated: {{est_4}} + +**Total Estimated Effort:** {{total_hours}} hours ({{total_days}} days) + +═══════════════════════════════════════════════════════════ +⚠️ CRITICAL PATH: +═══════════════════════════════════════════════════════════ + +**Blockers to Resolve Before Epic {{next_epic_num}}:** + +1. {{critical_item_1}} + Owner: {{critical_owner_1}} + Must complete by: {{critical_deadline_1}} + +2. {{critical_item_2}} + Owner: {{critical_owner_2}} + Must complete by: {{critical_deadline_2}} + + +CRITICAL ANALYSIS - Detect if discoveries require epic updates + +Check if any of the following are true based on retrospective discussion: + +- Architectural assumptions from planning proven wrong during Epic {{epic_number}} +- Major scope changes or descoping occurred that affects next epic +- Technical approach needs fundamental change for Epic {{next_epic_num}} +- Dependencies discovered that Epic {{next_epic_num}} doesn't account for +- User needs significantly different than originally understood +- Performance/scalability concerns that affect Epic {{next_epic_num}} design +- Security or compliance issues discovered that change approach +- Integration assumptions proven incorrect +- Team capacity or skill gaps more severe than planned +- Technical debt level unsustainable without intervention + + + + +═══════════════════════════════════════════════════════════ +🚨 SIGNIFICANT DISCOVERY ALERT 🚨 +═══════════════════════════════════════════════════════════ + +Bob (Scrum Master): "{user_name}, we need to flag something important." + +Bob (Scrum Master): "During Epic {{epic_number}}, the team uncovered findings that may require updating the plan for Epic {{next_epic_num}}." + +**Significant Changes Identified:** + +1. {{significant_change_1}} + Impact: {{impact_description_1}} + +2. {{significant_change_2}} + Impact: {{impact_description_2}} + +{{#if significant_change_3}} 3. {{significant_change_3}} +Impact: {{impact_description_3}} +{{/if}} + +Charlie (Senior Dev): "Yeah, when we discovered {{technical_discovery}}, it fundamentally changed our understanding of {{affected_area}}." + +Alice (Product Owner): "And from a product perspective, {{product_discovery}} means Epic {{next_epic_num}}'s stories are based on wrong assumptions." + +Dana (QA Engineer): "If we start Epic {{next_epic_num}} as-is, we're going to hit walls fast." + +**Impact on Epic {{next_epic_num}}:** + +The current plan for Epic {{next_epic_num}} assumes: + +- {{wrong_assumption_1}} +- {{wrong_assumption_2}} + +But Epic {{epic_number}} revealed: + +- {{actual_reality_1}} +- {{actual_reality_2}} + +This means Epic {{next_epic_num}} likely needs: +{{list_likely_changes_needed}} + +**RECOMMENDED ACTIONS:** + +1. Review and update Epic {{next_epic_num}} definition based on new learnings +2. Update affected stories in Epic {{next_epic_num}} to reflect reality +3. Consider updating architecture or technical specifications if applicable +4. Hold alignment session with Product Owner before starting Epic {{next_epic_num}} + {{#if prd_update_needed}}5. 
Update PRD sections affected by new understanding{{/if}} + +Bob (Scrum Master): "**Epic Update Required**: YES - Schedule epic planning review session" + +Bob (Scrum Master): "{user_name}, this is significant. We need to address this before committing to Epic {{next_epic_num}}'s current plan. How do you want to handle it?" + + +WAIT for {user_name} to decide on how to handle the significant changes + +Add epic review session to critical path if user agrees + + +Alice (Product Owner): "I agree with {user_name}'s approach. Better to adjust the plan now than fail mid-epic." + +Charlie (Senior Dev): "This is why retrospectives matter. We caught this before it became a disaster." + +Bob (Scrum Master): "Adding to critical path: Epic {{next_epic_num}} planning review session before epic kickoff." + + + + + +Bob (Scrum Master): "Good news - nothing from Epic {{epic_number}} fundamentally changes our plan for Epic {{next_epic_num}}. The plan is still sound." + +Alice (Product Owner): "We learned a lot, but the direction is right." + + + + +Bob (Scrum Master): "Let me show you the complete action plan..." + +Bob (Scrum Master): "That's {{total_action_count}} action items, {{prep_task_count}} preparation tasks, and {{critical_count}} critical path items." + +Bob (Scrum Master): "Everyone clear on what they own?" + + +Give each agent with assignments a moment to acknowledge their ownership + +Ensure {user_name} approves the complete action plan + + + + + + +Bob (Scrum Master): "Before we close, I want to do a final readiness check." + +Bob (Scrum Master): "Epic {{epic_number}} is marked complete in sprint-status, but is it REALLY done?" + +Alice (Product Owner): "What do you mean, Bob?" + +Bob (Scrum Master): "I mean truly production-ready, stakeholders happy, no loose ends that'll bite us later." + +Bob (Scrum Master): "{user_name}, let's walk through this together." + + +Explore testing and quality state through natural conversation + + +Bob (Scrum Master): "{user_name}, tell me about the testing for Epic {{epic_number}}. What verification has been done?" + + +WAIT for {user_name} to describe testing status + + +Dana (QA Engineer): [Responds to what {user_name} shared] "I can add to that - {{additional_testing_context}}." + +Dana (QA Engineer): "But honestly, {{testing_concern_if_any}}." + +Bob (Scrum Master): "{user_name}, are you confident Epic {{epic_number}} is production-ready from a quality perspective?" + + +WAIT for {user_name} to assess quality readiness + + + +Bob (Scrum Master): "Okay, let's capture that. What specific testing is still needed?" + +Dana (QA Engineer): "I can handle {{testing_work_needed}}, estimated {{testing_hours}} hours." + +Bob (Scrum Master): "Adding to critical path: Complete {{testing_work_needed}} before Epic {{next_epic_num}}." + +Add testing completion to critical path + + +Explore deployment and release status + + +Bob (Scrum Master): "{user_name}, what's the deployment status for Epic {{epic_number}}? Is it live in production, scheduled for deployment, or still pending?" + + +WAIT for {user_name} to provide deployment status + + + +Charlie (Senior Dev): "If it's not deployed yet, we need to factor that into Epic {{next_epic_num}} timing." + +Bob (Scrum Master): "{user_name}, when is deployment planned? Does that timing work for starting Epic {{next_epic_num}}?" 
+ + +WAIT for {user_name} to clarify deployment timeline + +Add deployment milestone to critical path with agreed timeline + + +Explore stakeholder acceptance + + +Bob (Scrum Master): "{user_name}, have stakeholders seen and accepted the Epic {{epic_number}} deliverables?" + +Alice (Product Owner): "This is important - I've seen 'done' epics get rejected by stakeholders and force rework." + +Bob (Scrum Master): "{user_name}, any feedback from stakeholders still pending?" + + +WAIT for {user_name} to describe stakeholder acceptance status + + + +Alice (Product Owner): "We should get formal acceptance before moving on. Otherwise Epic {{next_epic_num}} might get interrupted by rework." + +Bob (Scrum Master): "{user_name}, how do you want to handle stakeholder acceptance? Should we make it a critical path item?" + + +WAIT for {user_name} decision + +Add stakeholder acceptance to critical path if user agrees + + +Explore technical health and stability + + +Bob (Scrum Master): "{user_name}, this is a gut-check question: How does the codebase feel after Epic {{epic_number}}?" + +Bob (Scrum Master): "Stable and maintainable? Or are there concerns lurking?" + +Charlie (Senior Dev): "Be honest, {user_name}. We've all shipped epics that felt... fragile." + + +WAIT for {user_name} to assess codebase health + + + +Charlie (Senior Dev): "Okay, let's dig into that. What's causing those concerns?" + +Charlie (Senior Dev): [Helps {user_name} articulate technical concerns] + +Bob (Scrum Master): "What would it take to address these concerns and feel confident about stability?" + +Charlie (Senior Dev): "I'd say we need {{stability_work_needed}}, roughly {{stability_hours}} hours." + +Bob (Scrum Master): "{user_name}, is addressing this stability work worth doing before Epic {{next_epic_num}}?" + + +WAIT for {user_name} decision + +Add stability work to preparation sprint if user agrees + + +Explore unresolved blockers + + +Bob (Scrum Master): "{user_name}, are there any unresolved blockers or technical issues from Epic {{epic_number}} that we're carrying forward?" + +Dana (QA Engineer): "Things that might create problems for Epic {{next_epic_num}} if we don't deal with them?" + +Bob (Scrum Master): "Nothing is off limits here. If there's a problem, we need to know." + + +WAIT for {user_name} to surface any blockers + + + +Bob (Scrum Master): "Let's capture those blockers and figure out how they affect Epic {{next_epic_num}}." + +Charlie (Senior Dev): "For {{blocker_1}}, if we leave it unresolved, it'll {{impact_description_1}}." + +Alice (Product Owner): "That sounds critical. We need to address that before moving forward." + +Bob (Scrum Master): "Agreed. Adding to critical path: Resolve {{blocker_1}} before Epic {{next_epic_num}} kickoff." + +Bob (Scrum Master): "Who owns that work?" + + +Assign blocker resolution to appropriate agent +Add to critical path with priority and deadline + + +Synthesize the readiness assessment + + +Bob (Scrum Master): "Okay {user_name}, let me synthesize what we just uncovered..." 
+ +**EPIC {{epic_number}} READINESS ASSESSMENT:** + +Testing & Quality: {{quality_status}} +{{#if quality_concerns}}⚠️ Action needed: {{quality_action_needed}}{{/if}} + +Deployment: {{deployment_status}} +{{#if deployment_pending}}⚠️ Scheduled for: {{deployment_date}}{{/if}} + +Stakeholder Acceptance: {{acceptance_status}} +{{#if acceptance_incomplete}}⚠️ Action needed: {{acceptance_action_needed}}{{/if}} + +Technical Health: {{stability_status}} +{{#if stability_concerns}}⚠️ Action needed: {{stability_action_needed}}{{/if}} + +Unresolved Blockers: {{blocker_status}} +{{#if blockers_exist}}⚠️ Must resolve: {{blocker_list}}{{/if}} + +Bob (Scrum Master): "{user_name}, does this assessment match your understanding?" + + +WAIT for {user_name} to confirm or correct the assessment + + +Bob (Scrum Master): "Based on this assessment, Epic {{epic_number}} is {{#if all_clear}}fully complete and we're clear to proceed{{else}}complete from a story perspective, but we have {{critical_work_count}} critical items before Epic {{next_epic_num}}{{/if}}." + +Alice (Product Owner): "This level of thoroughness is why retrospectives are valuable." + +Charlie (Senior Dev): "Better to catch this now than three stories into the next epic." + + + + + + + +Bob (Scrum Master): "We've covered a lot of ground today. Let me bring this retrospective to a close." + +═══════════════════════════════════════════════════════════ +✅ RETROSPECTIVE COMPLETE +═══════════════════════════════════════════════════════════ + +Bob (Scrum Master): "Epic {{epic_number}}: {{epic_title}} - REVIEWED" + +**Key Takeaways:** + +1. {{key_lesson_1}} +2. {{key_lesson_2}} +3. {{key_lesson_3}} + {{#if key_lesson_4}}4. {{key_lesson_4}}{{/if}} + +Alice (Product Owner): "That first takeaway is huge - {{impact_of_lesson_1}}." + +Charlie (Senior Dev): "And lesson 2 is something we can apply immediately." + +Bob (Scrum Master): "Commitments made today:" + +- Action Items: {{action_count}} +- Preparation Tasks: {{prep_task_count}} +- Critical Path Items: {{critical_count}} + +Dana (QA Engineer): "That's a lot of commitments. We need to actually follow through this time." + +Bob (Scrum Master): "Agreed. Which is why we'll review these action items in our next standup." + +═══════════════════════════════════════════════════════════ +🎯 NEXT STEPS: +═══════════════════════════════════════════════════════════ + +1. Execute Preparation Sprint (Est: {{prep_days}} days) +2. Complete Critical Path items before Epic {{next_epic_num}} +3. Review action items in next standup + {{#if epic_update_needed}}4. Hold Epic {{next_epic_num}} planning review session{{else}}4. Begin Epic {{next_epic_num}} planning when preparation complete{{/if}} + +Elena (Junior Dev): "{{prep_days}} days of prep work is significant, but necessary." + +Alice (Product Owner): "I'll communicate the timeline to stakeholders. They'll understand if we frame it as 'ensuring Epic {{next_epic_num}} success.'" + +═══════════════════════════════════════════════════════════ + +Bob (Scrum Master): "Before we wrap, I want to take a moment to acknowledge the team." + +Bob (Scrum Master): "Epic {{epic_number}} delivered {{completed_stories}} stories with {{velocity_description}} velocity. We overcame {{blocker_count}} blockers. We learned a lot. That's real work by real people." + +Charlie (Senior Dev): "Hear, hear." + +Alice (Product Owner): "I'm proud of what we shipped." + +Dana (QA Engineer): "And I'm excited about Epic {{next_epic_num}} - especially now that we're prepared for it." 
+ +Bob (Scrum Master): "{user_name}, any final thoughts before we close?" + + +WAIT for {user_name} to share final reflections + + +Bob (Scrum Master): [Acknowledges what {user_name} shared] "Thank you for that, {user_name}." + +Bob (Scrum Master): "Alright team - great work today. We learned a lot from Epic {{epic_number}}. Let's use these insights to make Epic {{next_epic_num}} even better." + +Bob (Scrum Master): "See you all when prep work is done. Meeting adjourned!" + +═══════════════════════════════════════════════════════════ + + +Prepare to save retrospective summary document + + + + + +Ensure retrospectives folder exists: {retrospectives_folder} +Create folder if it doesn't exist + +Generate comprehensive retrospective summary document including: + +- Epic summary and metrics +- Team participants +- Successes and strengths identified +- Challenges and growth areas +- Key insights and learnings +- Previous retro follow-through analysis (if applicable) +- Next epic preview and dependencies +- Action items with owners and timelines +- Preparation tasks for next epic +- Critical path items +- Significant discoveries and epic update recommendations (if any) +- Readiness assessment +- Commitments and next steps + +Format retrospective document as readable markdown with clear sections +Set filename: {retrospectives_folder}/epic-{{epic_number}}-retro-{date}.md +Save retrospective document + + +✅ Retrospective document saved: {retrospectives_folder}/epic-{{epic_number}}-retro-{date}.md + + +Update {sprint_status_file} to mark retrospective as completed + +Load the FULL file: {sprint_status_file} +Find development_status key "epic-{{epic_number}}-retrospective" +Verify current status (typically "optional" or "pending") +Update development_status["epic-{{epic_number}}-retrospective"] = "done" +Save file, preserving ALL comments and structure including STATUS DEFINITIONS + + + +✅ Retrospective marked as completed in {sprint_status_file} + +Retrospective key: epic-{{epic_number}}-retrospective +Status: {{previous_status}} → done + + + + + +⚠️ Could not update retrospective status: epic-{{epic_number}}-retrospective not found in {sprint_status_file} + +Retrospective document was saved successfully, but {sprint_status_file} may need manual update. + + + + + + + + +**✅ Retrospective Complete, {user_name}!** + +**Epic Review:** + +- Epic {{epic_number}}: {{epic_title}} reviewed +- Retrospective Status: completed +- Retrospective saved: {retrospectives_folder}/epic-{{epic_number}}-retro-{date}.md + +**Commitments Made:** + +- Action Items: {{action_count}} +- Preparation Tasks: {{prep_task_count}} +- Critical Path Items: {{critical_count}} + +**Next Steps:** + +1. **Review retrospective summary**: {retrospectives_folder}/epic-{{epic_number}}-retro-{date}.md + +2. **Execute preparation sprint** (Est: {{prep_days}} days) + - Complete {{critical_count}} critical path items + - Execute {{prep_task_count}} preparation tasks + - Verify all action items are in progress + +3. **Review action items in next standup** + - Ensure ownership is clear + - Track progress on commitments + - Adjust timelines if needed + +{{#if epic_update_needed}} 4. **IMPORTANT: Schedule Epic {{next_epic_num}} planning review session** + +- Significant discoveries from Epic {{epic_number}} require epic updates +- Review and update affected stories +- Align team on revised approach +- Do NOT start Epic {{next_epic_num}} until review is complete + {{else}} + +4. 
**Begin Epic {{next_epic_num}} when ready**
+ - Start creating stories with SM agent's `create-story`
+ - Epic will be marked as `in-progress` automatically when first story is created
+ - Ensure all critical path items are done first
+ {{/if}}
+
+**Team Performance:**
+Epic {{epic_number}} delivered {{completed_stories}} stories with {{velocity_summary}}. The retrospective surfaced {{insight_count}} key insights and {{significant_discovery_count}} significant discoveries. The team is well-positioned for Epic {{next_epic_num}} success.
+
+{{#if significant_discovery_count > 0}}
+⚠️ **REMINDER**: Epic update required before starting Epic {{next_epic_num}}
+{{/if}}
+
+---
+
+Bob (Scrum Master): "Great session today, {user_name}. The team did excellent work."
+
+Alice (Product Owner): "See you at epic planning!"
+
+Charlie (Senior Dev): "Time to knock out that prep work."
+
+
+
+
+
+
+
+
+PARTY MODE REQUIRED: All agent dialogue uses "Name (Role): dialogue" format
+Scrum Master maintains psychological safety throughout - no blame or judgment
+Focus on systems and processes, not individual performance
+Create authentic team dynamics: disagreements, diverse perspectives, emotions
+User ({user_name}) is active participant, not passive observer
+Encourage specific examples over general statements
+Balance celebration of wins with honest assessment of challenges
+Ensure every voice is heard - all agents contribute
+Action items must be specific, achievable, and owned
+Forward-looking mindset - how do we improve for next epic?
+Intent-based facilitation, not scripted phrases
+Deep story analysis provides rich material for discussion
+Previous retro integration creates accountability and continuity
+Significant change detection prevents epic misalignment
+Critical verification prevents starting next epic prematurely
+Document everything - retrospective insights are valuable for future reference
+Two-part structure ensures both reflection AND preparation

diff --git a/src/bmm/workflows/4-implementation/retrospective/workflow.yaml b/src/bmm/workflows/4-implementation/retrospective/workflow.yaml
new file mode 100644
index 00000000..80d934b2
--- /dev/null
+++ b/src/bmm/workflows/4-implementation/retrospective/workflow.yaml
@@ -0,0 +1,58 @@
+# Retrospective - Epic Completion Review Workflow
+name: "retrospective"
+description: "Run after epic completion to review overall success, extract lessons learned, and explore if new information emerged that might impact the next epic"
+author: "BMad"
+
+config_source: "{project-root}/_bmad/bmm/config.yaml"
+output_folder: "{config_source}:implementation_artifacts"
+user_name: "{config_source}:user_name"
+communication_language: "{config_source}:communication_language"
+user_skill_level: "{config_source}:user_skill_level"
+document_output_language: "{config_source}:document_output_language"
+date: system-generated
+planning_artifacts: "{config_source}:planning_artifacts"
+implementation_artifacts: "{config_source}:implementation_artifacts"
+
+installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/retrospective"
+template: false
+instructions: "{installed_path}/instructions.md"
+
+required_inputs:
+  - agent_manifest: "{project-root}/_bmad/_config/agent-manifest.csv"
+
+# Smart input file references - handles both whole docs and sharded docs
+# Priority: Whole document first, then sharded version
+# Strategy: SELECTIVE LOAD - only load the completed epic and relevant retrospectives
+input_file_patterns:
+  epics:
+    description: "The completed epic for 
retrospective" + whole: "{planning_artifacts}/*epic*.md" + sharded_index: "{planning_artifacts}/*epic*/index.md" + sharded_single: "{planning_artifacts}/*epic*/epic-{{epic_num}}.md" + load_strategy: "SELECTIVE_LOAD" + previous_retrospective: + description: "Previous epic's retrospective (optional)" + pattern: "{implementation_artifacts}/**/epic-{{prev_epic_num}}-retro-*.md" + load_strategy: "SELECTIVE_LOAD" + architecture: + description: "System architecture for context" + whole: "{planning_artifacts}/*architecture*.md" + sharded: "{planning_artifacts}/*architecture*/*.md" + load_strategy: "FULL_LOAD" + prd: + description: "Product requirements for context" + whole: "{planning_artifacts}/*prd*.md" + sharded: "{planning_artifacts}/*prd*/*.md" + load_strategy: "FULL_LOAD" + document_project: + description: "Brownfield project documentation (optional)" + sharded: "{planning_artifacts}/*.md" + load_strategy: "INDEX_GUIDED" + +# Required files +sprint_status_file: "{implementation_artifacts}/sprint-status.yaml" +story_directory: "{implementation_artifacts}" +retrospectives_folder: "{implementation_artifacts}" + +standalone: true +web_bundle: false diff --git a/src/bmm/workflows/4-implementation/sprint-planning/checklist.md b/src/bmm/workflows/4-implementation/sprint-planning/checklist.md new file mode 100644 index 00000000..7c20b1f3 --- /dev/null +++ b/src/bmm/workflows/4-implementation/sprint-planning/checklist.md @@ -0,0 +1,33 @@ +# Sprint Planning Validation Checklist + +## Core Validation + +### Complete Coverage Check + +- [ ] Every epic found in epic\*.md files appears in sprint-status.yaml +- [ ] Every story found in epic\*.md files appears in sprint-status.yaml +- [ ] Every epic has a corresponding retrospective entry +- [ ] No items in sprint-status.yaml that don't exist in epic files + +### Parsing Verification + +Compare epic files against generated sprint-status.yaml: + +``` +Epic Files Contains: Sprint Status Contains: +✓ Epic 1 ✓ epic-1: [status] + ✓ Story 1.1: User Auth ✓ 1-1-user-auth: [status] + ✓ Story 1.2: Account Mgmt ✓ 1-2-account-mgmt: [status] + ✓ Story 1.3: Plant Naming ✓ 1-3-plant-naming: [status] + ✓ epic-1-retrospective: [status] +✓ Epic 2 ✓ epic-2: [status] + ✓ Story 2.1: Personality Model ✓ 2-1-personality-model: [status] + ✓ Story 2.2: Chat Interface ✓ 2-2-chat-interface: [status] + ✓ epic-2-retrospective: [status] +``` + +### Final Check + +- [ ] Total count of epics matches +- [ ] Total count of stories matches +- [ ] All items are in the expected order (epic, stories, retrospective) diff --git a/src/bmm/workflows/4-implementation/sprint-planning/instructions.md b/src/bmm/workflows/4-implementation/sprint-planning/instructions.md new file mode 100644 index 00000000..c4f4bd42 --- /dev/null +++ b/src/bmm/workflows/4-implementation/sprint-planning/instructions.md @@ -0,0 +1,225 @@ +# Sprint Planning - Sprint Status Generator + +The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml +You MUST have already loaded and processed: {project-root}/_bmad/bmm/workflows/4-implementation/sprint-planning/workflow.yaml + +## 📚 Document Discovery - Full Epic Loading + +**Strategy**: Sprint planning needs ALL epics and stories to build complete status tracking. + +**Epic Discovery Process:** + +1. **Search for whole document first** - Look for `epics.md`, `bmm-epics.md`, or any `*epic*.md` file +2. **Check for sharded version** - If whole document not found, look for `epics/index.md` +3. 
**If sharded version found**: + - Read `index.md` to understand the document structure + - Read ALL epic section files listed in the index (e.g., `epic-1.md`, `epic-2.md`, etc.) + - Process all epics and their stories from the combined content + - This ensures complete sprint status coverage +4. **Priority**: If both whole and sharded versions exist, use the whole document + +**Fuzzy matching**: Be flexible with document names - users may use variations like `epics.md`, `bmm-epics.md`, `user-stories.md`, etc. + + + + +Communicate in {communication_language} with {user_name} +Look for all files matching `{epics_pattern}` in {epics_location} +Could be a single `epics.md` file or multiple `epic-1.md`, `epic-2.md` files + +For each epic file found, extract: + +- Epic numbers from headers like `## Epic 1:` or `## Epic 2:` +- Story IDs and titles from patterns like `### Story 1.1: User Authentication` +- Convert story format from `Epic.Story: Title` to kebab-case key: `epic-story-title` + +**Story ID Conversion Rules:** + +- Original: `### Story 1.1: User Authentication` +- Replace period with dash: `1-1` +- Convert title to kebab-case: `user-authentication` +- Final key: `1-1-user-authentication` + +Build complete inventory of all epics and stories from all epic files + + + + + After discovery, these content variables are available: {epics_content} (all epics loaded - uses FULL_LOAD strategy) + + + +For each epic found, create entries in this order: + +1. **Epic entry** - Key: `epic-{num}`, Default status: `backlog` +2. **Story entries** - Key: `{epic}-{story}-{title}`, Default status: `backlog` +3. **Retrospective entry** - Key: `epic-{num}-retrospective`, Default status: `optional` + +**Example structure:** + +```yaml +development_status: + epic-1: backlog + 1-1-user-authentication: backlog + 1-2-account-management: backlog + epic-1-retrospective: optional +``` + + + + +For each story, detect current status by checking files: + +**Story file detection:** + +- Check: `{story_location_absolute}/{story-key}.md` (e.g., `stories/1-1-user-authentication.md`) +- If exists → upgrade status to at least `ready-for-dev` + +**Preservation rule:** + +- If existing `{status_file}` exists and has more advanced status, preserve it +- Never downgrade status (e.g., don't change `done` to `ready-for-dev`) + +**Status Flow Reference:** + +- Epic: `backlog` → `in-progress` → `done` +- Story: `backlog` → `ready-for-dev` → `in-progress` → `review` → `done` +- Retrospective: `optional` ↔ `done` + + + +Create or update {status_file} with: + +**File Structure:** + +```yaml +# generated: {date} +# project: {project_name} +# project_key: {project_key} +# tracking_system: {tracking_system} +# story_location: {story_location} + +# STATUS DEFINITIONS: +# ================== +# Epic Status: +# - backlog: Epic not yet started +# - in-progress: Epic actively being worked on +# - done: All stories in epic completed +# +# Epic Status Transitions: +# - backlog → in-progress: Automatically when first story is created (via create-story) +# - in-progress → done: Manually when all stories reach 'done' status +# +# Story Status: +# - backlog: Story only exists in epic file +# - ready-for-dev: Story file created in stories folder +# - in-progress: Developer actively working on implementation +# - review: Ready for code review (via Dev's code-review workflow) +# - done: Story completed +# +# Retrospective Status: +# - optional: Can be completed but not required +# - done: Retrospective has been completed +# +# WORKFLOW NOTES: +# 
=============== +# - Epic transitions to 'in-progress' automatically when first story is created +# - Stories can be worked in parallel if team capacity allows +# - SM typically creates next story after previous one is 'done' to incorporate learnings +# - Dev moves story to 'review', then runs code-review (fresh context, different LLM recommended) + +generated: { date } +project: { project_name } +project_key: { project_key } +tracking_system: { tracking_system } +story_location: { story_location } + +development_status: + # All epics, stories, and retrospectives in order +``` + +Write the complete sprint status YAML to {status_file} +CRITICAL: Metadata appears TWICE - once as comments (#) for documentation, once as YAML key:value fields for parsing +Ensure all items are ordered: epic, its stories, its retrospective, next epic... + + + +Perform validation checks: + +- [ ] Every epic in epic files appears in {status_file} +- [ ] Every story in epic files appears in {status_file} +- [ ] Every epic has a corresponding retrospective entry +- [ ] No items in {status_file} that don't exist in epic files +- [ ] All status values are legal (match state machine definitions) +- [ ] File is valid YAML syntax + +Count totals: + +- Total epics: {{epic_count}} +- Total stories: {{story_count}} +- Epics in-progress: {{in_progress_count}} +- Stories done: {{done_count}} + +Display completion summary to {user_name} in {communication_language}: + +**Sprint Status Generated Successfully** + +- **File Location:** {status_file} +- **Total Epics:** {{epic_count}} +- **Total Stories:** {{story_count}} +- **Epics In Progress:** {{epics_in_progress_count}} +- **Stories Completed:** {{done_count}} + +**Next Steps:** + +1. Review the generated {status_file} +2. Use this file to track development progress +3. Agents will update statuses as they work +4. Re-run this workflow to refresh auto-detected statuses + + + + + +## Additional Documentation + +### Status State Machine + +**Epic Status Flow:** + +``` +backlog → in-progress → done +``` + +- **backlog**: Epic not yet started +- **in-progress**: Epic actively being worked on (stories being created/implemented) +- **done**: All stories in epic completed + +**Story Status Flow:** + +``` +backlog → ready-for-dev → in-progress → review → done +``` + +- **backlog**: Story only exists in epic file +- **ready-for-dev**: Story file created (e.g., `stories/1-3-plant-naming.md`) +- **in-progress**: Developer actively working +- **review**: Ready for code review (via Dev's code-review workflow) +- **done**: Completed + +**Retrospective Status:** + +``` +optional ↔ done +``` + +- **optional**: Ready to be conducted but not required +- **done**: Finished + +### Guidelines + +1. **Epic Activation**: Mark epic as `in-progress` when starting work on its first story +2. **Sequential Default**: Stories are typically worked in order, but parallel work is supported +3. **Parallel Work Supported**: Multiple stories can be `in-progress` if team capacity allows +4. **Review Before Done**: Stories should pass through `review` before `done` +5. 
**Learning Transfer**: SM typically creates next story after previous one is `done` to incorporate learnings diff --git a/src/bmm/workflows/4-implementation/sprint-planning/sprint-status-template.yaml b/src/bmm/workflows/4-implementation/sprint-planning/sprint-status-template.yaml new file mode 100644 index 00000000..fd93e3b3 --- /dev/null +++ b/src/bmm/workflows/4-implementation/sprint-planning/sprint-status-template.yaml @@ -0,0 +1,55 @@ +# Sprint Status Template +# This is an EXAMPLE showing the expected format +# The actual file will be generated with all epics/stories from your epic files + +# generated: {date} +# project: {project_name} +# project_key: {project_key} +# tracking_system: {tracking_system} +# story_location: {story_location} + +# STATUS DEFINITIONS: +# ================== +# Epic Status: +# - backlog: Epic not yet started +# - in-progress: Epic actively being worked on +# - done: All stories in epic completed +# +# Story Status: +# - backlog: Story only exists in epic file +# - ready-for-dev: Story file created, ready for development +# - in-progress: Developer actively working on implementation +# - review: Implementation complete, ready for review +# - done: Story completed +# +# Retrospective Status: +# - optional: Can be completed but not required +# - done: Retrospective has been completed +# +# WORKFLOW NOTES: +# =============== +# - Mark epic as 'in-progress' when starting work on its first story +# - SM typically creates next story ONLY after previous one is 'done' to incorporate learnings +# - Dev moves story to 'review', then Dev runs code-review (fresh context, ideally different LLM) + +# EXAMPLE STRUCTURE (your actual epics/stories will replace these): + +generated: 05-06-2-2025 21:30 +project: My Awesome Project +project_key: jira-1234 +tracking_system: file-system +story_location: "{story_location}" + +development_status: + epic-1: backlog + 1-1-user-authentication: done + 1-2-account-management: ready-for-dev + 1-3-plant-data-model: backlog + 1-4-add-plant-manual: backlog + epic-1-retrospective: optional + + epic-2: backlog + 2-1-personality-system: backlog + 2-2-chat-interface: backlog + 2-3-llm-integration: backlog + epic-2-retrospective: optional diff --git a/src/bmm/workflows/4-implementation/sprint-planning/workflow.yaml b/src/bmm/workflows/4-implementation/sprint-planning/workflow.yaml new file mode 100644 index 00000000..50998f0a --- /dev/null +++ b/src/bmm/workflows/4-implementation/sprint-planning/workflow.yaml @@ -0,0 +1,54 @@ +name: sprint-planning +description: "Generate and manage the sprint status tracking file for Phase 4 implementation, extracting all epics and stories from epic files and tracking their status through the development lifecycle" +author: "BMad" + +# Critical variables from config +config_source: "{project-root}/_bmad/bmm/config.yaml" +user_name: "{config_source}:user_name" +communication_language: "{config_source}:communication_language" +date: system-generated +implementation_artifacts: "{config_source}:implementation_artifacts" +planning_artifacts: "{config_source}:planning_artifacts" +output_folder: "{implementation_artifacts}" + +# Workflow components +installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/sprint-planning" +instructions: "{installed_path}/instructions.md" +template: "{installed_path}/sprint-status-template.yaml" +validation: "{installed_path}/checklist.md" + +# Variables and inputs +variables: + # Project context + project_context: "**/project-context.md" + # Project identification + 
project_name: "{config_source}:project_name" + + # Tracking system configuration + tracking_system: "file-system" # Options: file-system, Future will support other options from config of mcp such as jira, linear, trello + story_location: "{config_source}:implementation_artifacts" # Relative path for file-system, Future will support URL for Jira/Linear/Trello + story_location_absolute: "{config_source}:implementation_artifacts" # Absolute path for file operations + + # Source files (file-system only) + epics_location: "{planning_artifacts}" # Directory containing epic*.md files + epics_pattern: "epic*.md" # Pattern to find epic files + + # Output configuration + status_file: "{implementation_artifacts}/sprint-status.yaml" + +# Smart input file references - handles both whole docs and sharded docs +# Priority: Whole document first, then sharded version +# Strategy: FULL LOAD - sprint planning needs ALL epics to build complete status +input_file_patterns: + epics: + description: "All epics with user stories" + whole: "{output_folder}/*epic*.md" + sharded: "{output_folder}/*epic*/*.md" + load_strategy: "FULL_LOAD" + +# Output configuration +default_output_file: "{status_file}" + +standalone: true + +web_bundle: false diff --git a/src/bmm/workflows/4-implementation/sprint-status/instructions.md b/src/bmm/workflows/4-implementation/sprint-status/instructions.md new file mode 100644 index 00000000..c058644a --- /dev/null +++ b/src/bmm/workflows/4-implementation/sprint-status/instructions.md @@ -0,0 +1,229 @@ +# Sprint Status - Multi-Mode Service + +The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml +You MUST have already loaded and processed: {project-root}/_bmad/bmm/workflows/4-implementation/sprint-status/workflow.yaml +Modes: interactive (default), validate, data +⚠️ ABSOLUTELY NO TIME ESTIMATES. Do NOT mention hours, days, weeks, or timelines. + + + + + Set mode = {{mode}} if provided by caller; otherwise mode = "interactive" + + + Jump to Step 20 + + + + Jump to Step 30 + + + + Continue to Step 1 + + + + + Try {sprint_status_file} + + ❌ sprint-status.yaml not found. +Run `/bmad:bmm:workflows:sprint-planning` to generate it, then rerun sprint-status. + Exit workflow + + Continue to Step 2 + + + + Read the FULL file: {sprint_status_file} + Parse fields: generated, project, project_key, tracking_system, story_location + Parse development_status map. Classify keys: + - Epics: keys starting with "epic-" (and not ending with "-retrospective") + - Retrospectives: keys ending with "-retrospective" + - Stories: everything else (e.g., 1-2-login-form) + Map legacy story status "drafted" → "ready-for-dev" + Count story statuses: backlog, ready-for-dev, in-progress, review, done + Map legacy epic status "contexted" → "in-progress" + Count epic statuses: backlog, in-progress, done + Count retrospective statuses: optional, done + +Validate all statuses against known values: + +- Valid story statuses: backlog, ready-for-dev, in-progress, review, done, drafted (legacy) +- Valid epic statuses: backlog, in-progress, done, contexted (legacy) +- Valid retrospective statuses: optional, done + + + +⚠️ **Unknown status detected:** +{{#each invalid_entries}} + +- `{{key}}`: "{{status}}" (not recognized) + {{/each}} + +**Valid statuses:** + +- Stories: backlog, ready-for-dev, in-progress, review, done +- Epics: backlog, in-progress, done +- Retrospectives: optional, done + + How should these be corrected? + {{#each invalid_entries}} + {{@index}}. 
{{key}}: "{{status}}" → [select valid status] + {{/each}} + +Enter corrections (e.g., "1=in-progress, 2=backlog") or "skip" to continue without fixing: + +Update sprint-status.yaml with corrected values +Re-parse the file with corrected statuses + + + +Detect risks: + +- IF any story has status "review": suggest `/bmad:bmm:workflows:code-review` +- IF any story has status "in-progress" AND no stories have status "ready-for-dev": recommend staying focused on active story +- IF all epics have status "backlog" AND no stories have status "ready-for-dev": prompt `/bmad:bmm:workflows:create-story` +- IF `generated` timestamp is more than 7 days old: warn "sprint-status.yaml may be stale" +- IF any story key doesn't match an epic pattern (e.g., story "5-1-..." but no "epic-5"): warn "orphaned story detected" +- IF any epic has status in-progress but has no associated stories: warn "in-progress epic has no stories" + + + + Pick the next recommended workflow using priority: + When selecting "first" story: sort by epic number, then story number (e.g., 1-1 before 1-2 before 2-1) + 1. If any story status == in-progress → recommend `dev-story` for the first in-progress story + 2. Else if any story status == review → recommend `code-review` for the first review story + 3. Else if any story status == ready-for-dev → recommend `dev-story` + 4. Else if any story status == backlog → recommend `create-story` + 5. Else if any retrospective status == optional → recommend `retrospective` + 6. Else → All implementation items done; congratulate the user - you both did amazing work together! + Store selected recommendation as: next_story_id, next_workflow_id, next_agent (SM/DEV as appropriate) + + + + +## 📊 Sprint Status + +- Project: {{project}} ({{project_key}}) +- Tracking: {{tracking_system}} +- Status file: {sprint_status_file} + +**Stories:** backlog {{count_backlog}}, ready-for-dev {{count_ready}}, in-progress {{count_in_progress}}, review {{count_review}}, done {{count_done}} + +**Epics:** backlog {{epic_backlog}}, in-progress {{epic_in_progress}}, done {{epic_done}} + +**Next Recommendation:** /bmad:bmm:workflows:{{next_workflow_id}} ({{next_story_id}}) + +{{#if risks}} +**Risks:** +{{#each risks}} + +- {{this}} + {{/each}} + {{/if}} + + + + + + Pick an option: +1) Run recommended workflow now +2) Show all stories grouped by status +3) Show raw sprint-status.yaml +4) Exit +Choice: + + + Run `/bmad:bmm:workflows:{{next_workflow_id}}`. +If the command targets a story, set `story_key={{next_story_id}}` when prompted. 
+ + + + +### Stories by Status +- In Progress: {{stories_in_progress}} +- Review: {{stories_in_review}} +- Ready for Dev: {{stories_ready_for_dev}} +- Backlog: {{stories_backlog}} +- Done: {{stories_done}} + + + + + Display the full contents of {sprint_status_file} + + + + Exit workflow + + + + + + + + + Load and parse {sprint_status_file} same as Step 2 + Compute recommendation same as Step 3 + next_workflow_id = {{next_workflow_id}} + next_story_id = {{next_story_id}} + count_backlog = {{count_backlog}} + count_ready = {{count_ready}} + count_in_progress = {{count_in_progress}} + count_review = {{count_review}} + count_done = {{count_done}} + epic_backlog = {{epic_backlog}} + epic_in_progress = {{epic_in_progress}} + epic_done = {{epic_done}} + risks = {{risks}} + Return to caller + + + + + + + + Check that {sprint_status_file} exists + + is_valid = false + error = "sprint-status.yaml missing" + suggestion = "Run sprint-planning to create it" + Return + + +Read and parse {sprint_status_file} + +Validate required metadata fields exist: generated, project, project_key, tracking_system, story_location + +is_valid = false +error = "Missing required field(s): {{missing_fields}}" +suggestion = "Re-run sprint-planning or add missing fields manually" +Return + + +Verify development_status section exists with at least one entry + +is_valid = false +error = "development_status missing or empty" +suggestion = "Re-run sprint-planning or repair the file manually" +Return + + +Validate all status values against known valid statuses: + +- Stories: backlog, ready-for-dev, in-progress, review, done (legacy: drafted) +- Epics: backlog, in-progress, done (legacy: contexted) +- Retrospectives: optional, done + + is_valid = false + error = "Invalid status values: {{invalid_entries}}" + suggestion = "Fix invalid statuses in sprint-status.yaml" + Return + + +is_valid = true +message = "sprint-status.yaml valid: metadata complete, all statuses recognized" + + + diff --git a/src/bmm/workflows/4-implementation/sprint-status/workflow.yaml b/src/bmm/workflows/4-implementation/sprint-status/workflow.yaml new file mode 100644 index 00000000..6f10a9a6 --- /dev/null +++ b/src/bmm/workflows/4-implementation/sprint-status/workflow.yaml @@ -0,0 +1,36 @@ +# Sprint Status - Implementation Tracker +name: sprint-status +description: "Summarize sprint-status.yaml, surface risks, and route to the right implementation workflow." 
+author: "BMad" + +# Critical variables from config +config_source: "{project-root}/_bmad/bmm/config.yaml" +output_folder: "{config_source}:output_folder" +user_name: "{config_source}:user_name" +communication_language: "{config_source}:communication_language" +document_output_language: "{config_source}:document_output_language" +date: system-generated +implementation_artifacts: "{config_source}:implementation_artifacts" +planning_artifacts: "{config_source}:planning_artifacts" + +# Workflow components +installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/sprint-status" +instructions: "{installed_path}/instructions.md" + +# Inputs +variables: + sprint_status_file: "{implementation_artifacts}/sprint-status.yaml" + tracking_system: "file-system" + +# Smart input file references +input_file_patterns: + sprint_status: + description: "Sprint status file generated by sprint-planning" + whole: "{implementation_artifacts}/sprint-status.yaml" + load_strategy: "FULL_LOAD" + +# Standalone so IDE commands get generated +standalone: true + +# No web bundle needed +web_bundle: false diff --git a/src/bmm/workflows/bmad-quick-flow/quick-dev/steps/step-01-mode-detection.md b/src/bmm/workflows/bmad-quick-flow/quick-dev/steps/step-01-mode-detection.md new file mode 100644 index 00000000..4ea630b1 --- /dev/null +++ b/src/bmm/workflows/bmad-quick-flow/quick-dev/steps/step-01-mode-detection.md @@ -0,0 +1,176 @@ +--- +name: 'step-01-mode-detection' +description: 'Determine execution mode (tech-spec vs direct), handle escalation, set state variables' + +workflow_path: '{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-dev' +thisStepFile: './step-01-mode-detection.md' +nextStepFile_modeA: './step-03-execute.md' +nextStepFile_modeB: './step-02-context-gathering.md' +--- + +# Step 1: Mode Detection + +**Goal:** Determine execution mode, capture baseline, handle escalation if needed. + +--- + +## STATE VARIABLES (capture now, persist throughout) + +These variables MUST be set in this step and available to all subsequent steps: + +- `{baseline_commit}` - Git HEAD at workflow start (or "NO_GIT" if not a git repo) +- `{execution_mode}` - "tech-spec" or "direct" +- `{tech_spec_path}` - Path to tech-spec file (if Mode A) + +--- + +## EXECUTION SEQUENCE + +### 1. Capture Baseline + +First, check if the project uses Git version control: + +**If Git repo exists** (`.git` directory present or `git rev-parse --is-inside-work-tree` succeeds): + +- Run `git rev-parse HEAD` and store result as `{baseline_commit}` + +**If NOT a Git repo:** + +- Set `{baseline_commit}` = "NO_GIT" + +### 2. Load Project Context + +Check if `{project_context}` exists (`**/project-context.md`). If found, load it as a foundational reference for ALL implementation decisions. + +### 3. 
Parse User Input + +Analyze the user's input to determine mode: + +**Mode A: Tech-Spec** + +- User provided a path to a tech-spec file (e.g., `quick-dev tech-spec-auth.md`) +- Load the spec, extract tasks/context/AC +- Set `{execution_mode}` = "tech-spec" +- Set `{tech_spec_path}` = provided path +- **NEXT:** Read fully and follow: `step-03-execute.md` + +**Mode B: Direct Instructions** + +- User provided task description directly (e.g., `refactor src/foo.ts...`) +- Set `{execution_mode}` = "direct" +- **NEXT:** Evaluate escalation threshold, then proceed + +--- + +## ESCALATION THRESHOLD (Mode B only) + +Evaluate user input with minimal token usage (no file loading): + +**Triggers escalation (if 2+ signals present):** + +- Multiple components mentioned (dashboard + api + database) +- System-level language (platform, integration, architecture) +- Uncertainty about approach ("how should I", "best way to") +- Multi-layer scope (UI + backend + data together) +- Extended timeframe ("this week", "over the next few days") + +**Reduces signal:** + +- Simplicity markers ("just", "quickly", "fix", "bug", "typo", "simple") +- Single file/component focus +- Confident, specific request + +Use holistic judgment, not mechanical keyword matching. + +--- + +## ESCALATION HANDLING + +### No Escalation (simple request) + +Display: "**Select:** [P] Plan first (tech-spec) [E] Execute directly" + +#### Menu Handling Logic: + +- IF P: Direct user to `{quick_spec_workflow}`. **EXIT Quick Dev.** +- IF E: Ask for any additional guidance, then **NEXT:** Read fully and follow: `step-02-context-gathering.md` + +#### EXECUTION RULES: + +- ALWAYS halt and wait for user input after presenting menu +- ONLY proceed when user makes a selection + +--- + +### Escalation Triggered - Level 0-2 + +Present: "This looks like a focused feature with multiple components." + +Display: + +**[P] Plan first (tech-spec)** (recommended) +**[W] Seems bigger than quick-dev** - Recommend the Full BMad Flow PRD Process +**[E] Execute directly** + +#### Menu Handling Logic: + +- IF P: Direct to `{quick_spec_workflow}`. **EXIT Quick Dev.** +- IF W: Direct user to run the PRD workflow instead. **EXIT Quick Dev.** +- IF E: Ask for guidance, then **NEXT:** Read fully and follow: `step-02-context-gathering.md` + +#### EXECUTION RULES: + +- ALWAYS halt and wait for user input after presenting menu +- ONLY proceed when user makes a selection + +--- + +### Escalation Triggered - Level 3+ + +Present: "This sounds like platform/system work." + +Display: + +**[W] Start BMad Method** (recommended) +**[P] Plan first (tech-spec)** (lighter planning) +**[E] Execute directly** - feeling lucky + +#### Menu Handling Logic: + +- IF P: Direct to `{quick_spec_workflow}`. **EXIT Quick Dev.** +- IF W: Direct user to run the PRD workflow instead. **EXIT Quick Dev.** +- IF E: Ask for guidance, then **NEXT:** Read fully and follow: `step-02-context-gathering.md` + +#### EXECUTION RULES: + +- ALWAYS halt and wait for user input after presenting menu +- ONLY proceed when user makes a selection + +--- + +## NEXT STEP DIRECTIVE + +**CRITICAL:** When this step completes, explicitly state which step to load: + +- Mode A (tech-spec): "**NEXT:** read fully and follow: `step-03-execute.md`" +- Mode B (direct, [E] selected): "**NEXT:** Read fully and follow: `step-02-context-gathering.md`" +- Escalation ([P] or [W]): "**EXITING Quick Dev.** Follow the directed workflow." 
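The baseline capture in section 1 reduces to two `git rev-parse` calls with a sentinel fallback. A minimal Python sketch, assuming only the documented git behavior (the function name is illustrative):

```python
import subprocess

def capture_baseline() -> str:
    """Return git HEAD at workflow start, or the sentinel 'NO_GIT'."""
    try:
        inside = subprocess.run(
            ["git", "rev-parse", "--is-inside-work-tree"],
            capture_output=True, text=True,
        )
        if inside.returncode != 0 or inside.stdout.strip() != "true":
            return "NO_GIT"
        head = subprocess.run(
            ["git", "rev-parse", "HEAD"],
            capture_output=True, text=True, check=True,
        )
        return head.stdout.strip()
    except (FileNotFoundError, subprocess.CalledProcessError):
        # git missing entirely, or a repo with no commits yet
        return "NO_GIT"

baseline_commit = capture_baseline()
```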
+ +--- + +## SUCCESS METRICS + +- `{baseline_commit}` captured and stored +- `{execution_mode}` determined ("tech-spec" or "direct") +- `{tech_spec_path}` set if Mode A +- Project context loaded if exists +- Escalation evaluated appropriately (Mode B) +- Explicit NEXT directive provided + +## FAILURE MODES + +- Proceeding without capturing baseline commit +- Not setting execution_mode variable +- Loading step-02 when Mode A (tech-spec provided) +- Attempting to "return" after escalation instead of EXIT +- No explicit NEXT directive at step completion diff --git a/src/bmm/workflows/bmad-quick-flow/quick-dev/steps/step-02-context-gathering.md b/src/bmm/workflows/bmad-quick-flow/quick-dev/steps/step-02-context-gathering.md new file mode 100644 index 00000000..dffb86a8 --- /dev/null +++ b/src/bmm/workflows/bmad-quick-flow/quick-dev/steps/step-02-context-gathering.md @@ -0,0 +1,120 @@ +--- +name: 'step-02-context-gathering' +description: 'Quick context gathering for direct mode - identify files, patterns, dependencies' + +workflow_path: '{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-dev' +thisStepFile: './step-02-context-gathering.md' +nextStepFile: './step-03-execute.md' +--- + +# Step 2: Context Gathering (Direct Mode) + +**Goal:** Quickly gather context for direct instructions - files, patterns, dependencies. + +**Note:** This step only runs for Mode B (direct instructions). If `{execution_mode}` is "tech-spec", this step was skipped. + +--- + +## AVAILABLE STATE + +From step-01: + +- `{baseline_commit}` - Git HEAD at workflow start +- `{execution_mode}` - Should be "direct" +- `{project_context}` - Loaded if exists + +--- + +## EXECUTION SEQUENCE + +### 1. Identify Files to Modify + +Based on user's direct instructions: + +- Search for relevant files using glob/grep +- Identify the specific files that need changes +- Note file locations and purposes + +### 2. Find Relevant Patterns + +Examine the identified files and their surroundings: + +- Code style and conventions used +- Existing patterns for similar functionality +- Import/export patterns +- Error handling approaches +- Test patterns (if tests exist nearby) + +### 3. Note Dependencies + +Identify: + +- External libraries used +- Internal module dependencies +- Configuration files that may need updates +- Related files that might be affected + +### 4. Create Mental Plan + +Synthesize gathered context into: + +- List of tasks to complete +- Acceptance criteria (inferred from user request) +- Order of operations +- Files to touch + +--- + +## PRESENT PLAN + +Display to user: + +``` +**Context Gathered:** + +**Files to modify:** +- {list files} + +**Patterns identified:** +- {key patterns} + +**Plan:** +1. {task 1} +2. {task 2} +... + +**Inferred AC:** +- {acceptance criteria} + +Ready to execute? 
(y/n/adjust) +``` + +- **y:** Proceed to execution +- **n:** Gather more context or clarify +- **adjust:** Modify the plan based on feedback + +--- + +## NEXT STEP DIRECTIVE + +**CRITICAL:** When user confirms ready, explicitly state: + +- **y:** "**NEXT:** Read fully and follow: `step-03-execute.md`" +- **n/adjust:** Continue gathering context, then re-present plan + +--- + +## SUCCESS METRICS + +- Files to modify identified +- Relevant patterns documented +- Dependencies noted +- Mental plan created with tasks and AC +- User confirmed readiness to proceed + +## FAILURE MODES + +- Executing this step when Mode A (tech-spec) +- Proceeding without identifying files to modify +- Not presenting plan for user confirmation +- Missing obvious patterns in existing code diff --git a/src/bmm/workflows/bmad-quick-flow/quick-dev/steps/step-03-execute.md b/src/bmm/workflows/bmad-quick-flow/quick-dev/steps/step-03-execute.md new file mode 100644 index 00000000..9d728361 --- /dev/null +++ b/src/bmm/workflows/bmad-quick-flow/quick-dev/steps/step-03-execute.md @@ -0,0 +1,113 @@ +--- +name: 'step-03-execute' +description: 'Execute implementation - iterate through tasks, write code, run tests' + +workflow_path: '{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-dev' +thisStepFile: './step-03-execute.md' +nextStepFile: './step-04-self-check.md' +--- + +# Step 3: Execute Implementation + +**Goal:** Implement all tasks, write tests, follow patterns, handle errors. + +**Critical:** Continue through ALL tasks without stopping for milestones. + +--- + +## AVAILABLE STATE + +From previous steps: + +- `{baseline_commit}` - Git HEAD at workflow start +- `{execution_mode}` - "tech-spec" or "direct" +- `{tech_spec_path}` - Tech-spec file (if Mode A) +- `{project_context}` - Project patterns (if exists) + +From context: + +- Mode A: Tasks and AC extracted from tech-spec +- Mode B: Tasks and AC from step-02 mental plan + +--- + +## EXECUTION LOOP + +For each task: + +### 1. Load Context + +- Read files relevant to this task +- Review patterns from project-context or observed code +- Understand dependencies + +### 2. Implement + +- Write code following existing patterns +- Handle errors appropriately +- Follow conventions observed in codebase +- Add appropriate comments where non-obvious + +### 3. Test + +- Write tests if appropriate for the change +- Run existing tests to catch regressions +- Verify the specific AC for this task + +### 4. Mark Complete + +- Check off task: `- [x] Task N` +- Continue to next task immediately + +--- + +## HALT CONDITIONS + +**HALT and request guidance if:** + +- 3 consecutive failures on same task +- Tests fail and fix is not obvious +- Blocking dependency discovered +- Ambiguity that requires user decision + +**Do NOT halt for:** + +- Minor issues that can be noted and continued +- Warnings that don't block functionality +- Style preferences (follow existing patterns) + +--- + +## CONTINUOUS EXECUTION + +**Critical:** Do not stop between tasks for approval. + +- Execute all tasks in sequence +- Only halt for blocking issues +- Tests failing = fix before continuing +- Track all completed work for self-check + +--- + +## NEXT STEP + +When ALL tasks are complete (or halted on blocker), read fully and follow: `step-04-self-check.md`. 
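The halt conditions above boil down to a consecutive-failure counter wrapped around an otherwise uninterrupted task loop. A sketch of that control flow, assuming tasks expose a callable `run()`; all names are illustrative:

```python
MAX_CONSECUTIVE_FAILURES = 3

def execute_all(tasks) -> None:
    """Run every task in sequence, halting only on the documented blocking conditions."""
    for task in tasks:
        failures = 0
        while True:
            try:
                task.run()
                task.done = True
                break  # next task immediately; no per-task approval stop
            except Exception as exc:
                failures += 1
                if failures >= MAX_CONSECUTIVE_FAILURES:
                    raise RuntimeError(
                        f"HALT: {failures} consecutive failures on {task!r}: {exc}"
                    ) from exc
```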
+ +--- + +## SUCCESS METRICS + +- All tasks attempted +- Code follows existing patterns +- Error handling appropriate +- Tests written where appropriate +- Tests passing +- No unnecessary halts + +## FAILURE MODES + +- Stopping for approval between tasks +- Ignoring existing patterns +- Not running tests after changes +- Giving up after first failure +- Not following project-context rules (if exists) diff --git a/src/bmm/workflows/bmad-quick-flow/quick-dev/steps/step-04-self-check.md b/src/bmm/workflows/bmad-quick-flow/quick-dev/steps/step-04-self-check.md new file mode 100644 index 00000000..6179ebba --- /dev/null +++ b/src/bmm/workflows/bmad-quick-flow/quick-dev/steps/step-04-self-check.md @@ -0,0 +1,113 @@ +--- +name: 'step-04-self-check' +description: 'Self-audit implementation against tasks, tests, AC, and patterns' + +workflow_path: '{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-dev' +thisStepFile: './step-04-self-check.md' +nextStepFile: './step-05-adversarial-review.md' +--- + +# Step 4: Self-Check + +**Goal:** Audit completed work against tasks, tests, AC, and patterns before external review. + +--- + +## AVAILABLE STATE + +From previous steps: + +- `{baseline_commit}` - Git HEAD at workflow start +- `{execution_mode}` - "tech-spec" or "direct" +- `{tech_spec_path}` - Tech-spec file (if Mode A) +- `{project_context}` - Project patterns (if exists) + +--- + +## SELF-CHECK AUDIT + +### 1. Tasks Complete + +Verify all tasks are marked complete: + +- [ ] All tasks from tech-spec or mental plan marked `[x]` +- [ ] No tasks skipped without documented reason +- [ ] Any blocked tasks have clear explanation + +### 2. Tests Passing + +Verify test status: + +- [ ] All existing tests still pass +- [ ] New tests written for new functionality +- [ ] No test warnings or skipped tests without reason + +### 3. Acceptance Criteria Satisfied + +For each AC: + +- [ ] AC is demonstrably met +- [ ] Can explain how implementation satisfies AC +- [ ] Edge cases considered + +### 4. Patterns Followed + +Verify code quality: + +- [ ] Follows existing code patterns in codebase +- [ ] Follows project-context rules (if exists) +- [ ] Error handling consistent with codebase +- [ ] No obvious code smells introduced + +--- + +## UPDATE TECH-SPEC (Mode A only) + +If `{execution_mode}` is "tech-spec": + +1. Load `{tech_spec_path}` +2. Mark all tasks as `[x]` complete +3. Update status to "Implementation Complete" +4. Save changes + +--- + +## IMPLEMENTATION SUMMARY + +Present summary to transition to review: + +``` +**Implementation Complete!** + +**Summary:** {what was implemented} +**Files Modified:** {list of files} +**Tests:** {test summary - passed/added/etc} +**AC Status:** {all satisfied / issues noted} + +Proceeding to adversarial code review... +``` + +--- + +## NEXT STEP + +Proceed immediately to `step-05-adversarial-review.md`. 
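Checking off every task in the tech-spec (Mode A) is a plain text transform. A minimal sketch, assuming the `- [ ]` checkbox convention used by this repo's templates:

```python
from pathlib import Path

def mark_tasks_complete(tech_spec_path: str) -> None:
    """Check off every '- [ ]' box in the tech-spec file."""
    path = Path(tech_spec_path)
    text = path.read_text(encoding="utf-8")
    # Coarse on purpose: this also flips AC boxes; a real pass would
    # scope the substitution to the Tasks section only.
    path.write_text(text.replace("- [ ]", "- [x]"), encoding="utf-8")
```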
+ +--- + +## SUCCESS METRICS + +- All tasks verified complete +- All tests passing +- All AC satisfied +- Patterns followed +- Tech-spec updated (if Mode A) +- Summary presented + +## FAILURE MODES + +- Claiming tasks complete when they're not +- Not running tests before proceeding +- Missing AC verification +- Ignoring pattern violations +- Not updating tech-spec status (Mode A) diff --git a/src/bmm/workflows/bmad-quick-flow/quick-dev/steps/step-05-adversarial-review.md b/src/bmm/workflows/bmad-quick-flow/quick-dev/steps/step-05-adversarial-review.md new file mode 100644 index 00000000..50c786d0 --- /dev/null +++ b/src/bmm/workflows/bmad-quick-flow/quick-dev/steps/step-05-adversarial-review.md @@ -0,0 +1,106 @@ +--- +name: 'step-05-adversarial-review' +description: 'Construct diff and invoke adversarial review task' + +workflow_path: '{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-dev' +thisStepFile: './step-05-adversarial-review.md' +nextStepFile: './step-06-resolve-findings.md' +--- + +# Step 5: Adversarial Code Review + +**Goal:** Construct diff of all changes, invoke adversarial review task, present findings. + +--- + +## AVAILABLE STATE + +From previous steps: + +- `{baseline_commit}` - Git HEAD at workflow start (CRITICAL for diff) +- `{execution_mode}` - "tech-spec" or "direct" +- `{tech_spec_path}` - Tech-spec file (if Mode A) + +--- + +### 1. Construct Diff + +Build complete diff of all changes since workflow started. + +### If `{baseline_commit}` is a Git commit hash: + +**Tracked File Changes:** + +```bash +git diff {baseline_commit} +``` + +**New Untracked Files:** +Only include untracked files that YOU created during this workflow (steps 2-4). +Do not include pre-existing untracked files. +For each new file created, include its full content as a "new file" addition. + +### If `{baseline_commit}` is "NO_GIT": + +Use best-effort diff construction: + +- List all files you modified during steps 2-4 +- For each file, show the changes you made (before/after if you recall, or just current state) +- Include any new files you created with their full content +- Note: This is less precise than Git diff but still enables meaningful review + +### Capture as {diff_output} + +Merge all changes into `{diff_output}`. + +**Note:** Do NOT `git add` anything - this is read-only inspection. + +--- + +### 2. Invoke Adversarial Review + +With `{diff_output}` constructed, invoke the review task. If possible, use information asymmetry: run this step, and only it, in a separate subagent or process with read access to the project, but no context except the `{diff_output}`. + +```xml +Review {diff_output} using {project-root}/_bmad/core/tasks/review-adversarial-general.xml +``` + +**Platform fallback:** If task invocation not available, load the task file and follow its instructions inline, passing `{diff_output}` as the content. + +The task should: review `{diff_output}` and return a list of findings. + +--- + +### 3. Process Findings + +Capture the findings from the task output. +**If zero findings:** HALT - this is suspicious. Re-analyze or request user guidance. +Evaluate severity (Critical, High, Medium, Low) and validity (real, noise, undecided). +DO NOT exclude findings based on severity or validity unless explicitly asked to do so. +Order findings by severity. +Number the ordered findings (F1, F2, F3, etc.). 
+If TodoWrite or similar tool is available, turn each finding into a TODO, include ID, severity, validity, and description in the TODO; otherwise present findings as a table with columns: ID, Severity, Validity, Description + +--- + +## NEXT STEP + +With findings in hand, read fully and follow: `step-06-resolve-findings.md` for user to choose resolution approach. + +--- + +## SUCCESS METRICS + +- Diff constructed from baseline_commit +- New files included in diff +- Task invoked with diff as input +- Findings received +- Findings processed into TODOs or table and presented to user + +## FAILURE MODES + +- Missing baseline_commit (can't construct accurate diff) +- Not including new untracked files in diff +- Invoking task without providing diff input +- Accepting zero findings without questioning +- Presenting fewer findings than the review task returned without explicit instruction to do so diff --git a/src/bmm/workflows/bmad-quick-flow/quick-dev/steps/step-06-resolve-findings.md b/src/bmm/workflows/bmad-quick-flow/quick-dev/steps/step-06-resolve-findings.md new file mode 100644 index 00000000..4ab367c6 --- /dev/null +++ b/src/bmm/workflows/bmad-quick-flow/quick-dev/steps/step-06-resolve-findings.md @@ -0,0 +1,149 @@ +--- +name: 'step-06-resolve-findings' +description: 'Handle review findings interactively, apply fixes, update tech-spec with final status' + +workflow_path: '{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-dev' +thisStepFile: './step-06-resolve-findings.md' +--- + +# Step 6: Resolve Findings + +**Goal:** Handle adversarial review findings interactively, apply fixes, finalize tech-spec. + +--- + +## AVAILABLE STATE + +From previous steps: + +- `{baseline_commit}` - Git HEAD at workflow start +- `{execution_mode}` - "tech-spec" or "direct" +- `{tech_spec_path}` - Tech-spec file (if Mode A) +- Findings table from step-05 + +--- + +## RESOLUTION OPTIONS + +Present: "How would you like to handle these findings?" + +Display: + +**[W] Walk through** - Discuss each finding individually +**[F] Fix automatically** - Automatically fix issues classified as "real" +**[S] Skip** - Acknowledge and proceed to commit + +### Menu Handling Logic: + +- IF W: Execute WALK THROUGH section below +- IF F: Execute FIX AUTOMATICALLY section below +- IF S: Execute SKIP section below + +### EXECUTION RULES: + +- ALWAYS halt and wait for user input after presenting menu +- ONLY proceed when user makes a selection + +--- + +## WALK THROUGH [W] + +For each finding in order: + +1. Present the finding with context +2. Ask: **fix now / skip / discuss** +3. If fix: Apply the fix immediately +4. If skip: Note as acknowledged, continue +5. If discuss: Provide more context, re-ask +6. Move to next finding + +After all findings processed, summarize what was fixed/skipped. + +--- + +## FIX AUTOMATICALLY [F] + +1. Filter findings to only those classified as "real" +2. Apply fixes for each real finding +3. Report what was fixed: + +``` +**Auto-fix Applied:** +- F1: {description of fix} +- F3: {description of fix} +... + +Skipped (noise/uncertain): F2, F4 +``` + +--- + +## SKIP [S] + +1. Acknowledge all findings were reviewed +2. Note that user chose to proceed without fixes +3. Continue to completion + +--- + +## UPDATE TECH-SPEC (Mode A only) + +If `{execution_mode}` is "tech-spec": + +1. Load `{tech_spec_path}` +2. Update status to "Completed" +3. 
Add review notes: + ``` + ## Review Notes + - Adversarial review completed + - Findings: {count} total, {fixed} fixed, {skipped} skipped + - Resolution approach: {walk-through/auto-fix/skip} + ``` +4. Save changes + +--- + +## COMPLETION OUTPUT + +``` +**Review complete. Ready to commit.** + +**Implementation Summary:** +- {what was implemented} +- Files modified: {count} +- Tests: {status} +- Review findings: {X} addressed, {Y} skipped + +{Explain what was implemented based on user_skill_level} +``` + +--- + +## WORKFLOW COMPLETE + +This is the final step. The Quick Dev workflow is now complete. + +User can: + +- Commit changes +- Run additional tests +- Start new Quick Dev session + +--- + +## SUCCESS METRICS + +- User presented with resolution options +- Chosen approach executed correctly +- Fixes applied cleanly (if applicable) +- Tech-spec updated with final status (Mode A) +- Completion summary provided +- User understands what was implemented + +## FAILURE MODES + +- Not presenting resolution options +- Auto-fixing "noise" or "uncertain" findings +- Not updating tech-spec after resolution (Mode A) +- No completion summary +- Leaving user unclear on next steps diff --git a/src/bmm/workflows/bmad-quick-flow/quick-dev/workflow.md b/src/bmm/workflows/bmad-quick-flow/quick-dev/workflow.md new file mode 100644 index 00000000..3fbeb13b --- /dev/null +++ b/src/bmm/workflows/bmad-quick-flow/quick-dev/workflow.md @@ -0,0 +1,50 @@ +--- +name: quick-dev +description: 'Flexible development - execute tech-specs OR direct instructions with optional planning.' +--- + +# Quick Dev Workflow + +**Goal:** Execute implementation tasks efficiently, either from a tech-spec or direct user instructions. + +**Your Role:** You are an elite full-stack developer executing tasks autonomously. Follow patterns, ship code, run tests. Every response moves the project forward. + +--- + +## WORKFLOW ARCHITECTURE + +This uses **step-file architecture** for focused execution: + +- Each step loads fresh to combat "lost in the middle" +- State persists via variables: `{baseline_commit}`, `{execution_mode}`, `{tech_spec_path}` +- Sequential progression through implementation phases + +--- + +## INITIALIZATION + +### Configuration Loading + +Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve: + +- `user_name`, `communication_language`, `user_skill_level` +- `output_folder`, `planning_artifacts`, `implementation_artifacts` +- `date` as system-generated current datetime +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +### Paths + +- `installed_path` = `{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-dev` +- `project_context` = `**/project-context.md` (load if exists) + +### Related Workflows + +- `quick_spec_workflow` = `{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-spec/workflow.md` +- `party_mode_exec` = `{project-root}/_bmad/core/workflows/party-mode/workflow.md` +- `advanced_elicitation` = `{project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml` + +--- + +## EXECUTION + +Read fully and follow: `steps/step-01-mode-detection.md` to begin the workflow. 
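Outside the agent runtime, the configuration loading described above is ordinary YAML parsing. A sketch using PyYAML, assuming the config keys named in this file exist at `_bmad/bmm/config.yaml`; nothing beyond those key names is implied:

```python
from pathlib import Path

import yaml  # PyYAML, assumed available

def load_bmm_config(project_root: str) -> dict:
    """Load _bmad/bmm/config.yaml and return the keys quick-dev resolves."""
    config_path = Path(project_root) / "_bmad" / "bmm" / "config.yaml"
    config = yaml.safe_load(config_path.read_text(encoding="utf-8")) or {}
    wanted = (
        "user_name", "communication_language", "user_skill_level",
        "output_folder", "planning_artifacts", "implementation_artifacts",
    )
    return {key: config.get(key) for key in wanted}
```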
diff --git a/src/bmm/workflows/bmad-quick-flow/quick-spec/steps/step-01-understand.md b/src/bmm/workflows/bmad-quick-flow/quick-spec/steps/step-01-understand.md new file mode 100644 index 00000000..a7cde555 --- /dev/null +++ b/src/bmm/workflows/bmad-quick-flow/quick-spec/steps/step-01-understand.md @@ -0,0 +1,192 @@ +--- +name: 'step-01-understand' +description: 'Analyze the requirement delta between current state and what user wants to build' + +workflow_path: '{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-spec' +nextStepFile: './step-02-investigate.md' +skipToStepFile: './step-03-generate.md' +templateFile: '{workflow_path}/tech-spec-template.md' +wipFile: '{implementation_artifacts}/tech-spec-wip.md' +--- + +# Step 1: Analyze Requirement Delta + +**Progress: Step 1 of 4** - Next: Deep Investigation + +## RULES: + +- MUST NOT skip steps. +- MUST NOT optimize sequence. +- MUST follow exact instructions. +- MUST NOT look ahead to future steps. +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## CONTEXT: + +- Variables from `workflow.md` are available in memory. +- Focus: Define the technical requirement delta and scope. +- Investigation: Perform surface-level code scans ONLY to verify the delta. Reserve deep dives into implementation consequences for Step 2. +- Objective: Establish a verifiable delta between current state and target state. + +## SEQUENCE OF INSTRUCTIONS + +### 0. Check for Work in Progress + +a) **Before anything else, check if `{wipFile}` exists:** + +b) **IF WIP FILE EXISTS:** + +1. Read the frontmatter and extract: `title`, `slug`, `stepsCompleted` +2. Calculate progress: `lastStep = max(stepsCompleted)` +3. Present to user: + +``` +Hey {user_name}! Found a tech-spec in progress: + +**{title}** - Step {lastStep} of 4 complete + +Is this what you're here to continue? + +[Y] Yes, pick up where I left off +[N] No, archive it and start something new +``` + +4. **HALT and wait for user selection.** + +a) **Menu Handling:** + +- **[Y] Continue existing:** + - Jump directly to the appropriate step based on `stepsCompleted`: + - `[1]` → Load `{nextStepFile}` (Step 2) + - `[1, 2]` → Load `{skipToStepFile}` (Step 3) + - `[1, 2, 3]` → Load `./step-04-review.md` (Step 4) +- **[N] Archive and start fresh:** + - Rename `{wipFile}` to `{implementation_artifacts}/tech-spec-{slug}-archived-{date}.md` + +### 1. Greet and Ask for Initial Request + +a) **Greet the user briefly:** + +"Hey {user_name}! What are we building today?" + +b) **Get their initial description.** Don't ask detailed questions yet - just understand enough to know where to look. + +### 2. Quick Orient Scan + +a) **Before asking detailed questions, do a rapid scan to understand the landscape:** + +b) **Check for existing context docs:** + +- Check `{output_folder}` and `{planning_artifacts}`for planning documents (PRD, architecture, epics, research) +- Check for `**/project-context.md` - if it exists, skim for patterns and conventions +- Check for any existing stories or specs related to user's request + +c) **If user mentioned specific code/features, do a quick scan:** + +- Search for relevant files/classes/functions they mentioned +- Skim the structure (don't deep-dive yet - that's Step 2) +- Note: tech stack, obvious patterns, file locations + +d) **Build mental model:** + +- What's the likely landscape for this feature? +- What's the likely scope based on what you found? +- What questions do you NOW have, informed by the code? 
+ +**This scan should take < 30 seconds. Just enough to ask smart questions.** + +### 3. Ask Informed Questions + +a) **Now ask clarifying questions - but make them INFORMED by what you found:** + +Instead of generic questions like "What's the scope?", ask specific ones like: +- "`AuthService` handles validation in the controller — should the new field follow that pattern or move it to a dedicated validator?" +- "`NavigationSidebar` component uses local state for the 'collapsed' toggle — should we stick with that or move it to the global store?" +- "The epics doc mentions X - is this related?" + +**Adapt to {user_skill_level}.** Technical users want technical questions. Non-technical users need translation. + +b) **If no existing code is found:** + +- Ask about intended architecture, patterns, constraints +- Ask what similar systems they'd like to emulate + +### 4. Capture Core Understanding + +a) **From the conversation, extract and confirm:** + +- **Title**: A clear, concise name for this work +- **Slug**: URL-safe version of title (lowercase, hyphens, no spaces) +- **Problem Statement**: What problem are we solving? +- **Solution**: High-level approach (1-2 sentences) +- **In Scope**: What's included +- **Out of Scope**: What's explicitly NOT included + +b) **Ask the user to confirm the captured understanding before proceeding.** + +### 5. Initialize WIP File + +a) **Create the tech-spec WIP file:** + +1. Copy template from `{templateFile}` +2. Write to `{wipFile}` +3. Update frontmatter with captured values: + ```yaml + --- + title: '{title}' + slug: '{slug}' + created: '{date}' + status: 'in-progress' + stepsCompleted: [1] + tech_stack: [] + files_to_modify: [] + code_patterns: [] + test_patterns: [] + --- + ``` +4. Fill in Overview section with Problem Statement, Solution, and Scope +5. Fill in Context for Development section with any technical preferences or constraints gathered during informed discovery. +6. Write the file + +b) **Report to user:** + +"Created: `{wipFile}` + +**Captured:** + +- Title: {title} +- Problem: {problem_statement_summary} +- Scope: {scope_summary}" + +### 6. Present Checkpoint Menu + +a) **Display menu:** + +Display: "**Select:** [A] Advanced Elicitation [P] Party Mode [C] Continue to Deep Investigation (Step 2 of 4)" + +b) **HALT and wait for user selection.** + +#### Menu Handling Logic: + +- IF A: Read fully and follow: `{advanced_elicitation}` with current tech-spec content, process enhanced insights, ask user "Accept improvements? (y/n)", if yes update WIP file then redisplay menu, if no keep original then redisplay menu +- IF P: Read fully and follow: `{party_mode_exec}` with current tech-spec content, process collaborative insights, ask user "Accept changes? (y/n)", if yes update WIP file then redisplay menu, if no keep original then redisplay menu +- IF C: Verify `{wipFile}` has `stepsCompleted: [1]`, then read fully and follow: `{nextStepFile}` +- IF Any other comments or queries: respond helpfully then redisplay menu + +#### EXECUTION RULES: + +- ALWAYS halt and wait for user input after presenting menu +- ONLY proceed to next step when user selects 'C' +- After A or P execution, return to this menu + +--- + +## REQUIRED OUTPUTS: + +- MUST initialize WIP file with captured metadata. + +## VERIFICATION CHECKLIST: + +- [ ] WIP check performed FIRST before any greeting. +- [ ] `{wipFile}` created with correct frontmatter, Overview, Context for Development, and `stepsCompleted: [1]`. +- [ ] User selected [C] to continue. 
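The resume logic in section 0 keys entirely off the `stepsCompleted` array written here. A hedged sketch of that routing, assuming standard `---`-delimited YAML frontmatter; the helper is illustrative:

```python
from typing import Optional

import yaml  # PyYAML, assumed available

NEXT_STEP = {1: "step-02-investigate.md", 2: "step-03-generate.md", 3: "step-04-review.md"}

def resume_target(wip_text: str) -> Optional[str]:
    """Read WIP frontmatter and pick the step file to resume at, or None to start fresh."""
    parts = wip_text.split("---")  # naive frontmatter split; fine for a sketch
    if len(parts) < 3:
        return None
    meta = yaml.safe_load(parts[1]) or {}
    completed = meta.get("stepsCompleted") or []
    return NEXT_STEP.get(max(completed)) if completed else None
```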
diff --git a/src/bmm/workflows/bmad-quick-flow/quick-spec/steps/step-02-investigate.md b/src/bmm/workflows/bmad-quick-flow/quick-spec/steps/step-02-investigate.md new file mode 100644 index 00000000..1b0d0cee --- /dev/null +++ b/src/bmm/workflows/bmad-quick-flow/quick-spec/steps/step-02-investigate.md @@ -0,0 +1,145 @@ +--- +name: 'step-02-investigate' +description: 'Map technical constraints and anchor points within the codebase' + +workflow_path: '{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-spec' +nextStepFile: './step-03-generate.md' +wipFile: '{implementation_artifacts}/tech-spec-wip.md' +--- + +# Step 2: Map Technical Constraints & Anchor Points + +**Progress: Step 2 of 4** - Next: Generate Plan + +## RULES: + +- MUST NOT skip steps. +- MUST NOT optimize sequence. +- MUST follow exact instructions. +- MUST NOT generate the full spec yet (that's Step 3). +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## CONTEXT: + +- Requires `{wipFile}` from Step 1 with the "Problem Statement" defined. +- Focus: Map the problem statement to specific anchor points in the codebase. +- Output: Exact files to touch, classes/patterns to extend, and technical constraints identified. +- Objective: Provide the implementation-ready ground truth for the plan. + +## SEQUENCE OF INSTRUCTIONS + +### 1. Load Current State + +**Read `{wipFile}` and extract:** + +- Problem statement and scope from Overview section +- Any context gathered in Step 1 + +### 2. Execute Investigation Path + +**Universal Code Investigation:** + +_Isolate deep exploration in sub-agents/tasks where available. Return distilled summaries only to prevent context snowballing._ + +a) **Build on Step 1's Quick Scan** + +Review what was found in Step 1's orient scan. Then ask: + +"Based on my quick look, I see [files/patterns found]. Are there other files or directories I should investigate deeply?" + +b) **Read and Analyze Code** + +For each file/directory provided: + +- Read the complete file(s) +- Identify patterns, conventions, coding style +- Note dependencies and imports +- Find related test files + +**If NO relevant code is found (Clean Slate):** + +- Identify the target directory where the feature should live. +- Scan parent directories for architectural context. +- Identify standard project utilities or boilerplate that SHOULD be used. +- Document this as "Confirmed Clean Slate" - establishing that no legacy constraints exist. + + +c) **Document Technical Context** + +Capture and confirm with user: + +- **Tech Stack**: Languages, frameworks, libraries +- **Code Patterns**: Architecture patterns, naming conventions, file structure +- **Files to Modify/Create**: Specific files that will need changes or new files to be created +- **Test Patterns**: How tests are structured, test frameworks used + +d) **Look for project-context.md** + +If `**/project-context.md` exists and wasn't loaded in Step 1: + +- Load it now +- Extract patterns and conventions +- Note any rules that must be followed + +### 3. Update WIP File + +**Update `{wipFile}` frontmatter:** + +```yaml +--- +# ... existing frontmatter ... 
+stepsCompleted: [1, 2] +tech_stack: ['{captured_tech_stack}'] +files_to_modify: ['{captured_files}'] +code_patterns: ['{captured_patterns}'] +test_patterns: ['{captured_test_patterns}'] +--- +``` + +**Update the Context for Development section:** + +Fill in: + +- Codebase Patterns (from investigation) +- Files to Reference table (files reviewed) +- Technical Decisions (any decisions made during investigation) + +**Report to user:** + +"**Context Gathered:** + +- Tech Stack: {tech_stack_summary} +- Files to Modify: {files_count} files identified +- Patterns: {patterns_summary} +- Tests: {test_patterns_summary}" + +### 4. Present Checkpoint Menu + +Display: "**Select:** [A] Advanced Elicitation [P] Party Mode [C] Continue to Generate Spec (Step 3 of 4)" + +**HALT and wait for user selection.** + +#### Menu Handling Logic: + +- IF A: Read fully and follow: `{advanced_elicitation}` with current tech-spec content, process enhanced insights, ask user "Accept improvements? (y/n)", if yes update WIP file then redisplay menu, if no keep original then redisplay menu +- IF P: Read fully and follow: `{party_mode_exec}` with current tech-spec content, process collaborative insights, ask user "Accept changes? (y/n)", if yes update WIP file then redisplay menu, if no keep original then redisplay menu +- IF C: Verify frontmatter updated with `stepsCompleted: [1, 2]`, then read fully and follow: `{nextStepFile}` +- IF Any other comments or queries: respond helpfully then redisplay menu + +#### EXECUTION RULES: + +- ALWAYS halt and wait for user input after presenting menu +- ONLY proceed to next step when user selects 'C' +- After A or P execution, return to this menu + +--- + +## REQUIRED OUTPUTS: + +- MUST document technical context (stack, patterns, files identified). +- MUST update `{wipFile}` with functional context. + +## VERIFICATION CHECKLIST: + +- [ ] Technical mapping performed and documented. +- [ ] `stepsCompleted: [1, 2]` set in frontmatter. diff --git a/src/bmm/workflows/bmad-quick-flow/quick-spec/steps/step-03-generate.md b/src/bmm/workflows/bmad-quick-flow/quick-spec/steps/step-03-generate.md new file mode 100644 index 00000000..79999db3 --- /dev/null +++ b/src/bmm/workflows/bmad-quick-flow/quick-spec/steps/step-03-generate.md @@ -0,0 +1,128 @@ +--- +name: 'step-03-generate' +description: 'Build the implementation plan based on the technical mapping of constraints' + +workflow_path: '{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-spec' +nextStepFile: './step-04-review.md' +wipFile: '{implementation_artifacts}/tech-spec-wip.md' +--- + +# Step 3: Generate Implementation Plan + +**Progress: Step 3 of 4** - Next: Review & Finalize + +## RULES: + +- MUST NOT skip steps. +- MUST NOT optimize sequence. +- MUST follow exact instructions. +- MUST NOT implement anything - just document. +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## CONTEXT: + +- Requires `{wipFile}` with defined "Overview" and "Context for Development" sections. +- Focus: Create the implementation sequence that addresses the requirement delta using the captured technical context. +- Output: Implementation-ready tasks with specific files and instructions. +- Target: Meet the **READY FOR DEVELOPMENT** standard defined in `workflow.md`. + +## SEQUENCE OF INSTRUCTIONS + +### 1. 
Load Current State + +**Read `{wipFile}` completely and extract:** + +- All frontmatter values +- Overview section (Problem, Solution, Scope) +- Context for Development section (Patterns, Files, Decisions) + +### 2. Generate Implementation Plan + +Generate specific implementation tasks: + +a) **Task Breakdown** + +- Each task should be a discrete, completable unit of work +- Tasks should be ordered logically (dependencies first) +- Include the specific files to modify in each task +- Be explicit about what changes to make + +b) **Task Format** + +```markdown +- [ ] Task N: Clear action description + - File: `path/to/file.ext` + - Action: Specific change to make + - Notes: Any implementation details +``` + +### 3. Generate Acceptance Criteria + +**Create testable acceptance criteria:** + +Each AC should follow Given/When/Then format: + +```markdown +- [ ] AC N: Given [precondition], when [action], then [expected result] +``` + +**Ensure ACs cover:** + +- Happy path functionality +- Error handling +- Edge cases (if relevant) +- Integration points (if relevant) + +### 4. Complete Additional Context + +**Fill in remaining sections:** + +a) **Dependencies** + +- External libraries or services needed +- Other tasks or features this depends on +- API or data dependencies + +b) **Testing Strategy** + +- Unit tests needed +- Integration tests needed +- Manual testing steps + +c) **Notes** + +- High-risk items from pre-mortem analysis +- Known limitations +- Future considerations (out of scope but worth noting) + +### 5. Write Complete Spec + +a) **Update `{wipFile}` with all generated content:** + +- Ensure all template sections are filled in +- No placeholder text remaining +- All frontmatter values current +- Update status to 'review' (NOT 'ready-for-dev' - that happens after user review in Step 4) + +b) **Update frontmatter:** + +```yaml +--- +# ... existing values ... +status: 'review' +stepsCompleted: [1, 2, 3] +--- +``` + +c) **Read fully and follow: `{nextStepFile}` (Step 4)** + +## REQUIRED OUTPUTS: + +- Tasks MUST be specific, actionable, ordered logically, with files to modify. +- ACs MUST be testable, using Given/When/Then format. +- Status MUST be updated to 'review'. + +## VERIFICATION CHECKLIST: + +- [ ] `stepsCompleted: [1, 2, 3]` set in frontmatter. +- [ ] Spec meets the **READY FOR DEVELOPMENT** standard. diff --git a/src/bmm/workflows/bmad-quick-flow/quick-spec/steps/step-04-review.md b/src/bmm/workflows/bmad-quick-flow/quick-spec/steps/step-04-review.md new file mode 100644 index 00000000..a223a2e4 --- /dev/null +++ b/src/bmm/workflows/bmad-quick-flow/quick-spec/steps/step-04-review.md @@ -0,0 +1,201 @@ +--- +name: 'step-04-review' +description: 'Review and finalize the tech-spec' + +workflow_path: '{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-spec' +wipFile: '{implementation_artifacts}/tech-spec-wip.md' +--- + +# Step 4: Review & Finalize + +**Progress: Step 4 of 4** - Final Step + +## RULES: + +- MUST NOT skip steps. +- MUST NOT optimize sequence. +- MUST follow exact instructions. +- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` + +## CONTEXT: + +- Requires `{wipFile}` from Step 3. +- MUST present COMPLETE spec content. Iterate until user is satisfied. +- **Criteria**: The spec MUST meet the **READY FOR DEVELOPMENT** standard defined in `workflow.md`. + +## SEQUENCE OF INSTRUCTIONS + +### 1. 
Load and Present Complete Spec + +**Read `{wipFile}` completely and extract `slug` from frontmatter for later use.** + +**Present to user:** + +"Here's your complete tech-spec. Please review:" + +[Display the complete spec content - all sections] + +"**Quick Summary:** + +- {task_count} tasks to implement +- {ac_count} acceptance criteria to verify +- {files_count} files to modify" + +**Present review menu:** + +Display: "**Select:** [C] Continue [E] Edit [Q] Questions [A] Advanced Elicitation [P] Party Mode" + +**HALT and wait for user selection.** + +#### Menu Handling Logic: + +- IF C: Proceed to Section 3 (Finalize the Spec) +- IF E: Proceed to Section 2 (Handle Review Feedback), then return here and redisplay menu +- IF Q: Answer questions, then redisplay this menu +- IF A: Read fully and follow: `{advanced_elicitation}` with current spec content, process enhanced insights, ask user "Accept improvements? (y/n)", if yes update spec then redisplay menu, if no keep original then redisplay menu +- IF P: Read fully and follow: `{party_mode_exec}` with current spec content, process collaborative insights, ask user "Accept changes? (y/n)", if yes update spec then redisplay menu, if no keep original then redisplay menu +- IF Any other comments or queries: respond helpfully then redisplay menu + +#### EXECUTION RULES: + +- ALWAYS halt and wait for user input after presenting menu +- ONLY proceed to finalize when user selects 'C' +- After other menu items execution, return to this menu + +### 2. Handle Review Feedback + +a) **If user requests changes:** + +- Make the requested edits to `{wipFile}` +- Re-present the affected sections +- Ask if there are more changes +- Loop until user is satisfied + +b) **If the spec does NOT meet the "Ready for Development" standard:** + +- Point out the missing/weak sections (e.g., non-actionable tasks, missing ACs). +- Propose specific improvements to reach the standard. +- Make the edits once the user agrees. + +c) **If user has questions:** + +- Answer questions about the spec +- Clarify any confusing sections +- Make clarifying edits if needed + +### 3. Finalize the Spec + +**When user confirms the spec is good AND it meets the "Ready for Development" standard:** + +a) Update `{wipFile}` frontmatter: + + ```yaml + --- + # ... existing values ... + status: 'ready-for-dev' + stepsCompleted: [1, 2, 3, 4] + --- + ``` + +b) **Rename WIP file to final filename:** + - Using the `slug` extracted in Section 1 + - Rename `{wipFile}` → `{implementation_artifacts}/tech-spec-{slug}.md` + - Store this as `finalFile` for use in menus below + +### 4. Present Final Menu + +a) **Display completion message and menu:** + +``` +**Tech-Spec Complete!** + +Saved to: {finalFile} + +--- + +**Next Steps:** + +[A] Advanced Elicitation - refine further +[R] Adversarial Review - critique of the spec (highly recommended) +[B] Begin Development - start implementing now (not recommended) +[D] Done - exit workflow +[P] Party Mode - get expert feedback before dev + +--- + +Once you are fully satisfied with the spec (ideally after **Adversarial Review** and maybe a few rounds of **Advanced Elicitation**), it is recommended to run implementation in a FRESH CONTEXT for best results. + +Copy this prompt to start dev: + +\`\`\` +quick-dev {finalFile} +\`\`\` + +This ensures the dev agent has clean context focused solely on implementation. 
+``` + +b) **HALT and wait for user selection.** + +#### Menu Handling Logic: + +- IF A: Read fully and follow: `{advanced_elicitation}` with current spec content, process enhanced insights, ask user "Accept improvements? (y/n)", if yes update spec then redisplay menu, if no keep original then redisplay menu +- IF B: Read the entire workflow file at `{quick_dev_workflow}` and follow the instructions with the final spec file (warn: fresh context is better) +- IF D: Exit workflow - display final confirmation and path to spec +- IF P: Read fully and follow: `{party_mode_exec}` with current spec content, process collaborative insights, ask user "Accept changes? (y/n)", if yes update spec then redisplay menu, if no keep original then redisplay menu +- IF R: Execute Adversarial Review (see below) +- IF Any other comments or queries: respond helpfully then redisplay menu + +#### EXECUTION RULES: + +- ALWAYS halt and wait for user input after presenting menu +- After A, P, or R execution, return to this menu + +#### Adversarial Review [R] Process: + +1. **Invoke Adversarial Review Task**: + > With `{finalFile}` constructed, invoke the review task. If possible, use information asymmetry: run this task, and only it, in a separate subagent or process with read access to the project, but no context except the `{finalFile}`. + Review {finalFile} using {project-root}/_bmad/core/tasks/review-adversarial-general.xml + > **Platform fallback:** If task invocation not available, load the task file and follow its instructions inline, passing `{finalFile}` as the content. + > The task should: review `{finalFile}` and return a list of findings. + + 2. **Process Findings**: + > Capture the findings from the task output. + > **If zero findings:** HALT - this is suspicious. Re-analyze or request user guidance. + > Evaluate severity (Critical, High, Medium, Low) and validity (real, noise, undecided). + > DO NOT exclude findings based on severity or validity unless explicitly asked to do so. + > Order findings by severity. + > Number the ordered findings (F1, F2, F3, etc.). + > If TodoWrite or similar tool is available, turn each finding into a TODO, include ID, severity, validity, and description in the TODO; otherwise present findings as a table with columns: ID, Severity, Validity, Description + + 3. Return here and redisplay menu. + +### 5. Exit Workflow + +**When user selects [D]:** + +"**All done!** Your tech-spec is ready at: + +`{finalFile}` + +When you're ready to implement, run: + +``` +quick-dev {finalFile} +``` + +Ship it!" + +--- + +## REQUIRED OUTPUTS: + +- MUST update status to 'ready-for-dev'. +- MUST rename file to `tech-spec-{slug}.md`. +- MUST provide clear next-step guidance and recommend fresh context for dev. + +## VERIFICATION CHECKLIST: + +- [ ] Complete spec presented for review. +- [ ] Requested changes implemented. +- [ ] Spec verified against **READY FOR DEVELOPMENT** standard. +- [ ] `stepsCompleted: [1, 2, 3, 4]` set and file renamed. 
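Finalization in section 3 is a frontmatter update plus a rename. A minimal sketch under the same assumptions as the step text (the WIP path and slug come from the frontmatter; nothing else is implied):

```python
from pathlib import Path

def finalize_spec(wip_file: str, slug: str) -> Path:
    """Flip status to ready-for-dev, then rename tech-spec-wip.md to its final name."""
    wip = Path(wip_file)
    text = wip.read_text(encoding="utf-8")
    wip.write_text(text.replace("status: 'review'", "status: 'ready-for-dev'"), encoding="utf-8")
    final = wip.with_name(f"tech-spec-{slug}.md")
    return wip.rename(final)
```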
diff --git a/src/bmm/workflows/bmad-quick-flow/quick-spec/tech-spec-template.md b/src/bmm/workflows/bmad-quick-flow/quick-spec/tech-spec-template.md new file mode 100644 index 00000000..8d201149 --- /dev/null +++ b/src/bmm/workflows/bmad-quick-flow/quick-spec/tech-spec-template.md @@ -0,0 +1,74 @@ +--- +title: '{title}' +slug: '{slug}' +created: '{date}' +status: 'in-progress' +stepsCompleted: [] +tech_stack: [] +files_to_modify: [] +code_patterns: [] +test_patterns: [] +--- + +# Tech-Spec: {title} + +**Created:** {date} + +## Overview + +### Problem Statement + +{problem_statement} + +### Solution + +{solution} + +### Scope + +**In Scope:** +{in_scope} + +**Out of Scope:** +{out_of_scope} + +## Context for Development + +### Codebase Patterns + +{codebase_patterns} + +### Files to Reference + +| File | Purpose | +| ---- | ------- | + +{files_table} + +### Technical Decisions + +{technical_decisions} + +## Implementation Plan + +### Tasks + +{tasks} + +### Acceptance Criteria + +{acceptance_criteria} + +## Additional Context + +### Dependencies + +{dependencies} + +### Testing Strategy + +{testing_strategy} + +### Notes + +{notes} diff --git a/src/bmm/workflows/bmad-quick-flow/quick-spec/workflow.md b/src/bmm/workflows/bmad-quick-flow/quick-spec/workflow.md new file mode 100644 index 00000000..bb6c877a --- /dev/null +++ b/src/bmm/workflows/bmad-quick-flow/quick-spec/workflow.md @@ -0,0 +1,79 @@ +--- +name: quick-spec +description: Conversational spec engineering - ask questions, investigate code, produce implementation-ready tech-spec. +main_config: '{project-root}/_bmad/bmm/config.yaml' +web_bundle: true + +# Checkpoint handler paths +advanced_elicitation: '{project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml' +party_mode_exec: '{project-root}/_bmad/core/workflows/party-mode/workflow.md' +quick_dev_workflow: '{project-root}/_bmad/bmm/workflows/bmad-quick-flow/quick-dev/workflow.md' +--- + +# Quick-Spec Workflow + +**Goal:** Create implementation-ready technical specifications through conversational discovery, code investigation, and structured documentation. + +**READY FOR DEVELOPMENT STANDARD:** + +A specification is considered "Ready for Development" ONLY if it meets the following: + +- **Actionable**: Every task has a clear file path and specific action. +- **Logical**: Tasks are ordered by dependency (lowest level first). +- **Testable**: All ACs follow Given/When/Then and cover happy path and edge cases. +- **Complete**: All investigation results from Step 2 are inlined; no placeholders or "TBD". +- **Self-Contained**: A fresh agent can implement the feature without reading the workflow history. + +--- + +**Your Role:** You are an elite developer and spec engineer. You ask sharp questions, investigate existing code thoroughly, and produce specs that contain ALL context a fresh dev agent needs to implement the feature. No handoffs, no missing context - just complete, actionable specs. 
+
+---
+
+## WORKFLOW ARCHITECTURE
+
+This uses **step-file architecture** for disciplined execution:
+
+### Core Principles
+
+- **Micro-file Design**: Each step is a self-contained instruction file that must be followed exactly
+- **Just-In-Time Loading**: Only the current step file is in memory - never load future step files until directed
+- **Sequential Enforcement**: Sequence within step files must be completed in order, no skipping or optimization
+- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array
+- **Append-Only Building**: Build the tech-spec by updating content as directed
+
+### Step Processing Rules
+
+1. **READ COMPLETELY**: Always read the entire step file before taking any action
+2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
+3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
+4. **CHECK CONTINUATION**: Only proceed to next step when user selects [C] (Continue)
+5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
+6. **LOAD NEXT**: When directed, read fully and follow the next step file
+
+### Critical Rules (NO EXCEPTIONS)
+
+- **NEVER** load multiple step files simultaneously
+- **ALWAYS** read entire step file before execution
+- **NEVER** skip steps or optimize the sequence
+- **ALWAYS** update frontmatter of output file when completing a step
+- **ALWAYS** follow the exact instructions in the step file
+- **ALWAYS** halt at menus and wait for user input
+- **NEVER** create mental todo lists from future steps
+
+---
+
+## INITIALIZATION SEQUENCE
+
+### 1. Configuration Loading
+
+Load and read the full config from `{main_config}` and resolve:
+
+- `project_name`, `output_folder`, `planning_artifacts`, `implementation_artifacts`, `user_name`
+- `communication_language`, `document_output_language`, `user_skill_level`
+- `date` as system-generated current datetime
+- ✅ You MUST always communicate in your agent's style, using the configured `{communication_language}`
+
+### 2. First Step Execution
+
+Read fully and follow: `steps/step-01-understand.md` to begin the workflow.
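+
+### Example: Config Resolution (Illustrative)
+
+A minimal, hedged sketch of what the Configuration Loading step resolves, assuming `{main_config}` is plain YAML and using `js-yaml`. Field names follow the list above; the real config file may define additional keys:
+
+```ts
+import { readFileSync } from "node:fs";
+import { load } from "js-yaml";
+
+// Assumed subset of {main_config}; the actual file may contain more keys.
+interface QuickSpecConfig {
+  project_name: string;
+  output_folder: string;
+  planning_artifacts: string;
+  implementation_artifacts: string;
+  user_name: string;
+  communication_language: string;
+  document_output_language: string;
+  user_skill_level: string;
+}
+
+function resolveConfig(mainConfigPath: string): QuickSpecConfig & { date: string } {
+  const config = load(readFileSync(mainConfigPath, "utf8")) as QuickSpecConfig;
+  // `date` is system-generated at run time, not read from the config file.
+  return { ...config, date: new Date().toISOString() };
+}
+```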
diff --git a/src/bmm/workflows/document-project/checklist.md b/src/bmm/workflows/document-project/checklist.md new file mode 100644 index 00000000..7b67d1e5 --- /dev/null +++ b/src/bmm/workflows/document-project/checklist.md @@ -0,0 +1,245 @@ +# Document Project Workflow - Validation Checklist + +## Scan Level and Resumability + +- [ ] Scan level selection offered (quick/deep/exhaustive) for initial_scan and full_rescan modes +- [ ] Deep-dive mode automatically uses exhaustive scan (no choice given) +- [ ] Quick scan does NOT read source files (only patterns, configs, manifests) +- [ ] Deep scan reads files in critical directories per project type +- [ ] Exhaustive scan reads ALL source files (excluding node_modules, dist, build) +- [ ] State file (project-scan-report.json) created at workflow start +- [ ] State file updated after each step completion +- [ ] State file contains all required fields per schema +- [ ] Resumability prompt shown if state file exists and is <24 hours old +- [ ] Old state files (>24 hours) automatically archived +- [ ] Resume functionality loads previous state correctly +- [ ] Workflow can jump to correct step when resuming + +## Write-as-you-go Architecture + +- [ ] Each document written to disk IMMEDIATELY after generation +- [ ] Document validation performed right after writing (section-level) +- [ ] State file updated after each document is written +- [ ] Detailed findings purged from context after writing (only summaries kept) +- [ ] Context contains only high-level summaries (1-2 sentences per section) +- [ ] No accumulation of full project analysis in memory + +## Batching Strategy (Deep/Exhaustive Scans) + +- [ ] Batching applied for deep and exhaustive scan levels +- [ ] Batches organized by SUBFOLDER (not arbitrary file count) +- [ ] Large files (>5000 LOC) handled with appropriate judgment +- [ ] Each batch: read files, extract info, write output, validate, purge context +- [ ] Batch completion tracked in state file (batches_completed array) +- [ ] Batch summaries kept in context (1-2 sentences max) + +## Project Detection and Classification + +- [ ] Project type correctly identified and matches actual technology stack +- [ ] Multi-part vs single-part structure accurately detected +- [ ] All project parts identified if multi-part (no missing client/server/etc.) +- [ ] Documentation requirements loaded for each part type +- [ ] Architecture registry match is appropriate for detected stack + +## Technology Stack Analysis + +- [ ] All major technologies identified (framework, language, database, etc.) 
+- [ ] Versions captured where available +- [ ] Technology decision table is complete and accurate +- [ ] Dependencies and libraries documented +- [ ] Build tools and package managers identified + +## Codebase Scanning Completeness + +- [ ] All critical directories scanned based on project type +- [ ] API endpoints documented (if requires_api_scan = true) +- [ ] Data models captured (if requires_data_models = true) +- [ ] State management patterns identified (if requires_state_management = true) +- [ ] UI components inventoried (if requires_ui_components = true) +- [ ] Configuration files located and documented +- [ ] Authentication/security patterns identified +- [ ] Entry points correctly identified +- [ ] Integration points mapped (for multi-part projects) +- [ ] Test files and patterns documented + +## Source Tree Analysis + +- [ ] Complete directory tree generated with no major omissions +- [ ] Critical folders highlighted and described +- [ ] Entry points clearly marked +- [ ] Integration paths noted (for multi-part) +- [ ] Asset locations identified (if applicable) +- [ ] File organization patterns explained + +## Architecture Documentation Quality + +- [ ] Architecture document uses appropriate template from registry +- [ ] All template sections filled with relevant information (no placeholders) +- [ ] Technology stack section is comprehensive +- [ ] Architecture pattern clearly explained +- [ ] Data architecture documented (if applicable) +- [ ] API design documented (if applicable) +- [ ] Component structure explained (if applicable) +- [ ] Source tree included and annotated +- [ ] Testing strategy documented +- [ ] Deployment architecture captured (if config found) + +## Development and Operations Documentation + +- [ ] Prerequisites clearly listed +- [ ] Installation steps documented +- [ ] Environment setup instructions provided +- [ ] Local run commands specified +- [ ] Build process documented +- [ ] Test commands and approach explained +- [ ] Deployment process documented (if applicable) +- [ ] CI/CD pipeline details captured (if found) +- [ ] Contribution guidelines extracted (if found) + +## Multi-Part Project Specific (if applicable) + +- [ ] Each part documented separately +- [ ] Part-specific architecture files created (architecture-{part_id}.md) +- [ ] Part-specific component inventories created (if applicable) +- [ ] Part-specific development guides created +- [ ] Integration architecture document created +- [ ] Integration points clearly defined with type and details +- [ ] Data flow between parts explained +- [ ] project-parts.json metadata file created + +## Index and Navigation + +- [ ] index.md created as master entry point +- [ ] Project structure clearly summarized in index +- [ ] Quick reference section complete and accurate +- [ ] All generated docs linked from index +- [ ] All existing docs linked from index (if found) +- [ ] Getting started section provides clear next steps +- [ ] AI-assisted development guidance included +- [ ] Navigation structure matches project complexity (simple for single-part, detailed for multi-part) + +## File Completeness + +- [ ] index.md generated +- [ ] project-overview.md generated +- [ ] source-tree-analysis.md generated +- [ ] architecture.md (or per-part) generated +- [ ] component-inventory.md (or per-part) generated if UI components exist +- [ ] development-guide.md (or per-part) generated +- [ ] api-contracts.md (or per-part) generated if APIs documented +- [ ] data-models.md (or per-part) generated if data models found 
+- [ ] deployment-guide.md generated if deployment config found +- [ ] contribution-guide.md generated if guidelines found +- [ ] integration-architecture.md generated if multi-part +- [ ] project-parts.json generated if multi-part + +## Content Quality + +- [ ] Technical information is accurate and specific +- [ ] No generic placeholders or "TODO" items remain +- [ ] Examples and code snippets are relevant to actual project +- [ ] File paths and directory references are correct +- [ ] Technology names and versions are accurate +- [ ] Terminology is consistent across all documents +- [ ] Descriptions are clear and actionable + +## Brownfield PRD Readiness + +- [ ] Documentation provides enough context for AI to understand existing system +- [ ] Integration points are clear for planning new features +- [ ] Reusable components are identified for leveraging in new work +- [ ] Data models are documented for schema extension planning +- [ ] API contracts are documented for endpoint expansion +- [ ] Code conventions and patterns are captured for consistency +- [ ] Architecture constraints are clear for informed decision-making + +## Output Validation + +- [ ] All files saved to correct output folder +- [ ] File naming follows convention (no part suffix for single-part, with suffix for multi-part) +- [ ] No broken internal links between documents +- [ ] Markdown formatting is correct and renders properly +- [ ] JSON files are valid (project-parts.json if applicable) + +## Final Validation + +- [ ] User confirmed project classification is accurate +- [ ] User provided any additional context needed +- [ ] All requested areas of focus addressed +- [ ] Documentation is immediately usable for brownfield PRD workflow +- [ ] No critical information gaps identified + +## Issues Found + +### Critical Issues (must fix before completion) + +- + +### Minor Issues (can be addressed later) + +- + +### Missing Information (to note for user) + +- + +## Deep-Dive Mode Validation (if deep-dive was performed) + +- [ ] Deep-dive target area correctly identified and scoped +- [ ] All files in target area read completely (no skipped files) +- [ ] File inventory includes all exports with complete signatures +- [ ] Dependencies mapped for all files +- [ ] Dependents identified (who imports each file) +- [ ] Code snippets included for key implementation details +- [ ] Patterns and design approaches documented +- [ ] State management strategy explained +- [ ] Side effects documented (API calls, DB queries, etc.) 
+- [ ] Error handling approaches captured +- [ ] Testing files and coverage documented +- [ ] TODOs and comments extracted +- [ ] Dependency graph created showing relationships +- [ ] Data flow traced through the scanned area +- [ ] Integration points with rest of codebase identified +- [ ] Related code and similar patterns found outside scanned area +- [ ] Reuse opportunities documented +- [ ] Implementation guidance provided +- [ ] Modification instructions clear +- [ ] Index.md updated with deep-dive link +- [ ] Deep-dive documentation is immediately useful for implementation + +--- + +## State File Quality + +- [ ] State file is valid JSON (no syntax errors) +- [ ] State file is optimized (no pretty-printing, minimal whitespace) +- [ ] State file contains all completed steps with timestamps +- [ ] State file outputs_generated list is accurate and complete +- [ ] State file resume_instructions are clear and actionable +- [ ] State file findings contain only high-level summaries (not detailed data) +- [ ] State file can be successfully loaded for resumption + +## Completion Criteria + +All items in the following sections must be checked: + +- ✓ Scan Level and Resumability +- ✓ Write-as-you-go Architecture +- ✓ Batching Strategy (if deep/exhaustive scan) +- ✓ Project Detection and Classification +- ✓ Technology Stack Analysis +- ✓ Architecture Documentation Quality +- ✓ Index and Navigation +- ✓ File Completeness +- ✓ Brownfield PRD Readiness +- ✓ State File Quality +- ✓ Deep-Dive Mode Validation (if applicable) + +The workflow is complete when: + +1. All critical checklist items are satisfied +2. No critical issues remain +3. User has reviewed and approved the documentation +4. Generated docs are ready for use in brownfield PRD workflow +5. Deep-dive docs (if any) are comprehensive and implementation-ready +6. 
State file is valid and can enable resumption if interrupted
diff --git a/src/bmm/workflows/document-project/documentation-requirements.csv b/src/bmm/workflows/document-project/documentation-requirements.csv
new file mode 100644
index 00000000..9f773ab0
--- /dev/null
+++ b/src/bmm/workflows/document-project/documentation-requirements.csv
@@ -0,0 +1,12 @@
+project_type_id,requires_api_scan,requires_data_models,requires_state_management,requires_ui_components,requires_deployment_config,key_file_patterns,critical_directories,integration_scan_patterns,test_file_patterns,config_patterns,auth_security_patterns,schema_migration_patterns,entry_point_patterns,shared_code_patterns,monorepo_workspace_patterns,async_event_patterns,ci_cd_patterns,asset_patterns,hardware_interface_patterns,protocol_schema_patterns,localization_patterns,requires_hardware_docs,requires_asset_inventory
+web,true,true,true,true,true,package.json;tsconfig.json;*.config.js;*.config.ts;vite.config.*;webpack.config.*;next.config.*;nuxt.config.*,src/;app/;pages/;components/;api/;lib/;styles/;public/;static/,*client.ts;*service.ts;*api.ts;fetch*.ts;axios*.ts;*http*.ts,*.test.ts;*.spec.ts;*.test.tsx;*.spec.tsx;**/__tests__/**;**/*.test.*;**/*.spec.*,.env*;config/*;*.config.*;.config/;settings/,*auth*.ts;*session*.ts;middleware/auth*;*.guard.ts;*authenticat*;*permission*;guards/,migrations/**;prisma/**;*.prisma;alembic/**;knex/**;*migration*.sql;*migration*.ts,main.ts;index.ts;app.ts;server.ts;_app.tsx;_app.ts;layout.tsx,shared/**;common/**;utils/**;lib/**;helpers/**;@*/**;packages/**,pnpm-workspace.yaml;lerna.json;nx.json;turbo.json;workspace.json;rush.json,*event*.ts;*queue*.ts;*subscriber*.ts;*consumer*.ts;*producer*.ts;*worker*.ts;jobs/**,.github/workflows/**;.gitlab-ci.yml;Jenkinsfile;.circleci/**;azure-pipelines.yml;bitbucket-pipelines.yml;.drone.yml,public/**;static/**;assets/**;images/**;media/**,N/A,*.proto;*.graphql;graphql/**;schema.graphql;*.avro;openapi.*;swagger.*,i18n/**;locales/**;lang/**;translations/**;messages/**;*.po;*.pot,false,false
+mobile,true,true,true,true,true,package.json;pubspec.yaml;Podfile;build.gradle;app.json;capacitor.config.*;ionic.config.json,src/;app/;screens/;components/;services/;models/;assets/;ios/;android/,*client.ts;*service.ts;*api.ts;fetch*.ts;axios*.ts;*http*.ts,*.test.ts;*.test.tsx;*_test.dart;*.test.dart;**/__tests__/**,.env*;config/*;app.json;capacitor.config.*;google-services.json;GoogleService-Info.plist,*auth*.ts;*session*.ts;*authenticat*;*permission*;*biometric*;secure-store*,migrations/**;realm/**;*.realm;watermelondb/**;sqlite/**,main.ts;index.ts;App.tsx;App.ts;main.dart,shared/**;common/**;utils/**;lib/**;components/shared/**;@*/**,pnpm-workspace.yaml;lerna.json;nx.json;turbo.json,*event*.ts;*notification*.ts;*push*.ts;background-fetch*,fastlane/**;.github/workflows/**;.gitlab-ci.yml;bitbucket-pipelines.yml;appcenter-*,assets/**;Resources/**;res/**;*.xcassets;drawable*/;mipmap*/;images/**,N/A,*.proto;graphql/**;*.graphql,i18n/**;locales/**;translations/**;*.strings;*.xml,false,true
+backend,true,true,false,false,true,package.json;requirements.txt;go.mod;Gemfile;pom.xml;build.gradle;Cargo.toml;*.csproj,src/;api/;services/;models/;routes/;controllers/;middleware/;handlers/;repositories/;domain/,*client.ts;*repository.ts;*service.ts;*connector*.ts;*adapter*.ts,*.test.ts;*.spec.ts;*_test.go;test_*.py;*Test.java;*_test.rs,.env*;config/*;*.config.*;application*.yml;application*.yaml;appsettings*.json;settings.py,*auth*.ts;*session*.ts;*authenticat*;*authorization*;middleware/auth*;guards/;*jwt*;*oauth*,migrations/**;alembic/**;flyway/**;liquibase/**;prisma/**;*.prisma;*migration*.sql;*migration*.ts;db/migrate,main.ts;index.ts;server.ts;app.ts;main.go;main.py;Program.cs;__init__.py,shared/**;common/**;utils/**;lib/**;core/**;@*/**;pkg/**,pnpm-workspace.yaml;lerna.json;nx.json;go.work,*event*.ts;*queue*.ts;*subscriber*.ts;*consumer*.ts;*producer*.ts;*worker*.ts;*handler*.ts;jobs/**;workers/**,.github/workflows/**;.gitlab-ci.yml;Jenkinsfile;.circleci/**;azure-pipelines.yml;.drone.yml,N/A,N/A,*.proto;*.graphql;graphql/**;*.avro;*.thrift;openapi.*;swagger.*;schema/**,N/A,false,false +cli,false,false,false,false,false,package.json;go.mod;Cargo.toml;setup.py;pyproject.toml;*.gemspec,src/;cmd/;cli/;bin/;lib/;commands/,N/A,*.test.ts;*_test.go;test_*.py;*.spec.ts;*_spec.rb,.env*;config/*;*.config.*;.*.rc;.*rc,N/A,N/A,main.ts;index.ts;cli.ts;main.go;main.py;__main__.py;bin/*,shared/**;common/**;utils/**;lib/**;helpers/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml;goreleaser.yml,N/A,N/A,N/A,N/A,false,false +library,false,false,false,false,false,package.json;setup.py;Cargo.toml;go.mod;*.gemspec;*.csproj;pom.xml,src/;lib/;dist/;pkg/;build/;target/,N/A,*.test.ts;*_test.go;test_*.py;*.spec.ts;*Test.java;*_test.rs,.*.rc;tsconfig.json;rollup.config.*;vite.config.*;webpack.config.*,N/A,N/A,index.ts;index.js;lib.rs;main.go;__init__.py,src/**;lib/**;core/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml;.circleci/**,N/A,N/A,N/A,N/A,false,false +desktop,false,false,true,true,true,package.json;Cargo.toml;*.csproj;CMakeLists.txt;tauri.conf.json;electron-builder.yml;wails.json,src/;app/;components/;main/;renderer/;resources/;assets/;build/,*service.ts;ipc*.ts;*bridge*.ts;*native*.ts;invoke*,*.test.ts;*.spec.ts;*_test.rs;*.spec.tsx,.env*;config/*;*.config.*;app.config.*;forge.config.*;builder.config.*,*auth*.ts;*session*.ts;keychain*;secure-storage*,N/A,main.ts;index.ts;main.js;src-tauri/main.rs;electron.ts,shared/**;common/**;utils/**;lib/**;components/shared/**,N/A,*event*.ts;*ipc*.ts;*message*.ts,.github/workflows/**;.gitlab-ci.yml;.circleci/**,resources/**;assets/**;icons/**;static/**;build/resources,N/A,N/A,i18n/**;locales/**;translations/**;lang/**,false,true +game,false,false,true,false,false,*.unity;*.godot;*.uproject;package.json;project.godot,Assets/;Scenes/;Scripts/;Prefabs/;Resources/;Content/;Source/;src/;scenes/;scripts/,N/A,*Test.cs;*_test.gd;*Test.cpp;*.test.ts,.env*;config/*;*.ini;settings/;GameSettings/,N/A,N/A,main.gd;Main.cs;GameManager.cs;main.cpp;index.ts,shared/**;common/**;utils/**;Core/**;Framework/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml,Assets/**;Scenes/**;Prefabs/**;Materials/**;Textures/**;Audio/**;Models/**;*.fbx;*.blend;*.shader;*.hlsl;*.glsl;Shaders/**;VFX/**,N/A,N/A,Localization/**;Languages/**;i18n/**,false,true 
+data,false,true,false,false,true,requirements.txt;pyproject.toml;dbt_project.yml;airflow.cfg;setup.py;Pipfile,dags/;pipelines/;models/;transformations/;notebooks/;sql/;etl/;jobs/,N/A,test_*.py;*_test.py;tests/**,.env*;config/*;profiles.yml;dbt_project.yml;airflow.cfg,N/A,migrations/**;dbt/models/**;*.sql;schemas/**,main.py;__init__.py;pipeline.py;dag.py,shared/**;common/**;utils/**;lib/**;helpers/**,N/A,*event*.py;*consumer*.py;*producer*.py;*worker*.py;jobs/**;tasks/**,.github/workflows/**;.gitlab-ci.yml;airflow/dags/**,N/A,N/A,*.proto;*.avro;schemas/**;*.parquet,N/A,false,false +extension,true,false,true,true,false,manifest.json;package.json;wxt.config.ts,src/;popup/;content/;background/;assets/;components/,*message.ts;*runtime.ts;*storage.ts;*tabs.ts,*.test.ts;*.spec.ts;*.test.tsx,.env*;wxt.config.*;webpack.config.*;vite.config.*,*auth*.ts;*session*.ts;*permission*,N/A,index.ts;popup.ts;background.ts;content.ts,shared/**;common/**;utils/**;lib/**,N/A,*message*.ts;*event*.ts;chrome.runtime*;browser.runtime*,.github/workflows/**,assets/**;icons/**;images/**;static/**,N/A,N/A,_locales/**;locales/**;i18n/**,false,false +infra,false,false,false,false,true,*.tf;*.tfvars;pulumi.yaml;cdk.json;*.yml;*.yaml;Dockerfile;docker-compose*.yml,terraform/;modules/;k8s/;charts/;playbooks/;roles/;policies/;stacks/,N/A,*_test.go;test_*.py;*_test.tf;*_spec.rb,.env*;*.tfvars;config/*;vars/;group_vars/;host_vars/,N/A,N/A,main.tf;index.ts;__main__.py;playbook.yml,modules/**;shared/**;common/**;lib/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml;.circleci/**,N/A,N/A,N/A,N/A,false,false +embedded,false,false,false,false,false,platformio.ini;CMakeLists.txt;*.ino;Makefile;*.ioc;mbed-os.lib,src/;lib/;include/;firmware/;drivers/;hal/;bsp/;components/,N/A,test_*.c;*_test.cpp;*_test.c;tests/**,.env*;config/*;sdkconfig;*.json;settings/,N/A,N/A,main.c;main.cpp;main.ino;app_main.c,lib/**;shared/**;common/**;drivers/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml,N/A,*.h;*.hpp;drivers/**;hal/**;bsp/**;pinout.*;peripheral*;gpio*;*.fzz;schematics/**,*.proto;mqtt*;coap*;modbus*,N/A,true,false diff --git a/src/bmm/workflows/document-project/instructions.md b/src/bmm/workflows/document-project/instructions.md new file mode 100644 index 00000000..2f567fa3 --- /dev/null +++ b/src/bmm/workflows/document-project/instructions.md @@ -0,0 +1,221 @@ +# Document Project Workflow Router + +The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml +You MUST have already loaded and processed: {project-root}/_bmad/bmm/workflows/document-project/workflow.yaml +Communicate all responses in {communication_language} + + + +This router determines workflow mode and delegates to specialized sub-workflows + + + + + mode: data + data_request: project_config + + + + {{suggestion}} + Note: Documentation workflow can run standalone. Continuing without progress tracking. + Set standalone_mode = true + Set status_file_found = false + + + + Store {{status_file_path}} for later updates + Set status_file_found = true + + + + Note: This is a greenfield project. Documentation workflow is typically for brownfield projects. + Continue anyway to document planning artifacts? (y/n) + + Exit workflow + + + + + + mode: validate + calling_workflow: document-project + + + + {{warning}} + Note: This may be auto-invoked by prd for brownfield documentation. + Continue with documentation? 
(y/n) + + {{suggestion}} + Exit workflow + + + + + + + +SMART LOADING STRATEGY: Check state file FIRST before loading any CSV files + +Check for existing state file at: {output_folder}/project-scan-report.json + + + Read state file and extract: timestamps, mode, scan_level, current_step, completed_steps, project_classification + Extract cached project_type_id(s) from state file if present + Calculate age of state file (current time - last_updated) + +I found an in-progress workflow state from {{last_updated}}. + +**Current Progress:** + +- Mode: {{mode}} +- Scan Level: {{scan_level}} +- Completed Steps: {{completed_steps_count}}/{{total_steps}} +- Last Step: {{current_step}} +- Project Type(s): {{cached_project_types}} + +Would you like to: + +1. **Resume from where we left off** - Continue from step {{current_step}} +2. **Start fresh** - Archive old state and begin new scan +3. **Cancel** - Exit without changes + +Your choice [1/2/3]: + + + + Set resume_mode = true + Set workflow_mode = {{mode}} + Load findings summaries from state file + Load cached project_type_id(s) from state file + + CONDITIONAL CSV LOADING FOR RESUME: + For each cached project_type_id, load ONLY the corresponding row from: {documentation_requirements_csv} + Skip loading project-types.csv and architecture_registry.csv (not needed on resume) + Store loaded doc requirements for use in remaining steps + + Display: "Resuming {{workflow_mode}} from {{current_step}} with cached project type(s): {{cached_project_types}}" + + + Read fully and follow: {installed_path}/workflows/deep-dive-instructions.md with resume context + + + + Read fully and follow: {installed_path}/workflows/full-scan-instructions.md with resume context + + + + + + Create archive directory: {output_folder}/.archive/ + Move old state file to: {output_folder}/.archive/project-scan-report-{{timestamp}}.json + Set resume_mode = false + Continue to Step 0.5 + + + + Display: "Exiting workflow without changes." + Exit workflow + + + + Display: "Found old state file (>24 hours). Starting fresh scan." + Archive old state file to: {output_folder}/.archive/project-scan-report-{{timestamp}}.json + Set resume_mode = false + Continue to Step 0.5 + + + + + +Check if {output_folder}/index.md exists + + + Read existing index.md to extract metadata (date, project structure, parts count) + Store as {{existing_doc_date}}, {{existing_structure}} + +I found existing documentation generated on {{existing_doc_date}}. + +What would you like to do? + +1. **Re-scan entire project** - Update all documentation with latest changes +2. **Deep-dive into specific area** - Generate detailed documentation for a particular feature/module/folder +3. **Cancel** - Keep existing documentation as-is + +Your choice [1/2/3]: + + + + Set workflow_mode = "full_rescan" + Display: "Starting full project rescan..." + Read fully and follow: {installed_path}/workflows/full-scan-instructions.md + After sub-workflow completes, continue to Step 4 + + + + Set workflow_mode = "deep_dive" + Set scan_level = "exhaustive" + Display: "Starting deep-dive documentation mode..." + Read fully and follow: {installed_path}/workflows/deep-dive-instructions.md + After sub-workflow completes, continue to Step 4 + + + + Display message: "Keeping existing documentation. Exiting workflow." + Exit workflow + + + + + Set workflow_mode = "initial_scan" + Display: "No existing documentation found. Starting initial project scan..." 
+ Read fully and follow: {installed_path}/workflows/full-scan-instructions.md + After sub-workflow completes, continue to Step 4 + + + + + + + + + mode: update + action: complete_workflow + workflow_name: document-project + + + + Status updated! + + + +**✅ Document Project Workflow Complete, {user_name}!** + +**Documentation Generated:** + +- Mode: {{workflow_mode}} +- Scan Level: {{scan_level}} +- Output: {output_folder}/index.md and related files + +{{#if status_file_found}} +**Status Updated:** + +- Progress tracking updated + +**Next Steps:** + +- **Next required:** {{next_workflow}} ({{next_agent}} agent) + +Check status anytime with: `workflow-status` +{{else}} +**Next Steps:** +Since no workflow is in progress: + +- Refer to the BMM workflow guide if unsure what to do next +- Or run `workflow-init` to create a workflow path and get guided next steps + {{/if}} + + + + + diff --git a/src/bmm/workflows/document-project/templates/deep-dive-template.md b/src/bmm/workflows/document-project/templates/deep-dive-template.md new file mode 100644 index 00000000..c1285cdc --- /dev/null +++ b/src/bmm/workflows/document-project/templates/deep-dive-template.md @@ -0,0 +1,345 @@ +# {{target_name}} - Deep Dive Documentation + +**Generated:** {{date}} +**Scope:** {{target_path}} +**Files Analyzed:** {{file_count}} +**Lines of Code:** {{total_loc}} +**Workflow Mode:** Exhaustive Deep-Dive + +## Overview + +{{target_description}} + +**Purpose:** {{target_purpose}} +**Key Responsibilities:** {{responsibilities}} +**Integration Points:** {{integration_summary}} + +## Complete File Inventory + +{{#each files_in_inventory}} + +### {{file_path}} + +**Purpose:** {{purpose}} +**Lines of Code:** {{loc}} +**File Type:** {{file_type}} + +**What Future Contributors Must Know:** {{contributor_note}} + +**Exports:** +{{#each exports}} + +- `{{signature}}` - {{description}} + {{/each}} + +**Dependencies:** +{{#each imports}} + +- `{{import_path}}` - {{reason}} + {{/each}} + +**Used By:** +{{#each dependents}} + +- `{{dependent_path}}` + {{/each}} + +**Key Implementation Details:** + +```{{language}} +{{key_code_snippet}} +``` + +{{implementation_notes}} + +**Patterns Used:** +{{#each patterns}} + +- {{pattern_name}}: {{pattern_description}} + {{/each}} + +**State Management:** {{state_approach}} + +**Side Effects:** +{{#each side_effects}} + +- {{effect_type}}: {{effect_description}} + {{/each}} + +**Error Handling:** {{error_handling_approach}} + +**Testing:** + +- Test File: {{test_file_path}} +- Coverage: {{coverage_percentage}}% +- Test Approach: {{test_approach}} + +**Comments/TODOs:** +{{#each todos}} + +- Line {{line_number}}: {{todo_text}} + {{/each}} + +--- + +{{/each}} + +## Contributor Checklist + +- **Risks & Gotchas:** {{risks_notes}} +- **Pre-change Verification Steps:** {{verification_steps}} +- **Suggested Tests Before PR:** {{suggested_tests}} + +## Architecture & Design Patterns + +### Code Organization + +{{organization_approach}} + +### Design Patterns + +{{#each design_patterns}} + +- **{{pattern_name}}**: {{usage_description}} + {{/each}} + +### State Management Strategy + +{{state_management_details}} + +### Error Handling Philosophy + +{{error_handling_philosophy}} + +### Testing Strategy + +{{testing_strategy}} + +## Data Flow + +{{data_flow_diagram}} + +### Data Entry Points + +{{#each entry_points}} + +- **{{entry_name}}**: {{entry_description}} + {{/each}} + +### Data Transformations + +{{#each transformations}} + +- **{{transformation_name}}**: {{transformation_description}} + {{/each}} 
+ +### Data Exit Points + +{{#each exit_points}} + +- **{{exit_name}}**: {{exit_description}} + {{/each}} + +## Integration Points + +### APIs Consumed + +{{#each apis_consumed}} + +- **{{api_endpoint}}**: {{api_description}} + - Method: {{method}} + - Authentication: {{auth_requirement}} + - Response: {{response_schema}} + {{/each}} + +### APIs Exposed + +{{#each apis_exposed}} + +- **{{api_endpoint}}**: {{api_description}} + - Method: {{method}} + - Request: {{request_schema}} + - Response: {{response_schema}} + {{/each}} + +### Shared State + +{{#each shared_state}} + +- **{{state_name}}**: {{state_description}} + - Type: {{state_type}} + - Accessed By: {{accessors}} + {{/each}} + +### Events + +{{#each events}} + +- **{{event_name}}**: {{event_description}} + - Type: {{publish_or_subscribe}} + - Payload: {{payload_schema}} + {{/each}} + +### Database Access + +{{#each database_operations}} + +- **{{table_name}}**: {{operation_type}} + - Queries: {{query_patterns}} + - Indexes Used: {{indexes}} + {{/each}} + +## Dependency Graph + +{{dependency_graph_visualization}} + +### Entry Points (Not Imported by Others in Scope) + +{{#each entry_point_files}} + +- {{file_path}} + {{/each}} + +### Leaf Nodes (Don't Import Others in Scope) + +{{#each leaf_files}} + +- {{file_path}} + {{/each}} + +### Circular Dependencies + +{{#if has_circular_dependencies}} +⚠️ Circular dependencies detected: +{{#each circular_deps}} + +- {{cycle_description}} + {{/each}} + {{else}} + ✓ No circular dependencies detected + {{/if}} + +## Testing Analysis + +### Test Coverage Summary + +- **Statements:** {{statements_coverage}}% +- **Branches:** {{branches_coverage}}% +- **Functions:** {{functions_coverage}}% +- **Lines:** {{lines_coverage}}% + +### Test Files + +{{#each test_files}} + +- **{{test_file_path}}** + - Tests: {{test_count}} + - Approach: {{test_approach}} + - Mocking Strategy: {{mocking_strategy}} + {{/each}} + +### Test Utilities Available + +{{#each test_utilities}} + +- `{{utility_name}}`: {{utility_description}} + {{/each}} + +### Testing Gaps + +{{#each testing_gaps}} + +- {{gap_description}} + {{/each}} + +## Related Code & Reuse Opportunities + +### Similar Features Elsewhere + +{{#each similar_features}} + +- **{{feature_name}}** (`{{feature_path}}`) + - Similarity: {{similarity_description}} + - Can Reference For: {{reference_use_case}} + {{/each}} + +### Reusable Utilities Available + +{{#each reusable_utilities}} + +- **{{utility_name}}** (`{{utility_path}}`) + - Purpose: {{utility_purpose}} + - How to Use: {{usage_example}} + {{/each}} + +### Patterns to Follow + +{{#each patterns_to_follow}} + +- **{{pattern_name}}**: Reference `{{reference_file}}` for implementation + {{/each}} + +## Implementation Notes + +### Code Quality Observations + +{{#each quality_observations}} + +- {{observation}} + {{/each}} + +### TODOs and Future Work + +{{#each all_todos}} + +- **{{file_path}}:{{line_number}}**: {{todo_text}} + {{/each}} + +### Known Issues + +{{#each known_issues}} + +- {{issue_description}} + {{/each}} + +### Optimization Opportunities + +{{#each optimizations}} + +- {{optimization_suggestion}} + {{/each}} + +### Technical Debt + +{{#each tech_debt_items}} + +- {{debt_description}} + {{/each}} + +## Modification Guidance + +### To Add New Functionality + +{{modification_guidance_add}} + +### To Modify Existing Functionality + +{{modification_guidance_modify}} + +### To Remove/Deprecate + +{{modification_guidance_remove}} + +### Testing Checklist for Changes + +{{#each 
testing_checklist_items}} + +- [ ] {{checklist_item}} + {{/each}} + +--- + +_Generated by `document-project` workflow (deep-dive mode)_ +_Base Documentation: docs/index.md_ +_Scan Date: {{date}}_ +_Analysis Mode: Exhaustive_ diff --git a/src/bmm/workflows/document-project/templates/index-template.md b/src/bmm/workflows/document-project/templates/index-template.md new file mode 100644 index 00000000..0340a35a --- /dev/null +++ b/src/bmm/workflows/document-project/templates/index-template.md @@ -0,0 +1,169 @@ +# {{project_name}} Documentation Index + +**Type:** {{repository_type}}{{#if is_multi_part}} with {{parts_count}} parts{{/if}} +**Primary Language:** {{primary_language}} +**Architecture:** {{architecture_type}} +**Last Updated:** {{date}} + +## Project Overview + +{{project_description}} + +{{#if is_multi_part}} + +## Project Structure + +This project consists of {{parts_count}} parts: + +{{#each project_parts}} + +### {{part_name}} ({{part_id}}) + +- **Type:** {{project_type}} +- **Location:** `{{root_path}}` +- **Tech Stack:** {{tech_stack_summary}} +- **Entry Point:** {{entry_point}} + {{/each}} + +## Cross-Part Integration + +{{integration_summary}} + +{{/if}} + +## Quick Reference + +{{#if is_single_part}} + +- **Tech Stack:** {{tech_stack_summary}} +- **Entry Point:** {{entry_point}} +- **Architecture Pattern:** {{architecture_pattern}} +- **Database:** {{database}} +- **Deployment:** {{deployment_platform}} + {{else}} + {{#each project_parts}} + +### {{part_name}} Quick Ref + +- **Stack:** {{tech_stack_summary}} +- **Entry:** {{entry_point}} +- **Pattern:** {{architecture_pattern}} + {{/each}} + {{/if}} + +## Generated Documentation + +### Core Documentation + +- [Project Overview](./project-overview.md) - Executive summary and high-level architecture +- [Source Tree Analysis](./source-tree-analysis.md) - Annotated directory structure + +{{#if is_single_part}} + +- [Architecture](./architecture.md) - Detailed technical architecture +- [Component Inventory](./component-inventory.md) - Catalog of major components{{#if has_ui_components}} and UI elements{{/if}} +- [Development Guide](./development-guide.md) - Local setup and development workflow + {{#if has_api_docs}}- [API Contracts](./api-contracts.md) - API endpoints and schemas{{/if}} + {{#if has_data_models}}- [Data Models](./data-models.md) - Database schema and models{{/if}} + {{else}} + +### Part-Specific Documentation + +{{#each project_parts}} + +#### {{part_name}} ({{part_id}}) + +- [Architecture](./architecture-{{part_id}}.md) - Technical architecture for {{part_name}} + {{#if has_components}}- [Components](./component-inventory-{{part_id}}.md) - Component catalog{{/if}} +- [Development Guide](./development-guide-{{part_id}}.md) - Setup and dev workflow + {{#if has_api}}- [API Contracts](./api-contracts-{{part_id}}.md) - API documentation{{/if}} + {{#if has_data}}- [Data Models](./data-models-{{part_id}}.md) - Data architecture{{/if}} + {{/each}} + +### Integration + +- [Integration Architecture](./integration-architecture.md) - How parts communicate +- [Project Parts Metadata](./project-parts.json) - Machine-readable structure + {{/if}} + +### Optional Documentation + +{{#if has_deployment_guide}}- [Deployment Guide](./deployment-guide.md) - Deployment process and infrastructure{{/if}} +{{#if has_contribution_guide}}- [Contribution Guide](./contribution-guide.md) - Contributing guidelines and standards{{/if}} + +## Existing Documentation + +{{#if has_existing_docs}} +{{#each existing_docs}} + +- [{{title}}]({{path}}) 
- {{description}} + {{/each}} + {{else}} + No existing documentation files were found in the project. + {{/if}} + +## Getting Started + +{{#if is_single_part}} + +### Prerequisites + +{{prerequisites}} + +### Setup + +```bash +{{setup_commands}} +``` + +### Run Locally + +```bash +{{run_commands}} +``` + +### Run Tests + +```bash +{{test_commands}} +``` + +{{else}} +{{#each project_parts}} + +### {{part_name}} Setup + +**Prerequisites:** {{prerequisites}} + +**Install & Run:** + +```bash +cd {{root_path}} +{{setup_command}} +{{run_command}} +``` + +{{/each}} +{{/if}} + +## For AI-Assisted Development + +This documentation was generated specifically to enable AI agents to understand and extend this codebase. + +### When Planning New Features: + +**UI-only features:** +{{#if is_multi_part}}→ Reference: `architecture-{{ui_part_id}}.md`, `component-inventory-{{ui_part_id}}.md`{{else}}→ Reference: `architecture.md`, `component-inventory.md`{{/if}} + +**API/Backend features:** +{{#if is_multi_part}}→ Reference: `architecture-{{api_part_id}}.md`, `api-contracts-{{api_part_id}}.md`, `data-models-{{api_part_id}}.md`{{else}}→ Reference: `architecture.md`{{#if has_api_docs}}, `api-contracts.md`{{/if}}{{#if has_data_models}}, `data-models.md`{{/if}}{{/if}} + +**Full-stack features:** +→ Reference: All architecture docs{{#if is_multi_part}} + `integration-architecture.md`{{/if}} + +**Deployment changes:** +{{#if has_deployment_guide}}→ Reference: `deployment-guide.md`{{else}}→ Review CI/CD configs in project{{/if}} + +--- + +_Documentation generated by BMAD Method `document-project` workflow_ diff --git a/src/bmm/workflows/document-project/templates/project-overview-template.md b/src/bmm/workflows/document-project/templates/project-overview-template.md new file mode 100644 index 00000000..3bbb0d24 --- /dev/null +++ b/src/bmm/workflows/document-project/templates/project-overview-template.md @@ -0,0 +1,103 @@ +# {{project_name}} - Project Overview + +**Date:** {{date}} +**Type:** {{project_type}} +**Architecture:** {{architecture_type}} + +## Executive Summary + +{{executive_summary}} + +## Project Classification + +- **Repository Type:** {{repository_type}} +- **Project Type(s):** {{project_types_list}} +- **Primary Language(s):** {{primary_languages}} +- **Architecture Pattern:** {{architecture_pattern}} + +{{#if is_multi_part}} + +## Multi-Part Structure + +This project consists of {{parts_count}} distinct parts: + +{{#each project_parts}} + +### {{part_name}} + +- **Type:** {{project_type}} +- **Location:** `{{root_path}}` +- **Purpose:** {{purpose}} +- **Tech Stack:** {{tech_stack}} + {{/each}} + +### How Parts Integrate + +{{integration_description}} +{{/if}} + +## Technology Stack Summary + +{{#if is_single_part}} +{{technology_table}} +{{else}} +{{#each project_parts}} + +### {{part_name}} Stack + +{{technology_table}} +{{/each}} +{{/if}} + +## Key Features + +{{key_features}} + +## Architecture Highlights + +{{architecture_highlights}} + +## Development Overview + +### Prerequisites + +{{prerequisites}} + +### Getting Started + +{{getting_started_summary}} + +### Key Commands + +{{#if is_single_part}} + +- **Install:** `{{install_command}}` +- **Dev:** `{{dev_command}}` +- **Build:** `{{build_command}}` +- **Test:** `{{test_command}}` + {{else}} + {{#each project_parts}} + +#### {{part_name}} + +- **Install:** `{{install_command}}` +- **Dev:** `{{dev_command}}` + {{/each}} + {{/if}} + +## Repository Structure + +{{repository_structure_summary}} + +## Documentation Map + +For detailed 
information, see: + +- [index.md](./index.md) - Master documentation index +- [architecture.md](./architecture{{#if is_multi_part}}-{part_id}{{/if}}.md) - Detailed architecture +- [source-tree-analysis.md](./source-tree-analysis.md) - Directory structure +- [development-guide.md](./development-guide{{#if is_multi_part}}-{part_id}{{/if}}.md) - Development workflow + +--- + +_Generated using BMAD Method `document-project` workflow_ diff --git a/src/bmm/workflows/document-project/templates/project-scan-report-schema.json b/src/bmm/workflows/document-project/templates/project-scan-report-schema.json new file mode 100644 index 00000000..8133e15f --- /dev/null +++ b/src/bmm/workflows/document-project/templates/project-scan-report-schema.json @@ -0,0 +1,160 @@ +{ + "$schema": "http://json-schema.org/draft-07/schema#", + "title": "Project Scan Report Schema", + "description": "State tracking file for document-project workflow resumability", + "type": "object", + "required": ["workflow_version", "timestamps", "mode", "scan_level", "completed_steps", "current_step"], + "properties": { + "workflow_version": { + "type": "string", + "description": "Version of document-project workflow", + "example": "1.2.0" + }, + "timestamps": { + "type": "object", + "required": ["started", "last_updated"], + "properties": { + "started": { + "type": "string", + "format": "date-time", + "description": "ISO 8601 timestamp when workflow started" + }, + "last_updated": { + "type": "string", + "format": "date-time", + "description": "ISO 8601 timestamp of last state update" + }, + "completed": { + "type": "string", + "format": "date-time", + "description": "ISO 8601 timestamp when workflow completed (if finished)" + } + } + }, + "mode": { + "type": "string", + "enum": ["initial_scan", "full_rescan", "deep_dive"], + "description": "Workflow execution mode" + }, + "scan_level": { + "type": "string", + "enum": ["quick", "deep", "exhaustive"], + "description": "Scan depth level (deep_dive mode always uses exhaustive)" + }, + "project_root": { + "type": "string", + "description": "Absolute path to project root directory" + }, + "output_folder": { + "type": "string", + "description": "Absolute path to output folder" + }, + "completed_steps": { + "type": "array", + "items": { + "type": "object", + "required": ["step", "status"], + "properties": { + "step": { + "type": "string", + "description": "Step identifier (e.g., 'step_1', 'step_2')" + }, + "status": { + "type": "string", + "enum": ["completed", "partial", "failed"] + }, + "timestamp": { + "type": "string", + "format": "date-time" + }, + "outputs": { + "type": "array", + "items": { "type": "string" }, + "description": "Files written during this step" + }, + "summary": { + "type": "string", + "description": "1-2 sentence summary of step outcome" + } + } + } + }, + "current_step": { + "type": "string", + "description": "Current step identifier for resumption" + }, + "findings": { + "type": "object", + "description": "High-level summaries only (detailed findings purged after writing)", + "properties": { + "project_classification": { + "type": "object", + "properties": { + "repository_type": { "type": "string" }, + "parts_count": { "type": "integer" }, + "primary_language": { "type": "string" }, + "architecture_type": { "type": "string" } + } + }, + "technology_stack": { + "type": "array", + "items": { + "type": "object", + "properties": { + "part_id": { "type": "string" }, + "tech_summary": { "type": "string" } + } + } + }, + "batches_completed": { + "type": "array", + 
"description": "For deep/exhaustive scans: subfolders processed", + "items": { + "type": "object", + "properties": { + "path": { "type": "string" }, + "files_scanned": { "type": "integer" }, + "summary": { "type": "string" } + } + } + } + } + }, + "outputs_generated": { + "type": "array", + "items": { "type": "string" }, + "description": "List of all output files generated" + }, + "resume_instructions": { + "type": "string", + "description": "Instructions for resuming from current_step" + }, + "validation_status": { + "type": "object", + "properties": { + "last_validated": { + "type": "string", + "format": "date-time" + }, + "validation_errors": { + "type": "array", + "items": { "type": "string" } + } + } + }, + "deep_dive_targets": { + "type": "array", + "description": "Track deep-dive areas analyzed (for deep_dive mode)", + "items": { + "type": "object", + "properties": { + "target_name": { "type": "string" }, + "target_path": { "type": "string" }, + "files_analyzed": { "type": "integer" }, + "output_file": { "type": "string" }, + "timestamp": { "type": "string", "format": "date-time" } + } + } + } + } +} diff --git a/src/bmm/workflows/document-project/templates/source-tree-template.md b/src/bmm/workflows/document-project/templates/source-tree-template.md new file mode 100644 index 00000000..20306217 --- /dev/null +++ b/src/bmm/workflows/document-project/templates/source-tree-template.md @@ -0,0 +1,135 @@ +# {{project_name}} - Source Tree Analysis + +**Date:** {{date}} + +## Overview + +{{source_tree_overview}} + +{{#if is_multi_part}} + +## Multi-Part Structure + +This project is organized into {{parts_count}} distinct parts: + +{{#each project_parts}} + +- **{{part_name}}** (`{{root_path}}`): {{purpose}} + {{/each}} + {{/if}} + +## Complete Directory Structure + +``` +{{complete_source_tree}} +``` + +## Critical Directories + +{{#each critical_folders}} + +### `{{folder_path}}` + +{{description}} + +**Purpose:** {{purpose}} +**Contains:** {{contents_summary}} +{{#if entry_points}}**Entry Points:** {{entry_points}}{{/if}} +{{#if integration_note}}**Integration:** {{integration_note}}{{/if}} + +{{/each}} + +{{#if is_multi_part}} + +## Part-Specific Trees + +{{#each project_parts}} + +### {{part_name}} Structure + +``` +{{source_tree}} +``` + +**Key Directories:** +{{#each critical_directories}} + +- **`{{path}}`**: {{description}} + {{/each}} + +{{/each}} + +## Integration Points + +{{#each integration_points}} + +### {{from_part}} → {{to_part}} + +- **Location:** `{{integration_path}}` +- **Type:** {{integration_type}} +- **Details:** {{details}} + {{/each}} + +{{/if}} + +## Entry Points + +{{#if is_single_part}} + +- **Main Entry:** `{{main_entry_point}}` + {{#if additional_entry_points}} +- **Additional:** + {{#each additional_entry_points}} + - `{{path}}`: {{description}} + {{/each}} + {{/if}} + {{else}} + {{#each project_parts}} + +### {{part_name}} + +- **Entry Point:** `{{entry_point}}` +- **Bootstrap:** {{bootstrap_description}} + {{/each}} + {{/if}} + +## File Organization Patterns + +{{file_organization_patterns}} + +## Key File Types + +{{#each file_type_patterns}} + +### {{file_type}} + +- **Pattern:** `{{pattern}}` +- **Purpose:** {{purpose}} +- **Examples:** {{examples}} + {{/each}} + +## Asset Locations + +{{#if has_assets}} +{{#each asset_locations}} + +- **{{asset_type}}**: `{{location}}` ({{file_count}} files, {{total_size}}) + {{/each}} + {{else}} + No significant assets detected. 
+ {{/if}} + +## Configuration Files + +{{#each config_files}} + +- **`{{path}}`**: {{description}} + {{/each}} + +## Notes for Development + +{{development_notes}} + +--- + +_Generated using BMAD Method `document-project` workflow_ diff --git a/src/bmm/workflows/document-project/workflow.yaml b/src/bmm/workflows/document-project/workflow.yaml new file mode 100644 index 00000000..536257b3 --- /dev/null +++ b/src/bmm/workflows/document-project/workflow.yaml @@ -0,0 +1,30 @@ +# Document Project Workflow Configuration +name: "document-project" +version: "1.2.0" +description: "Analyzes and documents brownfield projects by scanning codebase, architecture, and patterns to create comprehensive reference documentation for AI-assisted development" +author: "BMad" + +# Critical variables +config_source: "{project-root}/_bmad/bmm/config.yaml" +output_folder: "{config_source}:project_knowledge" +user_name: "{config_source}:user_name" +communication_language: "{config_source}:communication_language" +document_output_language: "{config_source}:document_output_language" +user_skill_level: "{config_source}:user_skill_level" +date: system-generated + +# Module path and component files +installed_path: "{project-root}/_bmad/bmm/workflows/document-project" +instructions: "{installed_path}/instructions.md" +validation: "{installed_path}/checklist.md" + +# Required data files - CRITICAL for project type detection and documentation requirements +documentation_requirements_csv: "{installed_path}/documentation-requirements.csv" + +# Output configuration - Multiple files generated in output folder +# Primary output: {output_folder}/project-documentation/ +# Additional files generated by sub-workflows based on project structure + +standalone: true + +web_bundle: false diff --git a/src/bmm/workflows/document-project/workflows/deep-dive-instructions.md b/src/bmm/workflows/document-project/workflows/deep-dive-instructions.md new file mode 100644 index 00000000..c88dfb08 --- /dev/null +++ b/src/bmm/workflows/document-project/workflows/deep-dive-instructions.md @@ -0,0 +1,298 @@ +# Deep-Dive Documentation Instructions + + + +This workflow performs exhaustive deep-dive documentation of specific areas +Called by: ../document-project/instructions.md router +Handles: deep_dive mode only + + +Deep-dive mode requires literal full-file review. Sampling, guessing, or relying solely on tooling output is FORBIDDEN. +Load existing project structure from index.md and project-parts.json (if exists) +Load source tree analysis to understand available areas + + + Analyze existing documentation to suggest deep-dive options + +What area would you like to deep-dive into? + +**Suggested Areas Based on Project Structure:** + +{{#if has_api_routes}} + +## API Routes ({{api_route_count}} endpoints found) + +{{#each api_route_groups}} +{{group_index}}. {{group_name}} - {{endpoint_count}} endpoints in `{{path}}` +{{/each}} +{{/if}} + +{{#if has_feature_modules}} + +## Feature Modules ({{feature_count}} features) + +{{#each feature_modules}} +{{module_index}}. {{module_name}} - {{file_count}} files in `{{path}}` +{{/each}} +{{/if}} + +{{#if has_ui_components}} + +### UI Component Areas + +{{#each component_groups}} +{{group_index}}. {{group_name}} - {{component_count}} components in `{{path}}` +{{/each}} +{{/if}} + +{{#if has_services}} + +### Services/Business Logic + +{{#each service_groups}} +{{service_index}}. 
{{service_name}} - `{{path}}`
+{{/each}}
+{{/if}}
+
+**Or specify custom:**
+
+- Folder path (e.g., "client/src/features/dashboard")
+- File path (e.g., "server/src/api/users.ts")
+- Feature name (e.g., "authentication system")
+
+Enter your choice (number or custom path):
+
+
+Parse user input to determine:
+
+- target_type: "folder" | "file" | "feature" | "api_group" | "component_group"
+- target_path: Absolute path to scan
+- target_name: Human-readable name for documentation
+- target_scope: List of all files to analyze
+
+
+Store as {{deep_dive_target}}
+
+Display confirmation:
+Target: {{target_name}}
+Type: {{target_type}}
+Path: {{target_path}}
+Estimated files to analyze: {{estimated_file_count}}
+
+This will read EVERY file in this area. Proceed? [y/n]
+
+
+Return to Step 13a (select a different area)
+
+
+ Set scan_mode = "exhaustive"
+ Initialize file_inventory = []
+ You must read every line of every file in scope and capture a plain-language explanation (what the file does, side effects, why it matters) that future developer agents can act on. No shortcuts.
+
+
+ Get complete recursive file list from {{target_path}}
+ Filter out: node_modules/, .git/, dist/, build/, coverage/, *.min.js, *.map
+ For EVERY remaining file in folder:
+ - Read complete file contents (all lines)
+ - Extract all exports (functions, classes, types, interfaces, constants)
+ - Extract all imports (dependencies)
+ - Identify purpose from comments and code structure
+ - Write 1-2 sentences (minimum) in natural language describing behavior, side effects, assumptions, and anything a developer must know before modifying the file
+ - Extract function signatures with parameter types and return types
+ - Note any TODOs, FIXMEs, or comments
+ - Identify patterns (hooks, components, services, controllers, etc.)
+ - Capture per-file contributor guidance: `contributor_note`, `risks`, `verification_steps`, `suggested_tests`
+ - Store in file_inventory
+
+
+ Read complete file at {{target_path}}
+ Extract all information as above
+ Read all files it imports (follow import chain 1 level deep)
+ Find all files that import this file (dependents via grep)
+ Store all in file_inventory
+
+
+ Identify all route/controller files in API group
+ Read all route handlers completely
+ Read associated middleware, controllers, services
+ Read data models and schemas used
+ Extract complete request/response schemas
+ Document authentication and authorization requirements
+ Store all in file_inventory
+
+
+ Search codebase for all files related to feature name
+ Include: UI components, API endpoints, models, services, tests
+ Read each file completely
+ Store all in file_inventory
+
+
+ Get all component files in group
+ Read each component completely
+ Extract: Props interfaces, hooks used, child components, state management
+ Store all in file_inventory
+
+
+For each file in file_inventory, document:
+
+- **File Path:** Full path
+- **Purpose:** What this file does (1-2 sentences)
+- **Lines of Code:** Total LOC
+- **Exports:** Complete list with signatures
+  - Functions: `functionName(param: Type): ReturnType` - Description
+  - Classes: `ClassName` - Description with key methods
+  - Types/Interfaces: `TypeName` - Description
+  - Constants: `CONSTANT_NAME: Type` - Description
+- **Imports/Dependencies:** What it uses and why
+- **Used By:** Files that import this (dependents)
+- **Key Implementation Details:** Important logic, algorithms, patterns
+- **State Management:** If applicable (Redux, Context, local state)
+- **Side Effects:** API calls, database queries, file I/O, external services
+- **Error Handling:** Try/catch blocks, error boundaries, validation
+- **Testing:** Associated test files and coverage
+- **Comments/TODOs:** Any inline documentation or planned work
+
+comprehensive_file_inventory
+
+
+ Build dependency graph for scanned area:
+ - Create graph with files as nodes
+ - Add edges for import relationships
+ - Identify circular dependencies if any
+ - Find entry points (files not imported by others in scope)
+ - Find leaf nodes (files that don't import others in scope)
+
+
+Trace data flow through the system:
+
+- Follow function calls and data transformations
+- Track API calls and their responses
+- Document state updates and propagation
+- Map database queries and mutations
+
+
+Identify integration points:
+
+- External APIs consumed
+- Internal APIs/services called
+- Shared state accessed
+- Events published/subscribed
+- Database tables accessed
+
+
+dependency_graph
+data_flow_analysis
+integration_points
+
+
+ Search codebase OUTSIDE scanned area for:
+ - Similar file/folder naming patterns
+ - Similar function signatures
+ - Similar component structures
+ - Similar API patterns
+ - Reusable utilities that could be used
+
+
+Identify code reuse opportunities:
+
+- Shared utilities available
+- Design patterns used elsewhere
+- Component libraries available
+- Helper functions that could apply
+
+
+Find reference implementations:
+
+- Similar features in other parts of codebase
+- Established patterns to follow
+- Testing approaches used elsewhere
+
+
+related_code_references
+reuse_opportunities
+
+
+ Create documentation filename: deep-dive-{{sanitized_target_name}}.md
+ Aggregate contributor insights across files:
+ - Combine unique risk/gotcha notes into {{risks_notes}}
+ - Combine verification steps developers should run before changes into {{verification_steps}}
+
+
+
+      Search codebase OUTSIDE scanned area for:
+      - Similar file/folder naming patterns
+      - Similar function signatures
+      - Similar component structures
+      - Similar API patterns
+      - Reusable utilities that could be used
+
+
+Identify code reuse opportunities:
+
+- Shared utilities available
+- Design patterns used elsewhere
+- Component libraries available
+- Helper functions that could apply
+
+
+Find reference implementations:
+
+- Similar features in other parts of codebase
+- Established patterns to follow
+- Testing approaches used elsewhere
+
+
+related_code_references
+reuse_opportunities
+
+
+
+      Create documentation filename: deep-dive-{{sanitized_target_name}}.md
+      Aggregate contributor insights across files:
+      - Combine unique risk/gotcha notes into {{risks_notes}}
+      - Combine verification steps developers should run before changes into {{verification_steps}}
+      - Combine recommended test commands into {{suggested_tests}}
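+
+A minimal sketch of that aggregation step, assuming each inventory entry carries the optional contributor fields named above (the interface itself is illustrative):
+
+```ts
+// Sketch: fold per-file contributor guidance into the three combined lists.
+interface InventoryEntry {
+  path: string;
+  risks?: string[];
+  verification_steps?: string[];
+  suggested_tests?: string[];
+}
+
+function aggregateInsights(inventory: InventoryEntry[]) {
+  // Dedupe across files while preserving first-seen order.
+  const uniq = (pick: (e: InventoryEntry) => string[] | undefined) =>
+    [...new Set(inventory.flatMap((e) => pick(e) ?? []))];
+
+  return {
+    risks_notes: uniq((e) => e.risks),
+    verification_steps: uniq((e) => e.verification_steps),
+    suggested_tests: uniq((e) => e.suggested_tests),
+  };
+}
+```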
+
+
+Load complete deep-dive template from: {installed_path}/templates/deep-dive-template.md
+Fill template with all collected data from steps 13b-13d
+Write filled template to: {output_folder}/deep-dive-{{sanitized_target_name}}.md
+Validate deep-dive document completeness
+
+deep_dive_documentation
+
+Update state file:
+
+- Add to deep_dive_targets array: {"target_name": "{{target_name}}", "target_path": "{{target_path}}", "files_analyzed": {{file_count}}, "output_file": "deep-dive-{{sanitized_target_name}}.md", "timestamp": "{{now}}"}
+- Add output to outputs_generated
+- Update last_updated timestamp
+
+
+
+
+      Read existing index.md
+
+Check if "Deep-Dive Documentation" section exists
+
+
+      Add new section after "Generated Documentation":
+
+## Deep-Dive Documentation
+
+Detailed exhaustive analysis of specific areas:
+
+
+
+
+
+Add link to new deep-dive doc:
+
+- [{{target_name}} Deep-Dive](./deep-dive-{{sanitized_target_name}}.md) - Comprehensive analysis of {{target_description}} ({{file_count}} files, {{total_loc}} LOC) - Generated {{date}}
+
+
+      Update index metadata:
+      Last Updated: {{date}}
+      Deep-Dives: {{deep_dive_count}}
+
+
+      Save updated index.md
+
+      updated_index
+
+
+
+      Display summary:
+
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+## Deep-Dive Documentation Complete! ✓
+
+**Generated:** {output_folder}/deep-dive-{{sanitized_target_name}}.md
+**Files Analyzed:** {{file_count}}
+**Lines of Code Scanned:** {{total_loc}}
+**Time Taken:** ~{{duration}}
+
+**Documentation Includes:**
+
+- Complete file inventory with all exports
+- Dependency graph and data flow
+- Integration points and API contracts
+- Testing analysis and coverage
+- Related code and reuse opportunities
+- Implementation guidance
+
+**Index Updated:** {output_folder}/index.md now includes link to this deep-dive
+
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+
+Would you like to:
+
+1. **Deep-dive another area** - Analyze another feature/module/folder
+2. **Finish** - Complete workflow
+
+Your choice [1/2]:
+
+
+
+      Clear current deep_dive_target
+      Go to Step 13a (select new area)
+
+
+
+      Display final message:
+
+All deep-dive documentation complete!
+
+**Master Index:** {output_folder}/index.md
+**Deep-Dives Generated:** {{deep_dive_count}}
+
+These comprehensive docs are now ready for:
+
+- Architecture review
+- Implementation planning
+- Code understanding
+- Brownfield PRD creation
+
+Thank you for using the document-project workflow!
+
+Exit workflow
+
+
+
+
+
diff --git a/src/bmm/workflows/document-project/workflows/deep-dive.yaml b/src/bmm/workflows/document-project/workflows/deep-dive.yaml
new file mode 100644
index 00000000..a333cc4b
--- /dev/null
+++ b/src/bmm/workflows/document-project/workflows/deep-dive.yaml
@@ -0,0 +1,31 @@
+# Deep-Dive Documentation Workflow Configuration
+name: "document-project-deep-dive"
+description: "Exhaustive deep-dive documentation of specific project areas"
+author: "BMad"
+
+# This is a sub-workflow called by document-project/workflow.yaml
+parent_workflow: "{project-root}/_bmad/bmm/workflows/document-project/workflow.yaml"
+
+# Critical variables inherited from parent
+config_source: "{project-root}/_bmad/bmb/config.yaml"
+output_folder: "{config_source}:output_folder"
+user_name: "{config_source}:user_name"
+date: system-generated
+
+# Module path and component files
+installed_path: "{project-root}/_bmad/bmm/workflows/document-project/workflows"
+template: false # Action workflow
+instructions: "{installed_path}/deep-dive-instructions.md"
+validation: "{project-root}/_bmad/bmm/workflows/document-project/checklist.md"
+
+# Templates
+deep_dive_template: "{project-root}/_bmad/bmm/workflows/document-project/templates/deep-dive-template.md"
+
+# Runtime inputs (passed from parent workflow)
+workflow_mode: "deep_dive"
+scan_level: "exhaustive" # Deep-dive always uses exhaustive scan
+project_root_path: ""
+existing_index_path: "" # Path to existing index.md
+
+# Configuration
+autonomous: false # Requires user input to select target area
diff --git a/src/bmm/workflows/document-project/workflows/full-scan-instructions.md b/src/bmm/workflows/document-project/workflows/full-scan-instructions.md
new file mode 100644
index 00000000..1340f75e
--- /dev/null
+++ b/src/bmm/workflows/document-project/workflows/full-scan-instructions.md
@@ -0,0 +1,1106 @@
+# Full Project Scan Instructions
+
+
+
+This workflow performs complete project documentation (Steps 1-12)
+Called by: document-project/instructions.md router
+Handles: initial_scan and full_rescan modes
+
+
+DATA LOADING STRATEGY - Understanding the Documentation Requirements System:
+
+Display explanation to user:
+
+**How Project Type Detection Works:**
+
+This workflow uses a single comprehensive CSV file to intelligently document your project:
+
+**documentation-requirements.csv** ({documentation_requirements_csv})
+
+- Contains 12 project types (including web, mobile, backend, cli, library, desktop, game, data, extension, infra, and embedded)
+- 24-column schema combining project type detection AND documentation requirements
+- **Detection columns**: project_type_id, key_file_patterns (used to identify project type from codebase)
+- **Requirement columns**: requires_api_scan, requires_data_models, requires_ui_components, etc.
+- **Pattern columns**: critical_directories, test_file_patterns, config_patterns, etc.
+- Acts as a "scan guide" - tells the workflow WHERE to look and WHAT to document
+- Example: For project_type_id="web", key_file_patterns includes "package.json;tsconfig.json;*.config.js" and requires_api_scan=true
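+
+For illustration, detection against that CSV could be sketched as follows; the row shape, the `fast-glob` dependency, and the "most matches wins" heuristic are all assumptions, not the workflow's mandated implementation:
+
+```ts
+// Sketch: pick a project type by matching key_file_patterns globs.
+import fg from "fast-glob";
+
+interface RequirementsRow {
+  project_type_id: string; // e.g. "web"
+  key_file_patterns: string; // semicolon-separated globs, e.g. "package.json;tsconfig.json"
+  requires_api_scan: boolean; // ...plus the other requirement/pattern columns
+}
+
+async function detectProjectType(rows: RequirementsRow[], root: string) {
+  let best: { row: RequirementsRow; hits: number } | undefined;
+  for (const row of rows) {
+    const patterns = row.key_file_patterns.split(";").map((p) => p.trim());
+    const matches = await fg(patterns, { cwd: root, deep: 2 });
+    if (matches.length > (best?.hits ?? 0)) best = { row, hits: matches.length };
+  }
+  return best?.row; // this row then doubles as the scan guide for later steps
+}
+```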
+
+**When Documentation Requirements are Loaded:**
+
+- **Fresh Start (initial_scan)**: Load all 12 rows → detect type using key_file_patterns → use that row's requirements
+- **Resume**: Load ONLY the doc requirements row(s) for cached project_type_id(s)
+- **Full Rescan**: Same as fresh start (may re-detect project type)
+- **Deep Dive**: Load ONLY doc requirements for the part being deep-dived
+
+
+Now loading documentation requirements data for fresh start...
+
+Load documentation-requirements.csv from: {documentation_requirements_csv}
+Store all 12 rows indexed by project_type_id for project detection and requirements lookup
+Display: "Loaded documentation requirements for 12 project types (including web, mobile, backend, cli, library, desktop, game, data, extension, infra, and embedded)"
+
+Display: "✓ Documentation requirements loaded successfully. Ready to begin project analysis."
+
+
+
+Check if {output_folder}/index.md exists
+
+
+      Read existing index.md to extract metadata (date, project structure, parts count)
+      Store as {{existing_doc_date}}, {{existing_structure}}
+
+I found existing documentation generated on {{existing_doc_date}}.
+
+What would you like to do?
+
+1. **Re-scan entire project** - Update all documentation with latest changes
+2. **Deep-dive into specific area** - Generate detailed documentation for a particular feature/module/folder
+3. **Cancel** - Keep existing documentation as-is
+
+Your choice [1/2/3]:
+
+
+
+      Set workflow_mode = "full_rescan"
+      Continue to scan level selection below
+
+
+
+      Set workflow_mode = "deep_dive"
+      Set scan_level = "exhaustive"
+      Initialize state file with mode=deep_dive, scan_level=exhaustive
+      Jump to Step 13
+
+
+
+      Display message: "Keeping existing documentation. Exiting workflow."
+      Exit workflow
+
+
+
+
+      Set workflow_mode = "initial_scan"
+      Continue to scan level selection below
+
+
+Select Scan Level
+
+
+      Choose your scan depth level:
+
+**1. Quick Scan** (2-5 minutes) [DEFAULT]
+
+- Pattern-based analysis without reading source files
+- Scans: Config files, package manifests, directory structure
+- Best for: Quick project overview, initial understanding
+- File reading: Minimal (configs, README, package.json, etc.)
+
+**2. Deep Scan** (10-30 minutes)
+
+- Reads files in critical directories based on project type
+- Scans: All critical paths from documentation requirements
+- Best for: Comprehensive documentation for brownfield PRD
+- File reading: Selective (key files in critical directories)
+
+**3. Exhaustive Scan** (30-120 minutes)
+
+- Reads ALL source files in project
+- Scans: Every source file (excludes node_modules, dist, build)
+- Best for: Complete analysis, migration planning, detailed audit
+- File reading: Complete (all source files)
+
+Your choice [1/2/3] (default: 1):
+
+
+
+      Set scan_level = "quick"
+      Display: "Using Quick Scan (pattern-based, no source file reading)"
+
+
+
+      Set scan_level = "deep"
+      Display: "Using Deep Scan (reading critical files per project type)"
+
+
+
+      Set scan_level = "exhaustive"
+      Display: "Using Exhaustive Scan (reading all source files)"
+
+
+Initialize state file: {output_folder}/project-scan-report.json
+Every time you touch the state file, record: step id, human-readable summary (what you actually did), precise timestamp, and any outputs written. Vague phrases are unacceptable.
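+
+A sketch of that record discipline (the helper name is hypothetical; the state shape mirrors the initial-state JSON written below):
+
+```ts
+// Sketch: append a concrete, auditable entry on every state-file touch.
+import { readFileSync, writeFileSync } from "node:fs";
+
+function recordStep(statePath: string, step: string, summary: string, outputs: string[] = []) {
+  const state = JSON.parse(readFileSync(statePath, "utf8"));
+  state.completed_steps.push({
+    step,
+    status: "completed",
+    summary, // concrete, e.g. "Documented 14 endpoints in api-contracts-server.md"
+    timestamp: new Date().toISOString(),
+  });
+  state.outputs_generated.push(...outputs);
+  state.timestamps.last_updated = new Date().toISOString();
+  writeFileSync(statePath, JSON.stringify(state, null, 2));
+}
+```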
+Write initial state:
+{
+"workflow_version": "1.2.0",
+"timestamps": {"started": "{{current_timestamp}}", "last_updated": "{{current_timestamp}}"},
+"mode": "{{workflow_mode}}",
+"scan_level": "{{scan_level}}",
+"project_root": "{{project_root_path}}",
+"output_folder": "{{output_folder}}",
+"completed_steps": [],
+"current_step": "step_1",
+"findings": {},
+"outputs_generated": ["project-scan-report.json"],
+"resume_instructions": "Starting from step 1"
+}
+
+Continue with standard workflow from Step 1
+
+
+
+
+Ask user: "What is the root directory of the project to document?" (default: current working directory)
+Store as {{project_root_path}}
+
+Scan {{project_root_path}} for key indicators:
+
+- Directory structure (presence of client/, server/, api/, src/, app/, etc.)
+- Key files (package.json, go.mod, requirements.txt, etc.)
+- Technology markers matching key_file_patterns from documentation-requirements.csv
+
+
+Detect if project is:
+
+- **Monolith**: Single cohesive codebase
+- **Monorepo**: Multiple parts in one repository
+- **Multi-part**: Separate client/server or similar architecture
+
+
+
+      List detected parts with their paths
+      I detected multiple parts in this project:
+      {{detected_parts_list}}
+
+Is this correct? Should I document each part separately? [y/n]
+
+
+Set repository_type = "monorepo" or "multi-part"
+For each detected part:
+
+- Identify root path
+- Run project type detection using key_file_patterns from documentation-requirements.csv
+- Store as part in project_parts array
+
+
+Ask user to specify correct parts and their paths
+
+
+
+      Set repository_type = "monolith"
+      Create single part in project_parts array with root_path = {{project_root_path}}
+      Run project type detection using key_file_patterns from documentation-requirements.csv
+
+
+For each part, match detected technologies and file patterns against key_file_patterns column in documentation-requirements.csv
+Assign project_type_id to each part
+Load corresponding documentation_requirements row for each part
+
+I've classified this project:
+{{project_classification_summary}}
+
+Does this look correct? [y/n/edit]
+
+
+project_structure
+project_parts_metadata
+
+IMMEDIATELY update state file with step completion:
+
+- Add to completed_steps: {"step": "step_1", "status": "completed", "timestamp": "{{now}}", "summary": "Classified as {{repository_type}} with {{parts_count}} parts"}
+- Update current_step = "step_2"
+- Update findings.project_classification with high-level summary only
+- **CACHE project_type_id(s)**: Add project_types array: [{"part_id": "{{part_id}}", "project_type_id": "{{project_type_id}}", "display_name": "{{display_name}}"}]
+- This cached data prevents reloading the full CSV on resume - we can load just the needed documentation_requirements row(s)
+- Update last_updated timestamp
+- Write state file
+
+
+PURGE detailed scan results from memory, keep only summary: "{{repository_type}}, {{parts_count}} parts, {{primary_tech}}"
+
+
+
+For each part, scan for existing documentation using patterns:
+
+- README.md, README.rst, README.txt
+- CONTRIBUTING.md, CONTRIBUTING.rst
+- ARCHITECTURE.md, ARCHITECTURE.txt, docs/architecture/
+- DEPLOYMENT.md, DEPLOY.md, docs/deployment/
+- API.md, docs/api/
+- Any files in docs/, documentation/, .github/ folders
+
+
+Create inventory of existing_docs with:
+
+- File path
+- File type (readme, architecture, api, etc.)
+- Which part it belongs to (if multi-part) + + +I found these existing documentation files: +{{existing_docs_list}} + +Are there any other important documents or key areas I should focus on while analyzing this project? [Provide paths or guidance, or type 'none'] + + +Store user guidance as {{user_context}} + +existing_documentation_inventory +user_provided_context + +Update state file: + +- Add to completed_steps: {"step": "step_2", "status": "completed", "timestamp": "{{now}}", "summary": "Found {{existing_docs_count}} existing docs"} +- Update current_step = "step_3" +- Update last_updated timestamp + + +PURGE detailed doc contents from memory, keep only: "{{existing_docs_count}} docs found" + + + +For each part in project_parts: + - Load key_file_patterns from documentation_requirements + - Scan part root for these patterns + - Parse technology manifest files (package.json, go.mod, requirements.txt, etc.) + - Extract: framework, language, version, database, dependencies + - Build technology_table with columns: Category, Technology, Version, Justification + + +Determine architecture pattern based on detected tech stack: + +- Use project_type_id as primary indicator (e.g., "web" → layered/component-based, "backend" → service/API-centric) +- Consider framework patterns (e.g., React → component hierarchy, Express → middleware pipeline) +- Note architectural style in technology table +- Store as {{architecture_pattern}} for each part + + +technology_stack +architecture_patterns + +Update state file: + +- Add to completed_steps: {"step": "step_3", "status": "completed", "timestamp": "{{now}}", "summary": "Tech stack: {{primary_framework}}"} +- Update current_step = "step_4" +- Update findings.technology_stack with summary per part +- Update last_updated timestamp + + +PURGE detailed tech analysis from memory, keep only: "{{framework}} on {{language}}" + + + + +BATCHING STRATEGY FOR DEEP/EXHAUSTIVE SCANS + + + This step requires file reading. Apply batching strategy: + +Identify subfolders to process based on: - scan_level == "deep": Use critical_directories from documentation_requirements - scan_level == "exhaustive": Get ALL subfolders recursively (excluding node_modules, .git, dist, build, coverage) + + +For each subfolder to scan: 1. Read all files in subfolder (consider file size - use judgment for files >5000 LOC) 2. Extract required information based on conditional flags below 3. IMMEDIATELY write findings to appropriate output file 4. Validate written document (section-level validation) 5. Update state file with batch completion 6. PURGE detailed findings from context, keep only 1-2 sentence summary 7. 
Move to next subfolder
+
+
+Track batches in state file:
+findings.batches_completed: [
+{"path": "{{subfolder_path}}", "files_scanned": {{count}}, "summary": "{{brief_summary}}"}
+]
+
+
+
+
+      Use pattern matching only - do NOT read source files
+      Use glob/grep to identify file locations and patterns
+      Extract information from filenames, directory structure, and config files only
+
+
+For each part, check documentation_requirements boolean flags and execute corresponding scans:
+
+
+      Scan for API routes and endpoints using integration_scan_patterns
+      Look for: controllers/, routes/, api/, handlers/, endpoints/
+
+
+      Use glob to find route files, extract patterns from filenames and folder structure
+
+
+
+      Read files in batches (one subfolder at a time)
+      Extract: HTTP methods, paths, request/response types from actual code
+
+
+Build API contracts catalog
+IMMEDIATELY write to: {output_folder}/api-contracts-{part_id}.md
+Validate document has all required sections
+Update state file with output generated
+PURGE detailed API data, keep only: "{{api_count}} endpoints documented"
+api_contracts_{part_id}
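+
+As an illustrative sketch only: for an Express-style codebase, endpoint extraction might use a pattern like the one below. The regex and the `router.<verb>` convention are assumptions; other frameworks need their own extractors.
+
+```ts
+// Sketch: pull "METHOD /path" pairs out of route files.
+import { readFileSync } from "node:fs";
+
+interface Endpoint { method: string; path: string; file: string }
+
+function extractEndpoints(routeFiles: string[]): Endpoint[] {
+  const routeCall = /\brouter\.(get|post|put|patch|delete)\(\s*['"]([^'"]+)['"]/g;
+  const found: Endpoint[] = [];
+  for (const file of routeFiles) {
+    const src = readFileSync(file, "utf8");
+    for (const m of src.matchAll(routeCall)) {
+      found.push({ method: m[1].toUpperCase(), path: m[2], file });
+    }
+  }
+  return found;
+}
+```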
+
+
+
+      Scan for data models using schema_migration_patterns
+      Look for: models/, schemas/, entities/, migrations/, prisma/, ORM configs
+
+
+      Identify schema files via glob, parse migration file names for table discovery
+
+
+
+      Read model files in batches (one subfolder at a time)
+      Extract: table names, fields, relationships, constraints from actual code
+
+
+Build database schema documentation
+IMMEDIATELY write to: {output_folder}/data-models-{part_id}.md
+Validate document completeness
+Update state file with output generated
+PURGE detailed schema data, keep only: "{{table_count}} tables documented"
+data_models_{part_id}
+
+
+
+      Analyze state management patterns
+      Look for: Redux, Context API, MobX, Vuex, Pinia, Provider patterns
+      Identify: stores, reducers, actions, state structure
+      state_management_patterns_{part_id}
+
+
+
+      Inventory UI component library
+      Scan: components/, ui/, widgets/, views/ folders
+      Categorize: Layout, Form, Display, Navigation, etc.
+      Identify: Design system, component patterns, reusable elements
+      ui_component_inventory_{part_id}
+
+
+
+      Look for hardware schematics using hardware_interface_patterns
+      This appears to be an embedded/hardware project. Do you have:
+      - Pinout diagrams
+      - Hardware schematics
+      - PCB layouts
+      - Hardware documentation
+
+If yes, please provide paths or links. [Provide paths or type 'none']
+
+Store hardware docs references
+hardware_documentation_{part_id}
+
+
+
+      Scan and catalog assets using asset_patterns
+      Categorize by: Images, Audio, 3D Models, Sprites, Textures, etc.
+      Calculate: Total size, file counts, formats used
+      asset_inventory_{part_id}
+
+
+Scan for additional patterns based on doc requirements:
+
+- config_patterns → Configuration management
+- auth_security_patterns → Authentication/authorization approach
+- entry_point_patterns → Application entry points and bootstrap
+- shared_code_patterns → Shared libraries and utilities
+- async_event_patterns → Event-driven architecture
+- ci_cd_patterns → CI/CD pipeline details
+- localization_patterns → i18n/l10n support
+
+
+Apply scan_level strategy to each pattern scan (quick=glob only, deep/exhaustive=read files)
+
+comprehensive_analysis_{part_id}
+
+Update state file:
+
+- Add to completed_steps: {"step": "step_4", "status": "completed", "timestamp": "{{now}}", "summary": "Conditional analysis complete, {{files_generated}} files written"}
+- Update current_step = "step_5"
+- Update last_updated timestamp
+- List all outputs_generated
+
+
+PURGE all detailed scan results from context. Keep only summaries:
+
+- "APIs: {{api_count}} endpoints"
+- "Data: {{table_count}} tables"
+- "Components: {{component_count}} components"
+
+
+
+
+For each part, generate complete directory tree using critical_directories from doc requirements
+
+Annotate the tree with:
+
+- Purpose of each critical directory
+- Entry points marked
+- Key file locations highlighted
+- Integration points noted (for multi-part projects)
+
+
+Show how parts are organized and where they interface
+
+Create formatted source tree with descriptions:
+
+```
+project-root/
+├── client/ # React frontend (Part: client)
+│ ├── src/
+│ │ ├── components/ # Reusable UI components
+│ │ ├── pages/ # Route-based pages
+│ │ └── api/ # API client layer → Calls server/
+├── server/ # Express API backend (Part: api)
+│ ├── src/
+│ │ ├── routes/ # REST API endpoints
+│ │ ├── models/ # Database models
+│ │ └── services/ # Business logic
+```
+
+
+
+source_tree_analysis
+critical_folders_summary
+
+IMMEDIATELY write source-tree-analysis.md to disk
+Validate document structure
+Update state file:
+
+- Add to completed_steps: {"step": "step_5", "status": "completed", "timestamp": "{{now}}", "summary": "Source tree documented"}
+- Update current_step = "step_6"
+- Add output: "source-tree-analysis.md"
+
+      PURGE detailed tree from context, keep only: "Source tree with {{folder_count}} critical folders"
+
+
+
+Scan for development setup using key_file_patterns and existing docs:
+
+- Prerequisites (Node version, Python version, etc.)
+- Installation steps (npm install, etc.)
+- Environment setup (.env files, config)
+- Build commands (npm run build, make, etc.)
+- Run commands (npm start, go run, etc.)
+- Test commands using test_file_patterns + + +Look for deployment configuration using ci_cd_patterns: + +- Dockerfile, docker-compose.yml +- Kubernetes configs (k8s/, helm/) +- CI/CD pipelines (.github/workflows/, .gitlab-ci.yml) +- Deployment scripts +- Infrastructure as Code (terraform/, pulumi/) + + + + Extract contribution guidelines: + - Code style rules + - PR process + - Commit conventions + - Testing requirements + + + +development_instructions +deployment_configuration +contribution_guidelines + +Update state file: + +- Add to completed_steps: {"step": "step_6", "status": "completed", "timestamp": "{{now}}", "summary": "Dev/deployment guides written"} +- Update current_step = "step_7" +- Add generated outputs to list + + PURGE detailed instructions, keep only: "Dev setup and deployment documented" + + + +Analyze how parts communicate: +- Scan integration_scan_patterns across parts +- Identify: REST calls, GraphQL queries, gRPC, message queues, shared databases +- Document: API contracts between parts, data flow, authentication flow + + +Create integration_points array with: + +- from: source part +- to: target part +- type: REST API, GraphQL, gRPC, Event Bus, etc. +- details: Endpoints, protocols, data formats + + +IMMEDIATELY write integration-architecture.md to disk +Validate document completeness + +integration_architecture + +Update state file: + +- Add to completed_steps: {"step": "step_7", "status": "completed", "timestamp": "{{now}}", "summary": "Integration architecture documented"} +- Update current_step = "step_8" + + PURGE integration details, keep only: "{{integration_count}} integration points" + + + +For each part in project_parts: + - Use matched architecture template from Step 3 as base structure + - Fill in all sections with discovered information: + * Executive Summary + * Technology Stack (from Step 3) + * Architecture Pattern (from registry match) + * Data Architecture (from Step 4 data models scan) + * API Design (from Step 4 API scan if applicable) + * Component Overview (from Step 4 component scan if applicable) + * Source Tree (from Step 5) + * Development Workflow (from Step 6) + * Deployment Architecture (from Step 6) + * Testing Strategy (from test patterns) + + + + - Generate: architecture.md (no part suffix) + + + + - Generate: architecture-{part_id}.md for each part + + +For each architecture file generated: + +- IMMEDIATELY write architecture file to disk +- Validate against architecture template schema +- Update state file with output +- PURGE detailed architecture from context, keep only: "Architecture for {{part_id}} written" + + +architecture_document + +Update state file: + +- Add to completed_steps: {"step": "step_8", "status": "completed", "timestamp": "{{now}}", "summary": "Architecture docs written for {{parts_count}} parts"} +- Update current_step = "step_9" + + + + +Generate project-overview.md with: +- Project name and purpose (from README or user input) +- Executive summary +- Tech stack summary table +- Architecture type classification +- Repository structure (monolith/monorepo/multi-part) +- Links to detailed docs + + +Generate source-tree-analysis.md with: + +- Full annotated directory tree from Step 5 +- Critical folders explained +- Entry points documented +- Multi-part structure (if applicable) + + +IMMEDIATELY write project-overview.md to disk +Validate document sections + +Generate source-tree-analysis.md (if not already written in Step 5) +IMMEDIATELY write to disk and validate + +Generate component-inventory.md (or per-part 
versions) with:
+
+- All discovered components from Step 4
+- Categorized by type
+- Reusable vs specific components
+- Design system elements (if found)
+
+      IMMEDIATELY write each component inventory to disk and validate
+
+Generate development-guide.md (or per-part versions) with:
+
+- Prerequisites and dependencies
+- Environment setup instructions
+- Local development commands
+- Build process
+- Testing approach and commands
+- Common development tasks
+
+      IMMEDIATELY write each development guide to disk and validate
+
+
+      Generate deployment-guide.md with:
+      - Infrastructure requirements
+      - Deployment process
+      - Environment configuration
+      - CI/CD pipeline details
+
+      IMMEDIATELY write to disk and validate
+
+
+
+      Generate contribution-guide.md with:
+      - Code style and conventions
+      - PR process
+      - Testing requirements
+      - Documentation standards
+
+      IMMEDIATELY write to disk and validate
+
+
+
+      Generate api-contracts.md (or per-part) with:
+      - All API endpoints
+      - Request/response schemas
+      - Authentication requirements
+      - Example requests
+
+      IMMEDIATELY write to disk and validate
+
+
+
+      Generate data-models.md (or per-part) with:
+      - Database schema
+      - Table relationships
+      - Data models and entities
+      - Migration strategy
+
+      IMMEDIATELY write to disk and validate
+
+
+
+      Generate integration-architecture.md with:
+      - How parts communicate
+      - Integration points diagram/description
+      - Data flow between parts
+      - Shared dependencies
+
+      IMMEDIATELY write to disk and validate
+
+Generate project-parts.json metadata file:
+
+```json
+{
+  "repository_type": "monorepo",
+  "parts": [ ... ],
+  "integration_points": [ ... ]
+}
+```
+
+IMMEDIATELY write to disk
+
+
+supporting_documentation
+
+Update state file:
+
+- Add to completed_steps: {"step": "step_9", "status": "completed", "timestamp": "{{now}}", "summary": "All supporting docs written"}
+- Update current_step = "step_10"
+- List all newly generated outputs
+
+
+PURGE all document contents from context, keep only list of files generated
+
+
+
+
+INCOMPLETE DOCUMENTATION MARKER CONVENTION:
+When a document SHOULD be generated but wasn't (due to quick scan, missing data, conditional requirements not met):
+
+- Use EXACTLY this marker: _(To be generated)_
+- Place it at the end of the markdown link line
+- Example: - [API Contracts - Server](./api-contracts-server.md) _(To be generated)_
+- This allows Step 11 to detect and offer to complete these items
+- ALWAYS use this exact format for consistency and automated detection
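+
+A tiny sketch of honoring that convention when emitting index lines (helper name and inputs are illustrative):
+
+```ts
+// Sketch: append the exact marker when the linked file is missing on disk.
+import { existsSync } from "node:fs";
+import { join } from "node:path";
+
+function indexLine(title: string, relPath: string, outputFolder: string): string {
+  const line = `- [${title}](${relPath})`;
+  return existsSync(join(outputFolder, relPath)) ? line : `${line} _(To be generated)_`;
+}
+```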
+
+Create index.md with intelligent navigation based on project structure
+
+
+      Generate simple index with:
+      - Project name and type
+      - Quick reference (tech stack, architecture type)
+      - Links to all generated docs
+      - Links to discovered existing docs
+      - Getting started section
+
+
+
+      Generate comprehensive index with:
+      - Project overview and structure summary
+      - Part-based navigation section
+      - Quick reference by part
+      - Cross-part integration links
+      - Links to all generated and existing docs
+      - Getting started per part
+
+
+
+Include in index.md:
+
+## Project Documentation Index
+
+### Project Overview
+
+- **Type:** {{repository_type}} {{#if multi-part}}with {{parts.length}} parts{{/if}}
+- **Primary Language:** {{primary_language}}
+- **Architecture:** {{architecture_type}}
+
+### Quick Reference
+
+{{#if single_part}}
+
+- **Tech Stack:** {{tech_stack_summary}}
+- **Entry Point:** {{entry_point}}
+- **Architecture Pattern:** {{architecture_pattern}}
+  {{else}}
+  {{#each parts}}
+
+#### {{part_name}} ({{part_id}})
+
+- **Type:** {{project_type}}
+- **Tech Stack:** {{tech_stack}}
+- **Root:** {{root_path}}
+  {{/each}}
+  {{/if}}
+
+### Generated Documentation
+
+- [Project Overview](./project-overview.md)
+- [Architecture](./architecture{{#if multi-part}}-{part_id}{{/if}}.md){{#unless architecture_file_exists}} _(To be generated)_ {{/unless}}
+- [Source Tree Analysis](./source-tree-analysis.md)
+- [Component Inventory](./component-inventory{{#if multi-part}}-{part_id}{{/if}}.md){{#unless component_inventory_exists}} _(To be generated)_ {{/unless}}
+- [Development Guide](./development-guide{{#if multi-part}}-{part_id}{{/if}}.md){{#unless dev_guide_exists}} _(To be generated)_ {{/unless}}
+  {{#if deployment_found}}- [Deployment Guide](./deployment-guide.md){{#unless deployment_guide_exists}} _(To be generated)_ {{/unless}}{{/if}}
+  {{#if contribution_found}}- [Contribution Guide](./contribution-guide.md){{/if}}
+  {{#if api_documented}}- [API Contracts](./api-contracts{{#if multi-part}}-{part_id}{{/if}}.md){{#unless api_contracts_exists}} _(To be generated)_ {{/unless}}{{/if}}
+  {{#if data_models_documented}}- [Data Models](./data-models{{#if multi-part}}-{part_id}{{/if}}.md){{#unless data_models_exists}} _(To be generated)_ {{/unless}}{{/if}}
+  {{#if multi-part}}- [Integration Architecture](./integration-architecture.md){{#unless integration_arch_exists}} _(To be generated)_ {{/unless}}{{/if}}
+
+### Existing Documentation
+
+{{#each existing_docs}}
+
+- [{{title}}]({{relative_path}}) - {{description}}
+  {{/each}}
+
+### Getting Started
+
+{{getting_started_instructions}}
+
+
+Before writing index.md, check which expected files actually exist:
+
+- For each document that should have been generated, check if file exists on disk
+- Set existence flags: architecture_file_exists, component_inventory_exists, dev_guide_exists, etc.
+- These flags determine whether to add the _(To be generated)_ marker
+- Track which files are missing in {{missing_docs_list}} for reporting
+
+
+IMMEDIATELY write index.md to disk with appropriate _(To be generated)_ markers for missing files
+Validate index has all required sections and links are valid
+
+index
+
+Update state file:
+
+- Add to completed_steps: {"step": "step_10", "status": "completed", "timestamp": "{{now}}", "summary": "Master index generated"}
+- Update current_step = "step_11"
+- Add output: "index.md"
+
+
+PURGE index content from context
+
+
+
+Show summary of all generated files:
+Generated in {{output_folder}}/:
+{{file_list_with_sizes}}
+
+
+Run validation checklist from {validation}
+
+INCOMPLETE DOCUMENTATION DETECTION:
+
+1. PRIMARY SCAN: Look for exact marker: _(To be generated)_
+2. FALLBACK SCAN: Look for fuzzy patterns (in case agent was lazy):
+   - _(TBD)_
+   - _(TODO)_
+   - _(Coming soon)_
+   - _(Not yet generated)_
+   - _(Pending)_
+3. Extract document metadata from each match for user selection
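+
+A sketch of that two-pass scan (regex and shapes are illustrative; the strict marker must match the convention literally):
+
+```ts
+// Sketch: find index.md lines still carrying an incomplete-doc marker.
+const STRICT = "_(To be generated)_";
+const FUZZY = ["_(TBD)_", "_(TODO)_", "_(Coming soon)_", "_(Not yet generated)_", "_(Pending)_"];
+
+interface IncompleteDoc { title: string; file_path: string; line_text: string; fuzzy_match: boolean }
+
+function scanIndex(indexMd: string): IncompleteDoc[] {
+  const results: IncompleteDoc[] = [];
+  for (const line of indexMd.split("\n")) {
+    const marker = line.includes(STRICT) ? STRICT : FUZZY.find((f) => line.includes(f));
+    if (!marker) continue;
+    const link = /\[([^\]]+)\]\(([^)]+)\)/.exec(line); // e.g. [Architecture](./architecture.md)
+    if (link) {
+      results.push({ title: link[1], file_path: link[2], line_text: line.trim(), fuzzy_match: marker !== STRICT });
+    }
+  }
+  return results;
+}
+```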
+
+
+Read {output_folder}/index.md
+
+Scan for incomplete documentation markers:
+Step 1: Search for exact pattern "_(To be generated)_" (case-sensitive)
+Step 2: For each match found, extract the entire line
+Step 3: Parse line to extract:
+
+- Document title (text within [brackets] or **bold**)
+- File path (from markdown link or inferable from title)
+- Document type (infer from filename: architecture, api-contracts, data-models, component-inventory, development-guide, deployment-guide, integration-architecture)
+- Part ID if applicable (extract from filename like "architecture-server.md" → part_id: "server")
+  Step 4: Add to {{incomplete_docs_strict}} array
+
+Fallback fuzzy scan for alternate markers:
+Search for patterns: _(TBD)_, _(TODO)_, _(Coming soon)_, _(Not yet generated)_, _(Pending)_
+For each fuzzy match:
+
+- Extract same metadata as strict scan
+- Add to {{incomplete_docs_fuzzy}} array with fuzzy_match flag
+
+Combine results:
+Set {{incomplete_docs_list}} = {{incomplete_docs_strict}} + {{incomplete_docs_fuzzy}}
+For each item store structure:
+{
+"title": "Architecture – Server",
+"file_path": "./architecture-server.md",
+"doc_type": "architecture",
+"part_id": "server",
+"line_text": "- [Architecture – Server](./architecture-server.md) _(To be generated)_",
+"fuzzy_match": false
+}
+
+Documentation generation complete!
+
+Summary:
+
+- Project Type: {{project_type_summary}}
+- Parts Documented: {{parts_count}}
+- Files Generated: {{files_count}}
+- Total Lines: {{total_lines}}
+
+{{#if incomplete_docs_list.length > 0}}
+⚠️ **Incomplete Documentation Detected:**
+
+I found {{incomplete_docs_list.length}} item(s) marked as incomplete:
+
+{{#each incomplete_docs_list}}
+{{@index + 1}}. **{{title}}** ({{doc_type}}{{#if part_id}} for {{part_id}}{{/if}}){{#if fuzzy_match}} ⚠️ [non-standard marker]{{/if}}
+{{/each}}
+
+{{/if}}
+
+Would you like to:
+
+{{#if incomplete_docs_list.length > 0}}
+
+1. **Generate incomplete documentation** - Complete any of the {{incomplete_docs_list.length}} items above
+2. Review any specific section [type section name]
+3. Add more detail to any area [type area name]
+4. Generate additional custom documentation [describe what]
+5. Finalize and complete [type 'done']
+   {{else}}
+1. Review any specific section [type section name]
+2. Add more detail to any area [type area name]
+3. Generate additional documentation [describe what]
+4. Finalize and complete [type 'done']
+   {{/if}}
+
+Your choice:
+
+
+
+      Which incomplete items would you like to generate?
+
+{{#each incomplete_docs_list}}
+{{@index + 1}}. {{title}} ({{doc_type}}{{#if part_id}} - {{part_id}}{{/if}})
+{{/each}}
+{{incomplete_docs_list.length + 1}}. All of them
+
+Enter number(s) separated by commas (e.g., "1,3,5"), or type 'all':
+
+
+Parse user selection:
+
+- If "all", set {{selected_items}} = all items in {{incomplete_docs_list}}
+- If comma-separated numbers, extract selected items by index
+- Store result in {{selected_items}} array
+
+
+      Display: "Generating {{selected_items.length}} document(s)..."
+
+      For each item in {{selected_items}}:
+
+1. **Identify the part and requirements:**
+   - Extract part_id from item (if exists)
+   - Look up part data in project_parts array from state file
+   - Load documentation_requirements for that part's project_type_id
+
+2.
**Route to appropriate generation substep based on doc_type:** + + **If doc_type == "architecture":** + - Display: "Generating architecture documentation for {{part_id}}..." + - Load architecture_match for this part from state file (Step 3 cache) + - Re-run Step 8 architecture generation logic ONLY for this specific part + - Use matched template and fill with cached data from state file + - Write architecture-{{part_id}}.md to disk + - Validate completeness + + **If doc_type == "api-contracts":** + - Display: "Generating API contracts for {{part_id}}..." + - Load part data and documentation_requirements + - Re-run Step 4 API scan substep targeting ONLY this part + - Use scan_level from state file (quick/deep/exhaustive) + - Generate api-contracts-{{part_id}}.md + - Validate document structure + + **If doc_type == "data-models":** + - Display: "Generating data models documentation for {{part_id}}..." + - Re-run Step 4 data models scan substep targeting ONLY this part + - Use schema_migration_patterns from documentation_requirements + - Generate data-models-{{part_id}}.md + - Validate completeness + + **If doc_type == "component-inventory":** + - Display: "Generating component inventory for {{part_id}}..." + - Re-run Step 9 component inventory generation for this specific part + - Scan components/, ui/, widgets/ folders + - Generate component-inventory-{{part_id}}.md + - Validate structure + + **If doc_type == "development-guide":** + - Display: "Generating development guide for {{part_id}}..." + - Re-run Step 9 development guide generation for this specific part + - Use key_file_patterns and test_file_patterns from documentation_requirements + - Generate development-guide-{{part_id}}.md + - Validate completeness + + **If doc_type == "deployment-guide":** + - Display: "Generating deployment guide..." + - Re-run Step 6 deployment configuration scan + - Re-run Step 9 deployment guide generation + - Generate deployment-guide.md + - Validate structure + + **If doc_type == "integration-architecture":** + - Display: "Generating integration architecture..." + - Re-run Step 7 integration analysis for all parts + - Generate integration-architecture.md + - Validate completeness + +3. **Post-generation actions:** + - Confirm file was written successfully + - Update state file with newly generated output + - Add to {{newly_generated_docs}} tracking list + - Display: "✓ Generated: {{file_path}}" + +4. **Handle errors:** + - If generation fails, log error and continue with next item + - Track failed items in {{failed_generations}} list + + +After all selected items are processed: + +**Update index.md to remove markers:** + +1. Read current index.md content +2. For each item in {{newly_generated_docs}}: + - Find the line containing the file link and marker + - Remove the _(To be generated)_ or fuzzy marker text + - Leave the markdown link intact +3. Write updated index.md back to disk +4. 
Update state file to record index.md modification + + +Display generation summary: + +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +✓ **Documentation Generation Complete!** + +**Successfully Generated:** +{{#each newly_generated_docs}} + +- {{title}} → {{file_path}} + {{/each}} + +{{#if failed_generations.length > 0}} +**Failed to Generate:** +{{#each failed_generations}} + +- {{title}} ({{error_message}}) + {{/each}} + {{/if}} + +**Updated:** index.md (removed incomplete markers) + +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + + +Update state file with all generation activities + +Return to Step 11 menu (loop back to check for any remaining incomplete items) + + +Make requested modifications and regenerate affected files +Proceed to Step 12 completion + + + Update state file: +- Add to completed_steps: {"step": "step_11_iteration", "status": "completed", "timestamp": "{{now}}", "summary": "Review iteration complete"} +- Keep current_step = "step_11" (for loop back) +- Update last_updated timestamp + + Loop back to beginning of Step 11 (re-scan for remaining incomplete docs) + + + + Update state file: +- Add to completed_steps: {"step": "step_11", "status": "completed", "timestamp": "{{now}}", "summary": "Validation and review complete"} +- Update current_step = "step_12" + + Proceed to Step 12 + + + + +Create final summary report +Compile verification recap variables: + - Set {{verification_summary}} to the concrete tests, validations, or scripts you executed (or "none run"). + - Set {{open_risks}} to any remaining risks or TODO follow-ups (or "none"). + - Set {{next_checks}} to recommended actions before merging/deploying (or "none"). + + +Display completion message: + +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +## Project Documentation Complete! ✓ + +**Location:** {{output_folder}}/ + +**Master Index:** {{output_folder}}/index.md +👆 This is your primary entry point for AI-assisted development + +**Generated Documentation:** +{{generated_files_list}} + +**Next Steps:** + +1. Review the index.md to familiarize yourself with the documentation structure +2. When creating a brownfield PRD, point the PRD workflow to: {{output_folder}}/index.md +3. For UI-only features: Reference {{output_folder}}/architecture-{{ui_part_id}}.md +4. For API-only features: Reference {{output_folder}}/architecture-{{api_part_id}}.md +5. For full-stack features: Reference both part architectures + integration-architecture.md + +**Verification Recap:** + +- Tests/extractions executed: {{verification_summary}} +- Outstanding risks or follow-ups: {{open_risks}} +- Recommended next checks before PR: {{next_checks}} + +**Brownfield PRD Command:** +When ready to plan new features, run the PRD workflow and provide this index as input. 
+ +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + + +FINALIZE state file: + +- Add to completed_steps: {"step": "step_12", "status": "completed", "timestamp": "{{now}}", "summary": "Workflow complete"} +- Update timestamps.completed = "{{now}}" +- Update current_step = "completed" +- Write final state file + + +Display: "State file saved: {{output_folder}}/project-scan-report.json" + + diff --git a/src/bmm/workflows/document-project/workflows/full-scan.yaml b/src/bmm/workflows/document-project/workflows/full-scan.yaml new file mode 100644 index 00000000..f62aba9b --- /dev/null +++ b/src/bmm/workflows/document-project/workflows/full-scan.yaml @@ -0,0 +1,31 @@ +# Full Project Scan Workflow Configuration +name: "document-project-full-scan" +description: "Complete project documentation workflow (initial scan or full rescan)" +author: "BMad" + +# This is a sub-workflow called by document-project/workflow.yaml +parent_workflow: "{project-root}/_bmad/bmm/workflows/document-project/workflow.yaml" + +# Critical variables inherited from parent +config_source: "{project-root}/_bmad/bmb/config.yaml" +output_folder: "{config_source}:output_folder" +user_name: "{config_source}:user_name" +date: system-generated + +# Data files +documentation_requirements_csv: "{project-root}/_bmad/bmm/workflows/document-project/documentation-requirements.csv" + +# Module path and component files +installed_path: "{project-root}/_bmad/bmm/workflows/document-project/workflows" +template: false # Action workflow +instructions: "{installed_path}/full-scan-instructions.md" +validation: "{project-root}/_bmad/bmm/workflows/document-project/checklist.md" + +# Runtime inputs (passed from parent workflow) +workflow_mode: "" # "initial_scan" or "full_rescan" +scan_level: "" # "quick", "deep", or "exhaustive" +resume_mode: false +project_root_path: "" + +# Configuration +autonomous: false # Requires user input at key decision points diff --git a/src/bmm/workflows/excalidraw-diagrams/create-dataflow/checklist.md b/src/bmm/workflows/excalidraw-diagrams/create-dataflow/checklist.md new file mode 100644 index 00000000..3c9463d5 --- /dev/null +++ b/src/bmm/workflows/excalidraw-diagrams/create-dataflow/checklist.md @@ -0,0 +1,39 @@ +# Create Data Flow Diagram - Validation Checklist + +## DFD Notation + +- [ ] Processes shown as circles/ellipses +- [ ] Data stores shown as parallel lines or rectangles +- [ ] External entities shown as rectangles +- [ ] Data flows shown as labeled arrows +- [ ] Follows standard DFD notation + +## Structure + +- [ ] All processes numbered correctly +- [ ] All data flows labeled with data names +- [ ] All data stores named appropriately +- [ ] External entities clearly identified + +## Completeness + +- [ ] All inputs and outputs accounted for +- [ ] No orphaned processes (unconnected) +- [ ] Data conservation maintained +- [ ] Level appropriate (context/level 0/level 1) + +## Layout + +- [ ] Logical flow direction (left to right, top to bottom) +- [ ] No crossing data flows where avoidable +- [ ] Balanced layout +- [ ] Grid alignment maintained + +## Technical Quality + +- [ ] All elements properly grouped +- [ ] Arrows have proper bindings +- [ ] Text readable and properly sized +- [ ] No elements with `isDeleted: true` +- [ ] JSON is valid +- [ ] File saved to correct location diff --git a/src/bmm/workflows/excalidraw-diagrams/create-dataflow/instructions.md b/src/bmm/workflows/excalidraw-diagrams/create-dataflow/instructions.md new file mode 100644 index 00000000..30d32ed3 --- /dev/null +++ 
b/src/bmm/workflows/excalidraw-diagrams/create-dataflow/instructions.md @@ -0,0 +1,130 @@ +# Create Data Flow Diagram - Workflow Instructions + +```xml +The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml +You MUST have already loaded and processed: {installed_path}/workflow.yaml +This workflow creates data flow diagrams (DFD) in Excalidraw format. + + + + + Review user's request and extract: DFD level, processes, data stores, external entities + Skip to Step 4 + + + + Ask: "What level of DFD do you need?" + Present options: + 1. Context Diagram (Level 0) - Single process showing system boundaries + 2. Level 1 DFD - Major processes and data flows + 3. Level 2 DFD - Detailed sub-processes + 4. Custom - Specify your requirements + + WAIT for selection + + + + Ask: "Describe the processes, data stores, and external entities in your system" + WAIT for user description + Summarize what will be included and confirm with user + + + + Check for existing theme.json, ask to use if exists + + Ask: "Choose a DFD color scheme:" + Present numbered options: + 1. Standard DFD + - Process: #e3f2fd (light blue) + - Data Store: #e8f5e9 (light green) + - External Entity: #f3e5f5 (light purple) + - Border: #1976d2 (blue) + + 2. Colorful DFD + - Process: #fff9c4 (light yellow) + - Data Store: #c5e1a5 (light lime) + - External Entity: #ffccbc (light coral) + - Border: #f57c00 (orange) + + 3. Minimal DFD + - Process: #f5f5f5 (light gray) + - Data Store: #eeeeee (gray) + - External Entity: #e0e0e0 (medium gray) + - Border: #616161 (dark gray) + + 4. Custom - Define your own colors + + WAIT for selection + Create theme.json based on selection + + + + + List all processes with numbers (1.0, 2.0, etc.) + List all data stores (D1, D2, etc.) + List all external entities + Map all data flows with labels + Show planned structure, confirm with user + + + + Load {{templates}} and extract `dataflow` section + Load {{library}} + Load theme.json + Load {{helpers}} + + + + Follow standard DFD notation from {{helpers}} + + Build Order: + 1. External entities (rectangles, bold border) + 2. Processes (circles/ellipses with numbers) + 3. Data stores (parallel lines or rectangles) + 4. 
Data flows (labeled arrows) + + + DFD Rules: + - Processes: Numbered (1.0, 2.0), verb phrases + - Data stores: Named (D1, D2), noun phrases + - External entities: Named, noun phrases + - Data flows: Labeled with data names, arrows show direction + - No direct flow between external entities + - No direct flow between data stores + + + Layout: + - External entities at edges + - Processes in center + - Data stores between processes + - Minimize crossing flows + - Left-to-right or top-to-bottom flow + + + + + Verify DFD rules compliance + Strip unused elements and elements with isDeleted: true + Save to {{default_output_file}} + + + + NEVER delete the file if validation fails - always fix syntax errors + Run: node -e "JSON.parse(require('fs').readFileSync('{{default_output_file}}', 'utf8')); console.log('✓ Valid JSON')" + + Read the error message carefully - it shows the syntax error and position + Open the file and navigate to the error location + Fix the syntax error (add missing comma, bracket, or quote as indicated) + Save the file + Re-run validation with the same command + Repeat until validation passes + + Once validation passes, confirm with user + + + + Validate against {{validation}} + + + +``` diff --git a/src/bmm/workflows/excalidraw-diagrams/create-dataflow/workflow.yaml b/src/bmm/workflows/excalidraw-diagrams/create-dataflow/workflow.yaml new file mode 100644 index 00000000..2f01e6b5 --- /dev/null +++ b/src/bmm/workflows/excalidraw-diagrams/create-dataflow/workflow.yaml @@ -0,0 +1,27 @@ +name: create-excalidraw-dataflow +description: "Create data flow diagrams (DFD) in Excalidraw format" +author: "BMad" + +# Config values +config_source: "{project-root}/_bmad/bmm/config.yaml" +output_folder: "{config_source}:output_folder" + +# Workflow components +installed_path: "{project-root}/_bmad/bmm/workflows/excalidraw-diagrams/create-dataflow" +shared_path: "{project-root}/_bmad/bmm/workflows/excalidraw-diagrams/_shared" +instructions: "{installed_path}/instructions.md" +validation: "{installed_path}/checklist.md" + +# Core Excalidraw resources (universal knowledge) +helpers: "{project-root}/_bmad/core/resources/excalidraw/excalidraw-helpers.md" +json_validation: "{project-root}/_bmad/core/resources/excalidraw/validate-json-instructions.md" + +# Domain-specific resources (technical diagrams) +templates: "{shared_path}/excalidraw-templates.yaml" +library: "{shared_path}/excalidraw-library.json" + +# Output file (respects user's configured output_folder) +default_output_file: "{output_folder}/excalidraw-diagrams/dataflow-{timestamp}.excalidraw" + +standalone: true +web_bundle: false diff --git a/src/bmm/workflows/excalidraw-diagrams/create-diagram/checklist.md b/src/bmm/workflows/excalidraw-diagrams/create-diagram/checklist.md new file mode 100644 index 00000000..61d216ae --- /dev/null +++ b/src/bmm/workflows/excalidraw-diagrams/create-diagram/checklist.md @@ -0,0 +1,43 @@ +# Create Diagram - Validation Checklist + +## Element Structure + +- [ ] All components with labels have matching `groupIds` +- [ ] All text elements have `containerId` pointing to parent component +- [ ] Text width calculated properly (no cutoff) +- [ ] Text alignment appropriate for diagram type + +## Layout and Alignment + +- [ ] All elements snapped to 20px grid +- [ ] Component spacing consistent (40px/60px) +- [ ] Hierarchical alignment maintained +- [ ] No overlapping elements + +## Connections + +- [ ] All arrows have `startBinding` and `endBinding` +- [ ] `boundElements` array updated on connected components +- 
[ ] Arrow routing avoids overlaps +- [ ] Relationship types clearly indicated + +## Notation and Standards + +- [ ] Follows specified notation standard (UML/ERD/etc) +- [ ] Symbols used correctly +- [ ] Cardinality/multiplicity shown where needed +- [ ] Labels and annotations clear + +## Theme and Styling + +- [ ] Theme colors applied consistently +- [ ] Component types visually distinguishable +- [ ] Text is readable +- [ ] Professional appearance + +## Output Quality + +- [ ] Element count under 80 +- [ ] No elements with `isDeleted: true` +- [ ] JSON is valid +- [ ] File saved to correct location diff --git a/src/bmm/workflows/excalidraw-diagrams/create-diagram/instructions.md b/src/bmm/workflows/excalidraw-diagrams/create-diagram/instructions.md new file mode 100644 index 00000000..407a76bf --- /dev/null +++ b/src/bmm/workflows/excalidraw-diagrams/create-diagram/instructions.md @@ -0,0 +1,141 @@ +# Create Diagram - Workflow Instructions + +```xml +The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml +You MUST have already loaded and processed: {installed_path}/workflow.yaml +This workflow creates system architecture diagrams, ERDs, UML diagrams, or general technical diagrams in Excalidraw format. + + + + + Review user's request and extract: diagram type, components/entities, relationships, notation preferences + Skip to Step 5 + Only ask about missing info in Steps 1-2 + + + + Ask: "What type of technical diagram do you need?" + Present options: + 1. System Architecture + 2. Entity-Relationship Diagram (ERD) + 3. UML Class Diagram + 4. UML Sequence Diagram + 5. UML Use Case Diagram + 6. Network Diagram + 7. Other + + WAIT for selection + + + + Ask: "Describe the components/entities and their relationships" + Ask: "What notation standard? (Standard/Simplified/Strict UML-ERD)" + WAIT for user input + Summarize what will be included and confirm with user + + + + Check if theme.json exists at output location + Ask to use it, load if yes, else proceed to Step 4 + Proceed to Step 4 + + + + Ask: "Choose a color scheme for your diagram:" + Present numbered options: + 1. Professional + - Component: #e3f2fd (light blue) + - Database: #e8f5e9 (light green) + - Service: #fff3e0 (light orange) + - Border: #1976d2 (blue) + + 2. Colorful + - Component: #e1bee7 (light purple) + - Database: #c5e1a5 (light lime) + - Service: #ffccbc (light coral) + - Border: #7b1fa2 (purple) + + 3. Minimal + - Component: #f5f5f5 (light gray) + - Database: #eeeeee (gray) + - Service: #e0e0e0 (medium gray) + - Border: #616161 (dark gray) + + 4. Custom - Define your own colors + + WAIT for selection + Create theme.json based on selection + Show preview and confirm + + + + List all components/entities + Map all relationships + Show planned layout + Ask: "Structure looks correct? 
(yes/no)" + Adjust and repeat + + + + Load {{templates}} and extract `diagram` section + Load {{library}} + Load theme.json and merge with template + Load {{helpers}} for guidelines + + + + Follow {{helpers}} for proper element creation + + For Each Component: + - Generate unique IDs (component-id, text-id, group-id) + - Create shape with groupIds + - Calculate text width + - Create text with containerId and matching groupIds + - Add boundElements + + + For Each Connection: + - Determine arrow type (straight/elbow) + - Create with startBinding and endBinding + - Update boundElements on both components + + + Build Order by Type: + - Architecture: Services → Databases → Connections → Labels + - ERD: Entities → Attributes → Relationships → Cardinality + - UML Class: Classes → Attributes → Methods → Relationships + - UML Sequence: Actors → Lifelines → Messages → Returns + - UML Use Case: Actors → Use Cases → Relationships + + + Alignment: + - Snap to 20px grid + - Space: 40px between components, 60px between sections + + + + + Strip unused elements and elements with isDeleted: true + Save to {{default_output_file}} + + + + NEVER delete the file if validation fails - always fix syntax errors + Run: node -e "JSON.parse(require('fs').readFileSync('{{default_output_file}}', 'utf8')); console.log('✓ Valid JSON')" + + Read the error message carefully - it shows the syntax error and position + Open the file and navigate to the error location + Fix the syntax error (add missing comma, bracket, or quote as indicated) + Save the file + Re-run validation with the same command + Repeat until validation passes + + Once validation passes, confirm: "Diagram created at {{default_output_file}}. Open to view?" + + + + Validate against {{validation}} using {_bmad}/core/tasks/validate-workflow.xml + + + +``` diff --git a/src/bmm/workflows/excalidraw-diagrams/create-diagram/workflow.yaml b/src/bmm/workflows/excalidraw-diagrams/create-diagram/workflow.yaml new file mode 100644 index 00000000..f841a546 --- /dev/null +++ b/src/bmm/workflows/excalidraw-diagrams/create-diagram/workflow.yaml @@ -0,0 +1,27 @@ +name: create-excalidraw-diagram +description: "Create system architecture diagrams, ERDs, UML diagrams, or general technical diagrams in Excalidraw format" +author: "BMad" + +# Config values +config_source: "{project-root}/_bmad/bmm/config.yaml" +output_folder: "{config_source}:output_folder" + +# Workflow components +installed_path: "{project-root}/_bmad/bmm/workflows/excalidraw-diagrams/create-diagram" +shared_path: "{project-root}/_bmad/bmm/workflows/excalidraw-diagrams/_shared" +instructions: "{installed_path}/instructions.md" +validation: "{installed_path}/checklist.md" + +# Core Excalidraw resources (universal knowledge) +helpers: "{project-root}/_bmad/core/resources/excalidraw/excalidraw-helpers.md" +json_validation: "{project-root}/_bmad/core/resources/excalidraw/validate-json-instructions.md" + +# Domain-specific resources (technical diagrams) +templates: "{shared_path}/excalidraw-templates.yaml" +library: "{shared_path}/excalidraw-library.json" + +# Output file (respects user's configured output_folder) +default_output_file: "{output_folder}/excalidraw-diagrams/diagram-{timestamp}.excalidraw" + +standalone: true +web_bundle: false diff --git a/src/bmm/workflows/excalidraw-diagrams/create-flowchart/checklist.md b/src/bmm/workflows/excalidraw-diagrams/create-flowchart/checklist.md new file mode 100644 index 00000000..7da7fb78 --- /dev/null +++ 
b/src/bmm/workflows/excalidraw-diagrams/create-flowchart/checklist.md @@ -0,0 +1,49 @@ +# Create Flowchart - Validation Checklist + +## Element Structure + +- [ ] All shapes with labels have matching `groupIds` +- [ ] All text elements have `containerId` pointing to parent shape +- [ ] Text width calculated properly (no cutoff) +- [ ] Text alignment set (`textAlign` + `verticalAlign`) + +## Layout and Alignment + +- [ ] All elements snapped to 20px grid +- [ ] Consistent spacing between elements (60px minimum) +- [ ] Vertical alignment maintained for flow direction +- [ ] No overlapping elements + +## Connections + +- [ ] All arrows have `startBinding` and `endBinding` +- [ ] `boundElements` array updated on connected shapes +- [ ] Arrow types appropriate (straight for forward, elbow for backward/upward) +- [ ] Gap set to 10 for all bindings + +## Theme and Styling + +- [ ] Theme colors applied consistently +- [ ] All shapes use theme primary fill color +- [ ] All borders use theme accent color +- [ ] Text color is readable (#1e1e1e) + +## Composition + +- [ ] Element count under 50 +- [ ] Library components referenced where possible +- [ ] No duplicate element definitions + +## Output Quality + +- [ ] No elements with `isDeleted: true` +- [ ] JSON is valid +- [ ] File saved to correct location + +## Functional Requirements + +- [ ] Start point clearly marked +- [ ] End point clearly marked +- [ ] All process steps labeled +- [ ] Decision points use diamond shapes +- [ ] Flow direction is clear and logical diff --git a/src/bmm/workflows/excalidraw-diagrams/create-flowchart/instructions.md b/src/bmm/workflows/excalidraw-diagrams/create-flowchart/instructions.md new file mode 100644 index 00000000..74267905 --- /dev/null +++ b/src/bmm/workflows/excalidraw-diagrams/create-flowchart/instructions.md @@ -0,0 +1,241 @@ +# Create Flowchart - Workflow Instructions + +```xml +The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml +You MUST have already loaded and processed: {installed_path}/workflow.yaml +This workflow creates a flowchart visualization in Excalidraw format for processes, pipelines, or logic flows. + + + + + Before asking any questions, analyze what the user has already told you + + Review the user's initial request and conversation history + Extract any mentioned: flowchart type, complexity, decision points, save location + + + Summarize your understanding + Skip directly to Step 4 (Plan Flowchart Layout) + + + + Note what you already know + Only ask about missing information in Step 1 + + + + Proceed with full elicitation in Step 1 + + + + + Ask Question 1: "What type of process flow do you need to visualize?" + Present numbered options: + 1. Business Process Flow - Document business workflows, approval processes, or operational procedures + 2. Algorithm/Logic Flow - Visualize code logic, decision trees, or computational processes + 3. User Journey Flow - Map user interactions, navigation paths, or experience flows + 4. Data Processing Pipeline - Show data transformation, ETL processes, or processing stages + 5. Other - Describe your specific flowchart needs + + WAIT for user selection (1-5) + + Ask Question 2: "How many main steps are in this flow?" + Present numbered options: + 1. Simple (3-5 steps) - Quick process with few decision points + 2. Medium (6-10 steps) - Standard workflow with some branching + 3. Complex (11-20 steps) - Detailed process with multiple decision points + 4. 
Very Complex (20+ steps) - Comprehensive workflow requiring careful layout + + WAIT for user selection (1-4) + Store selection in {{complexity}} + + Ask Question 3: "Does your flow include decision points (yes/no branches)?" + Present numbered options: + 1. No decisions - Linear flow from start to end + 2. Few decisions (1-2) - Simple branching with yes/no paths + 3. Multiple decisions (3-5) - Several conditional branches + 4. Complex decisions (6+) - Extensive branching logic + + WAIT for user selection (1-4) + Store selection in {{decision_points}} + + Ask Question 4: "Where should the flowchart be saved?" + Present numbered options: + 1. Default location - docs/flowcharts/[auto-generated-name].excalidraw + 2. Custom path - Specify your own file path + 3. Project root - Save in main project directory + 4. Specific folder - Choose from existing folders + + WAIT for user selection (1-4) + + Ask for specific path + WAIT for user input + + Store final path in {{default_output_file}} + + + + Check if theme.json exists at output location + + Ask: "Found existing theme. Use it? (yes/no)" + WAIT for user response + + Load and use existing theme + Skip to Step 4 + + + Proceed to Step 3 + + + + Proceed to Step 3 + + + + + Ask: "Let's create a theme for your flowchart. Choose a color scheme:" + Present numbered options: + 1. Professional Blue + - Primary Fill: #e3f2fd (light blue) + - Accent/Border: #1976d2 (blue) + - Decision: #fff3e0 (light orange) + - Text: #1e1e1e (dark gray) + + 2. Success Green + - Primary Fill: #e8f5e9 (light green) + - Accent/Border: #388e3c (green) + - Decision: #fff9c4 (light yellow) + - Text: #1e1e1e (dark gray) + + 3. Neutral Gray + - Primary Fill: #f5f5f5 (light gray) + - Accent/Border: #616161 (gray) + - Decision: #e0e0e0 (medium gray) + - Text: #1e1e1e (dark gray) + + 4. Warm Orange + - Primary Fill: #fff3e0 (light orange) + - Accent/Border: #f57c00 (orange) + - Decision: #ffe0b2 (peach) + - Text: #1e1e1e (dark gray) + + 5. Custom Colors - Define your own color palette + + WAIT for user selection (1-5) + Store selection in {{theme_choice}} + + + Ask: "Primary fill color (hex code)?" + WAIT for user input + Store in {{custom_colors.primary_fill}} + Ask: "Accent/border color (hex code)?" + WAIT for user input + Store in {{custom_colors.accent}} + Ask: "Decision color (hex code)?" + WAIT for user input + Store in {{custom_colors.decision}} + + + Create theme.json with selected colors + Show theme preview with all colors + Ask: "Theme looks good?" + Present numbered options: + 1. Yes, use this theme - Proceed with theme + 2. No, adjust colors - Modify color selections + 3. Start over - Choose different preset + + WAIT for selection (1-3) + + Repeat Step 3 + + + + + List all steps and decision points based on gathered requirements + Show user the planned structure + Ask: "Structure looks correct? (yes/no)" + WAIT for user response + + Adjust structure based on feedback + Repeat this step + + + + + Load {{templates}} file + Extract `flowchart` section from YAML + Load {{library}} file + Load theme.json and merge colors with template + Load {{helpers}} for element creation guidelines + + + + Follow guidelines from {{helpers}} for proper element creation + + Build ONE section at a time following these rules: + + For Each Shape with Label: + 1. Generate unique IDs (shape-id, text-id, group-id) + 2. Create shape with groupIds: [group-id] + 3. Calculate text width: (text.length × fontSize × 0.6) + 20, round to nearest 10 + 4. 
Create text element with: + - containerId: shape-id + - groupIds: [group-id] (SAME as shape) + - textAlign: "center" + - verticalAlign: "middle" + - width: calculated width + 5. Add boundElements to shape referencing text + + + For Each Arrow: + 1. Determine arrow type needed: + - Straight: For forward flow (left-to-right, top-to-bottom) + - Elbow: For upward flow, backward flow, or complex routing + 2. Create arrow with startBinding and endBinding + 3. Set startBinding.elementId to source shape ID + 4. Set endBinding.elementId to target shape ID + 5. Set gap: 10 for both bindings + 6. If elbow arrow, add intermediate points for direction changes + 7. Update boundElements on both connected shapes + + + Alignment: + - Snap all x, y to 20px grid + - Align shapes vertically (same x for vertical flow) + - Space elements: 60px between shapes + + + Build Order: + 1. Start point (circle) with label + 2. Each process step (rectangle) with label + 3. Each decision point (diamond) with label + 4. End point (circle) with label + 5. Connect all with bound arrows + + + + + Strip unused elements and elements with isDeleted: true + Save to {{default_output_file}} + + + + NEVER delete the file if validation fails - always fix syntax errors + Run: node -e "JSON.parse(require('fs').readFileSync('{{default_output_file}}', 'utf8')); console.log('✓ Valid JSON')" + + Read the error message carefully - it shows the syntax error and position + Open the file and navigate to the error location + Fix the syntax error (add missing comma, bracket, or quote as indicated) + Save the file + Re-run validation with the same command + Repeat until validation passes + + Once validation passes, confirm with user: "Flowchart created at {{default_output_file}}. Open to view?" + + + + Validate against checklist at {{validation}} using {_bmad}/core/tasks/validate-workflow.xml + + + +``` diff --git a/src/bmm/workflows/excalidraw-diagrams/create-flowchart/workflow.yaml b/src/bmm/workflows/excalidraw-diagrams/create-flowchart/workflow.yaml new file mode 100644 index 00000000..6079d6de --- /dev/null +++ b/src/bmm/workflows/excalidraw-diagrams/create-flowchart/workflow.yaml @@ -0,0 +1,27 @@ +name: create-excalidraw-flowchart +description: "Create a flowchart visualization in Excalidraw format for processes, pipelines, or logic flows" +author: "BMad" + +# Config values +config_source: "{project-root}/_bmad/bmm/config.yaml" +output_folder: "{config_source}:output_folder" + +# Workflow components +installed_path: "{project-root}/_bmad/bmm/workflows/excalidraw-diagrams/create-flowchart" +shared_path: "{project-root}/_bmad/bmm/workflows/excalidraw-diagrams/_shared" +instructions: "{installed_path}/instructions.md" +validation: "{installed_path}/checklist.md" + +# Core Excalidraw resources (universal knowledge) +helpers: "{project-root}/_bmad/core/resources/excalidraw/excalidraw-helpers.md" +json_validation: "{project-root}/_bmad/core/resources/excalidraw/validate-json-instructions.md" + +# Domain-specific resources (technical diagrams) +templates: "{shared_path}/excalidraw-templates.yaml" +library: "{shared_path}/excalidraw-library.json" + +# Output file (respects user's configured output_folder) +default_output_file: "{output_folder}/excalidraw-diagrams/flowchart-{timestamp}.excalidraw" + +standalone: true +web_bundle: false diff --git a/src/bmm/workflows/excalidraw-diagrams/create-wireframe/checklist.md b/src/bmm/workflows/excalidraw-diagrams/create-wireframe/checklist.md new file mode 100644 index 00000000..3e2b26f4 --- /dev/null +++ 
b/src/bmm/workflows/excalidraw-diagrams/create-wireframe/checklist.md @@ -0,0 +1,38 @@ +# Create Wireframe - Validation Checklist + +## Layout Structure + +- [ ] Screen dimensions appropriate for device type +- [ ] Grid alignment (20px) maintained +- [ ] Consistent spacing between UI elements +- [ ] Proper hierarchy (header, content, footer) + +## UI Elements + +- [ ] All interactive elements clearly marked +- [ ] Buttons, inputs, and controls properly sized +- [ ] Text labels readable and appropriately sized +- [ ] Navigation elements clearly indicated + +## Fidelity + +- [ ] Matches requested fidelity level (low/medium/high) +- [ ] Appropriate level of detail +- [ ] Placeholder content used where needed +- [ ] No unnecessary decoration for low-fidelity + +## Annotations + +- [ ] Key interactions annotated +- [ ] Flow indicators present if multi-screen +- [ ] Important notes included +- [ ] Element purposes clear + +## Technical Quality + +- [ ] All elements properly grouped +- [ ] Text elements have containerId +- [ ] Snapped to grid +- [ ] No elements with `isDeleted: true` +- [ ] JSON is valid +- [ ] File saved to correct location diff --git a/src/bmm/workflows/excalidraw-diagrams/create-wireframe/instructions.md b/src/bmm/workflows/excalidraw-diagrams/create-wireframe/instructions.md new file mode 100644 index 00000000..dc9506b0 --- /dev/null +++ b/src/bmm/workflows/excalidraw-diagrams/create-wireframe/instructions.md @@ -0,0 +1,133 @@ +# Create Wireframe - Workflow Instructions + +```xml +The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml +You MUST have already loaded and processed: {installed_path}/workflow.yaml +This workflow creates website or app wireframes in Excalidraw format. + + + + + Review user's request and extract: wireframe type, fidelity level, screen count, device type, save location + Skip to Step 5 + + + + Ask: "What type of wireframe do you need?" + Present options: + 1. Website (Desktop) + 2. Mobile App (iOS/Android) + 3. Web App (Responsive) + 4. Tablet App + 5. Multi-platform + + WAIT for selection + + + + Ask fidelity level (Low/Medium/High) + Ask screen count (Single/Few 2-3/Multiple 4-6/Many 7+) + Ask device dimensions or use standard + Ask save location + + + + Check for existing theme.json, ask to use if exists + + + + Ask: "Choose a wireframe style:" + Present numbered options: + 1. Classic Wireframe + - Background: #ffffff (white) + - Container: #f5f5f5 (light gray) + - Border: #9e9e9e (gray) + - Text: #424242 (dark gray) + + 2. High Contrast + - Background: #ffffff (white) + - Container: #eeeeee (light gray) + - Border: #212121 (black) + - Text: #000000 (black) + + 3. Blueprint Style + - Background: #1a237e (dark blue) + - Container: #3949ab (blue) + - Border: #7986cb (light blue) + - Text: #ffffff (white) + + 4. Custom - Define your own colors + + WAIT for selection + Create theme.json based on selection + Confirm with user + + + + List all screens and their purposes + Map navigation flow between screens + Identify key UI elements for each screen + Show planned structure, confirm with user + + + + Load {{templates}} and extract `wireframe` section + Load {{library}} + Load theme.json + Load {{helpers}} + + + + Follow {{helpers}} for proper element creation + + For Each Screen: + - Create container/frame + - Add header section + - Add content areas + - Add navigation elements + - Add interactive elements (buttons, inputs) + - Add labels and annotations + + + Build Order: + 1. Screen containers + 2. 
Layout sections (header, content, footer) + 3. Navigation elements + 4. Content blocks + 5. Interactive elements + 6. Labels and annotations + 7. Flow indicators (if multi-screen) + + + Fidelity Guidelines: + - Low: Basic shapes, minimal detail, placeholder text + - Medium: More defined elements, some styling, representative content + - High: Detailed elements, realistic sizing, actual content examples + + + + + Strip unused elements and elements with isDeleted: true + Save to {{default_output_file}} + + + + NEVER delete the file if validation fails - always fix syntax errors + Run: node -e "JSON.parse(require('fs').readFileSync('{{default_output_file}}', 'utf8')); console.log('✓ Valid JSON')" + + Read the error message carefully - it shows the syntax error and position + Open the file and navigate to the error location + Fix the syntax error (add missing comma, bracket, or quote as indicated) + Save the file + Re-run validation with the same command + Repeat until validation passes + + Once validation passes, confirm with user + + + + Validate against {{validation}} + + + +``` diff --git a/src/bmm/workflows/excalidraw-diagrams/create-wireframe/workflow.yaml b/src/bmm/workflows/excalidraw-diagrams/create-wireframe/workflow.yaml new file mode 100644 index 00000000..d89005a7 --- /dev/null +++ b/src/bmm/workflows/excalidraw-diagrams/create-wireframe/workflow.yaml @@ -0,0 +1,27 @@ +name: create-excalidraw-wireframe +description: "Create website or app wireframes in Excalidraw format" +author: "BMad" + +# Config values +config_source: "{project-root}/_bmad/bmm/config.yaml" +output_folder: "{config_source}:output_folder" + +# Workflow components +installed_path: "{project-root}/_bmad/bmm/workflows/excalidraw-diagrams/create-wireframe" +shared_path: "{project-root}/_bmad/bmm/workflows/excalidraw-diagrams/_shared" +instructions: "{installed_path}/instructions.md" +validation: "{installed_path}/checklist.md" + +# Core Excalidraw resources (universal knowledge) +helpers: "{project-root}/_bmad/core/resources/excalidraw/excalidraw-helpers.md" +json_validation: "{project-root}/_bmad/core/resources/excalidraw/validate-json-instructions.md" + +# Domain-specific resources (technical diagrams) +templates: "{shared_path}/excalidraw-templates.yaml" +library: "{shared_path}/excalidraw-library.json" + +# Output file (respects user's configured output_folder) +default_output_file: "{output_folder}/excalidraw-diagrams/wireframe-{timestamp}.excalidraw" + +standalone: true +web_bundle: false diff --git a/src/bmm/workflows/testarch/atdd/atdd-checklist-template.md b/src/bmm/workflows/testarch/atdd/atdd-checklist-template.md new file mode 100644 index 00000000..5de70286 --- /dev/null +++ b/src/bmm/workflows/testarch/atdd/atdd-checklist-template.md @@ -0,0 +1,363 @@ +# ATDD Checklist - Epic {epic_num}, Story {story_num}: {story_title} + +**Date:** {date} +**Author:** {user_name} +**Primary Test Level:** {primary_level} + +--- + +## Story Summary + +{Brief 2-3 sentence summary of the user story} + +**As a** {user_role} +**I want** {feature_description} +**So that** {business_value} + +--- + +## Acceptance Criteria + +{List all testable acceptance criteria from the story} + +1. {Acceptance criterion 1} +2. {Acceptance criterion 2} +3. 
{Acceptance criterion 3} + +--- + +## Failing Tests Created (RED Phase) + +### E2E Tests ({e2e_test_count} tests) + +**File:** `{e2e_test_file_path}` ({line_count} lines) + +{List each E2E test with its current status and expected failure reason} + +- ✅ **Test:** {test_name} + - **Status:** RED - {failure_reason} + - **Verifies:** {what_this_test_validates} + +### API Tests ({api_test_count} tests) + +**File:** `{api_test_file_path}` ({line_count} lines) + +{List each API test with its current status and expected failure reason} + +- ✅ **Test:** {test_name} + - **Status:** RED - {failure_reason} + - **Verifies:** {what_this_test_validates} + +### Component Tests ({component_test_count} tests) + +**File:** `{component_test_file_path}` ({line_count} lines) + +{List each component test with its current status and expected failure reason} + +- ✅ **Test:** {test_name} + - **Status:** RED - {failure_reason} + - **Verifies:** {what_this_test_validates} + +--- + +## Data Factories Created + +{List all data factory files created with their exports} + +### {Entity} Factory + +**File:** `tests/support/factories/{entity}.factory.ts` + +**Exports:** + +- `create{Entity}(overrides?)` - Create single entity with optional overrides +- `create{Entity}s(count)` - Create array of entities + +**Example Usage:** + +```typescript +const user = createUser({ email: 'specific@example.com' }); +const users = createUsers(5); // Generate 5 random users +``` + +--- + +## Fixtures Created + +{List all test fixture files created with their fixture names and descriptions} + +### {Feature} Fixtures + +**File:** `tests/support/fixtures/{feature}.fixture.ts` + +**Fixtures:** + +- `{fixtureName}` - {description_of_what_fixture_provides} + - **Setup:** {what_setup_does} + - **Provides:** {what_test_receives} + - **Cleanup:** {what_cleanup_does} + +**Example Usage:** + +```typescript +import { test } from './fixtures/{feature}.fixture'; + +test('should do something', async ({ {fixtureName} }) => { + // {fixtureName} is ready to use with auto-cleanup +}); +``` + +--- + +## Mock Requirements + +{Document external services that need mocking and their requirements} + +### {Service Name} Mock + +**Endpoint:** `{HTTP_METHOD} {endpoint_url}` + +**Success Response:** + +```json +{ + {success_response_example} +} +``` + +**Failure Response:** + +```json +{ + {failure_response_example} +} +``` + +**Notes:** {any_special_mock_requirements} + +--- + +## Required data-testid Attributes + +{List all data-testid attributes required in UI implementation for test stability} + +### {Page or Component Name} + +- `{data-testid-name}` - {description_of_element} +- `{data-testid-name}` - {description_of_element} + +**Implementation Example:** + +```tsx + + +
{/* Hypothetical elements — the actual markup depends on the story's UI */}
<button data-testid="login-button">Log in</button>
<div data-testid="error-message">{errorText}</div>
+``` + +--- + +## Implementation Checklist + +{Map each failing test to concrete implementation tasks that will make it pass} + +### Test: {test_name_1} + +**File:** `{test_file_path}` + +**Tasks to make this test pass:** + +- [ ] {Implementation task 1} +- [ ] {Implementation task 2} +- [ ] {Implementation task 3} +- [ ] Add required data-testid attributes: {list_of_testids} +- [ ] Run test: `{test_execution_command}` +- [ ] ✅ Test passes (green phase) + +**Estimated Effort:** {effort_estimate} hours + +--- + +### Test: {test_name_2} + +**File:** `{test_file_path}` + +**Tasks to make this test pass:** + +- [ ] {Implementation task 1} +- [ ] {Implementation task 2} +- [ ] {Implementation task 3} +- [ ] Add required data-testid attributes: {list_of_testids} +- [ ] Run test: `{test_execution_command}` +- [ ] ✅ Test passes (green phase) + +**Estimated Effort:** {effort_estimate} hours + +--- + +## Running Tests + +```bash +# Run all failing tests for this story +{test_command_all} + +# Run specific test file +{test_command_specific_file} + +# Run tests in headed mode (see browser) +{test_command_headed} + +# Debug specific test +{test_command_debug} + +# Run tests with coverage +{test_command_coverage} +``` + +--- + +## Red-Green-Refactor Workflow + +### RED Phase (Complete) ✅ + +**TEA Agent Responsibilities:** + +- ✅ All tests written and failing +- ✅ Fixtures and factories created with auto-cleanup +- ✅ Mock requirements documented +- ✅ data-testid requirements listed +- ✅ Implementation checklist created + +**Verification:** + +- All tests run and fail as expected +- Failure messages are clear and actionable +- Tests fail due to missing implementation, not test bugs + +--- + +### GREEN Phase (DEV Team - Next Steps) + +**DEV Agent Responsibilities:** + +1. **Pick one failing test** from implementation checklist (start with highest priority) +2. **Read the test** to understand expected behavior +3. **Implement minimal code** to make that specific test pass +4. **Run the test** to verify it now passes (green) +5. **Check off the task** in implementation checklist +6. **Move to next test** and repeat + +**Key Principles:** + +- One test at a time (don't try to fix all at once) +- Minimal implementation (don't over-engineer) +- Run tests frequently (immediate feedback) +- Use implementation checklist as roadmap + +**Progress Tracking:** + +- Check off tasks as you complete them +- Share progress in daily standup + +--- + +### REFACTOR Phase (DEV Team - After All Tests Pass) + +**DEV Agent Responsibilities:** + +1. **Verify all tests pass** (green phase complete) +2. **Review code for quality** (readability, maintainability, performance) +3. **Extract duplications** (DRY principle) +4. **Optimize performance** (if needed) +5. **Ensure tests still pass** after each refactor +6. **Update documentation** (if API contracts change) + +**Key Principles:** + +- Tests provide safety net (refactor with confidence) +- Make small refactors (easier to debug if tests fail) +- Run tests after each change +- Don't change test behavior (only implementation) + +**Completion:** + +- All tests pass +- Code quality meets team standards +- No duplications or code smells +- Ready for code review and story approval + +--- + +## Next Steps + +1. **Share this checklist and failing tests** with the dev workflow (manual handoff) +2. **Review this checklist** with team in standup or planning +3. **Run failing tests** to confirm RED phase: `{test_command_all}` +4. 
**Begin implementation** using implementation checklist as guide +5. **Work one test at a time** (red → green for each) +6. **Share progress** in daily standup +7. **When all tests pass**, refactor code for quality +8. **When refactoring complete**, manually update story status to 'done' in sprint-status.yaml + +--- + +## Knowledge Base References Applied + +This ATDD workflow consulted the following knowledge fragments: + +- **fixture-architecture.md** - Test fixture patterns with setup/teardown and auto-cleanup using Playwright's `test.extend()` +- **data-factories.md** - Factory patterns using `@faker-js/faker` for random test data generation with overrides support +- **component-tdd.md** - Component test strategies using Playwright Component Testing +- **network-first.md** - Route interception patterns (intercept BEFORE navigation to prevent race conditions) +- **test-quality.md** - Test design principles (Given-When-Then, one assertion per test, determinism, isolation) +- **test-levels-framework.md** - Test level selection framework (E2E vs API vs Component vs Unit) + +See `tea-index.csv` for complete knowledge fragment mapping. + +--- + +## Test Execution Evidence + +### Initial Test Run (RED Phase Verification) + +**Command:** `{test_command_all}` + +**Results:** + +``` +{paste_test_run_output_showing_all_tests_failing} +``` + +**Summary:** + +- Total tests: {total_test_count} +- Passing: 0 (expected) +- Failing: {total_test_count} (expected) +- Status: ✅ RED phase verified + +**Expected Failure Messages:** +{list_expected_failure_messages_for_each_test} + +--- + +## Notes + +{Any additional notes, context, or special considerations for this story} + +- {Note 1} +- {Note 2} +- {Note 3} + +--- + +## Contact + +**Questions or Issues?** + +- Ask in team standup +- Tag @{tea_agent_username} in Slack/Discord +- Refer to `./bmm/docs/tea-README.md` for workflow documentation +- Consult `./bmm/testarch/knowledge` for testing best practices + +--- + +**Generated by BMad TEA Agent** - {date} diff --git a/src/bmm/workflows/testarch/atdd/checklist.md b/src/bmm/workflows/testarch/atdd/checklist.md new file mode 100644 index 00000000..ce94a14c --- /dev/null +++ b/src/bmm/workflows/testarch/atdd/checklist.md @@ -0,0 +1,374 @@ +# ATDD Workflow Validation Checklist + +Use this checklist to validate that the ATDD workflow has been executed correctly and all deliverables meet quality standards. 
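Several items below require verifying the RED phase with a local test run. A minimal sketch of one way to automate that check, assuming a Playwright project where `npx playwright test` exits non-zero when any test fails (the script path and name are illustrative, not part of the workflow):

```typescript
// scripts/verify-red-phase.ts — hypothetical helper, not generated by this workflow
import { execSync } from 'node:child_process';

try {
  // Playwright exits with a non-zero code when at least one test fails
  execSync('npx playwright test', { stdio: 'inherit' });
  // Reaching this line means every test passed — invalid before implementation exists
  console.error('RED phase NOT verified: tests passed before implementation');
  process.exit(1);
} catch {
  // A failing run is the expected outcome during the RED phase
  console.log('RED phase verified: suite failed as expected');
}
```

A non-zero exit can also come from configuration errors, so the "failures are due to missing implementation, not test bugs" items still require reading the actual output.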
+ +## Prerequisites + +Before starting this workflow, verify: + +- [ ] Story approved with clear acceptance criteria (AC must be testable) +- [ ] Development sandbox/environment ready +- [ ] Framework scaffolding exists (run `framework` workflow if missing) +- [ ] Test framework configuration available (playwright.config.ts or cypress.config.ts) +- [ ] Package.json has test dependencies installed (Playwright or Cypress) + +**Halt if missing:** Framework scaffolding or story acceptance criteria + +--- + +## Step 1: Story Context and Requirements + +- [ ] Story markdown file loaded and parsed successfully +- [ ] All acceptance criteria identified and extracted +- [ ] Affected systems and components identified +- [ ] Technical constraints documented +- [ ] Framework configuration loaded (playwright.config.ts or cypress.config.ts) +- [ ] Test directory structure identified from config +- [ ] Existing fixture patterns reviewed for consistency +- [ ] Similar test patterns searched and found in `{test_dir}` +- [ ] Knowledge base fragments loaded: + - [ ] `fixture-architecture.md` + - [ ] `data-factories.md` + - [ ] `component-tdd.md` + - [ ] `network-first.md` + - [ ] `test-quality.md` + +--- + +## Step 2: Test Level Selection and Strategy + +- [ ] Each acceptance criterion analyzed for appropriate test level +- [ ] Test level selection framework applied (E2E vs API vs Component vs Unit) +- [ ] E2E tests: Critical user journeys and multi-system integration identified +- [ ] API tests: Business logic and service contracts identified +- [ ] Component tests: UI component behavior and interactions identified +- [ ] Unit tests: Pure logic and edge cases identified (if applicable) +- [ ] Duplicate coverage avoided (same behavior not tested at multiple levels unnecessarily) +- [ ] Tests prioritized using P0-P3 framework (if test-design document exists) +- [ ] Primary test level set in `primary_level` variable (typically E2E or API) +- [ ] Test levels documented in ATDD checklist + +--- + +## Step 3: Failing Tests Generated + +### Test File Structure Created + +- [ ] Test files organized in appropriate directories: + - [ ] `tests/e2e/` for end-to-end tests + - [ ] `tests/api/` for API tests + - [ ] `tests/component/` for component tests + - [ ] `tests/support/` for infrastructure (fixtures, factories, helpers) + +### E2E Tests (If Applicable) + +- [ ] E2E test files created in `tests/e2e/` +- [ ] All tests follow Given-When-Then format +- [ ] Tests use `data-testid` selectors (not CSS classes or fragile selectors) +- [ ] One assertion per test (atomic test design) +- [ ] No hard waits or sleeps (explicit waits only) +- [ ] Network-first pattern applied (route interception BEFORE navigation) +- [ ] Tests fail initially (RED phase verified by local test run) +- [ ] Failure messages are clear and actionable + +### API Tests (If Applicable) + +- [ ] API test files created in `tests/api/` +- [ ] Tests follow Given-When-Then format +- [ ] API contracts validated (request/response structure) +- [ ] HTTP status codes verified +- [ ] Response body validation includes all required fields +- [ ] Error cases tested (400, 401, 403, 404, 500) +- [ ] Tests fail initially (RED phase verified) + +### Component Tests (If Applicable) + +- [ ] Component test files created in `tests/component/` +- [ ] Tests follow Given-When-Then format +- [ ] Component mounting works correctly +- [ ] Interaction testing covers user actions (click, hover, keyboard) +- [ ] State management within component validated +- [ ] Props and events 
tested +- [ ] Tests fail initially (RED phase verified) + +### Test Quality Validation + +- [ ] All tests use Given-When-Then structure with clear comments +- [ ] All tests have descriptive names explaining what they test +- [ ] No duplicate tests (same behavior tested multiple times) +- [ ] No flaky patterns (race conditions, timing issues) +- [ ] No test interdependencies (tests can run in any order) +- [ ] Tests are deterministic (same input always produces same result) + +--- + +## Step 4: Data Infrastructure Built + +### Data Factories Created + +- [ ] Factory files created in `tests/support/factories/` +- [ ] All factories use `@faker-js/faker` for random data generation (no hardcoded values) +- [ ] Factories support overrides for specific test scenarios +- [ ] Factories generate complete valid objects matching API contracts +- [ ] Helper functions for bulk creation provided (e.g., `createUsers(count)`) +- [ ] Factory exports are properly typed (TypeScript) + +### Test Fixtures Created + +- [ ] Fixture files created in `tests/support/fixtures/` +- [ ] All fixtures use Playwright's `test.extend()` pattern +- [ ] Fixtures have setup phase (arrange test preconditions) +- [ ] Fixtures provide data to tests via `await use(data)` +- [ ] Fixtures have teardown phase with auto-cleanup (delete created data) +- [ ] Fixtures are composable (can use other fixtures if needed) +- [ ] Fixtures are isolated (each test gets fresh data) +- [ ] Fixtures are type-safe (TypeScript types defined) + +### Mock Requirements Documented + +- [ ] External service mocking requirements identified +- [ ] Mock endpoints documented with URLs and methods +- [ ] Success response examples provided +- [ ] Failure response examples provided +- [ ] Mock requirements documented in ATDD checklist for DEV team + +### data-testid Requirements Listed + +- [ ] All required data-testid attributes identified from E2E tests +- [ ] data-testid list organized by page or component +- [ ] Each data-testid has clear description of element it targets +- [ ] data-testid list included in ATDD checklist for DEV team + +--- + +## Step 5: Implementation Checklist Created + +- [ ] Implementation checklist created with clear structure +- [ ] Each failing test mapped to concrete implementation tasks +- [ ] Tasks include: + - [ ] Route/component creation + - [ ] Business logic implementation + - [ ] API integration + - [ ] data-testid attribute additions + - [ ] Error handling + - [ ] Test execution command + - [ ] Completion checkbox +- [ ] Red-Green-Refactor workflow documented in checklist +- [ ] RED phase marked as complete (TEA responsibility) +- [ ] GREEN phase tasks listed for DEV team +- [ ] REFACTOR phase guidance provided +- [ ] Execution commands provided: + - [ ] Run all tests: `npm run test:e2e` + - [ ] Run specific test file + - [ ] Run in headed mode + - [ ] Debug specific test +- [ ] Estimated effort included (hours or story points) + +--- + +## Step 6: Deliverables Generated + +### ATDD Checklist Document Created + +- [ ] Output file created at `{output_folder}/atdd-checklist-{story_id}.md` +- [ ] Document follows template structure from `atdd-checklist-template.md` +- [ ] Document includes all required sections: + - [ ] Story summary + - [ ] Acceptance criteria breakdown + - [ ] Failing tests created (paths and line counts) + - [ ] Data factories created + - [ ] Fixtures created + - [ ] Mock requirements + - [ ] Required data-testid attributes + - [ ] Implementation checklist + - [ ] Red-green-refactor workflow + - [ ] 
Execution commands + - [ ] Next steps for DEV team +- [ ] Output shared with DEV workflow (manual handoff; not auto-consumed) + +### All Tests Verified to Fail (RED Phase) + +- [ ] Full test suite run locally before finalizing +- [ ] All tests fail as expected (RED phase confirmed) +- [ ] No tests passing before implementation (if passing, test is invalid) +- [ ] Failure messages documented in ATDD checklist +- [ ] Failures are due to missing implementation, not test bugs +- [ ] Test run output captured for reference + +### Summary Provided + +- [ ] Summary includes: + - [ ] Story ID + - [ ] Primary test level + - [ ] Test counts (E2E, API, Component) + - [ ] Test file paths + - [ ] Factory count + - [ ] Fixture count + - [ ] Mock requirements count + - [ ] data-testid count + - [ ] Implementation task count + - [ ] Estimated effort + - [ ] Next steps for DEV team + - [ ] Output file path + - [ ] Knowledge base references applied + +--- + +## Quality Checks + +### Test Design Quality + +- [ ] Tests are readable (clear Given-When-Then structure) +- [ ] Tests are maintainable (use factories and fixtures, not hardcoded data) +- [ ] Tests are isolated (no shared state between tests) +- [ ] Tests are deterministic (no race conditions or flaky patterns) +- [ ] Tests are atomic (one assertion per test) +- [ ] Tests are fast (no unnecessary waits or delays) + +### Knowledge Base Integration + +- [ ] fixture-architecture.md patterns applied to all fixtures +- [ ] data-factories.md patterns applied to all factories +- [ ] network-first.md patterns applied to E2E tests with network requests +- [ ] component-tdd.md patterns applied to component tests +- [ ] test-quality.md principles applied to all test design + +### Code Quality + +- [ ] All TypeScript types are correct and complete +- [ ] No linting errors in generated test files +- [ ] Consistent naming conventions followed +- [ ] Imports are organized and correct +- [ ] Code follows project style guide + +--- + +## Integration Points + +### With DEV Agent + +- [ ] ATDD checklist provides clear implementation guidance +- [ ] Implementation tasks are granular and actionable +- [ ] data-testid requirements are complete and clear +- [ ] Mock requirements include all necessary details +- [ ] Execution commands work correctly + +### With Story Workflow + +- [ ] Story ID correctly referenced in output files +- [ ] Acceptance criteria from story accurately reflected in tests +- [ ] Technical constraints from story considered in test design + +### With Framework Workflow + +- [ ] Test framework configuration correctly detected and used +- [ ] Directory structure matches framework setup +- [ ] Fixtures and helpers follow established patterns +- [ ] Naming conventions consistent with framework standards + +### With test-design Workflow (If Available) + +- [ ] P0 scenarios from test-design prioritized in ATDD +- [ ] Risk assessment from test-design considered in test coverage +- [ ] Coverage strategy from test-design aligned with ATDD tests + +--- + +## Completion Criteria + +All of the following must be true before marking this workflow as complete: + +- [ ] **Story acceptance criteria analyzed** and mapped to appropriate test levels +- [ ] **Failing tests created** at all appropriate levels (E2E, API, Component) +- [ ] **Given-When-Then format** used consistently across all tests +- [ ] **RED phase verified** by local test run (all tests failing as expected) +- [ ] **Network-first pattern** applied to E2E tests with network requests +- [ ] **Data factories 
created** using faker (no hardcoded test data) +- [ ] **Fixtures created** with auto-cleanup in teardown +- [ ] **Mock requirements documented** for external services +- [ ] **data-testid attributes listed** for DEV team +- [ ] **Implementation checklist created** mapping tests to code tasks +- [ ] **Red-green-refactor workflow documented** in ATDD checklist +- [ ] **Execution commands provided** and verified to work +- [ ] **ATDD checklist document created** and saved to correct location +- [ ] **Output file formatted correctly** using template structure +- [ ] **Knowledge base references applied** and documented in summary +- [ ] **No test quality issues** (flaky patterns, race conditions, hardcoded data) + +--- + +## Common Issues and Resolutions + +### Issue: Tests pass before implementation + +**Problem:** A test passes even though no implementation code exists yet. + +**Resolution:** + +- Review test to ensure it's testing actual behavior, not mocked/stubbed behavior +- Check if test is accidentally using existing functionality +- Verify test assertions are correct and meaningful +- Rewrite test to fail until implementation is complete + +### Issue: Network-first pattern not applied + +**Problem:** Route interception happens after navigation, causing race conditions. + +**Resolution:** + +- Move `await page.route()` calls BEFORE `await page.goto()` +- Review `network-first.md` knowledge fragment +- Update all E2E tests to follow network-first pattern + +### Issue: Hardcoded test data in tests + +**Problem:** Tests use hardcoded strings/numbers instead of factories. + +**Resolution:** + +- Replace all hardcoded data with factory function calls +- Use `faker` for all random data generation +- Update data-factories to support all required test scenarios + +### Issue: Fixtures missing auto-cleanup + +**Problem:** Fixtures create data but don't clean it up in teardown. + +**Resolution:** + +- Add cleanup logic after `await use(data)` in fixture +- Call deletion/cleanup functions in teardown +- Verify cleanup works by checking database/storage after test run + +### Issue: Tests have multiple assertions + +**Problem:** Tests verify multiple behaviors in single test (not atomic). + +**Resolution:** + +- Split into separate tests (one assertion per test) +- Each test should verify exactly one behavior +- Use descriptive test names to clarify what each test verifies + +### Issue: Tests depend on execution order + +**Problem:** Tests fail when run in isolation or different order. 
+ +**Resolution:** + +- Remove shared state between tests +- Each test should create its own test data +- Use fixtures for consistent setup across tests +- Verify tests can run with `.only` flag + +--- + +## Notes for TEA Agent + +- **Preflight halt is critical:** Do not proceed if story has no acceptance criteria or framework is missing +- **RED phase verification is mandatory:** Tests must fail before sharing with DEV team +- **Network-first pattern:** Route interception BEFORE navigation prevents race conditions +- **One assertion per test:** Atomic tests provide clear failure diagnosis +- **Auto-cleanup is non-negotiable:** Every fixture must clean up data in teardown +- **Use knowledge base:** Load relevant fragments (fixture-architecture, data-factories, network-first, component-tdd, test-quality) for guidance +- **Share with DEV agent:** ATDD checklist provides implementation roadmap from red to green diff --git a/src/bmm/workflows/testarch/atdd/instructions.md b/src/bmm/workflows/testarch/atdd/instructions.md new file mode 100644 index 00000000..aa748905 --- /dev/null +++ b/src/bmm/workflows/testarch/atdd/instructions.md @@ -0,0 +1,806 @@ + + +# Acceptance Test-Driven Development (ATDD) + +**Workflow ID**: `_bmad/bmm/testarch/atdd` +**Version**: 4.0 (BMad v6) + +--- + +## Overview + +Generates failing acceptance tests BEFORE implementation following TDD's red-green-refactor cycle. This workflow creates comprehensive test coverage at appropriate levels (E2E, API, Component) with supporting infrastructure (fixtures, factories, mocks) and provides an implementation checklist to guide development. + +**Core Principle**: Tests fail first (red phase), then guide development to green, then enable confident refactoring. + +--- + +## Preflight Requirements + +**Critical:** Verify these requirements before proceeding. If any fail, HALT and notify the user. + +- ✅ Story approved with clear acceptance criteria +- ✅ Development sandbox/environment ready +- ✅ Framework scaffolding exists (run `framework` workflow if missing) +- ✅ Test framework configuration available (playwright.config.ts or cypress.config.ts) + +--- + +## Step 1: Load Story Context and Requirements + +### Actions + +1. **Read Story Markdown** + - Load story file from `{story_file}` variable + - Extract acceptance criteria (all testable requirements) + - Identify affected systems and components + - Note any technical constraints or dependencies + +2. **Load Framework Configuration** + - Read framework config (playwright.config.ts or cypress.config.ts) + - Identify test directory structure + - Check existing fixture patterns + - Note test runner capabilities + +3. **Load Existing Test Patterns** + - Search `{test_dir}` for similar tests + - Identify reusable fixtures and helpers + - Check data factory patterns + - Note naming conventions + +4. **Check Playwright Utils Flag** + + Read `{config_source}` and check `config.tea_use_playwright_utils`. + +5. 
**Load Knowledge Base Fragments** + + **Critical:** Consult `{project-root}/_bmad/bmm/testarch/tea-index.csv` to load: + + **Core Patterns (Always load):** + - `data-factories.md` - Factory patterns using faker (override patterns, nested factories, API seeding, 498 lines, 5 examples) + - `component-tdd.md` - Component test strategies (red-green-refactor, provider isolation, accessibility, visual regression, 480 lines, 4 examples) + - `test-quality.md` - Test design principles (deterministic tests, isolated with cleanup, explicit assertions, length limits, execution time optimization, 658 lines, 5 examples) + - `test-healing-patterns.md` - Common failure patterns and healing strategies (stale selectors, race conditions, dynamic data, network errors, hard waits, 648 lines, 5 examples) + - `selector-resilience.md` - Selector best practices (data-testid > ARIA > text > CSS hierarchy, dynamic patterns, anti-patterns, 541 lines, 4 examples) + - `timing-debugging.md` - Race condition prevention and async debugging (network-first, deterministic waiting, anti-patterns, 370 lines, 3 examples) + + **If `config.tea_use_playwright_utils: true` (All Utilities):** + - `overview.md` - Playwright utils for ATDD patterns + - `api-request.md` - API test examples with schema validation + - `network-recorder.md` - HAR record/playback for UI acceptance tests + - `auth-session.md` - Auth setup for acceptance tests + - `intercept-network-call.md` - Network interception in ATDD scenarios + - `recurse.md` - Polling for async acceptance criteria + - `log.md` - Logging in ATDD tests + - `file-utils.md` - File download validation in acceptance tests + - `network-error-monitor.md` - Catch silent failures in ATDD + - `fixtures-composition.md` - Composing utilities for ATDD + + **If `config.tea_use_playwright_utils: false`:** + - `fixture-architecture.md` - Test fixture patterns with auto-cleanup (pure function → fixture → mergeTests composition, 406 lines, 5 examples) + - `network-first.md` - Route interception patterns (intercept before navigate, HAR capture, deterministic waiting, 489 lines, 5 examples) + +**Halt Condition:** If story has no acceptance criteria or framework is missing, HALT with message: "ATDD requires clear acceptance criteria and test framework setup" + +--- + +## Step 1.5: Generation Mode Selection (NEW - Phase 2.5) + +### Actions + +1. **Detect Generation Mode** + + Determine mode based on scenario complexity: + + **AI Generation Mode (DEFAULT)**: + - Clear acceptance criteria with standard patterns + - Uses: AI-generated tests from requirements + - Appropriate for: CRUD, auth, navigation, API tests + - Fastest approach + + **Recording Mode (OPTIONAL - Complex UI)**: + - Complex UI interactions (drag-drop, wizards, multi-page flows) + - Uses: Interactive test recording with Playwright MCP + - Appropriate for: Visual workflows, unclear requirements + - Only if config.tea_use_mcp_enhancements is true AND MCP available + +2. **AI Generation Mode (DEFAULT - Continue to Step 2)** + + For standard scenarios: + - Continue with existing workflow (Step 2: Select Test Levels and Strategy) + - AI generates tests based on acceptance criteria from Step 1 + - Use knowledge base patterns for test structure + +3. **Recording Mode (OPTIONAL - Complex UI Only)** + + For complex UI scenarios AND config.tea_use_mcp_enhancements is true: + + **A. 
Check MCP Availability** + + If Playwright MCP tools are available in your IDE: + - Use MCP recording mode (Step 3.B) + + If MCP unavailable: + - Fallback to AI generation mode (silent, automatic) + - Continue to Step 2 + + **B. Interactive Test Recording (MCP-Based)** + + Use Playwright MCP test-generator tools: + + **Setup:** + + ``` + 1. Use generator_setup_page to initialize recording session + 2. Navigate to application starting URL (from story context) + 3. Ready to record user interactions + ``` + + **Recording Process (Per Acceptance Criterion):** + + ``` + 4. Read acceptance criterion from story + 5. Manually execute test scenario using browser_* tools: + - browser_navigate: Navigate to pages + - browser_click: Click buttons, links, elements + - browser_type: Fill form fields + - browser_select: Select dropdown options + - browser_check: Check/uncheck checkboxes + 6. Add verification steps using browser_verify_* tools: + - browser_verify_text: Verify text content + - browser_verify_visible: Verify element visibility + - browser_verify_url: Verify URL navigation + 7. Capture interaction log with generator_read_log + 8. Generate test file with generator_write_test + 9. Repeat for next acceptance criterion + ``` + + **Post-Recording Enhancement:** + + ``` + 10. Review generated test code + 11. Enhance with knowledge base patterns: + - Add Given-When-Then comments + - Replace recorded selectors with data-testid (if needed) + - Add network-first interception (from network-first.md) + - Add fixtures for auth/data setup (from fixture-architecture.md) + - Use factories for test data (from data-factories.md) + 12. Verify tests fail (missing implementation) + 13. Continue to Step 4 (Build Data Infrastructure) + ``` + + **When to Use Recording Mode:** + - ✅ Complex UI interactions (drag-drop, multi-step forms, wizards) + - ✅ Visual workflows (modals, dialogs, animations) + - ✅ Unclear requirements (exploratory, discovering expected behavior) + - ✅ Multi-page flows (checkout, registration, onboarding) + - ❌ NOT for simple CRUD (AI generation faster) + - ❌ NOT for API-only tests (no UI to record) + + **When to Use AI Generation (Default):** + - ✅ Clear acceptance criteria available + - ✅ Standard patterns (login, CRUD, navigation) + - ✅ Need many tests quickly + - ✅ API/backend tests (no UI interaction) + +4. **Proceed to Test Level Selection** + + After mode selection: + - AI Generation: Continue to Step 2 (Select Test Levels and Strategy) + - Recording: Skip to Step 4 (Build Data Infrastructure) - tests already generated + +--- + +## Step 2: Select Test Levels and Strategy + +### Actions + +1. **Analyze Acceptance Criteria** + + For each acceptance criterion, determine: + - Does it require full user journey? → E2E test + - Does it test business logic/API contract? → API test + - Does it validate UI component behavior? → Component test + - Can it be unit tested? → Unit test + +2. 
**Apply Test Level Selection Framework** + + **Knowledge Base Reference**: `test-levels-framework.md` + + **E2E (End-to-End)**: + - Critical user journeys (login, checkout, core workflow) + - Multi-system integration + - User-facing acceptance criteria + - **Characteristics**: High confidence, slow execution, brittle + + **API (Integration)**: + - Business logic validation + - Service contracts + - Data transformations + - **Characteristics**: Fast feedback, good balance, stable + + **Component**: + - UI component behavior (buttons, forms, modals) + - Interaction testing + - Visual regression + - **Characteristics**: Fast, isolated, granular + + **Unit**: + - Pure business logic + - Edge cases + - Error handling + - **Characteristics**: Fastest, most granular + +3. **Avoid Duplicate Coverage** + + Don't test same behavior at multiple levels unless necessary: + - Use E2E for critical happy path only + - Use API tests for complex business logic variations + - Use component tests for UI interaction edge cases + - Use unit tests for pure logic edge cases + +4. **Prioritize Tests** + + If test-design document exists, align with priority levels: + - P0 scenarios → Must cover in failing tests + - P1 scenarios → Should cover if time permits + - P2/P3 scenarios → Optional for this iteration + +**Decision Point:** Set `primary_level` variable to main test level for this story (typically E2E or API) + +--- + +## Step 3: Generate Failing Tests + +### Actions + +1. **Create Test File Structure** + + ``` + tests/ + ├── e2e/ + │ └── {feature-name}.spec.ts # E2E acceptance tests + ├── api/ + │ └── {feature-name}.api.spec.ts # API contract tests + ├── component/ + │ └── {ComponentName}.test.tsx # Component tests + └── support/ + ├── fixtures/ # Test fixtures + ├── factories/ # Data factories + └── helpers/ # Utility functions + ``` + +2. **Write Failing E2E Tests (If Applicable)** + + **Use Given-When-Then format:** + + ```typescript + import { test, expect } from '@playwright/test'; + + test.describe('User Login', () => { + test('should display error for invalid credentials', async ({ page }) => { + // GIVEN: User is on login page + await page.goto('/login'); + + // WHEN: User submits invalid credentials + await page.fill('[data-testid="email-input"]', 'invalid@example.com'); + await page.fill('[data-testid="password-input"]', 'wrongpassword'); + await page.click('[data-testid="login-button"]'); + + // THEN: Error message is displayed + await expect(page.locator('[data-testid="error-message"]')).toHaveText('Invalid email or password'); + }); + }); + ``` + + **Critical patterns:** + - One assertion per test (atomic tests) + - Explicit waits (no hard waits/sleeps) + - Network-first approach (route interception before navigation) + - data-testid selectors for stability + - Clear Given-When-Then structure + +3. **Apply Network-First Pattern** + + **Knowledge Base Reference**: `network-first.md` + + ```typescript + test('should load user dashboard after login', async ({ page }) => { + // CRITICAL: Intercept routes BEFORE navigation + await page.route('**/api/user', (route) => + route.fulfill({ + status: 200, + body: JSON.stringify({ id: 1, name: 'Test User' }), + }), + ); + + // NOW navigate + await page.goto('/dashboard'); + + await expect(page.locator('[data-testid="user-name"]')).toHaveText('Test User'); + }); + ``` + +4. 
**Write Failing API Tests (If Applicable)**

   ```typescript
   import { test, expect } from '@playwright/test';

   test.describe('User API', () => {
     test('POST /api/users - should create new user', async ({ request }) => {
       // GIVEN: Valid user data
       const userData = {
         email: 'newuser@example.com',
         name: 'New User',
       };

       // WHEN: Creating user via API
       const response = await request.post('/api/users', {
         data: userData,
       });

       // THEN: User is created successfully
       expect(response.status()).toBe(201);
       const body = await response.json();
       expect(body).toMatchObject({
         email: userData.email,
         name: userData.name,
         id: expect.any(Number),
       });
     });
   });
   ```

5. **Write Failing Component Tests (If Applicable)**

   **Knowledge Base Reference**: `component-tdd.md`

   ```typescript
   import { test, expect } from '@playwright/experimental-ct-react';
   import { LoginForm } from './LoginForm';

   test.describe('LoginForm Component', () => {
     test('should disable submit button when fields are empty', async ({ mount }) => {
       // GIVEN: LoginForm is mounted
       const component = await mount(<LoginForm />);

       // WHEN: Form is initially rendered
       const submitButton = component.locator('button[type="submit"]');

       // THEN: Submit button is disabled
       await expect(submitButton).toBeDisabled();
     });
   });
   ```

6. **Verify Tests Fail Initially**

   **Critical verification:**

   - Run tests locally to confirm they fail
   - Failure should be due to missing implementation, not test errors
   - Failure messages should be clear and actionable
   - All tests must be in RED phase before sharing with DEV

**Important:** Tests MUST fail initially. If a test passes before implementation, it's not a valid acceptance test.

---

## Step 4: Build Data Infrastructure

### Actions

1. **Create Data Factories**

   **Knowledge Base Reference**: `data-factories.md`

   ```typescript
   // tests/support/factories/user.factory.ts
   import { faker } from '@faker-js/faker';

   export const createUser = (overrides = {}) => ({
     id: faker.number.int(),
     email: faker.internet.email(),
     name: faker.person.fullName(),
     createdAt: faker.date.recent().toISOString(),
     ...overrides,
   });

   export const createUsers = (count: number) => Array.from({ length: count }, () => createUser());
   ```

   **Factory principles:**

   - Use faker for random data (no hardcoded values)
   - Support overrides for specific scenarios
   - Generate complete valid objects
   - Include helper functions for bulk creation

2. **Create Test Fixtures**

   **Knowledge Base Reference**: `fixture-architecture.md`

   ```typescript
   // tests/support/fixtures/auth.fixture.ts
   // createUser / deleteUser: assumed API seeding helpers (see the factory pattern above)
   import { test as base } from '@playwright/test';

   export const test = base.extend({
     authenticatedUser: async ({ page }, use) => {
       // Setup: Create and authenticate user
       const user = await createUser();
       await page.goto('/login');
       await page.fill('[data-testid="email"]', user.email);
       await page.fill('[data-testid="password"]', 'password123');
       await page.click('[data-testid="login-button"]');
       await page.waitForURL('/dashboard');

       // Provide to test
       await use(user);

       // Cleanup: Delete user
       await deleteUser(user.id);
     },
   });
   ```

   **Fixture principles:**

   - Auto-cleanup (always delete created data)
   - Composable (fixtures can use other fixtures)
   - Isolated (each test gets fresh data)
   - Type-safe
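   The "type-safe" principle can be made concrete by typing the fixture contract. A minimal sketch, assuming a `User` shape that matches the factory above (names are illustrative):

   ```typescript
   import { test as base } from '@playwright/test';

   // Assumed shape — align with what createUser() actually returns
   type User = { id: number; email: string; name: string };

   // The generic parameter gives tests compile-time checks and autocomplete
   export const test = base.extend<{ authenticatedUser: User }>({
     authenticatedUser: async ({ page }, use) => {
       const user: User = { id: 1, email: 'u@example.com', name: 'U' }; // stand-in for createUser()
       await use(user); // same setup/cleanup flow as the auth.fixture example above
     },
   });
   ```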
3. **Document Mock Requirements**

   If external services need mocking, document requirements:

   ```markdown
   ### Mock Requirements for DEV Team

   **Payment Gateway Mock**:

   - Endpoint: `POST /api/payments`
   - Success response: `{ status: 'success', transactionId: '123' }`
   - Failure response: `{ status: 'failed', error: 'Insufficient funds' }`

   **Email Service Mock**:

   - Should not send real emails in test environment
   - Log email contents for verification
   ```

4. **List Required data-testid Attributes**

   ```markdown
   ### Required data-testid Attributes

   **Login Page**:

   - `email-input` - Email input field
   - `password-input` - Password input field
   - `login-button` - Submit button
   - `error-message` - Error message container

   **Dashboard Page**:

   - `user-name` - User name display
   - `logout-button` - Logout button
   ```

---

## Step 5: Create Implementation Checklist

### Actions

1. **Map Tests to Implementation Tasks**

   For each failing test, create corresponding implementation task:

   ```markdown
   ## Implementation Checklist

   ### Epic X - User Authentication

   #### Test: User Login with Valid Credentials

   - [ ] Create `/login` route
   - [ ] Implement login form component
   - [ ] Add email/password validation
   - [ ] Integrate authentication API
   - [ ] Add `data-testid` attributes: `email-input`, `password-input`, `login-button`
   - [ ] Implement error handling
   - [ ] Run test: `npm run test:e2e -- login.spec.ts`
   - [ ] ✅ Test passes (green phase)

   #### Test: Display Error for Invalid Credentials

   - [ ] Add error state management
   - [ ] Display error message UI
   - [ ] Add `data-testid="error-message"`
   - [ ] Run test: `npm run test:e2e -- login.spec.ts`
   - [ ] ✅ Test passes (green phase)
   ```

2. **Include Red-Green-Refactor Guidance**

   ```markdown
   ## Red-Green-Refactor Workflow

   **RED Phase** (Complete):

   - ✅ All tests written and failing
   - ✅ Fixtures and factories created
   - ✅ Mock requirements documented

   **GREEN Phase** (DEV Team):

   1. Pick one failing test
   2. Implement minimal code to make it pass
   3. Run test to verify green
   4. Move to next test
   5. Repeat until all tests pass

   **REFACTOR Phase** (DEV Team):

   1. All tests passing (green)
   2. Improve code quality
   3. Extract duplications
   4. Optimize performance
   5. Ensure tests still pass
   ```

3. **Add Execution Commands**

   ````markdown
   ## Running Tests

   ```bash
   # Run all failing tests
   npm run test:e2e

   # Run specific test file
   npm run test:e2e -- login.spec.ts

   # Run tests in headed mode (see browser)
   npm run test:e2e -- --headed

   # Debug specific test
   npm run test:e2e -- login.spec.ts --debug
   ```
   ````

---

## Step 6: Generate Deliverables

### Actions

1. **Create ATDD Checklist Document**

   Use template structure at `{installed_path}/atdd-checklist-template.md`:

   - Story summary
   - Acceptance criteria breakdown
   - Test files created (with paths)
   - Data factories created
   - Fixtures created
   - Mock requirements
   - Required data-testid attributes
   - Implementation checklist
   - Red-green-refactor workflow
   - Execution commands

2. **Verify All Tests Fail**

   Before finalizing:

   - Run full test suite locally
   - Confirm all tests in RED phase
   - Document expected failure messages
   - Ensure failures are due to missing implementation, not test bugs

3.
**Write to Output File** + + Save to `{output_folder}/atdd-checklist-{story_id}.md` + +--- + +## Important Notes + +### Red-Green-Refactor Cycle + +**RED Phase** (TEA responsibility): + +- Write failing tests first +- Tests define expected behavior +- Tests must fail for right reason (missing implementation) + +**GREEN Phase** (DEV responsibility): + +- Implement minimal code to pass tests +- One test at a time +- Don't over-engineer + +**REFACTOR Phase** (DEV responsibility): + +- Improve code quality with confidence +- Tests provide safety net +- Extract duplications, optimize + +### Given-When-Then Structure + +**GIVEN** (Setup): + +- Arrange test preconditions +- Create necessary data +- Navigate to starting point + +**WHEN** (Action): + +- Execute the behavior being tested +- Single action per test + +**THEN** (Assertion): + +- Verify expected outcome +- One assertion per test (atomic) + +### Network-First Testing + +**Critical pattern:** + +```typescript +// ✅ CORRECT: Intercept BEFORE navigation +await page.route('**/api/data', handler); +await page.goto('/page'); + +// ❌ WRONG: Navigate then intercept (race condition) +await page.goto('/page'); +await page.route('**/api/data', handler); // Too late! +``` + +### Data Factory Best Practices + +**Use faker for all test data:** + +```typescript +// ✅ CORRECT: Random data +email: faker.internet.email(); + +// ❌ WRONG: Hardcoded data (collisions, maintenance burden) +email: 'test@example.com'; +``` + +**Auto-cleanup principle:** + +- Every factory that creates data must provide cleanup +- Fixtures automatically cleanup in teardown +- No manual cleanup in test code + +### One Assertion Per Test + +**Atomic test design:** + +```typescript +// ✅ CORRECT: One assertion +test('should display user name', async ({ page }) => { + await expect(page.locator('[data-testid="user-name"]')).toHaveText('John'); +}); + +// ❌ WRONG: Multiple assertions (not atomic) +test('should display user info', async ({ page }) => { + await expect(page.locator('[data-testid="user-name"]')).toHaveText('John'); + await expect(page.locator('[data-testid="user-email"]')).toHaveText('john@example.com'); +}); +``` + +**Why?** If second assertion fails, you don't know if first is still valid. 
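For completeness, a minimal sketch of the atomic split of the example above (same hypothetical data-testid names):

```typescript
import { test, expect } from '@playwright/test';

// ✅ Each behavior gets its own test, so a failure points at exactly one assertion
test('should display user name', async ({ page }) => {
  await expect(page.locator('[data-testid="user-name"]')).toHaveText('John');
});

test('should display user email', async ({ page }) => {
  await expect(page.locator('[data-testid="user-email"]')).toHaveText('john@example.com');
});
```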
+ +### Component Test Strategy + +**When to use component tests:** + +- Complex UI interactions (drag-drop, keyboard nav) +- Form validation logic +- State management within component +- Visual edge cases + +**When NOT to use:** + +- Simple rendering (snapshot tests are sufficient) +- Integration with backend (use E2E or API tests) +- Full user journeys (use E2E tests) + +### Knowledge Base Integration + +**Core Fragments (Auto-loaded in Step 1):** + +- `fixture-architecture.md` - Pure function → fixture → mergeTests patterns (406 lines, 5 examples) +- `data-factories.md` - Factory patterns with faker, overrides, API seeding (498 lines, 5 examples) +- `component-tdd.md` - Red-green-refactor, provider isolation, accessibility, visual regression (480 lines, 4 examples) +- `network-first.md` - Intercept before navigate, HAR capture, deterministic waiting (489 lines, 5 examples) +- `test-quality.md` - Deterministic tests, cleanup, explicit assertions, length/time limits (658 lines, 5 examples) +- `test-healing-patterns.md` - Common failure patterns: stale selectors, race conditions, dynamic data, network errors, hard waits (648 lines, 5 examples) +- `selector-resilience.md` - Selector hierarchy (data-testid > ARIA > text > CSS), dynamic patterns, anti-patterns (541 lines, 4 examples) +- `timing-debugging.md` - Race condition prevention, deterministic waiting, async debugging (370 lines, 3 examples) + +**Reference for Test Level Selection:** + +- `test-levels-framework.md` - E2E vs API vs Component vs Unit decision framework (467 lines, 4 examples) + +**Manual Reference (Optional):** + +- Use `tea-index.csv` to find additional specialized fragments as needed + +--- + +## Output Summary + +After completing this workflow, provide a summary: + +```markdown +## ATDD Complete - Tests in RED Phase + +**Story**: {story_id} +**Primary Test Level**: {primary_level} + +**Failing Tests Created**: + +- E2E tests: {e2e_count} tests in {e2e_files} +- API tests: {api_count} tests in {api_files} +- Component tests: {component_count} tests in {component_files} + +**Supporting Infrastructure**: + +- Data factories: {factory_count} factories created +- Fixtures: {fixture_count} fixtures with auto-cleanup +- Mock requirements: {mock_count} services documented + +**Implementation Checklist**: + +- Total tasks: {task_count} +- Estimated effort: {effort_estimate} hours + +**Required data-testid Attributes**: {data_testid_count} attributes documented + +**Next Steps for DEV Team**: + +1. Run failing tests: `npm run test:e2e` +2. Review implementation checklist +3. Implement one test at a time (RED → GREEN) +4. Refactor with confidence (tests provide safety net) +5. Share progress in daily standup + +**Output File**: {output_file} +**Manual Handoff**: Share `{output_file}` and failing tests with the dev workflow (not auto-consumed). 
+ +**Knowledge Base References Applied**: + +- Fixture architecture patterns +- Data factory patterns with faker +- Network-first route interception +- Component TDD strategies +- Test quality principles +``` + +--- + +## Validation + +After completing all steps, verify: + +- [ ] Story acceptance criteria analyzed and mapped to tests +- [ ] Appropriate test levels selected (E2E, API, Component) +- [ ] All tests written in Given-When-Then format +- [ ] All tests fail initially (RED phase verified) +- [ ] Network-first pattern applied (route interception before navigation) +- [ ] Data factories created with faker +- [ ] Fixtures created with auto-cleanup +- [ ] Mock requirements documented for DEV team +- [ ] Required data-testid attributes listed +- [ ] Implementation checklist created with clear tasks +- [ ] Red-green-refactor workflow documented +- [ ] Execution commands provided +- [ ] Output file created and formatted correctly + +Refer to `checklist.md` for comprehensive validation criteria. diff --git a/src/bmm/workflows/testarch/atdd/workflow.yaml b/src/bmm/workflows/testarch/atdd/workflow.yaml new file mode 100644 index 00000000..12b8808b --- /dev/null +++ b/src/bmm/workflows/testarch/atdd/workflow.yaml @@ -0,0 +1,47 @@ +# Test Architect workflow: atdd +name: testarch-atdd +description: "Generate failing acceptance tests before implementation using TDD red-green-refactor cycle" +author: "BMad" + +# Critical variables from config +config_source: "{project-root}/_bmad/bmm/config.yaml" +output_folder: "{config_source}:output_folder" +user_name: "{config_source}:user_name" +communication_language: "{config_source}:communication_language" +document_output_language: "{config_source}:document_output_language" +date: system-generated + +# Workflow components +installed_path: "{project-root}/_bmad/bmm/workflows/testarch/atdd" +instructions: "{installed_path}/instructions.md" +validation: "{installed_path}/checklist.md" +template: "{installed_path}/atdd-checklist-template.md" + +# Variables and inputs +variables: + test_dir: "{project-root}/tests" # Root test directory + +# Output configuration +default_output_file: "{output_folder}/atdd-checklist-{story_id}.md" + +# Required tools +required_tools: + - read_file # Read story markdown, framework config + - write_file # Create test files, checklist, factory stubs + - create_directory # Create test directories + - list_files # Find existing fixtures and helpers + - search_repo # Search for similar test patterns + +tags: + - qa + - atdd + - test-architect + - tdd + - red-green-refactor + +execution_hints: + interactive: false # Minimize prompts + autonomous: true # Proceed without user input unless blocked + iterative: true + +web_bundle: false diff --git a/src/bmm/workflows/testarch/automate/checklist.md b/src/bmm/workflows/testarch/automate/checklist.md new file mode 100644 index 00000000..cc8c50a5 --- /dev/null +++ b/src/bmm/workflows/testarch/automate/checklist.md @@ -0,0 +1,582 @@ +# Automate Workflow Validation Checklist + +Use this checklist to validate that the automate workflow has been executed correctly and all deliverables meet quality standards. 
+ +## Prerequisites + +Before starting this workflow, verify: + +- [ ] Framework scaffolding configured (playwright.config.ts or cypress.config.ts exists) +- [ ] Test directory structure exists (tests/ folder with subdirectories) +- [ ] Package.json has test framework dependencies installed + +**Halt only if:** Framework scaffolding is completely missing (run `framework` workflow first) + +**Note:** BMad artifacts (story, tech-spec, PRD) are OPTIONAL - workflow can run without them +**Note:** `automate` generates tests; it does not run `*atdd` or `*test-review`. If ATDD outputs exist, use them as input and avoid duplicate coverage. + +--- + +## Step 1: Execution Mode Determination and Context Loading + +### Mode Detection + +- [ ] Execution mode correctly determined: + - [ ] BMad-Integrated Mode (story_file variable set) OR + - [ ] Standalone Mode (target_feature or target_files set) OR + - [ ] Auto-discover Mode (no targets specified) + +### BMad Artifacts (If Available - OPTIONAL) + +- [ ] Story markdown loaded (if `{story_file}` provided) +- [ ] Acceptance criteria extracted from story (if available) +- [ ] Tech-spec.md loaded (if `{use_tech_spec}` true and file exists) +- [ ] Test-design.md loaded (if `{use_test_design}` true and file exists) +- [ ] PRD.md loaded (if `{use_prd}` true and file exists) +- [ ] **Note**: Absence of BMad artifacts does NOT halt workflow + +### Framework Configuration + +- [ ] Test framework config loaded (playwright.config.ts or cypress.config.ts) +- [ ] Test directory structure identified from `{test_dir}` +- [ ] Existing test patterns reviewed +- [ ] Test runner capabilities noted (parallel execution, fixtures, etc.) + +### Coverage Analysis + +- [ ] Existing test files searched in `{test_dir}` (if `{analyze_coverage}` true) +- [ ] Tested features vs untested features identified +- [ ] Coverage gaps mapped (tests to source files) +- [ ] Existing fixture and factory patterns checked + +### Knowledge Base Fragments Loaded + +- [ ] `test-levels-framework.md` - Test level selection +- [ ] `test-priorities.md` - Priority classification (P0-P3) +- [ ] `fixture-architecture.md` - Fixture patterns with auto-cleanup +- [ ] `data-factories.md` - Factory patterns using faker +- [ ] `selective-testing.md` - Targeted test execution strategies +- [ ] `ci-burn-in.md` - Flaky test detection patterns +- [ ] `test-quality.md` - Test design principles + +--- + +## Step 2: Automation Targets Identification + +### Target Determination + +**BMad-Integrated Mode (if story available):** + +- [ ] Acceptance criteria mapped to test scenarios +- [ ] Features implemented in story identified +- [ ] Existing ATDD tests checked (if any) +- [ ] Expansion beyond ATDD planned (edge cases, negative paths) + +**Standalone Mode (if no story):** + +- [ ] Specific feature analyzed (if `{target_feature}` specified) +- [ ] Specific files analyzed (if `{target_files}` specified) +- [ ] Features auto-discovered (if `{auto_discover_features}` true) +- [ ] Features prioritized by: + - [ ] No test coverage (highest priority) + - [ ] Complex business logic + - [ ] External integrations (API, database, auth) + - [ ] Critical user paths (login, checkout, etc.) 
+ +### Test Level Selection + +- [ ] Test level selection framework applied (from `test-levels-framework.md`) +- [ ] E2E tests identified: Critical user journeys, multi-system integration +- [ ] API tests identified: Business logic, service contracts, data transformations +- [ ] Component tests identified: UI behavior, interactions, state management +- [ ] Unit tests identified: Pure logic, edge cases, error handling + +### Duplicate Coverage Avoidance + +- [ ] Same behavior NOT tested at multiple levels unnecessarily +- [ ] E2E used for critical happy path only +- [ ] API tests used for business logic variations +- [ ] Component tests used for UI interaction edge cases +- [ ] Unit tests used for pure logic edge cases + +### Priority Assignment + +- [ ] Test priorities assigned using `test-priorities.md` framework +- [ ] P0 tests: Critical paths, security-critical, data integrity +- [ ] P1 tests: Important features, integration points, error handling +- [ ] P2 tests: Edge cases, less-critical variations, performance +- [ ] P3 tests: Nice-to-have, rarely-used features, exploratory +- [ ] Priority variables respected: + - [ ] `{include_p0}` = true (always include) + - [ ] `{include_p1}` = true (high priority) + - [ ] `{include_p2}` = true (medium priority) + - [ ] `{include_p3}` = false (low priority, skip by default) + +### Coverage Plan Created + +- [ ] Test coverage plan documented +- [ ] What will be tested at each level listed +- [ ] Priorities assigned to each test +- [ ] Coverage strategy clear (critical-paths, comprehensive, or selective) + +--- + +## Step 3: Test Infrastructure Generated + +### Fixture Architecture + +- [ ] Existing fixtures checked in `tests/support/fixtures/` +- [ ] Fixture architecture created/enhanced (if `{generate_fixtures}` true) +- [ ] All fixtures use Playwright's `test.extend()` pattern +- [ ] All fixtures have auto-cleanup in teardown +- [ ] Common fixtures created/enhanced: + - [ ] authenticatedUser (with auto-delete) + - [ ] apiRequest (authenticated client) + - [ ] mockNetwork (external service mocking) + - [ ] testDatabase (with auto-cleanup) + +### Data Factories + +- [ ] Existing factories checked in `tests/support/factories/` +- [ ] Factory architecture created/enhanced (if `{generate_factories}` true) +- [ ] All factories use `@faker-js/faker` for random data (no hardcoded values) +- [ ] All factories support overrides for specific scenarios +- [ ] Common factories created/enhanced: + - [ ] User factory (email, password, name, role) + - [ ] Product factory (name, price, SKU) + - [ ] Order factory (items, total, status) +- [ ] Cleanup helpers provided (e.g., deleteUser(), deleteProduct()) + +### Helper Utilities + +- [ ] Existing helpers checked in `tests/support/helpers/` (if `{update_helpers}` true) +- [ ] Common utilities created/enhanced: + - [ ] waitFor (polling for complex conditions) + - [ ] retry (retry helper for flaky operations) + - [ ] testData (test data generation) + - [ ] assertions (custom assertion helpers) + +--- + +## Step 4: Test Files Generated + +### Test File Structure + +- [ ] Test files organized correctly: + - [ ] `tests/e2e/` for E2E tests + - [ ] `tests/api/` for API tests + - [ ] `tests/component/` for component tests + - [ ] `tests/unit/` for unit tests + - [ ] `tests/support/` for fixtures/factories/helpers + +### E2E Tests (If Applicable) + +- [ ] E2E test files created in `tests/e2e/` +- [ ] All tests follow Given-When-Then format +- [ ] All tests have priority tags ([P0], [P1], [P2], [P3]) in test name +- [ ] All 
tests use data-testid selectors (not CSS classes) +- [ ] One assertion per test (atomic design) +- [ ] No hard waits or sleeps (explicit waits only) +- [ ] Network-first pattern applied (route interception BEFORE navigation) +- [ ] Clear Given-When-Then comments in test code + +### API Tests (If Applicable) + +- [ ] API test files created in `tests/api/` +- [ ] All tests follow Given-When-Then format +- [ ] All tests have priority tags in test name +- [ ] API contracts validated (request/response structure) +- [ ] HTTP status codes verified +- [ ] Response body validation includes required fields +- [ ] Error cases tested (400, 401, 403, 404, 500) +- [ ] JWT token format validated (if auth tests) + +### Component Tests (If Applicable) + +- [ ] Component test files created in `tests/component/` +- [ ] All tests follow Given-When-Then format +- [ ] All tests have priority tags in test name +- [ ] Component mounting works correctly +- [ ] Interaction testing covers user actions (click, hover, keyboard) +- [ ] State management validated +- [ ] Props and events tested + +### Unit Tests (If Applicable) + +- [ ] Unit test files created in `tests/unit/` +- [ ] All tests follow Given-When-Then format +- [ ] All tests have priority tags in test name +- [ ] Pure logic tested (no dependencies) +- [ ] Edge cases covered +- [ ] Error handling tested + +### Quality Standards Enforced + +- [ ] All tests use Given-When-Then format with clear comments +- [ ] All tests have descriptive names with priority tags +- [ ] No duplicate tests (same behavior tested multiple times) +- [ ] No flaky patterns (race conditions, timing issues) +- [ ] No test interdependencies (tests can run in any order) +- [ ] Tests are deterministic (same input always produces same result) +- [ ] All tests use data-testid selectors (E2E tests) +- [ ] No hard waits: `await page.waitForTimeout()` (forbidden) +- [ ] No conditional flow: `if (await element.isVisible())` (forbidden) +- [ ] No try-catch for test logic (only for cleanup) +- [ ] No hardcoded test data (use factories with faker) +- [ ] No page object classes (tests are direct and simple) +- [ ] No shared state between tests + +### Network-First Pattern Applied + +- [ ] Route interception set up BEFORE navigation (E2E tests with network requests) +- [ ] `page.route()` called before `page.goto()` to prevent race conditions +- [ ] Network-first pattern verified in all E2E tests that make API calls + +--- + +## Step 5: Test Validation and Healing (NEW - Phase 2.5) + +### Healing Configuration + +- [ ] Healing configuration checked: + - [ ] `{auto_validate}` setting noted (default: true) + - [ ] `{auto_heal_failures}` setting noted (default: false) + - [ ] `{max_healing_iterations}` setting noted (default: 3) + - [ ] `{use_mcp_healing}` setting noted (default: true) + +### Healing Knowledge Fragments Loaded (If Healing Enabled) + +- [ ] `test-healing-patterns.md` loaded (common failure patterns and fixes) +- [ ] `selector-resilience.md` loaded (selector refactoring guide) +- [ ] `timing-debugging.md` loaded (race condition fixes) + +### Test Execution and Validation + +- [ ] Generated tests executed (if `{auto_validate}` true) +- [ ] Test results captured: + - [ ] Total tests run + - [ ] Passing tests count + - [ ] Failing tests count + - [ ] Error messages and stack traces captured + +### Healing Loop (If Enabled and Tests Failed) + +- [ ] Healing loop entered (if `{auto_heal_failures}` true AND tests failed) +- [ ] For each failing test: + - [ ] Failure pattern identified 
(selector, timing, data, network, hard wait) + - [ ] Appropriate healing strategy applied: + - [ ] Stale selector → Replaced with data-testid or ARIA role + - [ ] Race condition → Added network-first interception or state waits + - [ ] Dynamic data → Replaced hardcoded values with regex/dynamic generation + - [ ] Network error → Added route mocking + - [ ] Hard wait → Replaced with event-based wait + - [ ] Healed test re-run to validate fix + - [ ] Iteration count tracked (max 3 attempts) + +### Unfixable Tests Handling + +- [ ] Tests that couldn't be healed after 3 iterations marked with `test.fixme()` (if `{mark_unhealable_as_fixme}` true) +- [ ] Detailed comment added to test.fixme() tests: + - [ ] What failure occurred + - [ ] What healing was attempted (3 iterations) + - [ ] Why healing failed + - [ ] Manual investigation steps needed +- [ ] Original test logic preserved in comments + +### Healing Report Generated + +- [ ] Healing report generated (if healing attempted) +- [ ] Report includes: + - [ ] Auto-heal enabled status + - [ ] Healing mode (MCP-assisted or Pattern-based) + - [ ] Iterations allowed (max_healing_iterations) + - [ ] Validation results (total, passing, failing) + - [ ] Successfully healed tests (count, file:line, fix applied) + - [ ] Unable to heal tests (count, file:line, reason) + - [ ] Healing patterns applied (selector fixes, timing fixes, data fixes) + - [ ] Knowledge base references used + +--- + +## Step 6: Documentation and Scripts Updated + +### Test README Updated + +- [ ] `tests/README.md` created or updated (if `{update_readme}` true) +- [ ] Test suite structure overview included +- [ ] Test execution instructions provided (all, specific files, by priority) +- [ ] Fixture usage examples provided +- [ ] Factory usage examples provided +- [ ] Priority tagging convention explained ([P0], [P1], [P2], [P3]) +- [ ] How to write new tests documented +- [ ] Common patterns documented +- [ ] Anti-patterns documented (what to avoid) + +### package.json Scripts Updated + +- [ ] package.json scripts added/updated (if `{update_package_scripts}` true) +- [ ] `test:e2e` script for all E2E tests +- [ ] `test:e2e:p0` script for P0 tests only +- [ ] `test:e2e:p1` script for P0 + P1 tests +- [ ] `test:api` script for API tests +- [ ] `test:component` script for component tests +- [ ] `test:unit` script for unit tests (if applicable) + +### Test Suite Executed + +- [ ] Test suite run locally (if `{run_tests_after_generation}` true) +- [ ] Test results captured (passing/failing counts) +- [ ] No flaky patterns detected (tests are deterministic) +- [ ] Setup requirements documented (if any) +- [ ] Known issues documented (if any) + +--- + +## Step 7: Automation Summary Generated + +### Automation Summary Document + +- [ ] Output file created at `{output_summary}` +- [ ] Document includes execution mode (BMad-Integrated, Standalone, Auto-discover) +- [ ] Feature analysis included (source files, coverage gaps) - Standalone mode +- [ ] Tests created listed (E2E, API, Component, Unit) with counts and paths +- [ ] Infrastructure created listed (fixtures, factories, helpers) +- [ ] Test execution instructions provided +- [ ] Coverage analysis included: + - [ ] Total test count + - [ ] Priority breakdown (P0, P1, P2, P3 counts) + - [ ] Test level breakdown (E2E, API, Component, Unit counts) + - [ ] Coverage percentage (if calculated) + - [ ] Coverage status (acceptance criteria covered, gaps identified) +- [ ] Definition of Done checklist included +- [ ] Next steps provided +- [ ] 
Recommendations included (if Standalone mode) + +### Summary Provided to User + +- [ ] Concise summary output provided +- [ ] Total tests created across test levels +- [ ] Priority breakdown (P0, P1, P2, P3 counts) +- [ ] Infrastructure counts (fixtures, factories, helpers) +- [ ] Test execution command provided +- [ ] Output file path provided +- [ ] Next steps listed + +--- + +## Quality Checks + +### Test Design Quality + +- [ ] Tests are readable (clear Given-When-Then structure) +- [ ] Tests are maintainable (use factories/fixtures, not hardcoded data) +- [ ] Tests are isolated (no shared state between tests) +- [ ] Tests are deterministic (no race conditions or flaky patterns) +- [ ] Tests are atomic (one assertion per test) +- [ ] Tests are fast (no unnecessary waits or delays) +- [ ] Tests are lean (files under {max_file_lines} lines) + +### Knowledge Base Integration + +- [ ] Test level selection framework applied (from `test-levels-framework.md`) +- [ ] Priority classification applied (from `test-priorities.md`) +- [ ] Fixture architecture patterns applied (from `fixture-architecture.md`) +- [ ] Data factory patterns applied (from `data-factories.md`) +- [ ] Selective testing strategies considered (from `selective-testing.md`) +- [ ] Flaky test detection patterns considered (from `ci-burn-in.md`) +- [ ] Test quality principles applied (from `test-quality.md`) + +### Code Quality + +- [ ] All TypeScript types are correct and complete +- [ ] No linting errors in generated test files +- [ ] Consistent naming conventions followed +- [ ] Imports are organized and correct +- [ ] Code follows project style guide +- [ ] No console.log or debug statements in test code + +--- + +## Integration Points + +### With Framework Workflow + +- [ ] Test framework configuration detected and used +- [ ] Directory structure matches framework setup +- [ ] Fixtures and helpers follow established patterns +- [ ] Naming conventions consistent with framework standards + +### With BMad Workflows (If Available - OPTIONAL) + +**With Story Workflow:** + +- [ ] Story ID correctly referenced in output (if story available) +- [ ] Acceptance criteria from story reflected in tests (if story available) +- [ ] Technical constraints from story considered (if story available) + +**With test-design Workflow:** + +- [ ] P0 scenarios from test-design prioritized (if test-design available) +- [ ] Risk assessment from test-design considered (if test-design available) +- [ ] Coverage strategy aligned with test-design (if test-design available) + +**With atdd Workflow:** + +- [ ] ATDD artifacts provided or located (manual handoff; `atdd` not auto-run) +- [ ] Existing ATDD tests checked (if story had ATDD workflow run) +- [ ] Expansion beyond ATDD planned (edge cases, negative paths) +- [ ] No duplicate coverage with ATDD tests + +### With CI Pipeline + +- [ ] Tests can run in CI environment +- [ ] Tests are parallelizable (no shared state) +- [ ] Tests have appropriate timeouts +- [ ] Tests clean up their data (no CI environment pollution) + +--- + +## Completion Criteria + +All of the following must be true before marking this workflow as complete: + +- [ ] **Execution mode determined** (BMad-Integrated, Standalone, or Auto-discover) +- [ ] **Framework configuration loaded** and validated +- [ ] **Coverage analysis completed** (gaps identified if analyze_coverage true) +- [ ] **Automation targets identified** (what needs testing) +- [ ] **Test levels selected** appropriately (E2E, API, Component, Unit) +- [ ] **Duplicate 
coverage avoided** (same behavior not tested at multiple levels) +- [ ] **Test priorities assigned** (P0, P1, P2, P3) +- [ ] **Fixture architecture created/enhanced** with auto-cleanup +- [ ] **Data factories created/enhanced** using faker (no hardcoded data) +- [ ] **Helper utilities created/enhanced** (if needed) +- [ ] **Test files generated** at appropriate levels (E2E, API, Component, Unit) +- [ ] **Given-When-Then format used** consistently across all tests +- [ ] **Priority tags added** to all test names ([P0], [P1], [P2], [P3]) +- [ ] **data-testid selectors used** in E2E tests (not CSS classes) +- [ ] **Network-first pattern applied** (route interception before navigation) +- [ ] **Quality standards enforced** (no hard waits, no flaky patterns, self-cleaning, deterministic) +- [ ] **Test README updated** with execution instructions and patterns +- [ ] **package.json scripts updated** with test execution commands +- [ ] **Test suite run locally** (if run_tests_after_generation true) +- [ ] **Tests validated** (if auto_validate enabled) +- [ ] **Failures healed** (if auto_heal_failures enabled and tests failed) +- [ ] **Healing report generated** (if healing attempted) +- [ ] **Unfixable tests marked** with test.fixme() and detailed comments (if any) +- [ ] **Automation summary created** and saved to correct location +- [ ] **Output file formatted correctly** +- [ ] **Knowledge base references applied** and documented (including healing fragments if used) +- [ ] **No test quality issues** (flaky patterns, race conditions, hardcoded data, page objects) + +--- + +## Common Issues and Resolutions + +### Issue: BMad artifacts not found + +**Problem:** Story, tech-spec, or PRD files not found when variables are set. + +**Resolution:** + +- **automate does NOT require BMad artifacts** - they are OPTIONAL enhancements +- If files not found, switch to Standalone Mode automatically +- Analyze source code directly without BMad context +- Continue workflow without halting + +### Issue: Framework configuration not found + +**Problem:** No playwright.config.ts or cypress.config.ts found. + +**Resolution:** + +- **HALT workflow** - framework is required +- Message: "Framework scaffolding required. Run `bmad tea *framework` first." +- User must run framework workflow before automate + +### Issue: No automation targets identified + +**Problem:** Neither story, target_feature, nor target_files specified, and auto-discover finds nothing. + +**Resolution:** + +- Check if source_dir variable is correct +- Verify source code exists in project +- Ask user to specify target_feature or target_files explicitly +- Provide examples: `target_feature: "src/auth/"` or `target_files: "src/auth/login.ts,src/auth/session.ts"` + +### Issue: Duplicate coverage detected + +**Problem:** Same behavior tested at multiple levels (E2E + API + Component). + +**Resolution:** + +- Review test level selection framework (test-levels-framework.md) +- Use E2E for critical happy path ONLY +- Use API for business logic variations +- Use Component for UI edge cases +- Remove redundant tests that duplicate coverage + +### Issue: Tests have hardcoded data + +**Problem:** Tests use hardcoded email addresses, passwords, or other data. 
+ +**Resolution:** + +- Replace all hardcoded data with factory function calls +- Use faker for all random data generation +- Update data-factories to support all required test scenarios +- Example: `createUser({ email: faker.internet.email() })` + +### Issue: Tests are flaky + +**Problem:** Tests fail intermittently, pass on retry. + +**Resolution:** + +- Remove all hard waits (`page.waitForTimeout()`) +- Use explicit waits (`page.waitForSelector()`) +- Apply network-first pattern (route interception before navigation) +- Remove conditional flow (`if (await element.isVisible())`) +- Ensure tests are deterministic (no race conditions) +- Run burn-in loop (10 iterations) to detect flakiness + +### Issue: Fixtures don't clean up data + +**Problem:** Test data persists after test run, causing test pollution. + +**Resolution:** + +- Ensure all fixtures have cleanup in teardown phase +- Cleanup happens AFTER `await use(data)` +- Call deletion/cleanup functions (deleteUser, deleteProduct, etc.) +- Verify cleanup works by checking database/storage after test run + +### Issue: Tests too slow + +**Problem:** Tests take longer than 90 seconds (max_test_duration). + +**Resolution:** + +- Remove unnecessary waits and delays +- Use parallel execution where possible +- Mock external services (don't make real API calls) +- Use API tests instead of E2E for business logic +- Optimize test data creation (use in-memory database, etc.) + +--- + +## Notes for TEA Agent + +- **automate is flexible:** Can work with or without BMad artifacts (story, tech-spec, PRD are OPTIONAL) +- **Standalone mode is powerful:** Analyze any codebase and generate tests independently +- **Auto-discover mode:** Scan codebase for features needing tests when no targets specified +- **Framework is the ONLY hard requirement:** HALT if framework config missing, otherwise proceed +- **Avoid duplicate coverage:** E2E for critical paths only, API/Component for variations +- **Priority tagging enables selective execution:** P0 tests run on every commit, P1 on PR, P2 nightly +- **Network-first pattern prevents race conditions:** Route interception BEFORE navigation +- **No page objects:** Keep tests simple, direct, and maintainable +- **Use knowledge base:** Load relevant fragments (test-levels, test-priorities, fixture-architecture, data-factories, healing patterns) for guidance +- **Deterministic tests only:** No hard waits, no conditional flow, no flaky patterns allowed +- **Optional healing:** auto_heal_failures disabled by default (opt-in for automatic test healing) +- **Graceful degradation:** Healing works without Playwright MCP (pattern-based fallback) +- **Unfixable tests handled:** Mark with test.fixme() and detailed comments (not silently broken) diff --git a/src/bmm/workflows/testarch/automate/instructions.md b/src/bmm/workflows/testarch/automate/instructions.md new file mode 100644 index 00000000..7ba8da51 --- /dev/null +++ b/src/bmm/workflows/testarch/automate/instructions.md @@ -0,0 +1,1324 @@ + + +# Test Automation Expansion + +**Workflow ID**: `_bmad/bmm/testarch/automate` +**Version**: 4.0 (BMad v6) + +--- + +## Overview + +Expands test automation coverage by generating comprehensive test suites at appropriate levels (E2E, API, Component, Unit) with supporting infrastructure. This workflow operates in **dual mode**: + +1. **BMad-Integrated Mode**: Works WITH BMad artifacts (story, tech-spec, PRD, test-design) to expand coverage after story implementation +2. 
**Standalone Mode**: Works WITHOUT BMad artifacts - analyzes existing codebase and generates tests independently + +**Core Principle**: Generate prioritized, deterministic tests that avoid duplicate coverage and follow testing best practices. + +--- + +## Preflight Requirements + +**Flexible:** This workflow can run with minimal prerequisites. Only HALT if framework is completely missing. + +### Required (Always) + +- ✅ Framework scaffolding configured (run `framework` workflow if missing) +- ✅ Test framework configuration available (playwright.config.ts or cypress.config.ts) + +### Optional (BMad-Integrated Mode) + +- Story markdown with acceptance criteria (enhances coverage targeting) +- Tech spec or PRD (provides architectural context) +- Test design document (provides risk/priority context) + +### Optional (Standalone Mode) + +- Source code to analyze (feature implementation) +- Existing tests (for gap analysis) + +**If framework is missing:** HALT with message: "Framework scaffolding required. Run `bmad tea *framework` first." + +--- + +## Step 1: Determine Execution Mode and Load Context + +### Actions + +1. **Detect Execution Mode** + + Check if BMad artifacts are available: + - If `{story_file}` variable is set → BMad-Integrated Mode + - If `{target_feature}` or `{target_files}` set → Standalone Mode + - If neither set → Auto-discover mode (scan codebase for features needing tests) + +2. **Load BMad Artifacts (If Available)** + + **BMad-Integrated Mode:** + - Read story markdown from `{story_file}` + - Extract acceptance criteria and technical requirements + - Load tech-spec.md if `{use_tech_spec}` is true + - Load test-design.md if `{use_test_design}` is true + - Load PRD.md if `{use_prd}` is true + - Note: These are **optional enhancements**, not hard requirements + + **Standalone Mode:** + - Skip BMad artifact loading + - Proceed directly to source code analysis + +3. **Load Framework Configuration** + - Read test framework config (playwright.config.ts or cypress.config.ts) + - Identify test directory structure from `{test_dir}` + - Check existing test patterns in `{test_dir}` + - Note test runner capabilities (parallel execution, fixtures, etc.) + +4. **Analyze Existing Test Coverage** + + If `{analyze_coverage}` is true: + - Search `{test_dir}` for existing test files + - Identify tested features vs untested features + - Map tests to source files (coverage gaps) + - Check existing fixture and factory patterns + +5. **Check Playwright Utils Flag** + + Read `{config_source}` and check `config.tea_use_playwright_utils`. + +6. 
**Load Knowledge Base Fragments** + + **Critical:** Consult `{project-root}/_bmad/bmm/testarch/tea-index.csv` to load: + + **Core Testing Patterns (Always load):** + - `test-levels-framework.md` - Test level selection (E2E vs API vs Component vs Unit with decision matrix, 467 lines, 4 examples) + - `test-priorities-matrix.md` - Priority classification (P0-P3 with automated scoring, risk mapping, 389 lines, 2 examples) + - `data-factories.md` - Factory patterns with faker (overrides, nested factories, API seeding, 498 lines, 5 examples) + - `selective-testing.md` - Targeted test execution strategies (tag-based, spec filters, diff-based, promotion rules, 727 lines, 4 examples) + - `ci-burn-in.md` - Flaky test detection patterns (10-iteration burn-in, sharding, selective execution, 678 lines, 4 examples) + - `test-quality.md` - Test design principles (deterministic, isolated, explicit assertions, length/time limits, 658 lines, 5 examples) + + **If `config.tea_use_playwright_utils: true` (Playwright Utils Integration - All Utilities):** + - `overview.md` - Playwright utils installation, design principles, fixture patterns + - `api-request.md` - Typed HTTP client with schema validation + - `network-recorder.md` - HAR record/playback for offline testing + - `auth-session.md` - Token persistence and multi-user support + - `intercept-network-call.md` - Network spy/stub with automatic JSON parsing + - `recurse.md` - Cypress-style polling for async conditions + - `log.md` - Playwright report-integrated logging + - `file-utils.md` - CSV/XLSX/PDF/ZIP reading and validation + - `burn-in.md` - Smart test selection (relevant for CI test generation) + - `network-error-monitor.md` - Automatic HTTP error detection + - `fixtures-composition.md` - mergeTests composition patterns + + **If `config.tea_use_playwright_utils: false` (Traditional Patterns):** + - `fixture-architecture.md` - Test fixture patterns (pure function → fixture → mergeTests, auto-cleanup, 406 lines, 5 examples) + - `network-first.md` - Route interception patterns (intercept before navigate, HAR capture, deterministic waiting, 489 lines, 5 examples) + + **Healing Knowledge (If `{auto_heal_failures}` is true):** + - `test-healing-patterns.md` - Common failure patterns and automated fixes (stale selectors, race conditions, dynamic data, network errors, hard waits, 648 lines, 5 examples) + - `selector-resilience.md` - Selector debugging and refactoring guide (data-testid > ARIA > text > CSS hierarchy, anti-patterns, 541 lines, 4 examples) + - `timing-debugging.md` - Race condition identification and fixes (network-first, deterministic waiting, async debugging, 370 lines, 3 examples) + +--- + +## Step 2: Identify Automation Targets + +### Actions + +1. **Determine What Needs Testing** + + **BMad-Integrated Mode (story available):** + - Map acceptance criteria from story to test scenarios + - Identify features implemented in this story + - Check if story has existing ATDD tests (from `*atdd` workflow) + - Expand beyond ATDD with edge cases and negative paths + + **Standalone Mode (no story):** + - If `{target_feature}` specified: Analyze that specific feature + - If `{target_files}` specified: Analyze those specific files + - If `{auto_discover_features}` is true: Scan `{source_dir}` for features + - Prioritize features with: + - No test coverage (highest priority) + - Complex business logic + - External integrations (API calls, database, auth) + - Critical user paths (login, checkout, etc.) + +2. 
**Apply Test Level Selection Framework** + + **Knowledge Base Reference**: `test-levels-framework.md` + + For each feature or acceptance criterion, determine appropriate test level: + + **E2E (End-to-End)**: + - Critical user journeys (login, checkout, core workflows) + - Multi-system integration + - Full user-facing scenarios + - Characteristics: High confidence, slow, brittle + + **API (Integration)**: + - Business logic validation + - Service contracts and data transformations + - Backend integration without UI + - Characteristics: Fast feedback, stable, good balance + + **Component**: + - UI component behavior (buttons, forms, modals) + - Interaction testing (click, hover, keyboard) + - State management within component + - Characteristics: Fast, isolated, granular + + **Unit**: + - Pure business logic and algorithms + - Edge cases and error handling + - Minimal dependencies + - Characteristics: Fastest, most granular + +3. **Avoid Duplicate Coverage** + + **Critical principle:** Don't test same behavior at multiple levels unless necessary + - Use E2E for critical happy path only + - Use API tests for business logic variations + - Use component tests for UI interaction edge cases + - Use unit tests for pure logic edge cases + + **Example:** + - E2E: User can log in with valid credentials → Dashboard loads + - API: POST /auth/login returns 401 for invalid credentials + - API: POST /auth/login returns 200 and JWT token for valid credentials + - Component: LoginForm disables submit button when fields are empty + - Unit: validateEmail() returns false for malformed email addresses + +4. **Assign Test Priorities** + + **Knowledge Base Reference**: `test-priorities-matrix.md` + + **P0 (Critical - Every commit)**: + - Critical user paths that must always work + - Security-critical functionality (auth, permissions) + - Data integrity scenarios + - Run in pre-commit hooks or PR checks + + **P1 (High - PR to main)**: + - Important features with high user impact + - Integration points between systems + - Error handling for common failures + - Run before merging to main branch + + **P2 (Medium - Nightly)**: + - Edge cases with moderate impact + - Less-critical feature variations + - Performance/load testing + - Run in nightly CI builds + + **P3 (Low - On-demand)**: + - Nice-to-have validations + - Rarely-used features + - Exploratory testing scenarios + - Run manually or weekly + + **Priority Variables:** + - `{include_p0}` - Always include (default: true) + - `{include_p1}` - High priority (default: true) + - `{include_p2}` - Medium priority (default: true) + - `{include_p3}` - Low priority (default: false) + +5. **Create Test Coverage Plan** + + Document what will be tested at each level with priorities: + + ```markdown + ## Test Coverage Plan + + ### E2E Tests (P0) + + - User login with valid credentials → Dashboard loads + - User logout → Redirects to login page + + ### API Tests (P1) + + - POST /auth/login - valid credentials → 200 + JWT token + - POST /auth/login - invalid credentials → 401 + error message + - POST /auth/login - missing fields → 400 + validation errors + + ### Component Tests (P1) + + - LoginForm - empty fields → submit button disabled + - LoginForm - valid input → submit button enabled + + ### Unit Tests (P2) + + - validateEmail() - valid email → returns true + - validateEmail() - malformed email → returns false + ``` + +--- + +## Step 3: Generate Test Infrastructure + +### Actions + +1. 
**Enhance Fixture Architecture** + + **Knowledge Base Reference**: `fixture-architecture.md` + + Check existing fixtures in `tests/support/fixtures/`: + - If missing or incomplete, create fixture architecture + - Use Playwright's `test.extend()` pattern + - Ensure all fixtures have auto-cleanup in teardown + + **Common fixtures to create/enhance:** + - **authenticatedUser**: User with valid session (auto-deletes user after test) + - **apiRequest**: Authenticated API client with base URL and headers + - **mockNetwork**: Network mocking for external services + - **testDatabase**: Database with test data (auto-cleanup after test) + + **Example fixture:** + + ```typescript + // tests/support/fixtures/auth.fixture.ts + import { test as base } from '@playwright/test'; + import { createUser, deleteUser } from '../factories/user.factory'; + + export const test = base.extend<{ authenticatedUser: ReturnType<typeof createUser> }>({ + authenticatedUser: async ({ page }, use) => { + // Setup: Create and authenticate user + const user = await createUser(); + await page.goto('/login'); + await page.fill('[data-testid="email"]', user.email); + await page.fill('[data-testid="password"]', user.password); + await page.click('[data-testid="login-button"]'); + await page.waitForURL('/dashboard'); + + // Provide to test + await use(user); + + // Cleanup: Delete user automatically + await deleteUser(user.id); + }, + }); + ``` + +2. **Enhance Data Factories** + + **Knowledge Base Reference**: `data-factories.md` + + Check existing factories in `tests/support/factories/`: + - If missing or incomplete, create factory architecture + - Use `@faker-js/faker` for all random data (no hardcoded values) + - Support overrides for specific test scenarios + + **Common factories to create/enhance:** + - User factory (email, password, name, role) + - Product factory (name, price, description, SKU) + - Order factory (items, total, status, customer) + + **Example factory:** + + ```typescript + // tests/support/factories/user.factory.ts + import { faker } from '@faker-js/faker'; + + export const createUser = (overrides = {}) => ({ + id: faker.number.int(), + email: faker.internet.email(), + password: faker.internet.password(), + name: faker.person.fullName(), + role: 'user', + createdAt: faker.date.recent().toISOString(), + ...overrides, + }); + + export const createUsers = (count: number) => Array.from({ length: count }, () => createUser()); + + // API helper for cleanup + export const deleteUser = async (userId: number) => { + await fetch(`/api/users/${userId}`, { method: 'DELETE' }); + }; + ``` + +3. **Create/Enhance Helper Utilities** + + If `{update_helpers}` is true: + + Check `tests/support/helpers/` for common utilities: + - **waitFor**: Polling helper for complex conditions + - **retry**: Retry helper for flaky operations + - **testData**: Test data generation helpers + - **assertions**: Custom assertion helpers + + **Example helper:** + + ```typescript + // tests/support/helpers/wait-for.ts + export const waitFor = async (condition: () => Promise<boolean>, timeout = 5000, interval = 100): Promise<void> => { + const startTime = Date.now(); + while (Date.now() - startTime < timeout) { + if (await condition()) return; + await new Promise((resolve) => setTimeout(resolve, interval)); + } + throw new Error(`Condition not met within ${timeout}ms`); + }; + ``` + +--- + +## Step 4: Generate Test Files + +### Actions + +1. 
**Create Test File Structure** + + ``` + tests/ + ├── e2e/ + │ └── {feature-name}.spec.ts # E2E tests (P0-P1) + ├── api/ + │ └── {feature-name}.api.spec.ts # API tests (P1-P2) + ├── component/ + │ └── {ComponentName}.test.tsx # Component tests (P1-P2) + ├── unit/ + │ └── {module-name}.test.ts # Unit tests (P2-P3) + └── support/ + ├── fixtures/ # Test fixtures + ├── factories/ # Data factories + └── helpers/ # Utility functions + ``` + +2. **Write E2E Tests (If Applicable)** + + **Follow Given-When-Then format:** + + ```typescript + import { test, expect } from '@playwright/test'; + + test.describe('User Authentication', () => { + test('[P0] should login with valid credentials and load dashboard', async ({ page }) => { + // GIVEN: User is on login page + await page.goto('/login'); + + // WHEN: User submits valid credentials + await page.fill('[data-testid="email-input"]', 'user@example.com'); + await page.fill('[data-testid="password-input"]', 'Password123!'); + await page.click('[data-testid="login-button"]'); + + // THEN: User is redirected to dashboard + await expect(page).toHaveURL('/dashboard'); + await expect(page.locator('[data-testid="user-name"]')).toBeVisible(); + }); + + test('[P1] should display error for invalid credentials', async ({ page }) => { + // GIVEN: User is on login page + await page.goto('/login'); + + // WHEN: User submits invalid credentials + await page.fill('[data-testid="email-input"]', 'invalid@example.com'); + await page.fill('[data-testid="password-input"]', 'wrongpassword'); + await page.click('[data-testid="login-button"]'); + + // THEN: Error message is displayed + await expect(page.locator('[data-testid="error-message"]')).toHaveText('Invalid email or password'); + }); + }); + ``` + + **Critical patterns:** + - Tag tests with priority: `[P0]`, `[P1]`, `[P2]`, `[P3]` in test name + - One assertion per test (atomic tests) + - Explicit waits (no hard waits/sleeps) + - Network-first approach (route interception before navigation) + - data-testid selectors for stability + - Clear Given-When-Then structure + +3. **Write API Tests (If Applicable)** + + ```typescript + import { test, expect } from '@playwright/test'; + + test.describe('User Authentication API', () => { + test('[P1] POST /api/auth/login - should return token for valid credentials', async ({ request }) => { + // GIVEN: Valid user credentials + const credentials = { + email: 'user@example.com', + password: 'Password123!', + }; + + // WHEN: Logging in via API + const response = await request.post('/api/auth/login', { + data: credentials, + }); + + // THEN: Returns 200 and JWT token + expect(response.status()).toBe(200); + const body = await response.json(); + expect(body).toHaveProperty('token'); + expect(body.token).toMatch(/^[A-Za-z0-9-_]+\.[A-Za-z0-9-_]+\.[A-Za-z0-9-_]+$/); // JWT format + }); + + test('[P1] POST /api/auth/login - should return 401 for invalid credentials', async ({ request }) => { + // GIVEN: Invalid credentials + const credentials = { + email: 'invalid@example.com', + password: 'wrongpassword', + }; + + // WHEN: Attempting login + const response = await request.post('/api/auth/login', { + data: credentials, + }); + + // THEN: Returns 401 with error + expect(response.status()).toBe(401); + const body = await response.json(); + expect(body).toMatchObject({ + error: 'Invalid credentials', + }); + }); + }); + ``` + +4. 
**Write Component Tests (If Applicable)** + + **Knowledge Base Reference**: `component-tdd.md` + + ```typescript + import { test, expect } from '@playwright/experimental-ct-react'; + import { LoginForm } from './LoginForm'; + + test.describe('LoginForm Component', () => { + test('[P1] should disable submit button when fields are empty', async ({ mount }) => { + // GIVEN: LoginForm is mounted + const component = await mount(<LoginForm />); + + // WHEN: Form is initially rendered + const submitButton = component.locator('button[type="submit"]'); + + // THEN: Submit button is disabled + await expect(submitButton).toBeDisabled(); + }); + + test('[P1] should enable submit button when fields are filled', async ({ mount }) => { + // GIVEN: LoginForm is mounted + const component = await mount(<LoginForm />); + + // WHEN: User fills in email and password + await component.locator('[data-testid="email-input"]').fill('user@example.com'); + await component.locator('[data-testid="password-input"]').fill('Password123!'); + + // THEN: Submit button is enabled + const submitButton = component.locator('button[type="submit"]'); + await expect(submitButton).toBeEnabled(); + }); + }); + ``` + +5. **Write Unit Tests (If Applicable)** + + ```typescript + import { validateEmail } from './validation'; + + describe('Email Validation', () => { + test('[P2] should return true for valid email', () => { + // GIVEN: Valid email address + const email = 'user@example.com'; + + // WHEN: Validating email + const result = validateEmail(email); + + // THEN: Returns true + expect(result).toBe(true); + }); + + test('[P2] should return false for malformed email', () => { + // GIVEN: Malformed email addresses + const invalidEmails = ['notanemail', '@example.com', 'user@', 'user @example.com']; + + // WHEN/THEN: Each should fail validation + invalidEmails.forEach((email) => { + expect(validateEmail(email)).toBe(false); + }); + }); + }); + ``` + +6. **Apply Network-First Pattern (E2E tests)** + + **Knowledge Base Reference**: `network-first.md` + + **Critical pattern to prevent race conditions:** + + ```typescript + test('should load user dashboard after login', async ({ page }) => { + // CRITICAL: Intercept routes BEFORE navigation + await page.route('**/api/user', (route) => + route.fulfill({ + status: 200, + body: JSON.stringify({ id: 1, name: 'Test User' }), + }), + ); + + // NOW navigate + await page.goto('/dashboard'); + + await expect(page.locator('[data-testid="user-name"]')).toHaveText('Test User'); + }); + ``` + +7. **Enforce Quality Standards** + + **For every test:** + - ✅ Uses Given-When-Then format + - ✅ Has clear, descriptive name with priority tag + - ✅ One assertion per test (atomic) + - ✅ No hard waits or sleeps (use explicit waits) + - ✅ Self-cleaning (uses fixtures with auto-cleanup) + - ✅ Deterministic (no flaky patterns) + - ✅ Fast (under {max_test_duration} seconds) + - ✅ Lean (test file under {max_file_lines} lines) + + **Forbidden patterns:** + - ❌ Hard waits: `await page.waitForTimeout(2000)` + - ❌ Conditional flow: `if (await element.isVisible()) { ... }` + - ❌ Try-catch for test logic (use for cleanup only) + - ❌ Hardcoded test data (use factories) + - ❌ Page objects (keep tests simple and direct) + - ❌ Shared state between tests + +--- + +## Step 5: Execute, Validate & Heal Generated Tests (NEW - Phase 2.5) + +**Purpose**: Automatically validate generated tests and heal common failures before delivery + +### Actions + +1. 
**Validate Generated Tests** + + Validation always runs (`auto_validate` is fixed to true): + - Run the generated tests to verify they work + - Continue into the healing loop only if `config.tea_use_mcp_enhancements` is true + +2. **Run Generated Tests** + + Execute the full test suite that was just generated: + + ```bash + npx playwright test {generated_test_files} + ``` + + Capture results: + - Total tests run + - Passing tests count + - Failing tests count + - Error messages and stack traces for failures + +3. **Evaluate Results** + + **If ALL tests pass:** + - ✅ Generate report with success summary + - Proceed to Step 6 (Documentation and Scripts) + + **If tests FAIL:** + - Check config.tea_use_mcp_enhancements setting + - If true: Enter healing loop (Step 5.4) + - If false: Document failures for manual review, proceed to Step 6 + +4. **Healing Loop (If config.tea_use_mcp_enhancements is true)** + + **Iteration limit**: 3 attempts per test (constant) + + **For each failing test:** + + **A. Load Healing Knowledge Fragments** + + Consult `tea-index.csv` to load healing patterns: + - `test-healing-patterns.md` - Common failure patterns and fixes + - `selector-resilience.md` - Selector debugging and refactoring + - `timing-debugging.md` - Race condition identification and fixes + + **B. Identify Failure Pattern** + + Analyze error message and stack trace to classify failure type: + + **Stale Selector Failure:** + - Error contains: "locator resolved to 0 elements", "element not found", "unable to find element" + - Extract selector from error message + - Apply selector healing (knowledge from `selector-resilience.md`): + - If CSS class → Replace with `page.getByTestId()` + - If nth() → Replace with `filter({ hasText })` + - If ID → Replace with data-testid + - If complex XPath → Replace with ARIA role + + **Race Condition Failure:** + - Error contains: "timeout waiting for", "element not visible", "timed out retrying" + - Detect missing network waits or hard waits in test code + - Apply timing healing (knowledge from `timing-debugging.md`): + - Add network-first interception before navigate + - Replace `waitForTimeout()` with `waitForResponse()` + - Add explicit element state waits (`waitFor({ state: 'visible' })`) + + **Dynamic Data Failure:** + - Error contains: "Expected 'User 123' but received 'User 456'", timestamp mismatches + - Identify hardcoded assertions + - Apply data healing (knowledge from `test-healing-patterns.md`): + - Replace hardcoded IDs with regex (`/User \d+/`) + - Replace hardcoded dates with dynamic generation + - Capture dynamic values and use in assertions + + **Network Error Failure:** + - Error contains: "API call failed", "500 error", "network error" + - Detect missing route interception + - Apply network healing (knowledge from `test-healing-patterns.md`): + - Add `page.route()` or `cy.intercept()` for API mocking + - Mock error scenarios (500, 429, timeout) + + **Hard Wait Detection:** + - Scan test code for `page.waitForTimeout()`, `cy.wait(number)`, `sleep()` + - Apply hard wait healing (knowledge from `timing-debugging.md`): + - Replace with event-based waits + - Add network response waits + - Use element state changes
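 + + For instance, a common timing heal replaces a hard wait with a response wait. A minimal sketch, assuming an illustrative `/dashboard` page backed by a `**/api/data` call: + + ```typescript + import { test, expect } from '@playwright/test'; + + test('[P1] dashboard shows user name', async ({ page }) => { + // BEFORE (flaky): a hard wait raced the API response. + // await page.waitForTimeout(2000); + + // AFTER (healed): subscribe to the response before navigating, then await it. + const userData = page.waitForResponse('**/api/data'); + await page.goto('/dashboard'); + await userData; + await expect(page.locator('[data-testid="user-name"]')).toBeVisible(); + }); + ``` + + **C. 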
MCP Healing Mode (If MCP Tools Available)** + + If Playwright MCP tools are available in your IDE: + + Use MCP tools for interactive healing: + - `playwright_test_debug_test`: Pause on failure for visual inspection + - `browser_snapshot`: Capture visual context at failure point + - `browser_console_messages`: Retrieve console logs for JS errors + - `browser_network_requests`: Analyze network activity + - `browser_generate_locator`: Generate better selectors interactively + + Apply MCP-generated fixes to test code. + + **D. Pattern-Based Healing Mode (Fallback)** + + If MCP unavailable, use pattern-based analysis: + - Parse error message and stack trace + - Match against failure patterns from knowledge base + - Apply fixes programmatically: + - Selector fixes: Use suggestions from `selector-resilience.md` + - Timing fixes: Apply patterns from `timing-debugging.md` + - Data fixes: Use patterns from `test-healing-patterns.md` + + **E. Apply Healing Fix** + - Modify test file with healed code + - Re-run test to validate fix + - If test passes: Mark as healed, move to next failure + - If test fails: Increment iteration count, try different pattern + + **F. Iteration Limit Handling** + + After 3 failed healing attempts: + + Always mark unfixable tests: + - Mark test with `test.fixme()` instead of `test()` + - Add detailed comment explaining: + - What failure occurred + - What healing was attempted (3 iterations) + - Why healing failed + - Manual investigation needed + + ```typescript + test.fixme('[P1] should handle complex interaction', async ({ page }) => { + // FIXME: Test healing failed after 3 attempts + // Failure: "Locator 'button[data-action="submit"]' resolved to 0 elements" + // Attempted fixes: + // 1. Replaced with page.getByTestId('submit-button') - still failing + // 2. Replaced with page.getByRole('button', { name: 'Submit' }) - still failing + // 3. Added waitForLoadState('networkidle') - still failing + // Manual investigation needed: Selector may require application code changes + // TODO: Review with team, may need data-testid added to button component + // Original test code... + }); + ``` + + **Note**: Workflow continues even with unfixable tests (marked as test.fixme() for manual review) + +5. **Generate Healing Report** + + Document healing outcomes: + + ```markdown + ## Test Healing Report + + **Auto-Heal Enabled**: {auto_heal_failures} + **Healing Mode**: {use_mcp_healing ? 
"MCP-assisted" : "Pattern-based"} + **Iterations Allowed**: {max_healing_iterations} + + ### Validation Results + + - **Total tests**: {total_tests} + - **Passing**: {passing_tests} + - **Failing**: {failing_tests} + + ### Healing Outcomes + + **Successfully Healed ({healed_count} tests):** + + - `tests/e2e/login.spec.ts:15` - Stale selector (CSS class → data-testid) + - `tests/e2e/checkout.spec.ts:42` - Race condition (added network-first interception) + - `tests/api/users.spec.ts:28` - Dynamic data (hardcoded ID → regex pattern) + + **Unable to Heal ({unfixable_count} tests):** + + - `tests/e2e/complex-flow.spec.ts:67` - Marked as test.fixme() with manual investigation needed + - Failure: Locator not found after 3 healing attempts + - Requires application code changes (add data-testid to component) + + ### Healing Patterns Applied + + - **Selector fixes**: 2 (CSS class → data-testid, nth() → filter()) + - **Timing fixes**: 1 (added network-first interception) + - **Data fixes**: 1 (hardcoded ID → regex) + + ### Knowledge Base References + + - `test-healing-patterns.md` - Common failure patterns + - `selector-resilience.md` - Selector refactoring guide + - `timing-debugging.md` - Race condition prevention + ``` + +6. **Update Test Files with Healing Results** + - Save healed test code to files + - Mark unfixable tests with `test.fixme()` and detailed comments + - Preserve original test logic in comments (for debugging) + +--- + +## Step 6: Update Documentation and Scripts + +### Actions + +1. **Update Test README** + + If `{update_readme}` is true: + + Create or update `tests/README.md` with: + - Overview of test suite structure + - How to run tests (all, specific files, by priority) + - Fixture and factory usage examples + - Priority tagging convention ([P0], [P1], [P2], [P3]) + - How to write new tests + - Common patterns and anti-patterns + + **Example section:** + + ````markdown + ## Running Tests + + ```bash + # Run all tests + npm run test:e2e + + # Run by priority + npm run test:e2e -- --grep "@P0" + npm run test:e2e -- --grep "@P1" + + # Run specific file + npm run test:e2e -- user-authentication.spec.ts + + # Run in headed mode + npm run test:e2e -- --headed + + # Debug specific test + npm run test:e2e -- user-authentication.spec.ts --debug + ``` + ```` + + ## Priority Tags + - **[P0]**: Critical paths, run every commit + - **[P1]**: High priority, run on PR to main + - **[P2]**: Medium priority, run nightly + - **[P3]**: Low priority, run on-demand + + ``` + + ``` + +2. **Update package.json Scripts** + + If `{update_package_scripts}` is true: + + Add or update test execution scripts: + + ```json + { + "scripts": { + "test:e2e": "playwright test", + "test:e2e:p0": "playwright test --grep '@P0'", + "test:e2e:p1": "playwright test --grep '@P1|@P0'", + "test:api": "playwright test tests/api", + "test:component": "playwright test tests/component", + "test:unit": "vitest" + } + } + ``` + +3. **Run Test Suite** + + If `{run_tests_after_generation}` is true: + - Run full test suite locally + - Capture results (passing/failing counts) + - Verify no flaky patterns (tests should be deterministic) + - Document any setup requirements or known issues + +--- + +## Step 6: Generate Automation Summary + +### Actions + +1. 
**Create Automation Summary Document** + + Save to `{output_summary}` with: + + **BMad-Integrated Mode:** + + ````markdown + # Automation Summary - {feature_name} + + **Date:** {date} + **Story:** {story_id} + **Coverage Target:** {coverage_target} + + ## Tests Created + + ### E2E Tests (P0-P1) + + - `tests/e2e/user-authentication.spec.ts` (2 tests, 87 lines) + - [P0] Login with valid credentials → Dashboard loads + - [P1] Display error for invalid credentials + + ### API Tests (P1-P2) + + - `tests/api/auth.api.spec.ts` (3 tests, 102 lines) + - [P1] POST /auth/login - valid credentials → 200 + token + - [P1] POST /auth/login - invalid credentials → 401 + error + - [P2] POST /auth/login - missing fields → 400 + validation + + ### Component Tests (P1) + + - `tests/component/LoginForm.test.tsx` (2 tests, 45 lines) + - [P1] Empty fields → submit button disabled + - [P1] Valid input → submit button enabled + + ## Infrastructure Created + + ### Fixtures + + - `tests/support/fixtures/auth.fixture.ts` - authenticatedUser with auto-cleanup + + ### Factories + + - `tests/support/factories/user.factory.ts` - createUser(), deleteUser() + + ### Helpers + + - `tests/support/helpers/wait-for.ts` - Polling helper for complex conditions + + ## Test Execution + + ```bash + # Run all new tests + npm run test:e2e + + # Run by priority + npm run test:e2e:p0 # Critical paths only + npm run test:e2e:p1 # P0 + P1 tests + ``` + ```` + + ## Coverage Analysis + + **Total Tests:** 7 + - P0: 1 test (critical path) + - P1: 5 tests (high priority) + - P2: 1 test (medium priority) + + **Test Levels:** + - E2E: 2 tests (user journeys) + - API: 3 tests (business logic) + - Component: 2 tests (UI behavior) + + **Coverage Status:** + - ✅ All acceptance criteria covered + - ✅ Happy path covered (E2E + API) + - ✅ Error cases covered (API) + - ✅ UI validation covered (Component) + - ⚠️ Edge case: Password reset flow not yet covered (future story) + + ## Definition of Done + - [x] All tests follow Given-When-Then format + - [x] All tests use data-testid selectors + - [x] All tests have priority tags + - [x] All tests are self-cleaning (fixtures with auto-cleanup) + - [x] No hard waits or flaky patterns + - [x] Test files under 300 lines + - [x] All tests run under 1.5 minutes each + - [x] README updated with test execution instructions + - [x] package.json scripts updated + + ## Next Steps + 1. Review generated tests with team + 2. Run tests in CI pipeline: `npm run test:e2e` + 3. Integrate with quality gate: `bmad tea *gate` + 4. Monitor for flaky tests in burn-in loop + + ```` + + **Standalone Mode:** + ```markdown + # Automation Summary - {target_feature} + + **Date:** {date} + **Target:** {target_feature} (standalone analysis) + **Coverage Target:** {coverage_target} + + ## Feature Analysis + + **Source Files Analyzed:** + - `src/auth/login.ts` - Login logic and validation + - `src/auth/session.ts` - Session management + - `src/auth/validation.ts` - Email/password validation + + **Existing Coverage:** + - E2E tests: 0 found + - API tests: 0 found + - Component tests: 0 found + - Unit tests: 0 found + + **Coverage Gaps Identified:** + - ❌ No E2E tests for login flow + - ❌ No API tests for /auth/login endpoint + - ❌ No component tests for LoginForm + - ❌ No unit tests for validateEmail() + + ## Tests Created + + {Same structure as BMad-Integrated Mode} + + ## Recommendations + + 1. 
**High Priority (P0-P1):** + - Add E2E test for password reset flow + - Add API tests for token refresh endpoint + - Add component tests for logout button + + 2. **Medium Priority (P2):** + - Add unit tests for session timeout logic + - Add E2E test for "remember me" functionality + + 3. **Future Enhancements:** + - Consider contract testing for auth API + - Add visual regression tests for login page + - Set up burn-in loop for flaky test detection + + ## Definition of Done + + {Same checklist as BMad-Integrated Mode} + ```` + +2. **Provide Summary to User** + + Output concise summary: + + ```markdown + ## Automation Complete + + **Coverage:** {total_tests} tests created across {test_levels} levels + **Priority Breakdown:** P0: {p0_count}, P1: {p1_count}, P2: {p2_count}, P3: {p3_count} + **Infrastructure:** {fixture_count} fixtures, {factory_count} factories + **Output:** {output_summary} + + **Run tests:** `npm run test:e2e` + **Next steps:** Review tests, run in CI, integrate with quality gate + ``` + +--- + +## Important Notes + +### Dual-Mode Operation + +**BMad-Integrated Mode** (story available): + +- Uses story acceptance criteria for coverage targeting +- Aligns with test-design risk/priority assessment +- Expands ATDD tests with edge cases and negative paths +- Updates BMad status tracking + +**Standalone Mode** (no story): + +- Analyzes source code independently +- Identifies coverage gaps automatically +- Generates tests based on code analysis +- Works with any project (BMad or non-BMad) + +**Auto-discover Mode** (no targets specified): + +- Scans codebase for features needing tests +- Prioritizes features with no coverage +- Generates comprehensive test plan + +### Avoid Duplicate Coverage + +**Critical principle:** Don't test same behavior at multiple levels + +**Good coverage:** + +- E2E: User can login → Dashboard loads (critical happy path) +- API: POST /auth/login returns correct status codes (variations) +- Component: LoginForm validates input (UI edge cases) + +**Bad coverage (duplicate):** + +- E2E: User can login → Dashboard loads +- E2E: User can login with different emails → Dashboard loads (unnecessary duplication) +- API: POST /auth/login returns 200 (already covered in E2E) + +Use E2E sparingly for critical paths. Use API/Component for variations and edge cases. + +### Priority Tagging + +**Tag every test with priority in test name:** + +```typescript +test('[P0] should login with valid credentials', async ({ page }) => { ... }); +test('[P1] should display error for invalid credentials', async ({ page }) => { ... }); +test('[P2] should remember login preference', async ({ page }) => { ... }); +``` + +**Enables selective test execution:** + +```bash +# Run only P0 tests (critical paths) +npm run test:e2e -- --grep "@P0" + +# Run P0 + P1 tests (pre-merge) +npm run test:e2e -- --grep "@P0|@P1" +``` + +### No Page Objects + +**Do NOT create page object classes.** Keep tests simple and direct: + +```typescript +// ✅ CORRECT: Direct test +test('should login', async ({ page }) => { + await page.goto('/login'); + await page.fill('[data-testid="email"]', 'user@example.com'); + await page.click('[data-testid="login-button"]'); + await expect(page).toHaveURL('/dashboard'); +}); + +// ❌ WRONG: Page object abstraction +class LoginPage { + async login(email, password) { ... } +} +``` + +Use fixtures for setup/teardown, not page objects for actions. 
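+
+For example, the login setup that tempts teams toward a page object can live in a fixture instead, keeping the test body direct (a minimal sketch; the `/login` route and the `data-testid` values are assumptions for illustration):
+
+```typescript
+import { test as base, expect, Page } from '@playwright/test';
+
+// Fixture owns setup/teardown; tests stay direct and readable.
+export const test = base.extend<{ loggedInPage: Page }>({
+  loggedInPage: async ({ page }, use) => {
+    // Setup: authenticate before handing the page to the test
+    await page.goto('/login');
+    await page.fill('[data-testid="email"]', 'user@example.com');
+    await page.fill('[data-testid="password"]', 'Secret123!');
+    await page.click('[data-testid="login-button"]');
+    await expect(page).toHaveURL('/dashboard');
+
+    await use(page);
+    // Teardown (e.g., clearing session state) would go here
+  },
+});
+
+test('[P1] should show the user menu when logged in', async ({ loggedInPage }) => {
+  await expect(loggedInPage.locator('[data-testid="user-menu"]')).toBeVisible();
+});
+```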
+ +### Deterministic Tests Only + +**No flaky patterns allowed:** + +```typescript +// ❌ WRONG: Hard wait +await page.waitForTimeout(2000); + +// ✅ CORRECT: Explicit wait +await page.waitForSelector('[data-testid="user-name"]'); +await expect(page.locator('[data-testid="user-name"]')).toBeVisible(); + +// ❌ WRONG: Conditional flow +if (await element.isVisible()) { + await element.click(); +} + +// ✅ CORRECT: Deterministic assertion +await expect(element).toBeVisible(); +await element.click(); + +// ❌ WRONG: Try-catch for test logic +try { + await element.click(); +} catch (e) { + // Test shouldn't catch errors +} + +// ✅ CORRECT: Let test fail if element not found +await element.click(); +``` + +### Self-Cleaning Tests + +**Every test must clean up its data:** + +```typescript +// ✅ CORRECT: Fixture with auto-cleanup +export const test = base.extend({ + testUser: async ({ page }, use) => { + const user = await createUser(); + await use(user); + await deleteUser(user.id); // Auto-cleanup + }, +}); + +// ❌ WRONG: Manual cleanup (can be forgotten) +test('should login', async ({ page }) => { + const user = await createUser(); + // ... test logic ... + // Forgot to delete user! +}); +``` + +### File Size Limits + +**Keep test files lean (under {max_file_lines} lines):** + +- If file exceeds limit, split into multiple files by feature area +- Group related tests in describe blocks +- Extract common setup to fixtures + +### Knowledge Base Integration + +**Core Fragments (Auto-loaded in Step 1):** + +- `test-levels-framework.md` - E2E vs API vs Component vs Unit decision framework with characteristics matrix (467 lines, 4 examples) +- `test-priorities-matrix.md` - P0-P3 classification with automated scoring and risk mapping (389 lines, 2 examples) +- `fixture-architecture.md` - Pure function → fixture → mergeTests composition with auto-cleanup (406 lines, 5 examples) +- `data-factories.md` - Factory patterns with faker: overrides, nested factories, API seeding (498 lines, 5 examples) +- `selective-testing.md` - Tag-based, spec filters, diff-based selection, promotion rules (727 lines, 4 examples) +- `ci-burn-in.md` - 10-iteration burn-in loop, parallel sharding, selective execution (678 lines, 4 examples) +- `test-quality.md` - Deterministic tests, isolated with cleanup, explicit assertions, length/time optimization (658 lines, 5 examples) +- `network-first.md` - Intercept before navigate, HAR capture, deterministic waiting strategies (489 lines, 5 examples) + +**Healing Fragments (Auto-loaded if `{auto_heal_failures}` enabled):** + +- `test-healing-patterns.md` - Common failure patterns: stale selectors, race conditions, dynamic data, network errors, hard waits (648 lines, 5 examples) +- `selector-resilience.md` - Selector hierarchy (data-testid > ARIA > text > CSS), dynamic patterns, anti-patterns refactoring (541 lines, 4 examples) +- `timing-debugging.md` - Race condition prevention, deterministic waiting, async debugging techniques (370 lines, 3 examples) + +**Manual Reference (Optional):** + +- Use `tea-index.csv` to find additional specialized fragments as needed + +--- + +## Output Summary + +After completing this workflow, provide a summary: + +````markdown +## Automation Complete + +**Mode:** {standalone_mode ? 
"Standalone" : "BMad-Integrated"} +**Target:** {story_id || target_feature || "Auto-discovered features"} + +**Tests Created:** + +- E2E: {e2e_count} tests ({p0_count} P0, {p1_count} P1, {p2_count} P2) +- API: {api_count} tests ({p0_count} P0, {p1_count} P1, {p2_count} P2) +- Component: {component_count} tests ({p1_count} P1, {p2_count} P2) +- Unit: {unit_count} tests ({p2_count} P2, {p3_count} P3) + +**Infrastructure:** + +- Fixtures: {fixture_count} created/enhanced +- Factories: {factory_count} created/enhanced +- Helpers: {helper_count} created/enhanced + +**Documentation Updated:** + +- ✅ Test README with execution instructions +- ✅ package.json scripts for test execution + +**Test Execution:** + +```bash +# Run all tests +npm run test:e2e + +# Run by priority +npm run test:e2e:p0 # Critical paths only +npm run test:e2e:p1 # P0 + P1 tests + +# Run specific file +npm run test:e2e -- {first_test_file} +``` +```` + +**Coverage Status:** + +- ✅ {coverage_percentage}% of features covered +- ✅ All P0 scenarios covered +- ✅ All P1 scenarios covered +- ⚠️ {gap_count} coverage gaps identified (documented in summary) + +**Quality Checks:** + +- ✅ All tests follow Given-When-Then format +- ✅ All tests have priority tags +- ✅ All tests use data-testid selectors +- ✅ All tests are self-cleaning +- ✅ No hard waits or flaky patterns +- ✅ All test files under {max_file_lines} lines + +**Output File:** {output_summary} + +**Next Steps:** + +1. Review generated tests with team +2. Run tests in CI pipeline +3. Monitor for flaky tests in burn-in loop +4. Integrate with quality gate: `bmad tea *gate` + +**Knowledge Base References Applied:** + +- Test level selection framework (E2E vs API vs Component vs Unit) +- Priority classification (P0-P3) +- Fixture architecture patterns with auto-cleanup +- Data factory patterns using faker +- Selective testing strategies +- Test quality principles + +``` + +--- + +## Validation + +After completing all steps, verify: + +- [ ] Execution mode determined (BMad-Integrated, Standalone, or Auto-discover) +- [ ] BMad artifacts loaded if available (story, tech-spec, test-design, PRD) +- [ ] Framework configuration loaded +- [ ] Existing test coverage analyzed (gaps identified) +- [ ] Knowledge base fragments loaded (test-levels, test-priorities, fixture-architecture, data-factories, selective-testing) +- [ ] Automation targets identified (what needs testing) +- [ ] Test levels selected appropriately (E2E, API, Component, Unit) +- [ ] Duplicate coverage avoided (same behavior not tested at multiple levels) +- [ ] Test priorities assigned (P0, P1, P2, P3) +- [ ] Fixture architecture created/enhanced (with auto-cleanup) +- [ ] Data factories created/enhanced (using faker) +- [ ] Helper utilities created/enhanced (if needed) +- [ ] E2E tests written (Given-When-Then, priority tags, data-testid selectors) +- [ ] API tests written (Given-When-Then, priority tags, comprehensive coverage) +- [ ] Component tests written (Given-When-Then, priority tags, UI behavior) +- [ ] Unit tests written (Given-When-Then, priority tags, pure logic) +- [ ] Network-first pattern applied (route interception before navigation) +- [ ] Quality standards enforced (no hard waits, no flaky patterns, self-cleaning, deterministic) +- [ ] Test README updated (execution instructions, priority tagging, patterns) +- [ ] package.json scripts updated (test execution commands) +- [ ] Test suite run locally (results captured) +- [ ] Tests validated (if auto_validate enabled) +- [ ] Failures healed (if 
auto_heal_failures enabled) +- [ ] Healing report generated (if healing attempted) +- [ ] Unfixable tests marked with test.fixme() (if any) +- [ ] Automation summary created (tests, infrastructure, coverage, healing, DoD) +- [ ] Output file formatted correctly + +Refer to `checklist.md` for comprehensive validation criteria. +``` diff --git a/src/bmm/workflows/testarch/automate/workflow.yaml b/src/bmm/workflows/testarch/automate/workflow.yaml new file mode 100644 index 00000000..e244c051 --- /dev/null +++ b/src/bmm/workflows/testarch/automate/workflow.yaml @@ -0,0 +1,54 @@ +# Test Architect workflow: automate +name: testarch-automate +description: "Expand test automation coverage after implementation or analyze existing codebase to generate comprehensive test suite" +author: "BMad" + +# Critical variables from config +config_source: "{project-root}/_bmad/bmm/config.yaml" +output_folder: "{config_source}:output_folder" +user_name: "{config_source}:user_name" +communication_language: "{config_source}:communication_language" +document_output_language: "{config_source}:document_output_language" +date: system-generated + +# Workflow components +installed_path: "{project-root}/_bmad/bmm/workflows/testarch/automate" +instructions: "{installed_path}/instructions.md" +validation: "{installed_path}/checklist.md" +template: false + +# Variables and inputs +variables: + # Execution mode and targeting + standalone_mode: true # Can work without BMad artifacts (true) or integrate with BMad (false) + coverage_target: "critical-paths" # critical-paths, comprehensive, selective + + # Directory paths + test_dir: "{project-root}/tests" # Root test directory + source_dir: "{project-root}/src" # Source code directory + +# Output configuration +default_output_file: "{output_folder}/automation-summary.md" + +# Required tools +required_tools: + - read_file # Read source code, existing tests, BMad artifacts + - write_file # Create test files, fixtures, factories, summaries + - create_directory # Create test directories + - list_files # Discover features and existing tests + - search_repo # Find coverage gaps and patterns + - glob # Find test files and source files + +tags: + - qa + - automation + - test-architect + - regression + - coverage + +execution_hints: + interactive: false # Minimize prompts + autonomous: true # Proceed without user input unless blocked + iterative: true + +web_bundle: false diff --git a/src/bmm/workflows/testarch/ci/checklist.md b/src/bmm/workflows/testarch/ci/checklist.md new file mode 100644 index 00000000..984e3308 --- /dev/null +++ b/src/bmm/workflows/testarch/ci/checklist.md @@ -0,0 +1,247 @@ +# CI/CD Pipeline Setup - Validation Checklist + +## Prerequisites + +- [ ] Git repository initialized (`.git/` exists) +- [ ] Git remote configured (`git remote -v` shows origin) +- [ ] Test framework configured (`playwright.config._` or `cypress.config._`) +- [ ] Local tests pass (`npm run test:e2e` succeeds) +- [ ] Team agrees on CI platform +- [ ] Access to CI platform settings (if updating) + +Note: CI setup is typically a one-time task per repo and can be run any time after the test framework is configured. 
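+
+A quick way to spot-check these prerequisites from a shell (assumes an npm-based project with a `test:e2e` script):
+
+```bash
+git rev-parse --is-inside-work-tree                  # git repository initialized
+git remote -v                                        # remote configured
+ls playwright.config.* cypress.config.* 2>/dev/null  # framework config present
+npm run test:e2e                                     # local tests pass
+```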
+ +## Process Steps + +### Step 1: Preflight Checks + +- [ ] Git repository validated +- [ ] Framework configuration detected +- [ ] Local test execution successful +- [ ] CI platform detected or selected +- [ ] Node version identified (.nvmrc or default) +- [ ] No blocking issues found + +### Step 2: CI Pipeline Configuration + +- [ ] CI configuration file created (`.github/workflows/test.yml` or `.gitlab-ci.yml`) +- [ ] File is syntactically valid (no YAML errors) +- [ ] Correct framework commands configured +- [ ] Node version matches project +- [ ] Test directory paths correct + +### Step 3: Parallel Sharding + +- [ ] Matrix strategy configured (4 shards default) +- [ ] Shard syntax correct for framework +- [ ] fail-fast set to false +- [ ] Shard count appropriate for test suite size + +### Step 4: Burn-In Loop + +- [ ] Burn-in job created +- [ ] 10 iterations configured +- [ ] Proper exit on failure (`|| exit 1`) +- [ ] Runs on appropriate triggers (PR, cron) +- [ ] Failure artifacts uploaded + +### Step 5: Caching Configuration + +- [ ] Dependency cache configured (npm/yarn) +- [ ] Cache key uses lockfile hash +- [ ] Browser cache configured (Playwright/Cypress) +- [ ] Restore-keys defined for fallback +- [ ] Cache paths correct for platform + +### Step 6: Artifact Collection + +- [ ] Artifacts upload on failure only +- [ ] Correct artifact paths (test-results/, traces/, etc.) +- [ ] Retention days set (30 default) +- [ ] Artifact names unique per shard +- [ ] No sensitive data in artifacts + +### Step 7: Retry Logic + +- [ ] Retry action/strategy configured +- [ ] Max attempts: 2-3 +- [ ] Timeout appropriate (30 min) +- [ ] Retry only on transient errors + +### Step 8: Helper Scripts + +- [ ] `scripts/test-changed.sh` created +- [ ] `scripts/ci-local.sh` created +- [ ] `scripts/burn-in.sh` created (optional) +- [ ] Scripts are executable (`chmod +x`) +- [ ] Scripts use correct test commands +- [ ] Shebang present (`#!/bin/bash`) + +### Step 9: Documentation + +- [ ] `docs/ci.md` created with pipeline guide +- [ ] `docs/ci-secrets-checklist.md` created +- [ ] Required secrets documented +- [ ] Setup instructions clear +- [ ] Troubleshooting section included +- [ ] Badge URLs provided (optional) + +## Output Validation + +### Configuration Validation + +- [ ] CI file loads without errors +- [ ] All paths resolve correctly +- [ ] No hardcoded values (use env vars) +- [ ] Triggers configured (push, pull_request, schedule) +- [ ] Platform-specific syntax correct + +### Execution Validation + +- [ ] First CI run triggered (push to remote) +- [ ] Pipeline starts without errors +- [ ] All jobs appear in CI dashboard +- [ ] Caching works (check logs for cache hit) +- [ ] Tests execute in parallel +- [ ] Artifacts collected on failure + +### Performance Validation + +- [ ] Lint stage: <2 minutes +- [ ] Test stage (per shard): <10 minutes +- [ ] Burn-in stage: <30 minutes +- [ ] Total pipeline: <45 minutes +- [ ] Cache reduces install time by 2-5 minutes + +## Quality Checks + +### Best Practices Compliance + +- [ ] Burn-in loop follows production patterns +- [ ] Parallel sharding configured optimally +- [ ] Failure-only artifact collection +- [ ] Selective testing enabled (optional) +- [ ] Retry logic handles transient failures only +- [ ] No secrets in configuration files + +### Knowledge Base Alignment + +- [ ] Burn-in pattern matches `ci-burn-in.md` +- [ ] Selective testing matches `selective-testing.md` +- [ ] Artifact collection matches `visual-debugging.md` +- [ ] Test quality matches 
`test-quality.md` + +### Security Checks + +- [ ] No credentials in CI configuration +- [ ] Secrets use platform secret management +- [ ] Environment variables for sensitive data +- [ ] Artifact retention appropriate (not too long) +- [ ] No debug output exposing secrets + +## Integration Points + +### Status File Integration + +- [ ] CI setup logged in Quality & Testing Progress section +- [ ] Status updated with completion timestamp +- [ ] Platform and configuration noted + +### Knowledge Base Integration + +- [ ] Relevant knowledge fragments loaded +- [ ] Patterns applied from knowledge base +- [ ] Documentation references knowledge base +- [ ] Knowledge base references in README + +### Workflow Dependencies + +- [ ] `framework` workflow completed first +- [ ] Can proceed to `atdd` workflow after CI setup +- [ ] Can proceed to `automate` workflow +- [ ] CI integrates with `gate` workflow + +## Completion Criteria + +**All must be true:** + +- [ ] All prerequisites met +- [ ] All process steps completed +- [ ] All output validations passed +- [ ] All quality checks passed +- [ ] All integration points verified +- [ ] First CI run successful +- [ ] Performance targets met +- [ ] Documentation complete + +## Post-Workflow Actions + +**User must complete:** + +1. [ ] Commit CI configuration +2. [ ] Push to remote repository +3. [ ] Configure required secrets in CI platform +4. [ ] Open PR to trigger first CI run +5. [ ] Monitor and verify pipeline execution +6. [ ] Adjust parallelism if needed (based on actual run times) +7. [ ] Set up notifications (optional) + +**Recommended next workflows:** + +1. [ ] Run `atdd` workflow for test generation +2. [ ] Run `automate` workflow for coverage expansion +3. [ ] Run `gate` workflow for quality gates + +## Rollback Procedure + +If workflow fails: + +1. [ ] Delete CI configuration file +2. [ ] Remove helper scripts directory +3. [ ] Remove documentation (docs/ci.md, etc.) +4. [ ] Clear CI platform secrets (if added) +5. [ ] Review error logs +6. [ ] Fix issues and retry workflow + +## Notes + +### Common Issues + +**Issue**: CI file syntax errors + +- **Solution**: Validate YAML syntax online or with linter + +**Issue**: Tests fail in CI but pass locally + +- **Solution**: Use `scripts/ci-local.sh` to mirror CI environment + +**Issue**: Caching not working + +- **Solution**: Check cache key formula, verify paths + +**Issue**: Burn-in too slow + +- **Solution**: Reduce iterations or run on cron only + +### Platform-Specific + +**GitHub Actions:** + +- Secrets: Repository Settings → Secrets and variables → Actions +- Runners: Ubuntu latest recommended +- Concurrency limits: 20 jobs for free tier + +**GitLab CI:** + +- Variables: Project Settings → CI/CD → Variables +- Runners: Shared or project-specific +- Pipeline quota: 400 minutes/month free tier + +--- + +**Checklist Complete**: Sign off when all items validated. 
+
+**Completed by:** {name}
+**Date:** {date}
+**Platform:** {GitHub Actions, GitLab CI, Other}
+**Notes:** {notes}
diff --git a/src/bmm/workflows/testarch/ci/github-actions-template.yaml b/src/bmm/workflows/testarch/ci/github-actions-template.yaml
new file mode 100644
index 00000000..9f09a73f
--- /dev/null
+++ b/src/bmm/workflows/testarch/ci/github-actions-template.yaml
@@ -0,0 +1,198 @@
+# GitHub Actions CI/CD Pipeline for Test Execution
+# Generated by BMad TEA Agent - Test Architect Module
+# Optimized for: Playwright/Cypress, Parallel Sharding, Burn-In Loop
+
+name: Test Pipeline
+
+on:
+  push:
+    branches: [main, develop]
+  pull_request:
+    branches: [main, develop]
+  schedule:
+    # Weekly burn-in on Sundays at 2 AM UTC
+    - cron: "0 2 * * 0"
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
+jobs:
+  # Lint stage - Code quality checks
+  lint:
+    name: Lint
+    runs-on: ubuntu-latest
+    timeout-minutes: 5
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Determine Node version
+        id: node-version
+        run: |
+          if [ -f .nvmrc ]; then
+            echo "value=$(cat .nvmrc)" >> "$GITHUB_OUTPUT"
+            echo "Using Node from .nvmrc"
+          else
+            echo "value=24" >> "$GITHUB_OUTPUT"
+            echo "Using default Node 24 (current LTS)"
+          fi
+
+      - name: Setup Node.js
+        uses: actions/setup-node@v4
+        with:
+          node-version: ${{ steps.node-version.outputs.value }}
+          cache: "npm"
+
+      - name: Install dependencies
+        run: npm ci
+
+      - name: Run linter
+        run: npm run lint
+
+  # Test stage - Parallel execution with sharding
+  test:
+    name: Test (Shard ${{ matrix.shard }})
+    runs-on: ubuntu-latest
+    timeout-minutes: 30
+    needs: lint
+
+    strategy:
+      fail-fast: false
+      matrix:
+        shard: [1, 2, 3, 4]
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Determine Node version
+        id: node-version
+        run: |
+          if [ -f .nvmrc ]; then
+            echo "value=$(cat .nvmrc)" >> "$GITHUB_OUTPUT"
+            echo "Using Node from .nvmrc"
+          else
+            echo "value=24" >> "$GITHUB_OUTPUT"
+            echo "Using default Node 24 (current LTS)"
+          fi
+
+      - name: Setup Node.js
+        uses: actions/setup-node@v4
+        with:
+          node-version: ${{ steps.node-version.outputs.value }}
+          cache: "npm"
+
+      - name: Cache Playwright browsers
+        uses: actions/cache@v4
+        with:
+          path: ~/.cache/ms-playwright
+          key: ${{ runner.os }}-playwright-${{ hashFiles('**/package-lock.json') }}
+          restore-keys: |
+            ${{ runner.os }}-playwright-
+
+      - name: Install dependencies
+        run: npm ci
+
+      - name: Install Playwright browsers
+        run: npx playwright install --with-deps chromium
+
+      - name: Run tests (shard ${{ matrix.shard }}/4)
+        run: npm run test:e2e -- --shard=${{ matrix.shard }}/4
+
+      - name: Upload test results
+        if: failure()
+        uses: actions/upload-artifact@v4
+        with:
+          name: test-results-${{ matrix.shard }}
+          path: |
+            test-results/
+            playwright-report/
+          retention-days: 30
+
+  # Burn-in stage - Flaky test detection
+  burn-in:
+    name: Burn-In (Flaky Detection)
+    runs-on: ubuntu-latest
+    timeout-minutes: 60
+    needs: test
+    # Only run burn-in on PRs to main/develop or on schedule
+    if: github.event_name == 'pull_request' || github.event_name == 'schedule'
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Determine Node version
+        id: node-version
+        run: |
+          if [ -f .nvmrc ]; then
+            echo "value=$(cat .nvmrc)" >> "$GITHUB_OUTPUT"
+            echo "Using Node from .nvmrc"
+          else
+            echo "value=24" >> "$GITHUB_OUTPUT"
+            echo "Using default Node 24 (current LTS)"
+          fi
+
+      - name: Setup Node.js
+        uses: actions/setup-node@v4
+        with:
+          node-version: ${{ steps.node-version.outputs.value }}
+          cache:
"npm" + + - name: Cache Playwright browsers + uses: actions/cache@v4 + with: + path: ~/.cache/ms-playwright + key: ${{ runner.os }}-playwright-${{ hashFiles('**/package-lock.json') }} + + - name: Install dependencies + run: npm ci + + - name: Install Playwright browsers + run: npx playwright install --with-deps chromium + + - name: Run burn-in loop (10 iterations) + run: | + echo "🔥 Starting burn-in loop - detecting flaky tests" + for i in {1..10}; do + echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━" + echo "🔥 Burn-in iteration $i/10" + echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━" + npm run test:e2e || exit 1 + done + echo "✅ Burn-in complete - no flaky tests detected" + + - name: Upload burn-in failure artifacts + if: failure() + uses: actions/upload-artifact@v4 + with: + name: burn-in-failures + path: | + test-results/ + playwright-report/ + retention-days: 30 + + # Report stage - Aggregate and publish results + report: + name: Test Report + runs-on: ubuntu-latest + needs: [test, burn-in] + if: always() + + steps: + - name: Download all artifacts + uses: actions/download-artifact@v4 + with: + path: artifacts + + - name: Generate summary + run: | + echo "## Test Execution Summary" >> $GITHUB_STEP_SUMMARY + echo "" >> $GITHUB_STEP_SUMMARY + echo "- **Status**: ${{ needs.test.result }}" >> $GITHUB_STEP_SUMMARY + echo "- **Burn-in**: ${{ needs.burn-in.result }}" >> $GITHUB_STEP_SUMMARY + echo "- **Shards**: 4" >> $GITHUB_STEP_SUMMARY + echo "" >> $GITHUB_STEP_SUMMARY + + if [ "${{ needs.burn-in.result }}" == "failure" ]; then + echo "⚠️ **Flaky tests detected** - Review burn-in artifacts" >> $GITHUB_STEP_SUMMARY + fi diff --git a/src/bmm/workflows/testarch/ci/gitlab-ci-template.yaml b/src/bmm/workflows/testarch/ci/gitlab-ci-template.yaml new file mode 100644 index 00000000..f5336de4 --- /dev/null +++ b/src/bmm/workflows/testarch/ci/gitlab-ci-template.yaml @@ -0,0 +1,149 @@ +# GitLab CI/CD Pipeline for Test Execution +# Generated by BMad TEA Agent - Test Architect Module +# Optimized for: Playwright/Cypress, Parallel Sharding, Burn-In Loop + +stages: + - lint + - test + - burn-in + - report + +variables: + # Disable git depth for accurate change detection + GIT_DEPTH: 0 + # Use npm ci for faster, deterministic installs + npm_config_cache: "$CI_PROJECT_DIR/.npm" + # Playwright browser cache + PLAYWRIGHT_BROWSERS_PATH: "$CI_PROJECT_DIR/.cache/ms-playwright" + # Default Node version when .nvmrc is missing + DEFAULT_NODE_VERSION: "24" + +# Caching configuration +cache: + key: + files: + - package-lock.json + paths: + - .npm/ + - .cache/ms-playwright/ + - node_modules/ + +# Lint stage - Code quality checks +lint: + stage: lint + image: node:$DEFAULT_NODE_VERSION + before_script: + - | + NODE_VERSION=$(cat .nvmrc 2>/dev/null || echo "$DEFAULT_NODE_VERSION") + echo "Using Node $NODE_VERSION" + npm install -g n + n "$NODE_VERSION" + node -v + - npm ci + script: + - npm run lint + timeout: 5 minutes + +# Test stage - Parallel execution with sharding +.test-template: &test-template + stage: test + image: node:$DEFAULT_NODE_VERSION + needs: + - lint + before_script: + - | + NODE_VERSION=$(cat .nvmrc 2>/dev/null || echo "$DEFAULT_NODE_VERSION") + echo "Using Node $NODE_VERSION" + npm install -g n + n "$NODE_VERSION" + node -v + - npm ci + - npx playwright install --with-deps chromium + artifacts: + when: on_failure + paths: + - test-results/ + - playwright-report/ + expire_in: 30 days + timeout: 30 minutes + +test:shard-1: + <<: *test-template + script: + - npm run test:e2e -- --shard=1/4 + 
+test:shard-2: + <<: *test-template + script: + - npm run test:e2e -- --shard=2/4 + +test:shard-3: + <<: *test-template + script: + - npm run test:e2e -- --shard=3/4 + +test:shard-4: + <<: *test-template + script: + - npm run test:e2e -- --shard=4/4 + +# Burn-in stage - Flaky test detection +burn-in: + stage: burn-in + image: node:$DEFAULT_NODE_VERSION + needs: + - test:shard-1 + - test:shard-2 + - test:shard-3 + - test:shard-4 + # Only run burn-in on merge requests to main/develop or on schedule + rules: + - if: '$CI_PIPELINE_SOURCE == "merge_request_event"' + - if: '$CI_PIPELINE_SOURCE == "schedule"' + before_script: + - | + NODE_VERSION=$(cat .nvmrc 2>/dev/null || echo "$DEFAULT_NODE_VERSION") + echo "Using Node $NODE_VERSION" + npm install -g n + n "$NODE_VERSION" + node -v + - npm ci + - npx playwright install --with-deps chromium + script: + - | + echo "🔥 Starting burn-in loop - detecting flaky tests" + for i in {1..10}; do + echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━" + echo "🔥 Burn-in iteration $i/10" + echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━" + npm run test:e2e || exit 1 + done + echo "✅ Burn-in complete - no flaky tests detected" + artifacts: + when: on_failure + paths: + - test-results/ + - playwright-report/ + expire_in: 30 days + timeout: 60 minutes + +# Report stage - Aggregate results +report: + stage: report + image: alpine:latest + needs: + - test:shard-1 + - test:shard-2 + - test:shard-3 + - test:shard-4 + - burn-in + when: always + script: + - | + echo "## Test Execution Summary" + echo "" + echo "- Pipeline: $CI_PIPELINE_ID" + echo "- Shards: 4" + echo "- Branch: $CI_COMMIT_REF_NAME" + echo "" + echo "View detailed results in job artifacts" diff --git a/src/bmm/workflows/testarch/ci/instructions.md b/src/bmm/workflows/testarch/ci/instructions.md new file mode 100644 index 00000000..a23d2c16 --- /dev/null +++ b/src/bmm/workflows/testarch/ci/instructions.md @@ -0,0 +1,536 @@ + + +# CI/CD Pipeline Setup + +**Workflow ID**: `_bmad/bmm/testarch/ci` +**Version**: 4.0 (BMad v6) + +--- + +## Overview + +Scaffolds a production-ready CI/CD quality pipeline with test execution, burn-in loops for flaky test detection, parallel sharding, artifact collection, and notification configuration. This workflow creates platform-specific CI configuration optimized for fast feedback and reliable test execution. + +Note: This is typically a one-time setup per repo; run it any time after the test framework exists, ideally before feature work starts. + +--- + +## Preflight Requirements + +**Critical:** Verify these requirements before proceeding. If any fail, HALT and notify the user. + +- ✅ Git repository is initialized (`.git/` directory exists) +- ✅ Local test suite passes (`npm run test:e2e` succeeds) +- ✅ Test framework is configured (from `framework` workflow) +- ✅ Team agrees on target CI platform (GitHub Actions, GitLab CI, Circle CI, etc.) +- ✅ Access to CI platform settings/secrets available (if updating existing pipeline) + +--- + +## Step 1: Run Preflight Checks + +### Actions + +1. **Verify Git Repository** + - Check for `.git/` directory + - Confirm remote repository configured (`git remote -v`) + - If not initialized, HALT with message: "Git repository required for CI/CD setup" + +2. 
**Validate Test Framework** + - Look for `playwright.config.*` or `cypress.config.*` + - Read framework configuration to extract: + - Test directory location + - Test command + - Reporter configuration + - Timeout settings + - If not found, HALT with message: "Run `framework` workflow first to set up test infrastructure" + +3. **Run Local Tests** + - Execute `npm run test:e2e` (or equivalent from package.json) + - Ensure tests pass before CI setup + - If tests fail, HALT with message: "Fix failing tests before setting up CI/CD" + +4. **Detect CI Platform** + - Check for existing CI configuration: + - `.github/workflows/*.yml` (GitHub Actions) + - `.gitlab-ci.yml` (GitLab CI) + - `.circleci/config.yml` (Circle CI) + - `Jenkinsfile` (Jenkins) + - If found, ask user: "Update existing CI configuration or create new?" + - If not found, detect platform from git remote: + - `github.com` → GitHub Actions (default) + - `gitlab.com` → GitLab CI + - Ask user if unable to auto-detect + +5. **Read Environment Configuration** + - Use `.nvmrc` for Node version if present + - If missing, default to a current LTS (Node 24) or newer instead of a fixed old version + - Read `package.json` to identify dependencies (affects caching strategy) + +**Halt Condition:** If preflight checks fail, stop immediately and report which requirement failed. + +--- + +## Step 2: Scaffold CI Pipeline + +### Actions + +1. **Select CI Platform Template** + + Based on detection or user preference, use the appropriate template: + + **GitHub Actions** (`.github/workflows/test.yml`): + - Most common platform + - Excellent caching and matrix support + - Free for public repos, generous free tier for private + + **GitLab CI** (`.gitlab-ci.yml`): + - Integrated with GitLab + - Built-in registry and runners + - Powerful pipeline features + + **Circle CI** (`.circleci/config.yml`): + - Fast execution with parallelism + - Docker-first approach + - Enterprise features + + **Jenkins** (`Jenkinsfile`): + - Self-hosted option + - Maximum customization + - Requires infrastructure management + +2. **Generate Pipeline Configuration** + + Use templates from `{installed_path}/` directory: + - `github-actions-template.yml` + - `gitlab-ci-template.yml` + + **Key pipeline stages:** + + ```yaml + stages: + - lint # Code quality checks + - test # Test execution (parallel shards) + - burn-in # Flaky test detection + - report # Aggregate results and publish + ``` + +3. **Configure Test Execution** + + **Parallel Sharding:** + + ```yaml + strategy: + fail-fast: false + matrix: + shard: [1, 2, 3, 4] + + steps: + - name: Run tests + run: npm run test:e2e -- --shard=${{ matrix.shard }}/${{ strategy.job-total }} + ``` + + **Purpose:** Splits tests into N parallel jobs for faster execution (target: <10 min per shard) + +4. **Add Burn-In Loop** + + **Critical pattern from production systems:** + + ```yaml + burn-in: + name: Flaky Test Detection + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + + - name: Setup Node + uses: actions/setup-node@v4 + with: + node-version-file: '.nvmrc' + + - name: Install dependencies + run: npm ci + + - name: Run burn-in loop (10 iterations) + run: | + for i in {1..10}; do + echo "🔥 Burn-in iteration $i/10" + npm run test:e2e || exit 1 + done + + - name: Upload failure artifacts + if: failure() + uses: actions/upload-artifact@v4 + with: + name: burn-in-failures + path: test-results/ + retention-days: 30 + ``` + + **Purpose:** Runs tests multiple times to catch non-deterministic failures before they reach main branch. 
+ + **When to run:** + - On pull requests to main/develop + - Weekly on cron schedule + - After significant test infrastructure changes + +5. **Configure Caching** + + **Node modules cache:** + + ```yaml + - name: Cache dependencies + uses: actions/cache@v4 + with: + path: ~/.npm + key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }} + restore-keys: | + ${{ runner.os }}-node- + ``` + + **Browser binaries cache (Playwright):** + + ```yaml + - name: Cache Playwright browsers + uses: actions/cache@v4 + with: + path: ~/.cache/ms-playwright + key: ${{ runner.os }}-playwright-${{ hashFiles('**/package-lock.json') }} + ``` + + **Purpose:** Reduces CI execution time by 2-5 minutes per run. + +6. **Configure Artifact Collection** + + **Failure artifacts only:** + + ```yaml + - name: Upload test results + if: failure() + uses: actions/upload-artifact@v4 + with: + name: test-results-${{ matrix.shard }} + path: | + test-results/ + playwright-report/ + retention-days: 30 + ``` + + **Artifacts to collect:** + - Traces (Playwright) - full debugging context + - Screenshots - visual evidence of failures + - Videos - interaction playback + - HTML reports - detailed test results + - Console logs - error messages and warnings + +7. **Add Retry Logic** + + ```yaml + - name: Run tests with retries + uses: nick-invision/retry@v2 + with: + timeout_minutes: 30 + max_attempts: 3 + retry_on: error + command: npm run test:e2e + ``` + + **Purpose:** Handles transient failures (network issues, race conditions) + +8. **Configure Notifications** (Optional) + + If `notify_on_failure` is enabled: + + ```yaml + - name: Notify on failure + if: failure() + uses: 8398a7/action-slack@v3 + with: + status: ${{ job.status }} + text: 'Test failures detected in PR #${{ github.event.pull_request.number }}' + webhook_url: ${{ secrets.SLACK_WEBHOOK }} + ``` + +9. **Generate Helper Scripts** + + **Selective testing script** (`scripts/test-changed.sh`): + + ```bash + #!/bin/bash + # Run only tests for changed files + + CHANGED_FILES=$(git diff --name-only HEAD~1) + + if echo "$CHANGED_FILES" | grep -q "src/.*\.ts$"; then + echo "Running affected tests..." + npm run test:e2e -- --grep="$(echo $CHANGED_FILES | sed 's/src\///g' | sed 's/\.ts//g')" + else + echo "No test-affecting changes detected" + fi + ``` + + **Local mirror script** (`scripts/ci-local.sh`): + + ```bash + #!/bin/bash + # Mirror CI execution locally for debugging + + echo "🔍 Running CI pipeline locally..." + + # Lint + npm run lint || exit 1 + + # Tests + npm run test:e2e || exit 1 + + # Burn-in (reduced iterations) + for i in {1..3}; do + echo "🔥 Burn-in $i/3" + npm run test:e2e || exit 1 + done + + echo "✅ Local CI pipeline passed" + ``` + +10. **Generate Documentation** + + **CI README** (`docs/ci.md`): + - Pipeline stages and purpose + - How to run locally + - Debugging failed CI runs + - Secrets and environment variables needed + - Notification setup + - Badge URLs for README + + **Secrets checklist** (`docs/ci-secrets-checklist.md`): + - Required secrets list (SLACK_WEBHOOK, etc.) + - Where to configure in CI platform + - Security best practices + +--- + +## Step 3: Deliverables + +### Primary Artifacts Created + +1. **CI Configuration File** + - `.github/workflows/test.yml` (GitHub Actions) + - `.gitlab-ci.yml` (GitLab CI) + - `.circleci/config.yml` (Circle CI) + +2. 
**Pipeline Stages** + - **Lint**: Code quality checks (ESLint, Prettier) + - **Test**: Parallel test execution (4 shards) + - **Burn-in**: Flaky test detection (10 iterations) + - **Report**: Result aggregation and publishing + +3. **Helper Scripts** + - `scripts/test-changed.sh` - Selective testing + - `scripts/ci-local.sh` - Local CI mirror + - `scripts/burn-in.sh` - Standalone burn-in execution + +4. **Documentation** + - `docs/ci.md` - CI pipeline guide + - `docs/ci-secrets-checklist.md` - Required secrets + - Inline comments in CI configuration + +5. **Optimization Features** + - Dependency caching (npm, browser binaries) + - Parallel sharding (4 jobs default) + - Retry logic (2 retries on failure) + - Failure-only artifact upload + +### Performance Targets + +- **Lint stage**: <2 minutes +- **Test stage** (per shard): <10 minutes +- **Burn-in stage**: <30 minutes (10 iterations) +- **Total pipeline**: <45 minutes + +**Speedup:** 20× faster than sequential execution through parallelism and caching. + +--- + +## Important Notes + +### Knowledge Base Integration + +**Critical:** Check configuration and load appropriate fragments. + +Read `{config_source}` and check `config.tea_use_playwright_utils`. + +**Core CI Patterns (Always load):** + +- `ci-burn-in.md` - Burn-in loop patterns: 10-iteration detection, GitHub Actions workflow, shard orchestration, selective execution (678 lines, 4 examples) +- `selective-testing.md` - Changed test detection strategies: tag-based, spec filters, diff-based selection, promotion rules (727 lines, 4 examples) +- `visual-debugging.md` - Artifact collection best practices: trace viewer, HAR recording, custom artifacts, accessibility integration (522 lines, 5 examples) +- `test-quality.md` - CI-specific test quality criteria: deterministic tests, isolated with cleanup, explicit assertions, length/time optimization (658 lines, 5 examples) +- `playwright-config.md` - CI-optimized configuration: parallelization, artifact output, project dependencies, sharding (722 lines, 5 examples) + +**If `config.tea_use_playwright_utils: true`:** + +Load playwright-utils CI-relevant fragments: + +- `burn-in.md` - Smart test selection with git diff analysis (very important for CI optimization) +- `network-error-monitor.md` - Automatic HTTP 4xx/5xx detection (recommend in CI pipelines) + +Recommend: + +- Add burn-in script for pull request validation +- Enable network-error-monitor in merged fixtures for catching silent failures +- Reference full docs in `*framework` and `*automate` workflows + +### CI Platform-Specific Guidance + +**GitHub Actions:** + +- Use `actions/cache` for caching +- Matrix strategy for parallelism +- Secrets in repository settings +- Free 2000 minutes/month for private repos + +**GitLab CI:** + +- Use `.gitlab-ci.yml` in root +- `cache:` directive for caching +- Parallel execution with `parallel: 4` +- Variables in project CI/CD settings + +**Circle CI:** + +- Use `.circleci/config.yml` +- Docker executors recommended +- Parallelism with `parallelism: 4` +- Context for shared secrets + +### Burn-In Loop Strategy + +**When to run:** + +- ✅ On PRs to main/develop branches +- ✅ Weekly on schedule (cron) +- ✅ After test infrastructure changes +- ❌ Not on every commit (too slow) + +**Iterations:** + +- **10 iterations** for thorough detection +- **3 iterations** for quick feedback +- **100 iterations** for high-confidence stability + +**Failure threshold:** + +- Even ONE failure in burn-in → tests are flaky +- Must fix before merging + +### Artifact 
Retention + +**Failure artifacts only:** + +- Saves storage costs +- Maintains debugging capability +- 30-day retention default + +**Artifact types:** + +- Traces (Playwright) - 5-10 MB per test +- Screenshots - 100-500 KB per screenshot +- Videos - 2-5 MB per test +- HTML reports - 1-2 MB per run + +### Selective Testing + +**Detect changed files:** + +```bash +git diff --name-only HEAD~1 +``` + +**Run affected tests only:** + +- Faster feedback for small changes +- Full suite still runs on main branch +- Reduces CI time by 50-80% for focused PRs + +**Trade-off:** + +- May miss integration issues +- Run full suite at least on merge + +### Local CI Mirror + +**Purpose:** Debug CI failures locally + +**Usage:** + +```bash +./scripts/ci-local.sh +``` + +**Mirrors CI environment:** + +- Same Node version +- Same test command +- Same stages (lint → test → burn-in) +- Reduced burn-in iterations (3 vs 10) + +--- + +## Output Summary + +After completing this workflow, provide a summary: + +```markdown +## CI/CD Pipeline Complete + +**Platform**: GitHub Actions (or GitLab CI, etc.) + +**Artifacts Created**: + +- ✅ Pipeline configuration: .github/workflows/test.yml +- ✅ Burn-in loop: 10 iterations for flaky detection +- ✅ Parallel sharding: 4 jobs for fast execution +- ✅ Caching: Dependencies + browser binaries +- ✅ Artifact collection: Failure-only traces/screenshots/videos +- ✅ Helper scripts: test-changed.sh, ci-local.sh, burn-in.sh +- ✅ Documentation: docs/ci.md, docs/ci-secrets-checklist.md + +**Performance:** + +- Lint: <2 min +- Test (per shard): <10 min +- Burn-in: <30 min +- Total: <45 min (20× speedup vs sequential) + +**Next Steps**: + +1. Commit CI configuration: `git add .github/workflows/test.yml && git commit -m "ci: add test pipeline"` +2. Push to remote: `git push` +3. Configure required secrets in CI platform settings (see docs/ci-secrets-checklist.md) +4. Open a PR to trigger first CI run +5. Monitor pipeline execution and adjust parallelism if needed + +**Knowledge Base References Applied**: + +- Burn-in loop pattern (ci-burn-in.md) +- Selective testing strategy (selective-testing.md) +- Artifact collection (visual-debugging.md) +- Test quality criteria (test-quality.md) +``` + +--- + +## Validation + +After completing all steps, verify: + +- [ ] CI configuration file created and syntactically valid +- [ ] Burn-in loop configured (10 iterations) +- [ ] Parallel sharding enabled (4 jobs) +- [ ] Caching configured (dependencies + browsers) +- [ ] Artifact collection on failure only +- [ ] Helper scripts created and executable (`chmod +x`) +- [ ] Documentation complete (ci.md, secrets checklist) +- [ ] No errors or warnings during scaffold + +Refer to `checklist.md` for comprehensive validation criteria. 
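+
+---
+
+## Appendix: Standalone Burn-In Script
+
+The deliverables list `scripts/burn-in.sh`, which is not shown above. A minimal sketch (the 10-iteration default mirrors the CI burn-in job; the argument interface is an assumption):
+
+```bash
+#!/bin/bash
+# Standalone burn-in execution: run the suite N times to surface flaky tests
+ITERATIONS="${1:-10}"
+
+for i in $(seq 1 "$ITERATIONS"); do
+  echo "🔥 Burn-in iteration $i/$ITERATIONS"
+  npm run test:e2e || { echo "❌ Failure on iteration $i - investigate flakiness"; exit 1; }
+done
+
+echo "✅ Burn-in complete - no flaky tests detected"
+```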
diff --git a/src/bmm/workflows/testarch/ci/workflow.yaml b/src/bmm/workflows/testarch/ci/workflow.yaml new file mode 100644 index 00000000..223af205 --- /dev/null +++ b/src/bmm/workflows/testarch/ci/workflow.yaml @@ -0,0 +1,47 @@ +# Test Architect workflow: ci +name: testarch-ci +description: "Scaffold CI/CD quality pipeline with test execution, burn-in loops, and artifact collection" +author: "BMad" + +# Critical variables from config +config_source: "{project-root}/_bmad/bmm/config.yaml" +output_folder: "{config_source}:output_folder" +user_name: "{config_source}:user_name" +communication_language: "{config_source}:communication_language" +document_output_language: "{config_source}:document_output_language" +date: system-generated + +# Workflow components +installed_path: "{project-root}/_bmad/bmm/workflows/testarch/ci" +instructions: "{installed_path}/instructions.md" +validation: "{installed_path}/checklist.md" + +# Variables and inputs +variables: + ci_platform: "auto" # auto, github-actions, gitlab-ci, circle-ci, jenkins - user can override + test_dir: "{project-root}/tests" # Root test directory + +# Output configuration +default_output_file: "{project-root}/.github/workflows/test.yml" # GitHub Actions default + +# Required tools +required_tools: + - read_file # Read .nvmrc, package.json, framework config + - write_file # Create CI config, scripts, documentation + - create_directory # Create .github/workflows/ or .gitlab-ci/ directories + - list_files # Detect existing CI configuration + - search_repo # Find test files for selective testing + +tags: + - qa + - ci-cd + - test-architect + - pipeline + - automation + +execution_hints: + interactive: false # Minimize prompts, auto-detect when possible + autonomous: true # Proceed without user input unless blocked + iterative: true + +web_bundle: false diff --git a/src/bmm/workflows/testarch/framework/checklist.md b/src/bmm/workflows/testarch/framework/checklist.md new file mode 100644 index 00000000..07c6fe8d --- /dev/null +++ b/src/bmm/workflows/testarch/framework/checklist.md @@ -0,0 +1,320 @@ +# Test Framework Setup - Validation Checklist + +This checklist ensures the framework workflow completes successfully and all deliverables meet quality standards. + +--- + +## Prerequisites + +Before starting the workflow: + +- [ ] Project root contains valid `package.json` +- [ ] No existing modern E2E framework detected (`playwright.config.*`, `cypress.config.*`) +- [ ] Project type identifiable (React, Vue, Angular, Next.js, Node, etc.) 
+- [ ] Bundler identifiable (Vite, Webpack, Rollup, esbuild) or not applicable +- [ ] User has write permissions to create directories and files + +--- + +## Process Steps + +### Step 1: Preflight Checks + +- [ ] package.json successfully read and parsed +- [ ] Project type extracted correctly +- [ ] Bundler identified (or marked as N/A for backend projects) +- [ ] No framework conflicts detected +- [ ] Architecture documents located (if available) + +### Step 2: Framework Selection + +- [ ] Framework auto-detection logic executed +- [ ] Framework choice justified (Playwright vs Cypress) +- [ ] Framework preference respected (if explicitly set) +- [ ] User notified of framework selection and rationale + +### Step 3: Directory Structure + +- [ ] `tests/` root directory created +- [ ] `tests/e2e/` directory created (or user's preferred structure) +- [ ] `tests/support/` directory created (critical pattern) +- [ ] `tests/support/fixtures/` directory created +- [ ] `tests/support/fixtures/factories/` directory created +- [ ] `tests/support/helpers/` directory created +- [ ] `tests/support/page-objects/` directory created (if applicable) +- [ ] All directories have correct permissions + +**Note**: Test organization is flexible (e2e/, api/, integration/). The **support/** folder is the key pattern. + +### Step 4: Configuration Files + +- [ ] Framework config file created (`playwright.config.ts` or `cypress.config.ts`) +- [ ] Config file uses TypeScript (if `use_typescript: true`) +- [ ] Timeouts configured correctly (action: 15s, navigation: 30s, test: 60s) +- [ ] Base URL configured with environment variable fallback +- [ ] Trace/screenshot/video set to retain-on-failure +- [ ] Multiple reporters configured (HTML + JUnit + console) +- [ ] Parallel execution enabled +- [ ] CI-specific settings configured (retries, workers) +- [ ] Config file is syntactically valid (no compilation errors) + +### Step 5: Environment Configuration + +- [ ] `.env.example` created in project root +- [ ] `TEST_ENV` variable defined +- [ ] `BASE_URL` variable defined with default +- [ ] `API_URL` variable defined (if applicable) +- [ ] Authentication variables defined (if applicable) +- [ ] Feature flag variables defined (if applicable) +- [ ] `.nvmrc` created with appropriate Node version + +### Step 6: Fixture Architecture + +- [ ] `tests/support/fixtures/index.ts` created +- [ ] Base fixture extended from Playwright/Cypress +- [ ] Type definitions for fixtures created +- [ ] mergeTests pattern implemented (if multiple fixtures) +- [ ] Auto-cleanup logic included in fixtures +- [ ] Fixture architecture follows knowledge base patterns + +### Step 7: Data Factories + +- [ ] At least one factory created (e.g., UserFactory) +- [ ] Factories use @faker-js/faker for realistic data +- [ ] Factories track created entities (for cleanup) +- [ ] Factories implement `cleanup()` method +- [ ] Factories integrate with fixtures +- [ ] Factories follow knowledge base patterns + +### Step 8: Sample Tests + +- [ ] Example test file created (`tests/e2e/example.spec.ts`) +- [ ] Test uses fixture architecture +- [ ] Test demonstrates data factory usage +- [ ] Test uses proper selector strategy (data-testid) +- [ ] Test follows Given-When-Then structure +- [ ] Test includes proper assertions +- [ ] Network interception demonstrated (if applicable) + +### Step 9: Helper Utilities + +- [ ] API helper created (if API testing needed) +- [ ] Network helper created (if network mocking needed) +- [ ] Auth helper created (if authentication 
needed) +- [ ] Helpers follow functional patterns +- [ ] Helpers have proper error handling + +### Step 10: Documentation + +- [ ] `tests/README.md` created +- [ ] Setup instructions included +- [ ] Running tests section included +- [ ] Architecture overview section included +- [ ] Best practices section included +- [ ] CI integration section included +- [ ] Knowledge base references included +- [ ] Troubleshooting section included + +### Step 11: Package.json Updates + +- [ ] Minimal test script added to package.json: `test:e2e` +- [ ] Test framework dependency added (if not already present) +- [ ] Type definitions added (if TypeScript) +- [ ] Users can extend with additional scripts as needed + +--- + +## Output Validation + +### Configuration Validation + +- [ ] Config file loads without errors +- [ ] Config file passes linting (if linter configured) +- [ ] Config file uses correct syntax for chosen framework +- [ ] All paths in config resolve correctly +- [ ] Reporter output directories exist or are created on test run + +### Test Execution Validation + +- [ ] Sample test runs successfully +- [ ] Test execution produces expected output (pass/fail) +- [ ] Test artifacts generated correctly (traces, screenshots, videos) +- [ ] Test report generated successfully +- [ ] No console errors or warnings during test run + +### Directory Structure Validation + +- [ ] All required directories exist +- [ ] Directory structure matches framework conventions +- [ ] No duplicate or conflicting directories +- [ ] Directories accessible with correct permissions + +### File Integrity Validation + +- [ ] All generated files are syntactically correct +- [ ] No placeholder text left in files (e.g., "TODO", "FIXME") +- [ ] All imports resolve correctly +- [ ] No hardcoded credentials or secrets in files +- [ ] All file paths use correct separators for OS + +--- + +## Quality Checks + +### Code Quality + +- [ ] Generated code follows project coding standards +- [ ] TypeScript types are complete and accurate (no `any` unless necessary) +- [ ] No unused imports or variables +- [ ] Consistent code formatting (matches project style) +- [ ] No linting errors in generated files + +### Best Practices Compliance + +- [ ] Fixture architecture follows pure function → fixture → mergeTests pattern +- [ ] Data factories implement auto-cleanup +- [ ] Network interception occurs before navigation +- [ ] Selectors use data-testid strategy +- [ ] Artifacts only captured on failure +- [ ] Tests follow Given-When-Then structure +- [ ] No hard-coded waits or sleeps + +### Knowledge Base Alignment + +- [ ] Fixture pattern matches `fixture-architecture.md` +- [ ] Data factories match `data-factories.md` +- [ ] Network handling matches `network-first.md` +- [ ] Config follows `playwright-config.md` or `test-config.md` +- [ ] Test quality matches `test-quality.md` + +### Security Checks + +- [ ] No credentials in configuration files +- [ ] .env.example contains placeholders, not real values +- [ ] Sensitive test data handled securely +- [ ] API keys and tokens use environment variables +- [ ] No secrets committed to version control + +--- + +## Integration Points + +### Status File Integration + +- [ ] Framework initialization logged in Quality & Testing Progress section +- [ ] Status file updated with completion timestamp +- [ ] Status file shows framework: Playwright or Cypress + +### Knowledge Base Integration + +- [ ] Relevant knowledge fragments identified from tea-index.csv +- [ ] Knowledge fragments successfully loaded +- [ ] 
Patterns from knowledge base applied correctly
+- [ ] Knowledge base references included in documentation
+
+### Workflow Dependencies
+
+- [ ] Can proceed to `ci` workflow after completion
+- [ ] Can proceed to `test-design` workflow after completion
+- [ ] Can proceed to `atdd` workflow after completion
+- [ ] Framework setup compatible with downstream workflows
+
+---
+
+## Completion Criteria
+
+**All of the following must be true:**
+
+- [ ] All prerequisite checks passed
+- [ ] All process steps completed without errors
+- [ ] All output validations passed
+- [ ] All quality checks passed
+- [ ] All integration points verified
+- [ ] Sample test executes successfully
+- [ ] User can run `npm run test:e2e` without errors
+- [ ] Documentation is complete and accurate
+- [ ] No critical issues or blockers identified
+
+---
+
+## Post-Workflow Actions
+
+**User must complete:**
+
+1. [ ] Copy `.env.example` to `.env`
+2. [ ] Fill in environment-specific values in `.env`
+3. [ ] Run `npm install` to install test dependencies
+4. [ ] Run `npm run test:e2e` to verify setup
+5. [ ] Review `tests/README.md` for project-specific guidance
+
+**Recommended next workflows:**
+
+1. [ ] Run `ci` workflow to set up CI/CD pipeline
+2. [ ] Run `test-design` workflow to plan test coverage
+3. [ ] Run `atdd` workflow when ready to develop stories
+
+---
+
+## Rollback Procedure
+
+If workflow fails and needs to be rolled back:
+
+1. [ ] Delete `tests/` directory
+2. [ ] Remove test scripts from package.json
+3. [ ] Delete `.env.example` (if created)
+4. [ ] Delete `.nvmrc` (if created)
+5. [ ] Delete framework config file
+6. [ ] Remove test dependencies from package.json (if added)
+7. [ ] Run `npm install` to clean up node_modules
+
+---
+
+## Notes
+
+### Common Issues
+
+**Issue**: Config file has TypeScript errors
+
+- **Solution**: Ensure `@playwright/test` or `cypress` types are installed
+
+**Issue**: Sample test fails to run
+
+- **Solution**: Check BASE_URL in .env, ensure app is running
+
+**Issue**: Fixture cleanup not working
+
+- **Solution**: Verify cleanup() is called in fixture teardown
+
+**Issue**: Network interception not working
+
+- **Solution**: Ensure route setup occurs before page.goto()
+
+### Framework-Specific Considerations
+
+**Playwright:**
+
+- Requires Node.js 18+
+- Browser binaries auto-installed on first run
+- Trace viewer requires running `npx playwright show-trace`
+
+**Cypress:**
+
+- Requires Node.js 18+
+- Cypress app opens on first run
+- Component testing requires additional setup
+
+### Version Compatibility
+
+- [ ] Node.js version matches .nvmrc
+- [ ] Framework version compatible with Node.js version
+- [ ] TypeScript version compatible with framework
+- [ ] All peer dependencies satisfied
+
+---
+
+**Checklist Complete**: Sign off when all items checked and validated.
+
+**Completed by:** {name}
+**Date:** {date}
+**Framework:** {Playwright / Cypress / Other}
+**Notes:** {notes}
diff --git a/src/bmm/workflows/testarch/framework/instructions.md b/src/bmm/workflows/testarch/framework/instructions.md
new file mode 100644
index 00000000..9f7af84e
--- /dev/null
+++ b/src/bmm/workflows/testarch/framework/instructions.md
@@ -0,0 +1,481 @@
+
+
+# Test Framework Setup
+
+**Workflow ID**: `_bmad/bmm/testarch/framework`
+**Version**: 4.0 (BMad v6)
+
+---
+
+## Overview
+
+Initialize a production-ready test framework architecture (Playwright or Cypress) with fixtures, helpers, configuration, and best practices.
This workflow scaffolds the complete testing infrastructure for modern web applications. + +--- + +## Preflight Requirements + +**Critical:** Verify these requirements before proceeding. If any fail, HALT and notify the user. + +- ✅ `package.json` exists in project root +- ✅ No modern E2E test harness is already configured (check for existing `playwright.config.*` or `cypress.config.*`) +- ✅ Architectural/stack context available (project type, bundler, dependencies) + +--- + +## Step 1: Run Preflight Checks + +### Actions + +1. **Validate package.json** + - Read `{project-root}/package.json` + - Extract project type (React, Vue, Angular, Next.js, Node, etc.) + - Identify bundler (Vite, Webpack, Rollup, esbuild) + - Note existing test dependencies + +2. **Check for Existing Framework** + - Search for `playwright.config.*`, `cypress.config.*`, `cypress.json` + - Check `package.json` for `@playwright/test` or `cypress` dependencies + - If found, HALT with message: "Existing test framework detected. Use workflow `upgrade-framework` instead." + +3. **Gather Context** + - Look for architecture documents (`architecture.md`, `tech-spec*.md`) + - Check for API documentation or endpoint lists + - Identify authentication requirements + +**Halt Condition:** If preflight checks fail, stop immediately and report which requirement failed. + +--- + +## Step 2: Scaffold Framework + +### Actions + +1. **Framework Selection** + + **Default Logic:** + - **Playwright** (recommended for): + - Large repositories (100+ files) + - Performance-critical applications + - Multi-browser support needed + - Complex user flows requiring video/trace debugging + - Projects requiring worker parallelism + + - **Cypress** (recommended for): + - Small teams prioritizing developer experience + - Component testing focus + - Real-time reloading during test development + - Simpler setup requirements + + **Detection Strategy:** + - Check `package.json` for existing preference + - Consider `project_size` variable from workflow config + - Use `framework_preference` variable if set + - Default to **Playwright** if uncertain + +2. **Create Directory Structure** + + ``` + {project-root}/ + ├── tests/ # Root test directory + │ ├── e2e/ # Test files (users organize as needed) + │ ├── support/ # Framework infrastructure (key pattern) + │ │ ├── fixtures/ # Test fixtures (data, mocks) + │ │ ├── helpers/ # Utility functions + │ │ └── page-objects/ # Page object models (optional) + │ └── README.md # Test suite documentation + ``` + + **Note**: Users organize test files (e2e/, api/, integration/, component/) as needed. The **support/** folder is the critical pattern for fixtures and helpers used across tests. + +3. **Generate Configuration File** + + **For Playwright** (`playwright.config.ts` or `playwright.config.js`): + + ```typescript + import { defineConfig, devices } from '@playwright/test'; + + export default defineConfig({ + testDir: './tests/e2e', + fullyParallel: true, + forbidOnly: !!process.env.CI, + retries: process.env.CI ? 2 : 0, + workers: process.env.CI ? 
1 : undefined,
+
+     timeout: 60 * 1000, // Test timeout: 60s
+     expect: {
+       timeout: 15 * 1000, // Assertion timeout: 15s
+     },
+
+     use: {
+       baseURL: process.env.BASE_URL || 'http://localhost:3000',
+       trace: 'retain-on-failure',
+       screenshot: 'only-on-failure',
+       video: 'retain-on-failure',
+       actionTimeout: 15 * 1000, // Action timeout: 15s
+       navigationTimeout: 30 * 1000, // Navigation timeout: 30s
+     },
+
+     reporter: [['html', { outputFolder: 'test-results/html' }], ['junit', { outputFile: 'test-results/junit.xml' }], ['list']],
+
+     projects: [
+       { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
+       { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
+       { name: 'webkit', use: { ...devices['Desktop Safari'] } },
+     ],
+   });
+   ```
+
+   **For Cypress** (`cypress.config.ts` or `cypress.config.js`):
+
+   ```typescript
+   import { defineConfig } from 'cypress';
+
+   export default defineConfig({
+     e2e: {
+       baseUrl: process.env.BASE_URL || 'http://localhost:3000',
+       specPattern: 'tests/e2e/**/*.cy.{js,jsx,ts,tsx}',
+       supportFile: 'tests/support/e2e.ts',
+       video: false,
+       screenshotOnRunFailure: true,
+
+       setupNodeEvents(on, config) {
+         // implement node event listeners here
+       },
+     },
+
+     retries: {
+       runMode: 2,
+       openMode: 0,
+     },
+
+     defaultCommandTimeout: 15000,
+     requestTimeout: 30000,
+     responseTimeout: 30000,
+     pageLoadTimeout: 60000,
+   });
+   ```
+
+4. **Generate Environment Configuration**
+
+   Create `.env.example`:
+
+   ```bash
+   # Test Environment Configuration
+   TEST_ENV=local
+   BASE_URL=http://localhost:3000
+   API_URL=http://localhost:3001/api
+
+   # Authentication (if applicable)
+   TEST_USER_EMAIL=test@example.com
+   TEST_USER_PASSWORD=
+
+   # Feature Flags (if applicable)
+   FEATURE_FLAG_NEW_UI=true
+
+   # API Keys (if applicable)
+   TEST_API_KEY=
+   ```
+
+5. **Generate Node Version File**
+
+   Create `.nvmrc`:
+
+   ```
+   20.11.0
+   ```
+
+   (Use Node version from existing `.nvmrc` or default to current LTS)
+
+6. **Implement Fixture Architecture**
+
+   **Knowledge Base Reference**: `testarch/knowledge/fixture-architecture.md`
+
+   Create `tests/support/fixtures/index.ts`:
+
+   ```typescript
+   import { test as base } from '@playwright/test';
+   import { UserFactory } from './factories/user-factory';
+
+   type TestFixtures = {
+     userFactory: UserFactory;
+   };
+
+   // Type the extension explicitly so `userFactory` is available in test signatures
+   export const test = base.extend<TestFixtures>({
+     userFactory: async ({}, use) => {
+       const factory = new UserFactory();
+       await use(factory);
+       await factory.cleanup(); // Auto-cleanup
+     },
+   });
+
+   export { expect } from '@playwright/test';
+   ```
+
+7.
**Implement Data Factories** + + **Knowledge Base Reference**: `testarch/knowledge/data-factories.md` + + Create `tests/support/fixtures/factories/user-factory.ts`: + + ```typescript + import { faker } from '@faker-js/faker'; + + export class UserFactory { + private createdUsers: string[] = []; + + async createUser(overrides = {}) { + const user = { + email: faker.internet.email(), + name: faker.person.fullName(), + password: faker.internet.password({ length: 12 }), + ...overrides, + }; + + // API call to create user + const response = await fetch(`${process.env.API_URL}/users`, { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify(user), + }); + + const created = await response.json(); + this.createdUsers.push(created.id); + return created; + } + + async cleanup() { + // Delete all created users + for (const userId of this.createdUsers) { + await fetch(`${process.env.API_URL}/users/${userId}`, { + method: 'DELETE', + }); + } + this.createdUsers = []; + } + } + ``` + +8. **Generate Sample Tests** + + Create `tests/e2e/example.spec.ts`: + + ```typescript + import { test, expect } from '../support/fixtures'; + + test.describe('Example Test Suite', () => { + test('should load homepage', async ({ page }) => { + await page.goto('/'); + await expect(page).toHaveTitle(/Home/i); + }); + + test('should create user and login', async ({ page, userFactory }) => { + // Create test user + const user = await userFactory.createUser(); + + // Login + await page.goto('/login'); + await page.fill('[data-testid="email-input"]', user.email); + await page.fill('[data-testid="password-input"]', user.password); + await page.click('[data-testid="login-button"]'); + + // Assert login success + await expect(page.locator('[data-testid="user-menu"]')).toBeVisible(); + }); + }); + ``` + +9. **Update package.json Scripts** + + Add minimal test script to `package.json`: + + ```json + { + "scripts": { + "test:e2e": "playwright test" + } + } + ``` + + **Note**: Users can add additional scripts as needed (e.g., `--ui`, `--headed`, `--debug`, `show-report`). + +10. **Generate Documentation** + + Create `tests/README.md` with setup instructions (see Step 3 deliverables). + +--- + +## Step 3: Deliverables + +### Primary Artifacts Created + +1. **Configuration File** + - `playwright.config.ts` or `cypress.config.ts` + - Timeouts: action 15s, navigation 30s, test 60s + - Reporters: HTML + JUnit XML + +2. **Directory Structure** + - `tests/` with `e2e/`, `api/`, `support/` subdirectories + - `support/fixtures/` for test fixtures + - `support/helpers/` for utility functions + +3. **Environment Configuration** + - `.env.example` with `TEST_ENV`, `BASE_URL`, `API_URL` + - `.nvmrc` with Node version + +4. **Test Infrastructure** + - Fixture architecture (`mergeTests` pattern) + - Data factories (faker-based, with auto-cleanup) + - Sample tests demonstrating patterns + +5. 
**Documentation** + - `tests/README.md` with setup instructions + - Comments in config files explaining options + +### README Contents + +The generated `tests/README.md` should include: + +- **Setup Instructions**: How to install dependencies, configure environment +- **Running Tests**: Commands for local execution, headed mode, debug mode +- **Architecture Overview**: Fixture pattern, data factories, page objects +- **Best Practices**: Selector strategy (data-testid), test isolation, cleanup +- **CI Integration**: How tests run in CI/CD pipeline +- **Knowledge Base References**: Links to relevant TEA knowledge fragments + +--- + +## Important Notes + +### Knowledge Base Integration + +**Critical:** Check configuration and load appropriate fragments. + +Read `{config_source}` and check `config.tea_use_playwright_utils`. + +**If `config.tea_use_playwright_utils: true` (Playwright Utils Integration):** + +Consult `{project-root}/_bmad/bmm/testarch/tea-index.csv` and load: + +- `overview.md` - Playwright utils installation and design principles +- `fixtures-composition.md` - mergeTests composition with playwright-utils +- `auth-session.md` - Token persistence setup (if auth needed) +- `api-request.md` - API testing utilities (if API tests planned) +- `burn-in.md` - Smart test selection for CI (recommend during framework setup) +- `network-error-monitor.md` - Automatic HTTP error detection (recommend in merged fixtures) +- `data-factories.md` - Factory patterns with faker (498 lines, 5 examples) + +Recommend installing playwright-utils: + +```bash +npm install -D @seontechnologies/playwright-utils +``` + +Recommend adding burn-in and network-error-monitor to merged fixtures for enhanced reliability. + +**If `config.tea_use_playwright_utils: false` (Traditional Patterns):** + +Consult `{project-root}/_bmad/bmm/testarch/tea-index.csv` and load: + +- `fixture-architecture.md` - Pure function → fixture → `mergeTests` composition with auto-cleanup (406 lines, 5 examples) +- `data-factories.md` - Faker-based factories with overrides, nested factories, API seeding, auto-cleanup (498 lines, 5 examples) +- `network-first.md` - Network-first testing safeguards: intercept before navigate, HAR capture, deterministic waiting (489 lines, 5 examples) +- `playwright-config.md` - Playwright-specific configuration: environment-based, timeout standards, artifact output, parallelization, project config (722 lines, 5 examples) +- `test-quality.md` - Test design principles: deterministic, isolated with cleanup, explicit assertions, length/time limits (658 lines, 5 examples) + +### Framework-Specific Guidance + +**Playwright Advantages:** + +- Worker parallelism (significantly faster for large suites) +- Trace viewer (powerful debugging with screenshots, network, console) +- Multi-language support (TypeScript, JavaScript, Python, C#, Java) +- Built-in API testing capabilities +- Better handling of multiple browser contexts + +**Cypress Advantages:** + +- Superior developer experience (real-time reloading) +- Excellent for component testing (Cypress CT or use Vitest) +- Simpler setup for small teams +- Better suited for watch mode during development + +**Avoid Cypress when:** + +- API chains are heavy and complex +- Multi-tab/window scenarios are common +- Worker parallelism is critical for CI performance + +### Selector Strategy + +**Always recommend**: + +- `data-testid` attributes for UI elements +- `data-cy` attributes if Cypress is chosen +- Avoid brittle CSS selectors or XPath + +### Contract Testing + +For 
microservices architectures, **recommend Pact** for consumer-driven contract testing alongside E2E tests. + +### Failure Artifacts + +Configure **failure-only** capture: + +- Screenshots: only on failure +- Videos: retain on failure (delete on success) +- Traces: retain on failure (Playwright) + +This reduces storage overhead while maintaining debugging capability. + +--- + +## Output Summary + +After completing this workflow, provide a summary: + +```markdown +## Framework Scaffold Complete + +**Framework Selected**: Playwright (or Cypress) + +**Artifacts Created**: + +- ✅ Configuration file: `playwright.config.ts` +- ✅ Directory structure: `tests/e2e/`, `tests/support/` +- ✅ Environment config: `.env.example` +- ✅ Node version: `.nvmrc` +- ✅ Fixture architecture: `tests/support/fixtures/` +- ✅ Data factories: `tests/support/fixtures/factories/` +- ✅ Sample tests: `tests/e2e/example.spec.ts` +- ✅ Documentation: `tests/README.md` + +**Next Steps**: + +1. Copy `.env.example` to `.env` and fill in environment variables +2. Run `npm install` to install test dependencies +3. Run `npm run test:e2e` to execute sample tests +4. Review `tests/README.md` for detailed setup instructions + +**Knowledge Base References Applied**: + +- Fixture architecture pattern (pure functions + mergeTests) +- Data factories with auto-cleanup (faker-based) +- Network-first testing safeguards +- Failure-only artifact capture +``` + +--- + +## Validation + +After completing all steps, verify: + +- [ ] Configuration file created and valid +- [ ] Directory structure exists +- [ ] Environment configuration generated +- [ ] Sample tests run successfully +- [ ] Documentation complete and accurate +- [ ] No errors or warnings during scaffold + +Refer to `checklist.md` for comprehensive validation criteria. 
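+
+---
+
+## Appendix: Fixture Composition Sketch
+
+The deliverables above reference the `mergeTests` composition pattern, but the scaffold shows only a single `base.extend()` chain. As a reference, here is a minimal sketch of merging two fixture sets; the `./fixtures/auth` module is hypothetical, so substitute whichever fixture files your project actually defines.
+
+```typescript
+// tests/support/merged-fixtures.ts (illustrative path)
+import { mergeTests } from '@playwright/test';
+import { test as factoryTest } from './fixtures'; // userFactory fixture from this scaffold
+import { test as authTest } from './fixtures/auth'; // hypothetical auth fixture set
+
+// mergeTests combines independently extended test objects so a spec can
+// consume fixtures from both sets through a single test() signature.
+export const test = mergeTests(factoryTest, authTest);
+export { expect } from '@playwright/test';
+```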
diff --git a/src/bmm/workflows/testarch/framework/workflow.yaml b/src/bmm/workflows/testarch/framework/workflow.yaml new file mode 100644 index 00000000..07fcea0c --- /dev/null +++ b/src/bmm/workflows/testarch/framework/workflow.yaml @@ -0,0 +1,49 @@ +# Test Architect workflow: framework +name: testarch-framework +description: "Initialize production-ready test framework architecture (Playwright or Cypress) with fixtures, helpers, and configuration" +author: "BMad" + +# Critical variables from config +config_source: "{project-root}/_bmad/bmm/config.yaml" +output_folder: "{config_source}:output_folder" +user_name: "{config_source}:user_name" +communication_language: "{config_source}:communication_language" +document_output_language: "{config_source}:document_output_language" +date: system-generated + +# Workflow components +installed_path: "{project-root}/_bmad/bmm/workflows/testarch/framework" +instructions: "{installed_path}/instructions.md" +validation: "{installed_path}/checklist.md" + +# Variables and inputs +variables: + test_dir: "{project-root}/tests" # Root test directory + use_typescript: true # Prefer TypeScript configuration + framework_preference: "auto" # auto, playwright, cypress - user can override auto-detection + project_size: "auto" # auto, small, large - influences framework recommendation + +# Output configuration +default_output_file: "{test_dir}/README.md" # Main deliverable is test setup README + +# Required tools +required_tools: + - read_file # Read package.json, existing configs + - write_file # Create config files, helpers, fixtures, tests + - create_directory # Create test directory structure + - list_files # Check for existing framework + - search_repo # Find architecture docs + +tags: + - qa + - setup + - test-architect + - framework + - initialization + +execution_hints: + interactive: false # Minimize prompts; auto-detect when possible + autonomous: true # Proceed without user input unless blocked + iterative: true + +web_bundle: false diff --git a/src/bmm/workflows/testarch/nfr-assess/checklist.md b/src/bmm/workflows/testarch/nfr-assess/checklist.md new file mode 100644 index 00000000..1e76f366 --- /dev/null +++ b/src/bmm/workflows/testarch/nfr-assess/checklist.md @@ -0,0 +1,407 @@ +# Non-Functional Requirements Assessment - Validation Checklist + +**Workflow:** `testarch-nfr` +**Purpose:** Ensure comprehensive and evidence-based NFR assessment with actionable recommendations + +--- + +Note: `nfr-assess` evaluates existing evidence; it does not run tests or CI workflows. 
+ +## Prerequisites Validation + +- [ ] Implementation is deployed and accessible for evaluation +- [ ] Evidence sources are available (test results, metrics, logs, CI results) +- [ ] NFR categories are determined (performance, security, reliability, maintainability, custom) +- [ ] Evidence directories exist and are accessible (`test_results_dir`, `metrics_dir`, `logs_dir`) +- [ ] Knowledge base is loaded (nfr-criteria, ci-burn-in, test-quality) + +--- + +## Context Loading + +- [ ] Tech-spec.md loaded successfully (if available) +- [ ] PRD.md loaded (if available) +- [ ] Story file loaded (if applicable) +- [ ] Relevant knowledge fragments loaded from `tea-index.csv`: + - [ ] `nfr-criteria.md` + - [ ] `ci-burn-in.md` + - [ ] `test-quality.md` + - [ ] `playwright-config.md` (if using Playwright) + +--- + +## NFR Categories and Thresholds + +### Performance + +- [ ] Response time threshold defined or marked as UNKNOWN +- [ ] Throughput threshold defined or marked as UNKNOWN +- [ ] Resource usage thresholds defined or marked as UNKNOWN +- [ ] Scalability requirements defined or marked as UNKNOWN + +### Security + +- [ ] Authentication requirements defined or marked as UNKNOWN +- [ ] Authorization requirements defined or marked as UNKNOWN +- [ ] Data protection requirements defined or marked as UNKNOWN +- [ ] Vulnerability management thresholds defined or marked as UNKNOWN +- [ ] Compliance requirements identified (GDPR, HIPAA, PCI-DSS, etc.) + +### Reliability + +- [ ] Availability (uptime) threshold defined or marked as UNKNOWN +- [ ] Error rate threshold defined or marked as UNKNOWN +- [ ] MTTR (Mean Time To Recovery) threshold defined or marked as UNKNOWN +- [ ] Fault tolerance requirements defined or marked as UNKNOWN +- [ ] Disaster recovery requirements defined (RTO, RPO) or marked as UNKNOWN + +### Maintainability + +- [ ] Test coverage threshold defined or marked as UNKNOWN +- [ ] Code quality threshold defined or marked as UNKNOWN +- [ ] Technical debt threshold defined or marked as UNKNOWN +- [ ] Documentation completeness threshold defined or marked as UNKNOWN + +### Custom NFR Categories (if applicable) + +- [ ] Custom NFR category 1: Thresholds defined or marked as UNKNOWN +- [ ] Custom NFR category 2: Thresholds defined or marked as UNKNOWN +- [ ] Custom NFR category 3: Thresholds defined or marked as UNKNOWN + +--- + +## Evidence Gathering + +### Performance Evidence + +- [ ] Load test results collected (JMeter, k6, Gatling, etc.) +- [ ] Application metrics collected (response times, throughput, resource usage) +- [ ] APM data collected (New Relic, Datadog, Dynatrace, etc.) +- [ ] Lighthouse reports collected (if web app) +- [ ] Playwright performance traces collected (if applicable) + +### Security Evidence + +- [ ] SAST results collected (SonarQube, Checkmarx, Veracode, etc.) +- [ ] DAST results collected (OWASP ZAP, Burp Suite, etc.) 
+- [ ] Dependency scanning results collected (Snyk, Dependabot, npm audit) +- [ ] Penetration test reports collected (if available) +- [ ] Security audit logs collected +- [ ] Compliance audit results collected (if applicable) + +### Reliability Evidence + +- [ ] Uptime monitoring data collected (Pingdom, UptimeRobot, StatusCake) +- [ ] Error logs collected +- [ ] Error rate metrics collected +- [ ] CI burn-in results collected (stability over time) +- [ ] Chaos engineering test results collected (if available) +- [ ] Failover/recovery test results collected (if available) +- [ ] Incident reports and postmortems collected (if applicable) + +### Maintainability Evidence + +- [ ] Code coverage reports collected (Istanbul, NYC, c8, JaCoCo) +- [ ] Static analysis results collected (ESLint, SonarQube, CodeClimate) +- [ ] Technical debt metrics collected +- [ ] Documentation audit results collected +- [ ] Test review report collected (from test-review workflow, if available) +- [ ] Git metrics collected (code churn, commit frequency, etc.) + +--- + +## NFR Assessment with Deterministic Rules + +### Performance Assessment + +- [ ] Response time assessed against threshold +- [ ] Throughput assessed against threshold +- [ ] Resource usage assessed against threshold +- [ ] Scalability assessed against requirements +- [ ] Status classified (PASS/CONCERNS/FAIL) with justification +- [ ] Evidence source documented (file path, metric name) + +### Security Assessment + +- [ ] Authentication strength assessed against requirements +- [ ] Authorization controls assessed against requirements +- [ ] Data protection assessed against requirements +- [ ] Vulnerability management assessed against thresholds +- [ ] Compliance assessed against requirements +- [ ] Status classified (PASS/CONCERNS/FAIL) with justification +- [ ] Evidence source documented (file path, scan result) + +### Reliability Assessment + +- [ ] Availability (uptime) assessed against threshold +- [ ] Error rate assessed against threshold +- [ ] MTTR assessed against threshold +- [ ] Fault tolerance assessed against requirements +- [ ] Disaster recovery assessed against requirements (RTO, RPO) +- [ ] CI burn-in assessed (stability over time) +- [ ] Status classified (PASS/CONCERNS/FAIL) with justification +- [ ] Evidence source documented (file path, monitoring data) + +### Maintainability Assessment + +- [ ] Test coverage assessed against threshold +- [ ] Code quality assessed against threshold +- [ ] Technical debt assessed against threshold +- [ ] Documentation completeness assessed against threshold +- [ ] Test quality assessed (from test-review, if available) +- [ ] Status classified (PASS/CONCERNS/FAIL) with justification +- [ ] Evidence source documented (file path, coverage report) + +### Custom NFR Assessment (if applicable) + +- [ ] Custom NFR 1 assessed against threshold with justification +- [ ] Custom NFR 2 assessed against threshold with justification +- [ ] Custom NFR 3 assessed against threshold with justification + +--- + +## Status Classification Validation + +### PASS Criteria Verified + +- [ ] Evidence exists for PASS status +- [ ] Evidence meets or exceeds threshold +- [ ] No concerns flagged in evidence +- [ ] Quality is acceptable + +### CONCERNS Criteria Verified + +- [ ] Threshold is UNKNOWN (documented) OR +- [ ] Evidence is MISSING or INCOMPLETE (documented) OR +- [ ] Evidence is close to threshold (within 10%, documented) OR +- [ ] Evidence shows intermittent issues (documented) + +### FAIL Criteria Verified + +- [ ] 
Evidence exists BUT does not meet threshold (documented) OR +- [ ] Critical evidence is MISSING (documented) OR +- [ ] Evidence shows consistent failures (documented) OR +- [ ] Quality is unacceptable (documented) + +### No Threshold Guessing + +- [ ] All thresholds are either defined or marked as UNKNOWN +- [ ] No thresholds were guessed or inferred +- [ ] All UNKNOWN thresholds result in CONCERNS status + +--- + +## Quick Wins and Recommended Actions + +### Quick Wins Identified + +- [ ] Low-effort, high-impact improvements identified for CONCERNS/FAIL +- [ ] Configuration changes (no code changes) identified +- [ ] Optimization opportunities identified (caching, indexing, compression) +- [ ] Monitoring additions identified (detect issues before failures) + +### Recommended Actions + +- [ ] Specific remediation steps provided (not generic advice) +- [ ] Priority assigned (CRITICAL, HIGH, MEDIUM, LOW) +- [ ] Estimated effort provided (hours, days) +- [ ] Owner suggestions provided (dev, ops, security) + +### Monitoring Hooks + +- [ ] Performance monitoring suggested (APM, synthetic monitoring) +- [ ] Error tracking suggested (Sentry, Rollbar, error logs) +- [ ] Security monitoring suggested (intrusion detection, audit logs) +- [ ] Alerting thresholds suggested (notify before breach) + +### Fail-Fast Mechanisms + +- [ ] Circuit breakers suggested for reliability +- [ ] Rate limiting suggested for performance +- [ ] Validation gates suggested for security +- [ ] Smoke tests suggested for maintainability + +--- + +## Deliverables Generated + +### NFR Assessment Report + +- [ ] File created at `{output_folder}/nfr-assessment.md` +- [ ] Template from `nfr-report-template.md` used +- [ ] Executive summary included (overall status, critical issues) +- [ ] Assessment by category included (performance, security, reliability, maintainability) +- [ ] Evidence for each NFR documented +- [ ] Status classifications documented (PASS/CONCERNS/FAIL) +- [ ] Findings summary included (PASS count, CONCERNS count, FAIL count) +- [ ] Quick wins section included +- [ ] Recommended actions section included +- [ ] Evidence gaps checklist included + +### Gate YAML Snippet (if enabled) + +- [ ] YAML snippet generated +- [ ] Date included +- [ ] Categories status included (performance, security, reliability, maintainability) +- [ ] Overall status included (PASS/CONCERNS/FAIL) +- [ ] Issue counts included (critical, high, medium, concerns) +- [ ] Blockers flag included (true/false) +- [ ] Recommendations included + +### Evidence Checklist (if enabled) + +- [ ] All NFRs with MISSING or INCOMPLETE evidence listed +- [ ] Owners assigned for evidence collection +- [ ] Suggested evidence sources provided +- [ ] Deadlines set for evidence collection + +### Updated Story File (if enabled and requested) + +- [ ] "NFR Assessment" section added to story markdown +- [ ] Link to NFR assessment report included +- [ ] Overall status and critical issues included +- [ ] Gate status included + +--- + +## Quality Assurance + +### Accuracy Checks + +- [ ] All NFR categories assessed (none skipped) +- [ ] All thresholds documented (defined or UNKNOWN) +- [ ] All evidence sources documented (file paths, metric names) +- [ ] Status classifications are deterministic and consistent +- [ ] No false positives (status correctly assigned) +- [ ] No false negatives (all issues identified) + +### Completeness Checks + +- [ ] All NFR categories covered (performance, security, reliability, maintainability, custom) +- [ ] All evidence sources 
checked (test results, metrics, logs, CI results) +- [ ] All status types used appropriately (PASS, CONCERNS, FAIL) +- [ ] All NFRs with CONCERNS/FAIL have recommendations +- [ ] All evidence gaps have owners and deadlines + +### Actionability Checks + +- [ ] Recommendations are specific (not generic) +- [ ] Remediation steps are clear and actionable +- [ ] Priorities are assigned (CRITICAL, HIGH, MEDIUM, LOW) +- [ ] Effort estimates are provided (hours, days) +- [ ] Owners are suggested (dev, ops, security) + +--- + +## Integration with BMad Artifacts + +### With tech-spec.md + +- [ ] Tech spec loaded for NFR requirements and thresholds +- [ ] Performance targets extracted +- [ ] Security requirements extracted +- [ ] Reliability SLAs extracted +- [ ] Architectural decisions considered + +### With test-design.md + +- [ ] Test design loaded for NFR test plan +- [ ] Test priorities referenced (P0/P1/P2/P3) +- [ ] Assessment aligned with planned NFR validation + +### With PRD.md + +- [ ] PRD loaded for product-level NFR context +- [ ] User experience goals considered +- [ ] Unstated requirements checked +- [ ] Product-level SLAs referenced + +--- + +## Quality Gates Validation + +### Release Blocker (FAIL) + +- [ ] Critical NFR status checked (security, reliability) +- [ ] Performance failures assessed for user impact +- [ ] Release blocker flagged if critical NFR has FAIL status + +### PR Blocker (HIGH CONCERNS) + +- [ ] High-priority NFR status checked +- [ ] Multiple CONCERNS assessed +- [ ] PR blocker flagged if HIGH priority issues exist + +### Warning (CONCERNS) + +- [ ] Any NFR with CONCERNS status flagged +- [ ] Missing or incomplete evidence documented +- [ ] Warning issued to address before next release + +### Pass (PASS) + +- [ ] All NFRs have PASS status +- [ ] No blockers or concerns exist +- [ ] Ready for release confirmed + +--- + +## Non-Prescriptive Validation + +- [ ] NFR categories adapted to team needs +- [ ] Thresholds appropriate for project context +- [ ] Assessment criteria customized as needed +- [ ] Teams can extend with custom NFR categories +- [ ] Integration with external tools supported (New Relic, Datadog, SonarQube, JIRA) + +--- + +## Documentation and Communication + +- [ ] NFR assessment report is readable and well-formatted +- [ ] Tables render correctly in markdown +- [ ] Code blocks have proper syntax highlighting +- [ ] Links are valid and accessible +- [ ] Recommendations are clear and prioritized +- [ ] Overall status is prominent and unambiguous +- [ ] Executive summary provides quick understanding + +--- + +## Final Validation + +- [ ] All prerequisites met +- [ ] All NFR categories assessed with evidence (or gaps documented) +- [ ] No thresholds were guessed (all defined or UNKNOWN) +- [ ] Status classifications are deterministic and justified +- [ ] Quick wins identified for all CONCERNS/FAIL +- [ ] Recommended actions are specific and actionable +- [ ] Evidence gaps documented with owners and deadlines +- [ ] NFR assessment report generated and saved +- [ ] Gate YAML snippet generated (if enabled) +- [ ] Evidence checklist generated (if enabled) +- [ ] Workflow completed successfully + +--- + +## Sign-Off + +**NFR Assessment Status:** + +- [ ] ✅ PASS - All NFRs meet requirements, ready for release +- [ ] ⚠️ CONCERNS - Some NFRs have concerns, address before next release +- [ ] ❌ FAIL - Critical NFRs not met, BLOCKER for release + +**Next Actions:** + +- If PASS ✅: Proceed to `*gate` workflow or release +- If CONCERNS ⚠️: Address HIGH/CRITICAL 
issues, re-run `*nfr-assess` +- If FAIL ❌: Resolve FAIL status NFRs, re-run `*nfr-assess` + +**Critical Issues:** {COUNT} +**High Priority Issues:** {COUNT} +**Concerns:** {COUNT} + +--- + + diff --git a/src/bmm/workflows/testarch/nfr-assess/instructions.md b/src/bmm/workflows/testarch/nfr-assess/instructions.md new file mode 100644 index 00000000..f23e6b10 --- /dev/null +++ b/src/bmm/workflows/testarch/nfr-assess/instructions.md @@ -0,0 +1,726 @@ +# Non-Functional Requirements Assessment - Instructions v4.0 + +**Workflow:** `testarch-nfr` +**Purpose:** Assess non-functional requirements (performance, security, reliability, maintainability) before release with evidence-based validation +**Agent:** Test Architect (TEA) +**Format:** Pure Markdown v4.0 (no XML blocks) + +--- + +## Overview + +This workflow performs a comprehensive assessment of non-functional requirements (NFRs) to validate that the implementation meets performance, security, reliability, and maintainability standards before release. It uses evidence-based validation with deterministic PASS/CONCERNS/FAIL rules and provides actionable recommendations for remediation. + +**Key Capabilities:** + +- Assess multiple NFR categories (performance, security, reliability, maintainability, custom) +- Validate NFRs against defined thresholds from tech specs, PRD, or defaults +- Classify status deterministically (PASS/CONCERNS/FAIL) based on evidence +- Never guess thresholds - mark as CONCERNS if unknown +- Generate gate-ready YAML snippets for CI/CD integration +- Provide quick wins and recommended actions for remediation +- Create evidence checklists for gaps + +--- + +## Prerequisites + +**Required:** + +- Implementation deployed locally or accessible for evaluation +- Evidence sources available (test results, metrics, logs, CI results) + +**Recommended:** + +- NFR requirements defined in tech-spec.md, PRD.md, or story +- Test results from performance, security, reliability tests +- Application metrics (response times, error rates, throughput) +- CI/CD pipeline results for burn-in validation + +**Halt Conditions:** + +- If NFR targets are undefined and cannot be obtained, halt and request definition +- If implementation is not accessible for evaluation, halt and request deployment + +--- + +## Workflow Steps + +### Step 1: Load Context and Knowledge Base + +**Actions:** + +1. Load relevant knowledge fragments from `{project-root}/_bmad/bmm/testarch/tea-index.csv`: + - `adr-quality-readiness-checklist.md` - 8-category 29-criteria NFR framework (testability, test data, scalability, DR, security, monitorability, QoS/QoE, deployability, ~450 lines) + - `ci-burn-in.md` - CI/CD burn-in patterns for reliability validation (10-iteration detection, sharding, selective execution, 678 lines, 4 examples) + - `test-quality.md` - Test quality expectations for maintainability (deterministic, isolated, explicit assertions, length/time limits, 658 lines, 5 examples) + - `playwright-config.md` - Performance configuration patterns: parallelization, timeout standards, artifact output (722 lines, 5 examples) + - `error-handling.md` - Reliability validation patterns: scoped exceptions, retry validation, telemetry logging, graceful degradation (736 lines, 4 examples) + +2. Read story file (if provided): + - Extract NFR requirements + - Identify specific thresholds or SLAs + - Note any custom NFR categories + +3. 
Read related BMad artifacts (if available): + - `tech-spec.md` - Technical NFR requirements and targets + - `PRD.md` - Product-level NFR context (user expectations) + - `test-design.md` - NFR test plan and priorities + +**Output:** Complete understanding of NFR targets, evidence sources, and validation criteria + +--- + +### Step 2: Identify NFR Categories and Thresholds + +**Actions:** + +1. Determine which NFR categories to assess using ADR Quality Readiness Checklist (8 standard categories): + - **1. Testability & Automation**: Isolation, headless interaction, state control, sample requests (4 criteria) + - **2. Test Data Strategy**: Segregation, generation, teardown (3 criteria) + - **3. Scalability & Availability**: Statelessness, bottlenecks, SLA definitions, circuit breakers (4 criteria) + - **4. Disaster Recovery**: RTO/RPO, failover, backups (3 criteria) + - **5. Security**: AuthN/AuthZ, encryption, secrets, input validation (4 criteria) + - **6. Monitorability, Debuggability & Manageability**: Tracing, logs, metrics, config (4 criteria) + - **7. QoS & QoE**: Latency, throttling, perceived performance, degradation (4 criteria) + - **8. Deployability**: Zero downtime, backward compatibility, rollback (3 criteria) + +2. Add custom NFR categories if specified (e.g., accessibility, internationalization, compliance) beyond the 8 standard categories + +3. Gather thresholds for each NFR: + - From tech-spec.md (primary source) + - From PRD.md (product-level SLAs) + - From story file (feature-specific requirements) + - From workflow variables (default thresholds) + - Mark thresholds as UNKNOWN if not defined + +4. Never guess thresholds - if a threshold is unknown, mark the NFR as CONCERNS + +**Output:** Complete list of NFRs to assess with defined (or UNKNOWN) thresholds + +--- + +### Step 3: Gather Evidence + +**Actions:** + +1. For each NFR category, discover evidence sources: + + **Performance Evidence:** + - Load test results (JMeter, k6, Lighthouse) + - Application metrics (response times, throughput, resource usage) + - Performance monitoring data (New Relic, Datadog, APM) + - Playwright performance traces (if applicable) + + **Security Evidence:** + - Security scan results (SAST, DAST, dependency scanning) + - Authentication/authorization test results + - Penetration test reports + - Vulnerability assessment reports + - Compliance audit results + + **Reliability Evidence:** + - Error logs and error rates + - Uptime monitoring data + - Chaos engineering test results + - Failover/recovery test results + - CI burn-in results (stability over time) + + **Maintainability Evidence:** + - Code coverage reports (Istanbul, NYC, c8) + - Static analysis results (ESLint, SonarQube) + - Technical debt metrics + - Documentation completeness + - Test quality assessment (from test-review workflow) + +2. Read relevant files from evidence directories: + - `{test_results_dir}` for test execution results + - `{metrics_dir}` for application metrics + - `{logs_dir}` for application logs + - CI/CD pipeline results (if `include_ci_results` is true) + +3. Mark NFRs without evidence as "NO EVIDENCE" - never infer or assume + +**Output:** Comprehensive evidence inventory for each NFR + +--- + +### Step 4: Assess NFRs with Deterministic Rules + +**Actions:** + +1. 
For each NFR, apply deterministic PASS/CONCERNS/FAIL rules: + + **PASS Criteria:** + - Evidence exists AND meets defined threshold + - No concerns flagged in evidence + - Example: Response time is 350ms (threshold: 500ms) → PASS + + **CONCERNS Criteria:** + - Threshold is UNKNOWN (not defined) + - Evidence is MISSING or INCOMPLETE + - Evidence is close to threshold (within 10%) + - Evidence shows intermittent issues + - Example: Response time is 480ms (threshold: 500ms, 96% of threshold) → CONCERNS + + **FAIL Criteria:** + - Evidence exists BUT does not meet threshold + - Critical evidence is MISSING + - Evidence shows consistent failures + - Example: Response time is 750ms (threshold: 500ms) → FAIL + +2. Document findings for each NFR: + - Status (PASS/CONCERNS/FAIL) + - Evidence source (file path, test name, metric name) + - Actual value vs threshold + - Justification for status classification + +3. Classify severity based on category: + - **CRITICAL**: Security failures, reliability failures (affect users immediately) + - **HIGH**: Performance failures, maintainability failures (affect users soon) + - **MEDIUM**: Concerns without failures (may affect users eventually) + - **LOW**: Missing evidence for non-critical NFRs + +**Output:** Complete NFR assessment with deterministic status classifications + +--- + +### Step 5: Identify Quick Wins and Recommended Actions + +**Actions:** + +1. For each NFR with CONCERNS or FAIL status, identify quick wins: + - Low-effort, high-impact improvements + - Configuration changes (no code changes needed) + - Optimization opportunities (caching, indexing, compression) + - Monitoring additions (detect issues before they become failures) + +2. Provide recommended actions for each issue: + - Specific steps to remediate (not generic advice) + - Priority (CRITICAL, HIGH, MEDIUM, LOW) + - Estimated effort (hours, days) + - Owner suggestion (dev, ops, security) + +3. Suggest monitoring hooks for gaps: + - Add performance monitoring (APM, synthetic monitoring) + - Add error tracking (Sentry, Rollbar, error logs) + - Add security monitoring (intrusion detection, audit logs) + - Add alerting thresholds (notify before thresholds are breached) + +4. Suggest fail-fast mechanisms: + - Add circuit breakers for reliability + - Add rate limiting for performance + - Add validation gates for security + - Add smoke tests for maintainability + +**Output:** Actionable remediation plan with prioritized recommendations + +--- + +### Step 6: Generate Deliverables + +**Actions:** + +1. Create NFR assessment markdown file: + - Use template from `nfr-report-template.md` + - Include executive summary (overall status, critical issues) + - Add NFR-by-NFR assessment (status, evidence, thresholds) + - Add findings summary (PASS count, CONCERNS count, FAIL count) + - Add quick wins section + - Add recommended actions section + - Add evidence gaps checklist + - Save to `{output_folder}/nfr-assessment.md` + +2. Generate gate YAML snippet (if enabled): + + ```yaml + nfr_assessment: + date: '2025-10-14' + categories: + performance: 'PASS' + security: 'CONCERNS' + reliability: 'PASS' + maintainability: 'PASS' + overall_status: 'CONCERNS' + critical_issues: 0 + high_priority_issues: 1 + concerns: 2 + blockers: false + ``` + +3. Generate evidence checklist (if enabled): + - List all NFRs with MISSING or INCOMPLETE evidence + - Assign owners for evidence collection + - Suggest evidence sources (tests, metrics, logs) + - Set deadlines for evidence collection + +4. 
Update story file (if enabled and requested): + - Add "NFR Assessment" section to story markdown + - Link to NFR assessment report + - Include overall status and critical issues + - Add gate status + +**Output:** Complete NFR assessment documentation ready for review and CI/CD integration + +--- + +## Non-Prescriptive Approach + +**Minimal Examples:** This workflow provides principles and patterns, not rigid templates. Teams should adapt NFR categories, thresholds, and assessment criteria to their needs. + +**Key Patterns to Follow:** + +- Use evidence-based validation (no guessing or inference) +- Apply deterministic rules (consistent PASS/CONCERNS/FAIL classification) +- Never guess thresholds (mark as CONCERNS if unknown) +- Provide actionable recommendations (specific steps, not generic advice) +- Generate gate-ready artifacts (YAML snippets for CI/CD) + +**Extend as Needed:** + +- Add custom NFR categories (accessibility, internationalization, compliance) +- Integrate with external tools (New Relic, Datadog, SonarQube, JIRA) +- Add custom thresholds and rules +- Link to external assessment systems + +--- + +## NFR Categories and Criteria + +### Performance + +**Criteria:** + +- Response time (p50, p95, p99 percentiles) +- Throughput (requests per second, transactions per second) +- Resource usage (CPU, memory, disk, network) +- Scalability (horizontal, vertical) + +**Thresholds (Default):** + +- Response time p95: 500ms +- Throughput: 100 RPS +- CPU usage: < 70% average +- Memory usage: < 80% max + +**Evidence Sources:** + +- Load test results (JMeter, k6, Gatling) +- APM data (New Relic, Datadog, Dynatrace) +- Lighthouse reports (for web apps) +- Playwright performance traces + +--- + +### Security + +**Criteria:** + +- Authentication (login security, session management) +- Authorization (access control, permissions) +- Data protection (encryption, PII handling) +- Vulnerability management (SAST, DAST, dependency scanning) +- Compliance (GDPR, HIPAA, PCI-DSS) + +**Thresholds (Default):** + +- Security score: >= 85/100 +- Critical vulnerabilities: 0 +- High vulnerabilities: < 3 +- Authentication strength: MFA enabled + +**Evidence Sources:** + +- SAST results (SonarQube, Checkmarx, Veracode) +- DAST results (OWASP ZAP, Burp Suite) +- Dependency scanning (Snyk, Dependabot, npm audit) +- Penetration test reports +- Security audit logs + +--- + +### Reliability + +**Criteria:** + +- Availability (uptime percentage) +- Error handling (graceful degradation, error recovery) +- Fault tolerance (redundancy, failover) +- Disaster recovery (backup, restore, RTO/RPO) +- Stability (CI burn-in, chaos engineering) + +**Thresholds (Default):** + +- Uptime: >= 99.9% (three nines) +- Error rate: < 0.1% (1 in 1000 requests) +- MTTR (Mean Time To Recovery): < 15 minutes +- CI burn-in: 100 consecutive successful runs + +**Evidence Sources:** + +- Uptime monitoring (Pingdom, UptimeRobot, StatusCake) +- Error logs and error rates +- CI burn-in results (see `ci-burn-in.md`) +- Chaos engineering test results (Chaos Monkey, Gremlin) +- Incident reports and postmortems + +--- + +### Maintainability + +**Criteria:** + +- Code quality (complexity, duplication, code smells) +- Test coverage (unit, integration, E2E) +- Documentation (code comments, README, architecture docs) +- Technical debt (debt ratio, code churn) +- Test quality (from test-review workflow) + +**Thresholds (Default):** + +- Test coverage: >= 80% +- Code quality score: >= 85/100 +- Technical debt ratio: < 5% +- Documentation completeness: >= 90% 
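+
+A minimal sketch of one way to encode these defaults as data, so an assessment step can check them mechanically (the field names are illustrative, not part of the workflow):
+
+```typescript
+// Illustrative encoding of the default maintainability thresholds above.
+interface Threshold {
+  value: number;
+  unit: string;
+  comparator: '>=' | '<=';
+}
+
+const maintainabilityDefaults: Record<string, Threshold> = {
+  testCoverage: { value: 80, unit: '%', comparator: '>=' },
+  codeQualityScore: { value: 85, unit: 'points', comparator: '>=' },
+  technicalDebtRatio: { value: 5, unit: '%', comparator: '<=' },
+  documentationCompleteness: { value: 90, unit: '%', comparator: '>=' },
+};
+```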
+ +**Evidence Sources:** + +- Coverage reports (Istanbul, NYC, c8, JaCoCo) +- Static analysis (ESLint, SonarQube, CodeClimate) +- Documentation audit (manual or automated) +- Test review report (from test-review workflow) +- Git metrics (code churn, commit frequency) + +--- + +## Deterministic Assessment Rules + +### PASS Rules + +- Evidence exists +- Evidence meets or exceeds threshold +- No concerns flagged +- Quality is acceptable + +**Example:** + +```markdown +NFR: Response Time p95 +Threshold: 500ms +Evidence: Load test result shows 350ms p95 +Status: PASS ✅ +``` + +--- + +### CONCERNS Rules + +- Threshold is UNKNOWN +- Evidence is MISSING or INCOMPLETE +- Evidence is close to threshold (within 10%) +- Evidence shows intermittent issues +- Quality is marginal + +**Example:** + +```markdown +NFR: Response Time p95 +Threshold: 500ms +Evidence: Load test result shows 480ms p95 (96% of threshold) +Status: CONCERNS ⚠️ +Recommendation: Optimize before production - very close to threshold +``` + +--- + +### FAIL Rules + +- Evidence exists BUT does not meet threshold +- Critical evidence is MISSING +- Evidence shows consistent failures +- Quality is unacceptable + +**Example:** + +```markdown +NFR: Response Time p95 +Threshold: 500ms +Evidence: Load test result shows 750ms p95 (150% of threshold) +Status: FAIL ❌ +Recommendation: BLOCKER - optimize performance before release +``` + +--- + +## Integration with BMad Artifacts + +### With tech-spec.md + +- Primary source for NFR requirements and thresholds +- Load performance targets, security requirements, reliability SLAs +- Use architectural decisions to understand NFR trade-offs + +### With test-design.md + +- Understand NFR test plan and priorities +- Reference test priorities (P0/P1/P2/P3) for severity classification +- Align assessment with planned NFR validation + +### With PRD.md + +- Understand product-level NFR expectations +- Verify NFRs align with user experience goals +- Check for unstated NFR requirements (implied by product goals) + +--- + +## Quality Gates + +### Release Blocker (FAIL) + +- Critical NFR has FAIL status (security, reliability) +- Performance failure affects user experience severely +- Do not release until FAIL is resolved + +### PR Blocker (HIGH CONCERNS) + +- High-priority NFR has FAIL status +- Multiple CONCERNS exist +- Block PR merge until addressed + +### Warning (CONCERNS) + +- Any NFR has CONCERNS status +- Evidence is missing or incomplete +- Address before next release + +### Pass (PASS) + +- All NFRs have PASS status +- No blockers or concerns +- Ready for release + +--- + +## Example NFR Assessment + +````markdown +# NFR Assessment - Story 1.3 + +**Feature:** User Authentication +**Date:** 2025-10-14 +**Overall Status:** CONCERNS ⚠️ (1 HIGH issue) + +## Executive Summary + +**Assessment:** 3 PASS, 1 CONCERNS, 0 FAIL +**Blockers:** None +**High Priority Issues:** 1 (Security - MFA not enforced) +**Recommendation:** Address security concern before release + +## Performance Assessment + +### Response Time (p95) + +- **Status:** PASS ✅ +- **Threshold:** 500ms +- **Actual:** 320ms (64% of threshold) +- **Evidence:** Load test results (test-results/load-2025-10-14.json) +- **Findings:** Response time well below threshold across all percentiles + +### Throughput + +- **Status:** PASS ✅ +- **Threshold:** 100 RPS +- **Actual:** 250 RPS (250% of threshold) +- **Evidence:** Load test results (test-results/load-2025-10-14.json) +- **Findings:** System handles 2.5x target load without degradation + +## Security 
Assessment + +### Authentication Strength + +- **Status:** CONCERNS ⚠️ +- **Threshold:** MFA enabled for all users +- **Actual:** MFA optional (not enforced) +- **Evidence:** Security audit (security-audit-2025-10-14.md) +- **Findings:** MFA is implemented but not enforced by default +- **Recommendation:** HIGH - Enforce MFA for all new accounts, provide migration path for existing users + +### Data Protection + +- **Status:** PASS ✅ +- **Threshold:** PII encrypted at rest and in transit +- **Actual:** AES-256 at rest, TLS 1.3 in transit +- **Evidence:** Security scan (security-scan-2025-10-14.json) +- **Findings:** All PII properly encrypted + +## Reliability Assessment + +### Uptime + +- **Status:** PASS ✅ +- **Threshold:** 99.9% (three nines) +- **Actual:** 99.95% over 30 days +- **Evidence:** Uptime monitoring (uptime-report-2025-10-14.csv) +- **Findings:** Exceeds target with margin + +### Error Rate + +- **Status:** PASS ✅ +- **Threshold:** < 0.1% (1 in 1000) +- **Actual:** 0.05% (1 in 2000) +- **Evidence:** Error logs (logs/errors-2025-10.log) +- **Findings:** Error rate well below threshold + +## Maintainability Assessment + +### Test Coverage + +- **Status:** PASS ✅ +- **Threshold:** >= 80% +- **Actual:** 87% +- **Evidence:** Coverage report (coverage/lcov-report/index.html) +- **Findings:** Coverage exceeds threshold with good distribution + +### Code Quality + +- **Status:** PASS ✅ +- **Threshold:** >= 85/100 +- **Actual:** 92/100 +- **Evidence:** SonarQube analysis (sonarqube-report-2025-10-14.pdf) +- **Findings:** High code quality score with low technical debt + +## Quick Wins + +1. **Enforce MFA (Security)** - HIGH - 4 hours + - Add configuration flag to enforce MFA for new accounts + - No code changes needed, only config adjustment + +## Recommended Actions + +### Immediate (Before Release) + +1. **Enforce MFA for all new accounts** - HIGH - 4 hours - Security Team + - Add `ENFORCE_MFA=true` to production config + - Update user onboarding flow to require MFA setup + - Test MFA enforcement in staging environment + +### Short-term (Next Sprint) + +1. 
**Migrate existing users to MFA** - MEDIUM - 3 days - Product + Engineering
+   - Design migration UX (prompt, incentives, deadline)
+   - Implement migration flow with grace period
+   - Communicate migration to existing users
+
+## Evidence Gaps
+
+- [ ] Chaos engineering test results (reliability)
+  - Owner: DevOps Team
+  - Deadline: 2025-10-21
+  - Suggested evidence: Run chaos monkey tests in staging
+
+- [ ] Penetration test report (security)
+  - Owner: Security Team
+  - Deadline: 2025-10-28
+  - Suggested evidence: Schedule third-party pentest
+
+## Gate YAML Snippet
+
+```yaml
+nfr_assessment:
+  date: '2025-10-14'
+  story_id: '1.3'
+  categories:
+    performance: 'PASS'
+    security: 'CONCERNS'
+    reliability: 'PASS'
+    maintainability: 'PASS'
+  overall_status: 'CONCERNS'
+  critical_issues: 0
+  high_priority_issues: 1
+  medium_priority_issues: 0
+  concerns: 1
+  blockers: false
+  recommendations:
+    - 'Enforce MFA for all new accounts (HIGH - 4 hours)'
+  evidence_gaps: 2
+```
+
+## Recommendations Summary
+
+- **Release Blocker:** None ✅
+- **High Priority:** 1 (Enforce MFA before release)
+- **Medium Priority:** 1 (Migrate existing users to MFA)
+- **Next Steps:** Address HIGH priority item, then proceed to gate workflow
+````
+
+---
+
+## Validation Checklist
+
+Before completing this workflow, verify:
+
+- ✅ All NFR categories assessed (performance, security, reliability, maintainability, custom)
+- ✅ Thresholds defined or marked as UNKNOWN
+- ✅ Evidence gathered for each NFR (or marked as MISSING)
+- ✅ Status classified deterministically (PASS/CONCERNS/FAIL)
+- ✅ No thresholds were guessed (marked as CONCERNS if unknown)
+- ✅ Quick wins identified for CONCERNS/FAIL
+- ✅ Recommended actions are specific and actionable
+- ✅ Evidence gaps documented with owners and deadlines
+- ✅ NFR assessment report generated and saved
+- ✅ Gate YAML snippet generated (if enabled)
+- ✅ Evidence checklist generated (if enabled)
+
+---
+
+## Notes
+
+- **Never Guess Thresholds:** If a threshold is unknown, mark as CONCERNS and recommend defining it
+- **Evidence-Based:** Every assessment must be backed by evidence (tests, metrics, logs, CI results)
+- **Deterministic Rules:** Use consistent PASS/CONCERNS/FAIL classification based on evidence
+- **Actionable Recommendations:** Provide specific steps, not generic advice
+- **Gate Integration:** Generate YAML snippets that can be consumed by CI/CD pipelines
+
+---
+
+## Troubleshooting
+
+### "NFR thresholds not defined"
+
+- Check tech-spec.md for NFR requirements
+- Check PRD.md for product-level SLAs
+- Check story file for feature-specific requirements
+- If thresholds truly unknown, mark as CONCERNS and recommend defining them
+
+### "No evidence found"
+
+- Check evidence directories (test-results, metrics, logs)
+- Check CI/CD pipeline for test results
+- If evidence truly missing, mark NFR as "NO EVIDENCE" and recommend generating it
+
+### "CONCERNS status but no threshold exceeded"
+
+- CONCERNS is correct when threshold is UNKNOWN or evidence is MISSING/INCOMPLETE
+- CONCERNS is also correct when evidence is close to threshold (within 10%)
+- Document why CONCERNS was assigned
+
+### "FAIL status blocks release"
+
+- This is intentional - FAIL means critical NFR not met
+- Recommend remediation actions with specific steps
+- Re-run assessment after remediation
+
+---
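+
+## Classification Sketch
+
+For teams that want to automate the deterministic rules from Step 4, a minimal sketch follows. It is illustrative only, not part of the shipped workflow, and it applies the within-10% refinement uniformly; teams may scope that refinement to latency-style NFRs only, as the example report above does.
+
+```typescript
+// Deterministic PASS/CONCERNS/FAIL classification (illustrative only).
+type Status = 'PASS' | 'CONCERNS' | 'FAIL';
+
+interface Measurement {
+  actual?: number; // measured value; undefined means no evidence
+  threshold?: number; // undefined means the threshold is UNKNOWN
+  lowerIsBetter: boolean; // true for latency or error rate, false for coverage or uptime
+}
+
+function classify({ actual, threshold, lowerIsBetter }: Measurement): Status {
+  // Never guess: an unknown threshold or missing evidence is CONCERNS, not PASS.
+  if (threshold === undefined || actual === undefined) return 'CONCERNS';
+  const meets = lowerIsBetter ? actual <= threshold : actual >= threshold;
+  if (!meets) return 'FAIL';
+  // Passing but within 10% of the threshold is still flagged as CONCERNS.
+  const margin = lowerIsBetter ? (threshold - actual) / threshold : (actual - threshold) / threshold;
+  return margin < 0.1 ? 'CONCERNS' : 'PASS';
+}
+
+// Example from Step 4: 480ms against a 500ms p95 threshold yields CONCERNS.
+// classify({ actual: 480, threshold: 500, lowerIsBetter: true });
+```
+
+---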
+
+## Related Workflows
+
+- **testarch-test-design** - Define NFR requirements and test plan
+- **testarch-framework** - Set up performance/security testing frameworks
+- **testarch-ci** - Configure CI/CD for NFR validation
+- **testarch-gate** - Use NFR assessment as input for quality gate decisions
+- **testarch-test-review** - Review test quality (maintainability NFR)
+
+---
diff --git a/src/bmm/workflows/testarch/nfr-assess/nfr-report-template.md b/src/bmm/workflows/testarch/nfr-assess/nfr-report-template.md
new file mode 100644
index 00000000..115ee969
--- /dev/null
+++ b/src/bmm/workflows/testarch/nfr-assess/nfr-report-template.md
@@ -0,0 +1,461 @@
+# NFR Assessment - {FEATURE_NAME}
+
+**Date:** {DATE}
+**Story:** {STORY_ID} (if applicable)
+**Overall Status:** {OVERALL_STATUS} {STATUS_ICON}
+
+---
+
+Note: This assessment summarizes existing evidence; it does not run tests or CI workflows.
+
+## Executive Summary
+
+**Assessment:** {PASS_COUNT} PASS, {CONCERNS_COUNT} CONCERNS, {FAIL_COUNT} FAIL
+
+**Blockers:** {BLOCKER_COUNT} {BLOCKER_DESCRIPTION}
+
+**High Priority Issues:** {HIGH_PRIORITY_COUNT} {HIGH_PRIORITY_DESCRIPTION}
+
+**Recommendation:** {OVERALL_RECOMMENDATION}
+
+---
+
+## Performance Assessment
+
+### Response Time (p95)
+
+- **Status:** {STATUS} {STATUS_ICON}
+- **Threshold:** {THRESHOLD_VALUE}
+- **Actual:** {ACTUAL_VALUE}
+- **Evidence:** {EVIDENCE_SOURCE}
+- **Findings:** {FINDINGS_DESCRIPTION}
+
+### Throughput
+
+- **Status:** {STATUS} {STATUS_ICON}
+- **Threshold:** {THRESHOLD_VALUE}
+- **Actual:** {ACTUAL_VALUE}
+- **Evidence:** {EVIDENCE_SOURCE}
+- **Findings:** {FINDINGS_DESCRIPTION}
+
+### Resource Usage
+
+- **CPU Usage**
+  - **Status:** {STATUS} {STATUS_ICON}
+  - **Threshold:** {THRESHOLD_VALUE}
+  - **Actual:** {ACTUAL_VALUE}
+  - **Evidence:** {EVIDENCE_SOURCE}
+
+- **Memory Usage**
+  - **Status:** {STATUS} {STATUS_ICON}
+  - **Threshold:** {THRESHOLD_VALUE}
+  - **Actual:** {ACTUAL_VALUE}
+  - **Evidence:** {EVIDENCE_SOURCE}
+
+### Scalability
+
+- **Status:** {STATUS} {STATUS_ICON}
+- **Threshold:** {THRESHOLD_DESCRIPTION}
+- **Actual:** {ACTUAL_DESCRIPTION}
+- **Evidence:** {EVIDENCE_SOURCE}
+- **Findings:** {FINDINGS_DESCRIPTION}
+
+---
+
+## Security Assessment
+
+### Authentication Strength
+
+- **Status:** {STATUS} {STATUS_ICON}
+- **Threshold:** {THRESHOLD_DESCRIPTION}
+- **Actual:** {ACTUAL_DESCRIPTION}
+- **Evidence:** {EVIDENCE_SOURCE}
+- **Findings:** {FINDINGS_DESCRIPTION}
+- **Recommendation:** {RECOMMENDATION} (if CONCERNS or FAIL)
+
+### Authorization Controls
+
+- **Status:** {STATUS} {STATUS_ICON}
+- **Threshold:** {THRESHOLD_DESCRIPTION}
+- **Actual:** {ACTUAL_DESCRIPTION}
+- **Evidence:** {EVIDENCE_SOURCE}
+- **Findings:** {FINDINGS_DESCRIPTION}
+
+### Data Protection
+
+- **Status:** {STATUS} {STATUS_ICON}
+- **Threshold:** {THRESHOLD_DESCRIPTION}
+- **Actual:** {ACTUAL_DESCRIPTION}
+- **Evidence:** {EVIDENCE_SOURCE}
+- **Findings:** {FINDINGS_DESCRIPTION}
+
+### Vulnerability Management
+
+- **Status:** {STATUS} {STATUS_ICON}
+- **Threshold:** {THRESHOLD_DESCRIPTION} (e.g., "0 critical, <3 high vulnerabilities")
+- **Actual:** {ACTUAL_DESCRIPTION} (e.g., "0 critical, 1 high, 5 medium vulnerabilities")
+- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Snyk scan results - scan-2025-10-14.json")
+- **Findings:** {FINDINGS_DESCRIPTION}
+
+### Compliance (if applicable)
+
+- **Status:** {STATUS} {STATUS_ICON}
+- **Standards:** {COMPLIANCE_STANDARDS} (e.g., "GDPR, HIPAA, PCI-DSS")
+- **Actual:** {ACTUAL_COMPLIANCE_STATUS}
+- **Evidence:** {EVIDENCE_SOURCE}
+- **Findings:** {FINDINGS_DESCRIPTION}
+
+---
+
+## Reliability Assessment
+
+### Availability (Uptime)
+
+- **Status:** {STATUS} {STATUS_ICON}
+- **Threshold:**
{THRESHOLD_VALUE} (e.g., "99.9%") +- **Actual:** {ACTUAL_VALUE} (e.g., "99.95%") +- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Uptime monitoring - uptime-report-2025-10-14.csv") +- **Findings:** {FINDINGS_DESCRIPTION} + +### Error Rate + +- **Status:** {STATUS} {STATUS_ICON} +- **Threshold:** {THRESHOLD_VALUE} (e.g., "<0.1%") +- **Actual:** {ACTUAL_VALUE} (e.g., "0.05%") +- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Error logs - logs/errors-2025-10.log") +- **Findings:** {FINDINGS_DESCRIPTION} + +### MTTR (Mean Time To Recovery) + +- **Status:** {STATUS} {STATUS_ICON} +- **Threshold:** {THRESHOLD_VALUE} (e.g., "<15 minutes") +- **Actual:** {ACTUAL_VALUE} (e.g., "12 minutes") +- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Incident reports - incidents/") +- **Findings:** {FINDINGS_DESCRIPTION} + +### Fault Tolerance + +- **Status:** {STATUS} {STATUS_ICON} +- **Threshold:** {THRESHOLD_DESCRIPTION} +- **Actual:** {ACTUAL_DESCRIPTION} +- **Evidence:** {EVIDENCE_SOURCE} +- **Findings:** {FINDINGS_DESCRIPTION} + +### CI Burn-In (Stability) + +- **Status:** {STATUS} {STATUS_ICON} +- **Threshold:** {THRESHOLD_VALUE} (e.g., "100 consecutive successful runs") +- **Actual:** {ACTUAL_VALUE} (e.g., "150 consecutive successful runs") +- **Evidence:** {EVIDENCE_SOURCE} (e.g., "CI burn-in results - ci-burn-in-2025-10-14.log") +- **Findings:** {FINDINGS_DESCRIPTION} + +### Disaster Recovery (if applicable) + +- **RTO (Recovery Time Objective)** + - **Status:** {STATUS} {STATUS_ICON} + - **Threshold:** {THRESHOLD_VALUE} + - **Actual:** {ACTUAL_VALUE} + - **Evidence:** {EVIDENCE_SOURCE} + +- **RPO (Recovery Point Objective)** + - **Status:** {STATUS} {STATUS_ICON} + - **Threshold:** {THRESHOLD_VALUE} + - **Actual:** {ACTUAL_VALUE} + - **Evidence:** {EVIDENCE_SOURCE} + +--- + +## Maintainability Assessment + +### Test Coverage + +- **Status:** {STATUS} {STATUS_ICON} +- **Threshold:** {THRESHOLD_VALUE} (e.g., ">=80%") +- **Actual:** {ACTUAL_VALUE} (e.g., "87%") +- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Coverage report - coverage/lcov-report/index.html") +- **Findings:** {FINDINGS_DESCRIPTION} + +### Code Quality + +- **Status:** {STATUS} {STATUS_ICON} +- **Threshold:** {THRESHOLD_VALUE} (e.g., ">=85/100") +- **Actual:** {ACTUAL_VALUE} (e.g., "92/100") +- **Evidence:** {EVIDENCE_SOURCE} (e.g., "SonarQube analysis - sonarqube-report-2025-10-14.pdf") +- **Findings:** {FINDINGS_DESCRIPTION} + +### Technical Debt + +- **Status:** {STATUS} {STATUS_ICON} +- **Threshold:** {THRESHOLD_VALUE} (e.g., "<5% debt ratio") +- **Actual:** {ACTUAL_VALUE} (e.g., "3.2% debt ratio") +- **Evidence:** {EVIDENCE_SOURCE} (e.g., "CodeClimate analysis - codeclimate-2025-10-14.json") +- **Findings:** {FINDINGS_DESCRIPTION} + +### Documentation Completeness + +- **Status:** {STATUS} {STATUS_ICON} +- **Threshold:** {THRESHOLD_VALUE} (e.g., ">=90%") +- **Actual:** {ACTUAL_VALUE} (e.g., "95%") +- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Documentation audit - docs-audit-2025-10-14.md") +- **Findings:** {FINDINGS_DESCRIPTION} + +### Test Quality (from test-review, if available) + +- **Status:** {STATUS} {STATUS_ICON} +- **Threshold:** {THRESHOLD_DESCRIPTION} +- **Actual:** {ACTUAL_DESCRIPTION} +- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Test review report - test-review-2025-10-14.md") +- **Findings:** {FINDINGS_DESCRIPTION} + +--- + +## Custom NFR Assessments (if applicable) + +### {CUSTOM_NFR_NAME_1} + +- **Status:** {STATUS} {STATUS_ICON} +- **Threshold:** {THRESHOLD_DESCRIPTION} +- **Actual:** {ACTUAL_DESCRIPTION} +- **Evidence:** {EVIDENCE_SOURCE} +- 
**Findings:** {FINDINGS_DESCRIPTION} + +### {CUSTOM_NFR_NAME_2} + +- **Status:** {STATUS} {STATUS_ICON} +- **Threshold:** {THRESHOLD_DESCRIPTION} +- **Actual:** {ACTUAL_DESCRIPTION} +- **Evidence:** {EVIDENCE_SOURCE} +- **Findings:** {FINDINGS_DESCRIPTION} + +--- + +## Quick Wins + +{QUICK_WIN_COUNT} quick wins identified for immediate implementation: + +1. **{QUICK_WIN_TITLE_1}** ({NFR_CATEGORY}) - {PRIORITY} - {ESTIMATED_EFFORT} + - {QUICK_WIN_DESCRIPTION} + - No code changes needed / Minimal code changes + +2. **{QUICK_WIN_TITLE_2}** ({NFR_CATEGORY}) - {PRIORITY} - {ESTIMATED_EFFORT} + - {QUICK_WIN_DESCRIPTION} + +--- + +## Recommended Actions + +### Immediate (Before Release) - CRITICAL/HIGH Priority + +1. **{ACTION_TITLE_1}** - {PRIORITY} - {ESTIMATED_EFFORT} - {OWNER} + - {ACTION_DESCRIPTION} + - {SPECIFIC_STEPS} + - {VALIDATION_CRITERIA} + +2. **{ACTION_TITLE_2}** - {PRIORITY} - {ESTIMATED_EFFORT} - {OWNER} + - {ACTION_DESCRIPTION} + - {SPECIFIC_STEPS} + - {VALIDATION_CRITERIA} + +### Short-term (Next Sprint) - MEDIUM Priority + +1. **{ACTION_TITLE_3}** - {PRIORITY} - {ESTIMATED_EFFORT} - {OWNER} + - {ACTION_DESCRIPTION} + +2. **{ACTION_TITLE_4}** - {PRIORITY} - {ESTIMATED_EFFORT} - {OWNER} + - {ACTION_DESCRIPTION} + +### Long-term (Backlog) - LOW Priority + +1. **{ACTION_TITLE_5}** - {PRIORITY} - {ESTIMATED_EFFORT} - {OWNER} + - {ACTION_DESCRIPTION} + +--- + +## Monitoring Hooks + +{MONITORING_HOOK_COUNT} monitoring hooks recommended to detect issues before failures: + +### Performance Monitoring + +- [ ] {MONITORING_TOOL_1} - {MONITORING_DESCRIPTION} + - **Owner:** {OWNER} + - **Deadline:** {DEADLINE} + +- [ ] {MONITORING_TOOL_2} - {MONITORING_DESCRIPTION} + - **Owner:** {OWNER} + - **Deadline:** {DEADLINE} + +### Security Monitoring + +- [ ] {MONITORING_TOOL_3} - {MONITORING_DESCRIPTION} + - **Owner:** {OWNER} + - **Deadline:** {DEADLINE} + +### Reliability Monitoring + +- [ ] {MONITORING_TOOL_4} - {MONITORING_DESCRIPTION} + - **Owner:** {OWNER} + - **Deadline:** {DEADLINE} + +### Alerting Thresholds + +- [ ] {ALERT_DESCRIPTION} - Notify when {THRESHOLD_CONDITION} + - **Owner:** {OWNER} + - **Deadline:** {DEADLINE} + +--- + +## Fail-Fast Mechanisms + +{FAIL_FAST_COUNT} fail-fast mechanisms recommended to prevent failures: + +### Circuit Breakers (Reliability) + +- [ ] {CIRCUIT_BREAKER_DESCRIPTION} + - **Owner:** {OWNER} + - **Estimated Effort:** {EFFORT} + +### Rate Limiting (Performance) + +- [ ] {RATE_LIMITING_DESCRIPTION} + - **Owner:** {OWNER} + - **Estimated Effort:** {EFFORT} + +### Validation Gates (Security) + +- [ ] {VALIDATION_GATE_DESCRIPTION} + - **Owner:** {OWNER} + - **Estimated Effort:** {EFFORT} + +### Smoke Tests (Maintainability) + +- [ ] {SMOKE_TEST_DESCRIPTION} + - **Owner:** {OWNER} + - **Estimated Effort:** {EFFORT} + +--- + +## Evidence Gaps + +{EVIDENCE_GAP_COUNT} evidence gaps identified - action required: + +- [ ] **{NFR_NAME_1}** ({NFR_CATEGORY}) + - **Owner:** {OWNER} + - **Deadline:** {DEADLINE} + - **Suggested Evidence:** {SUGGESTED_EVIDENCE_SOURCE} + - **Impact:** {IMPACT_DESCRIPTION} + +- [ ] **{NFR_NAME_2}** ({NFR_CATEGORY}) + - **Owner:** {OWNER} + - **Deadline:** {DEADLINE} + - **Suggested Evidence:** {SUGGESTED_EVIDENCE_SOURCE} + - **Impact:** {IMPACT_DESCRIPTION} + +--- + +## Findings Summary + +**Based on ADR Quality Readiness Checklist (8 categories, 29 criteria)** + +| Category | Criteria Met | PASS | CONCERNS | FAIL | Overall Status | +|----------|--------------|------|----------|------|----------------| +| 1. 
Testability & Automation | {T_MET}/4 | {T_PASS} | {T_CONCERNS} | {T_FAIL} | {T_STATUS} {T_ICON} | +| 2. Test Data Strategy | {TD_MET}/3 | {TD_PASS} | {TD_CONCERNS} | {TD_FAIL} | {TD_STATUS} {TD_ICON} | +| 3. Scalability & Availability | {SA_MET}/4 | {SA_PASS} | {SA_CONCERNS} | {SA_FAIL} | {SA_STATUS} {SA_ICON} | +| 4. Disaster Recovery | {DR_MET}/3 | {DR_PASS} | {DR_CONCERNS} | {DR_FAIL} | {DR_STATUS} {DR_ICON} | +| 5. Security | {SEC_MET}/4 | {SEC_PASS} | {SEC_CONCERNS} | {SEC_FAIL} | {SEC_STATUS} {SEC_ICON} | +| 6. Monitorability, Debuggability & Manageability | {MON_MET}/4 | {MON_PASS} | {MON_CONCERNS} | {MON_FAIL} | {MON_STATUS} {MON_ICON} | +| 7. QoS & QoE | {QOS_MET}/4 | {QOS_PASS} | {QOS_CONCERNS} | {QOS_FAIL} | {QOS_STATUS} {QOS_ICON} | +| 8. Deployability | {DEP_MET}/3 | {DEP_PASS} | {DEP_CONCERNS} | {DEP_FAIL} | {DEP_STATUS} {DEP_ICON} | +| **Total** | **{TOTAL_MET}/29** | **{TOTAL_PASS}** | **{TOTAL_CONCERNS}** | **{TOTAL_FAIL}** | **{OVERALL_STATUS} {OVERALL_ICON}** | + +**Criteria Met Scoring:** +- ≥26/29 (90%+) = Strong foundation +- 20-25/29 (69-86%) = Room for improvement +- <20/29 (<69%) = Significant gaps + +--- + +## Gate YAML Snippet + +```yaml +nfr_assessment: + date: '{DATE}' + story_id: '{STORY_ID}' + feature_name: '{FEATURE_NAME}' + adr_checklist_score: '{TOTAL_MET}/29' # ADR Quality Readiness Checklist + categories: + testability_automation: '{T_STATUS}' + test_data_strategy: '{TD_STATUS}' + scalability_availability: '{SA_STATUS}' + disaster_recovery: '{DR_STATUS}' + security: '{SEC_STATUS}' + monitorability: '{MON_STATUS}' + qos_qoe: '{QOS_STATUS}' + deployability: '{DEP_STATUS}' + overall_status: '{OVERALL_STATUS}' + critical_issues: { CRITICAL_COUNT } + high_priority_issues: { HIGH_COUNT } + medium_priority_issues: { MEDIUM_COUNT } + concerns: { CONCERNS_COUNT } + blockers: { BLOCKER_BOOLEAN } # true/false + quick_wins: { QUICK_WIN_COUNT } + evidence_gaps: { EVIDENCE_GAP_COUNT } + recommendations: + - '{RECOMMENDATION_1}' + - '{RECOMMENDATION_2}' + - '{RECOMMENDATION_3}' +``` + +--- + +## Related Artifacts + +- **Story File:** {STORY_FILE_PATH} (if applicable) +- **Tech Spec:** {TECH_SPEC_PATH} (if available) +- **PRD:** {PRD_PATH} (if available) +- **Test Design:** {TEST_DESIGN_PATH} (if available) +- **Evidence Sources:** + - Test Results: {TEST_RESULTS_DIR} + - Metrics: {METRICS_DIR} + - Logs: {LOGS_DIR} + - CI Results: {CI_RESULTS_PATH} + +--- + +## Recommendations Summary + +**Release Blocker:** {RELEASE_BLOCKER_SUMMARY} + +**High Priority:** {HIGH_PRIORITY_SUMMARY} + +**Medium Priority:** {MEDIUM_PRIORITY_SUMMARY} + +**Next Steps:** {NEXT_STEPS_DESCRIPTION} + +--- + +## Sign-Off + +**NFR Assessment:** + +- Overall Status: {OVERALL_STATUS} {OVERALL_ICON} +- Critical Issues: {CRITICAL_COUNT} +- High Priority Issues: {HIGH_COUNT} +- Concerns: {CONCERNS_COUNT} +- Evidence Gaps: {EVIDENCE_GAP_COUNT} + +**Gate Status:** {GATE_STATUS} {GATE_ICON} + +**Next Actions:** + +- If PASS ✅: Proceed to `*gate` workflow or release +- If CONCERNS ⚠️: Address HIGH/CRITICAL issues, re-run `*nfr-assess` +- If FAIL ❌: Resolve FAIL status NFRs, re-run `*nfr-assess` + +**Generated:** {DATE} +**Workflow:** testarch-nfr v4.0 + +--- + + diff --git a/src/bmm/workflows/testarch/nfr-assess/workflow.yaml b/src/bmm/workflows/testarch/nfr-assess/workflow.yaml new file mode 100644 index 00000000..ce3f7381 --- /dev/null +++ b/src/bmm/workflows/testarch/nfr-assess/workflow.yaml @@ -0,0 +1,49 @@ +# Test Architect workflow: nfr-assess +name: testarch-nfr +description: "Assess non-functional 
requirements (performance, security, reliability, maintainability) before release with evidence-based validation" +author: "BMad" + +# Critical variables from config +config_source: "{project-root}/_bmad/bmm/config.yaml" +output_folder: "{config_source}:output_folder" +user_name: "{config_source}:user_name" +communication_language: "{config_source}:communication_language" +document_output_language: "{config_source}:document_output_language" +date: system-generated + +# Workflow components +installed_path: "{project-root}/_bmad/bmm/workflows/testarch/nfr-assess" +instructions: "{installed_path}/instructions.md" +validation: "{installed_path}/checklist.md" +template: "{installed_path}/nfr-report-template.md" + +# Variables and inputs +variables: + # NFR category assessment (defaults to all categories) + custom_nfr_categories: "" # Optional additional categories beyond standard (security, performance, reliability, maintainability) + +# Output configuration +default_output_file: "{output_folder}/nfr-assessment.md" + +# Required tools +required_tools: + - read_file # Read story, test results, metrics, logs, BMad artifacts + - write_file # Create NFR assessment, gate YAML, evidence checklist + - list_files # Discover test results, metrics, logs + - search_repo # Find NFR-related tests and evidence + - glob # Find result files matching patterns + +tags: + - qa + - nfr + - test-architect + - performance + - security + - reliability + +execution_hints: + interactive: false # Minimize prompts + autonomous: true # Proceed without user input unless blocked + iterative: true + +web_bundle: false diff --git a/src/bmm/workflows/testarch/test-design/checklist.md b/src/bmm/workflows/testarch/test-design/checklist.md new file mode 100644 index 00000000..7c4475ca --- /dev/null +++ b/src/bmm/workflows/testarch/test-design/checklist.md @@ -0,0 +1,407 @@ +# Test Design and Risk Assessment - Validation Checklist + +## Prerequisites (Mode-Dependent) + +**System-Level Mode (Phase 3):** +- [ ] PRD exists with functional and non-functional requirements +- [ ] ADR (Architecture Decision Record) exists +- [ ] Architecture document available (architecture.md or tech-spec) +- [ ] Requirements are testable and unambiguous + +**Epic-Level Mode (Phase 4):** +- [ ] Story markdown with clear acceptance criteria exists +- [ ] PRD or epic documentation available +- [ ] Architecture documents available (test-design-architecture.md + test-design-qa.md from Phase 3, if they exist) +- [ ] Requirements are testable and unambiguous + +## Process Steps + +### Step 1: Context Loading + +- [ ] PRD.md read and requirements extracted +- [ ] Epics.md or specific epic documentation loaded +- [ ] Story markdown with acceptance criteria analyzed +- [ ] Architecture documents reviewed (if available) +- [ ] Existing test coverage analyzed +- [ ] Knowledge base fragments loaded (risk-governance, probability-impact, test-levels, test-priorities) + +### Step 2: Risk Assessment + +- [ ] Genuine risks identified (not just features) +- [ ] Risks classified by category (TECH/SEC/PERF/DATA/BUS/OPS) +- [ ] Probability scored (1-3 for each risk) +- [ ] Impact scored (1-3 for each risk) +- [ ] Risk scores calculated (probability × impact) +- [ ] High-priority risks (score ≥6) flagged +- [ ] Mitigation plans defined for high-priority risks +- [ ] Owners assigned for each mitigation +- [ ] Timelines set for mitigations +- [ ] Residual risk documented + +### Step 3: Coverage Design + +- [ ] Acceptance criteria broken into atomic scenarios +- [ ] Test levels
selected (E2E/API/Component/Unit) +- [ ] No duplicate coverage across levels +- [ ] Priority levels assigned (P0/P1/P2/P3) +- [ ] P0 scenarios meet strict criteria (blocks core + high risk + no workaround) +- [ ] Data prerequisites identified +- [ ] Tooling requirements documented +- [ ] Execution order defined (smoke → P0 → P1 → P2/P3) + +### Step 4: Deliverables Generation + +- [ ] Risk assessment matrix created +- [ ] Coverage matrix created +- [ ] Execution order documented +- [ ] Resource estimates calculated +- [ ] Quality gate criteria defined +- [ ] Output file written to correct location +- [ ] Output file uses template structure + +## Output Validation + +### Risk Assessment Matrix + +- [ ] All risks have unique IDs (R-001, R-002, etc.) +- [ ] Each risk has category assigned +- [ ] Probability values are 1, 2, or 3 +- [ ] Impact values are 1, 2, or 3 +- [ ] Scores calculated correctly (P × I) +- [ ] High-priority risks (≥6) clearly marked +- [ ] Mitigation strategies specific and actionable + +### Coverage Matrix + +- [ ] All requirements mapped to test levels +- [ ] Priorities assigned to all scenarios +- [ ] Risk linkage documented +- [ ] Test counts realistic +- [ ] Owners assigned where applicable +- [ ] No duplicate coverage (same behavior at multiple levels) + +### Execution Strategy + +**CRITICAL: Keep execution strategy simple, avoid redundancy** + +- [ ] **Simple structure**: PR / Nightly / Weekly (NOT complex smoke/P0/P1/P2 tiers) +- [ ] **PR execution**: All functional tests unless significant infrastructure overhead +- [ ] **Nightly/Weekly**: Only performance, chaos, long-running, manual tests +- [ ] **No redundancy**: Don't re-list all tests (already in coverage plan) +- [ ] **Philosophy stated**: "Run everything in PRs if <15 min, defer only if expensive/long" +- [ ] **Playwright parallelization noted**: 100s of tests in 10-15 min + +### Resource Estimates + +**CRITICAL: Use intervals/ranges, NOT exact numbers** + +- [ ] P0 effort provided as interval range (e.g., "~25-40 hours" NOT "36 hours") +- [ ] P1 effort provided as interval range (e.g., "~20-35 hours" NOT "27 hours") +- [ ] P2 effort provided as interval range (e.g., "~10-30 hours" NOT "15.5 hours") +- [ ] P3 effort provided as interval range (e.g., "~2-5 hours" NOT "2.5 hours") +- [ ] Total effort provided as interval range (e.g., "~55-110 hours" NOT "81 hours") +- [ ] Timeline provided as week range (e.g., "~1.5-3 weeks" NOT "11 days") +- [ ] Estimates include setup time and account for complexity variations +- [ ] **No false precision**: Avoid exact calculations like "18 tests × 2 hours = 36 hours" + +### Quality Gate Criteria + +- [ ] P0 pass rate threshold defined (should be 100%) +- [ ] P1 pass rate threshold defined (typically ≥95%) +- [ ] High-risk mitigation completion required +- [ ] Coverage targets specified (≥80% recommended) + +## Quality Checks + +### Evidence-Based Assessment + +- [ ] Risk assessment based on documented evidence +- [ ] No speculation on business impact +- [ ] Assumptions clearly documented +- [ ] Clarifications requested where needed +- [ ] Historical data referenced where available + +### Risk Classification Accuracy + +- [ ] TECH risks are architecture/integration issues +- [ ] SEC risks are security vulnerabilities +- [ ] PERF risks are performance/scalability concerns +- [ ] DATA risks are data integrity issues +- [ ] BUS risks are business/revenue impacts +- [ ] OPS risks are deployment/operational issues + +### Priority Assignment Accuracy + +**CRITICAL: Priority 
classification is separate from execution timing** + +- [ ] **Priority sections (P0/P1/P2/P3) do NOT include execution context** (e.g., no "Run on every commit" in headers) +- [ ] **Priority sections have only "Criteria" and "Purpose"** (no "Execution:" field) +- [ ] **Execution Strategy section** is separate and handles timing based on infrastructure overhead +- [ ] P0: Truly blocks core functionality + High-risk (≥6) + No workaround +- [ ] P1: Important features + Medium-risk (3-4) + Common workflows +- [ ] P2: Secondary features + Low-risk (1-2) + Edge cases +- [ ] P3: Nice-to-have + Exploratory + Benchmarks +- [ ] **Note at top of Test Coverage Plan**: Clarifies P0/P1/P2/P3 = priority/risk, NOT execution timing + +### Test Level Selection + +- [ ] E2E used only for critical paths +- [ ] API tests cover complex business logic +- [ ] Component tests for UI interactions +- [ ] Unit tests for edge cases and algorithms +- [ ] No redundant coverage + +## Integration Points + +### Knowledge Base Integration + +- [ ] risk-governance.md consulted +- [ ] probability-impact.md applied +- [ ] test-levels-framework.md referenced +- [ ] test-priorities-matrix.md used +- [ ] Additional fragments loaded as needed + +### Status File Integration + +- [ ] Test design logged in Quality & Testing Progress +- [ ] Epic number and scope documented +- [ ] Completion timestamp recorded + +### Workflow Dependencies + +- [ ] Can proceed to `*atdd` workflow with P0 scenarios +- [ ] `*atdd` is a separate workflow and must be run explicitly (not auto-run) +- [ ] Can proceed to `automate` workflow with full coverage plan +- [ ] Risk assessment informs `gate` workflow criteria +- [ ] Integrates with `ci` workflow execution order + +## System-Level Mode: Two-Document Validation + +**When in system-level mode (PRD + ADR input), validate BOTH documents:** + +### test-design-architecture.md + +- [ ] **Purpose statement** at top (serves as contract with Architecture team) +- [ ] **Executive Summary** with scope, business context, architecture decisions, risk summary +- [ ] **Quick Guide** section with three tiers: + - [ ] 🚨 BLOCKERS - Team Must Decide (Sprint 0 critical path items) + - [ ] ⚠️ HIGH PRIORITY - Team Should Validate (recommendations for approval) + - [ ] 📋 INFO ONLY - Solutions Provided (no decisions needed) +- [ ] **Risk Assessment** section - **ACTIONABLE** + - [ ] Total risks identified count + - [ ] High-priority risks table (score ≥6) with all columns: Risk ID, Category, Description, Probability, Impact, Score, Mitigation, Owner, Timeline + - [ ] Medium and low-priority risks tables + - [ ] Risk category legend included +- [ ] **Testability Concerns and Architectural Gaps** section - **ACTIONABLE** + - [ ] **Sub-section: 🚨 ACTIONABLE CONCERNS** at TOP + - [ ] Blockers to Fast Feedback table (WHAT architecture must provide) + - [ ] Architectural Improvements Needed (WHAT must be changed) + - [ ] Each concern has: Owner, Timeline, Impact + - [ ] **Sub-section: Testability Assessment Summary** at BOTTOM (FYI) + - [ ] What Works Well (passing items) + - [ ] Accepted Trade-offs (no action required) + - [ ] This section only included if worth mentioning; otherwise omitted +- [ ] **Risk Mitigation Plans** for all high-priority risks (≥6) + - [ ] Each plan has: Strategy (numbered steps), Owner, Timeline, Status, Verification + - [ ] **Only Backend/DevOps/Arch/Security mitigations** (production code changes) + - [ ] QA-owned mitigations belong in QA doc instead +- [ ] **Assumptions and Dependencies** section + - [ 
] **Architectural assumptions only** (SLO targets, replication lag, system design) + - [ ] Assumptions list (numbered) + - [ ] Dependencies list with required dates + - [ ] Risks to plan with impact and contingency + - [ ] QA execution assumptions belong in QA doc instead +- [ ] **NO test implementation code** (long examples belong in QA doc) +- [ ] **NO test scripts** (no Playwright test(...) blocks, no assertions, no test setup code) +- [ ] **NO NFR test examples** (NFR sections describe WHAT to test, not HOW to test) +- [ ] **NO test scenario checklists** (belong in QA doc) +- [ ] **NO bloat or repetition** (consolidate repeated notes, avoid over-explanation) +- [ ] **Cross-references to QA doc** where appropriate (instead of duplication) +- [ ] **RECIPE SECTIONS NOT IN ARCHITECTURE DOC:** + - [ ] NO "Test Levels Strategy" section (unit/integration/E2E split belongs in QA doc only) + - [ ] NO "NFR Testing Approach" section with detailed test procedures (belongs in QA doc only) + - [ ] NO "Test Environment Requirements" section (belongs in QA doc only) + - [ ] NO "Recommendations for Sprint 0" section with test framework setup (belongs in QA doc only) + - [ ] NO "Quality Gate Criteria" section (pass rates, coverage targets belong in QA doc only) + - [ ] NO "Tool Selection" section (Playwright, k6, etc. belongs in QA doc only) + +### test-design-qa.md + +**REQUIRED SECTIONS:** + +- [ ] **Purpose statement** at top (test execution recipe) +- [ ] **Executive Summary** with risk summary and coverage summary +- [ ] **Dependencies & Test Blockers** section in POSITION 2 (right after Executive Summary) + - [ ] Backend/Architecture dependencies listed (what QA needs from other teams) + - [ ] QA infrastructure setup listed (factories, fixtures, environments) + - [ ] Code example with playwright-utils if config.tea_use_playwright_utils is true + - [ ] Test from '@seontechnologies/playwright-utils/api-request/fixtures' + - [ ] Expect from '@playwright/test' (playwright-utils does not re-export expect) + - [ ] Code examples include assertions (no unused imports) +- [ ] **Risk Assessment** section (brief, references Architecture doc) + - [ ] High-priority risks table + - [ ] Medium/low-priority risks table + - [ ] Each risk shows "QA Test Coverage" column (how QA validates) +- [ ] **Test Coverage Plan** with P0/P1/P2/P3 sections + - [ ] Priority sections have ONLY "Criteria" (no execution context) + - [ ] Note at top: "P0/P1/P2/P3 = priority, NOT execution timing" + - [ ] Test tables with columns: Test ID | Requirement | Test Level | Risk Link | Notes +- [ ] **Execution Strategy** section (organized by TOOL TYPE) + - [ ] Every PR: Playwright tests (~10-15 min) + - [ ] Nightly: k6 performance tests (~30-60 min) + - [ ] Weekly: Chaos & long-running (~hours) + - [ ] Philosophy: "Run everything in PRs unless expensive/long-running" +- [ ] **QA Effort Estimate** section (QA effort ONLY) + - [ ] Interval-based estimates (e.g., "~1-2 weeks" NOT "36 hours") + - [ ] NO DevOps, Backend, Data Eng, Finance effort + - [ ] NO Sprint breakdowns (too prescriptive) +- [ ] **Appendix A: Code Examples & Tagging** +- [ ] **Appendix B: Knowledge Base References** + +**DON'T INCLUDE (bloat):** +- [ ] ❌ NO Quick Reference section +- [ ] ❌ NO System Architecture Summary +- [ ] ❌ NO Test Environment Requirements as separate section (integrate into Dependencies) +- [ ] ❌ NO Testability Assessment section (covered in Dependencies) +- [ ] ❌ NO Test Levels Strategy section (obvious from test scenarios) +- [ ] ❌ NO NFR Readiness 
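+
+For the playwright-utils items in the Dependencies & Test Blockers section above, a minimal sketch of the expected import pattern (the apiRequest call shape and response field names are assumptions for illustration, not a confirmed API):
+
+```typescript
+import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
+import { expect } from '@playwright/test'; // playwright-utils does not re-export expect
+
+test('health endpoint responds', async ({ apiRequest }) => {
+  // apiRequest is the fixture described above (retry logic, structured responses);
+  // the exact options object and response shape are illustrative assumptions
+  const response = await apiRequest({ method: 'GET', path: '/api/health' });
+  expect(response.status).toBe(200); // assertion included so no import is unused
+});
+```
+
+- [ ] ❌ NO NFR Readiness 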
Summary +- [ ] ❌ NO Quality Gate Criteria section (teams decide for themselves) +- [ ] ❌ NO Follow-on Workflows section (BMAD commands self-explanatory) +- [ ] ❌ NO Approval section +- [ ] ❌ NO Infrastructure/DevOps/Finance effort tables (out of scope) +- [ ] ❌ NO Sprint 0/1/2/3 breakdown tables +- [ ] ❌ NO Next Steps section + +### Cross-Document Consistency + +- [ ] Both documents reference same risks by ID (R-001, R-002, etc.) +- [ ] Both documents use consistent priority levels (P0, P1, P2, P3) +- [ ] Both documents reference same Sprint 0 blockers +- [ ] No duplicate content (cross-reference instead) +- [ ] Dates and authors match across documents +- [ ] ADR and PRD references consistent + +### Document Quality (Anti-Bloat Check) + +**CRITICAL: Check for bloat and repetition across BOTH documents** + +- [ ] **No repeated notes 10+ times** (e.g., "Timing is pessimistic until R-005 fixed" on every section) +- [ ] **Repeated information consolidated** (write once at top, reference briefly if needed) +- [ ] **No excessive detail** that doesn't add value (obvious concepts, redundant examples) +- [ ] **Focus on unique/critical info** (only document what's different from standard practice) +- [ ] **Architecture doc**: Concerns-focused, NOT implementation-focused +- [ ] **QA doc**: Implementation-focused, NOT theory-focused +- [ ] **Clear separation**: Architecture = WHAT and WHY, QA = HOW +- [ ] **Professional tone**: No AI slop markers + - [ ] Avoid excessive ✅/❌ emojis (use sparingly, only when adding clarity) + - [ ] Avoid "absolutely", "excellent", "fantastic", overly enthusiastic language + - [ ] Write professionally and directly +- [ ] **Architecture doc length**: Target ~150-200 lines max (focus on actionable concerns only) +- [ ] **QA doc length**: Keep concise, remove bloat sections + +### Architecture Doc Structure (Actionable-First Principle) + +**CRITICAL: Validate structure follows actionable-first, FYI-last principle** + +- [ ] **Actionable sections at TOP:** + - [ ] Quick Guide (🚨 BLOCKERS first, then ⚠️ HIGH PRIORITY, then 📋 INFO ONLY last) + - [ ] Risk Assessment (high-priority risks ≥6 at top) + - [ ] Testability Concerns (concerns/blockers at top, passing items at bottom) + - [ ] Risk Mitigation Plans (for high-priority risks ≥6) +- [ ] **FYI sections at BOTTOM:** + - [ ] Testability Assessment Summary (what works well - only if worth mentioning) + - [ ] Assumptions and Dependencies +- [ ] **ASRs categorized correctly:** + - [ ] Actionable ASRs included in 🚨 or ⚠️ sections + - [ ] FYI ASRs included in 📋 section or omitted if obvious + +## Completion Criteria + +**All must be true:** + +- [ ] All prerequisites met +- [ ] All process steps completed +- [ ] All output validations passed +- [ ] All quality checks passed +- [ ] All integration points verified +- [ ] Output file(s) complete and well-formatted +- [ ] **System-level mode:** Both documents validated (if applicable) +- [ ] **Epic-level mode:** Single document validated (if applicable) +- [ ] Team review scheduled (if required) + +## Post-Workflow Actions + +**User must complete:** + +1. [ ] Review risk assessment with team +2. [ ] Prioritize mitigation for high-priority risks (score ≥6) +3. [ ] Allocate resources per estimates +4. [ ] Run `*atdd` workflow to generate P0 tests (separate workflow; not auto-run) +5. [ ] Set up test data factories and fixtures +6. [ ] Schedule team review of test design document + +**Recommended next workflows:** + +1. [ ] Run `atdd` workflow for P0 test generation +2. 
[ ] Run `framework` workflow if not already done +3. [ ] Run `ci` workflow to configure pipeline stages + +## Rollback Procedure + +If workflow fails: + +1. [ ] Delete output file +2. [ ] Review error logs +3. [ ] Fix missing context (PRD, architecture docs) +4. [ ] Clarify ambiguous requirements +5. [ ] Retry workflow + +## Notes + +### Common Issues + +**Issue**: Too many P0 tests + +- **Solution**: Apply strict P0 criteria - must block core AND high risk AND no workaround + +**Issue**: Risk scores all high + +- **Solution**: Differentiate between critical (3) and degraded (2) impact ratings + +**Issue**: Duplicate coverage across levels + +- **Solution**: Use test pyramid - E2E for critical paths only + +**Issue**: Resource estimates too high or too precise + +- **Solution**: + - Invest in fixtures/factories to reduce per-test setup time + - Use interval ranges (e.g., "~55-110 hours") instead of exact numbers (e.g., "81 hours") + - Widen intervals if high uncertainty exists + +**Issue**: Execution order section too complex or redundant + +- **Solution**: + - Default: Run everything in PRs (<15 min with Playwright parallelization) + - Only defer to nightly/weekly if expensive (k6, chaos, 4+ hour tests) + - Don't create smoke/P0/P1/P2/P3 tier structure + - Don't re-list all tests (already in coverage plan) + +### Best Practices + +- Base risk assessment on evidence, not assumptions +- High-priority risks (≥6) require immediate mitigation +- P0 tests should cover <10% of total scenarios +- Avoid testing same behavior at multiple levels +- **Use interval-based estimates** (e.g., "~25-40 hours") instead of exact numbers to avoid false precision and provide flexibility +- **Keep execution strategy simple**: Default to "run everything in PRs" (<15 min with Playwright), only defer if expensive/long-running +- **Avoid execution order redundancy**: Don't create complex tier structures or re-list tests + +--- + +**Checklist Complete**: Sign off when all items validated. + +**Completed by:** {name} +**Date:** {date} +**Epic:** {epic title} +**Notes:** {additional notes} diff --git a/src/bmm/workflows/testarch/test-design/instructions.md b/src/bmm/workflows/testarch/test-design/instructions.md new file mode 100644 index 00000000..1eae05be --- /dev/null +++ b/src/bmm/workflows/testarch/test-design/instructions.md @@ -0,0 +1,1158 @@ + + +# Test Design and Risk Assessment + +**Workflow ID**: `_bmad/bmm/testarch/test-design` +**Version**: 4.0 (BMad v6) + +--- + +## Overview + +Plans comprehensive test coverage strategy with risk assessment, priority classification, and execution ordering. This workflow operates in **two modes**: + +- **System-Level Mode (Phase 3)**: Testability review of architecture before solutioning gate check +- **Epic-Level Mode (Phase 4)**: Per-epic test planning with risk assessment (the pre-existing behavior) + +The workflow auto-detects which mode to use based on project phase. + +--- + +## Preflight: Detect Mode and Load Context + +**Critical:** Determine mode before proceeding. + +### Mode Detection (Flexible for Standalone Use) + +The TEA test-design workflow supports TWO modes, detected automatically: + +1. **Check User Intent Explicitly (Priority 1)** + + **Deterministic Rules:** + - User provided **PRD+ADR only** (no Epic+Stories) → **System-Level Mode** + - User provided **Epic+Stories only** (no PRD+ADR) → **Epic-Level Mode** + - User provided **BOTH PRD+ADR AND Epic+Stories** → **Prefer System-Level Mode** (architecture review comes first in Phase 3, then epic planning in Phase 4).
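+
+   A compact way to read the deterministic rules is as a small decision function. This is a sketch only; the input flags are illustrative and not part of the workflow contract:
+
+   ```typescript
+   // Sketch of the deterministic mode-detection rules above.
+   type Mode = 'system-level' | 'epic-level' | 'ask-user';
+
+   interface DetectedInputs {
+     hasPrdAndAdr: boolean;       // user provided PRD + ADR
+     hasEpicAndStories: boolean;  // user provided Epic + Stories
+     bmadIntegrated: boolean;     // assumption: BMad artifacts are discoverable
+     sprintStatusExists: boolean; // {implementation_artifacts}/sprint-status.yaml found
+   }
+
+   function detectMode(i: DetectedInputs): Mode {
+     // Priority 1: explicit user intent from provided artifacts
+     if (i.hasPrdAndAdr && i.hasEpicAndStories) return 'system-level'; // architecture review first
+     if (i.hasPrdAndAdr) return 'system-level';
+     if (i.hasEpicAndStories) return 'epic-level';
+     // Priority 2: file-based fallback (BMad-integrated projects)
+     if (i.bmadIntegrated) return i.sprintStatusExists ? 'epic-level' : 'system-level';
+     // Priority 3: ambiguous - ask the user (halt if still unresolved)
+     return 'ask-user';
+   }
+   ```
+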
If mode preference is unclear, ask user: "Should I create (A) System-level test design (PRD + ADR → Architecture doc + QA doc) or (B) Epic-level test design (Epic → Single test plan)?" + - If user intent is clear from context, use that mode regardless of file structure + +2. **Fallback to File-Based Detection (Priority 2 - BMad-Integrated)** + - Check for `{implementation_artifacts}/sprint-status.yaml` + - If exists → **Epic-Level Mode** (Phase 4, single document output) + - If NOT exists → **System-Level Mode** (Phase 3, TWO document outputs) + +3. **If Ambiguous, ASK USER (Priority 3)** + - "I see you have [PRD/ADR/Epic/Stories]. Should I create: + - (A) System-level test design (PRD + ADR → Architecture doc + QA doc)? + - (B) Epic-level test design (Epic → Single test plan)?" + +**Mode Descriptions:** + +**System-Level Mode (PRD + ADR Input)** +- **When to use:** Early in project (Phase 3 Solutioning), architecture being designed +- **Input:** PRD, ADR, architecture.md (optional) +- **Output:** TWO documents + - `test-design-architecture.md` (for Architecture/Dev teams) + - `test-design-qa.md` (for QA team) +- **Focus:** Testability assessment, ASRs, NFR requirements, Sprint 0 setup + +**Epic-Level Mode (Epic + Stories Input)** +- **When to use:** During implementation (Phase 4), per-epic planning +- **Input:** Epic, Stories, tech-specs (optional) +- **Output:** ONE document + - `test-design-epic-{N}.md` (combined risk assessment + test plan) +- **Focus:** Risk assessment, coverage plan, execution order, quality gates + +**Key Insight: TEA Works Standalone OR Integrated** + +**Standalone (No BMad artifacts):** +- User provides PRD + ADR → System-Level Mode +- User provides Epic description → Epic-Level Mode +- TEA doesn't mandate full BMad workflow + +**BMad-Integrated (Full workflow):** +- BMad creates `sprint-status.yaml` → Automatic Epic-Level detection +- BMad creates PRD, ADR, architecture.md → Automatic System-Level detection +- TEA leverages BMad artifacts for richer context + +**Message to User:** +> You don't need to follow full BMad methodology to use TEA test-design. +> Just provide PRD + ADR for system-level, or Epic for epic-level. +> TEA will auto-detect and produce appropriate documents. + +**Halt Condition:** If mode cannot be determined AND user intent unclear AND required files missing, HALT and notify user: +- "Please provide either: (A) PRD + ADR for system-level test design, OR (B) Epic + Stories for epic-level test design" + +--- + +## Step 1: Load Context (Mode-Aware) + +**Mode-Specific Loading:** + +### System-Level Mode (Phase 3) + +1. **Read Architecture Documentation** + - Load architecture.md or tech-spec (REQUIRED) + - Load PRD.md for functional and non-functional requirements + - Load epics.md for feature scope + - Identify technology stack decisions (frameworks, databases, deployment targets) + - Note integration points and external system dependencies + - Extract NFR requirements (performance SLOs, security requirements, etc.) + +2. **Check Playwright Utils Flag** + + Read `{config_source}` and check `config.tea_use_playwright_utils`. + + If true, note that `@seontechnologies/playwright-utils` provides utilities for test implementation. Reference in test design where relevant. + +3. **Load Knowledge Base Fragments (System-Level)** + + **Critical:** Consult `src/bmm/testarch/tea-index.csv` to load: + - `adr-quality-readiness-checklist.md` - 8-category 29-criteria NFR framework (testability, security, scalability, DR, QoS, deployability, etc.) 
+ - `test-levels-framework.md` - Test levels strategy guidance + - `risk-governance.md` - Testability risk identification + - `test-quality.md` - Quality standards and Definition of Done + +4. **Analyze Existing Test Setup (if brownfield)** + - Search for existing test directories + - Identify current test framework (if any) + - Note testability concerns in existing codebase + +### Epic-Level Mode (Phase 4) + +1. **Read Requirements Documentation** + - Load PRD.md for high-level product requirements + - Read epics.md or specific epic for feature scope + - Read story markdown for detailed acceptance criteria + - Identify all testable requirements + +2. **Load Architecture Context** + - Read architecture.md for system design + - Read tech-spec for implementation details + - Read test-design-architecture.md and test-design-qa.md (if exist from Phase 3 system-level test design) + - Identify technical constraints and dependencies + - Note integration points and external systems + +3. **Analyze Existing Test Coverage** + - Search for existing test files in `{test_dir}` + - Identify coverage gaps + - Note areas with insufficient testing + - Check for flaky or outdated tests + +4. **Load Knowledge Base Fragments (Epic-Level)** + + **Critical:** Consult `src/bmm/testarch/tea-index.csv` to load: + - `risk-governance.md` - Risk classification framework (6 categories: TECH, SEC, PERF, DATA, BUS, OPS), automated scoring, gate decision engine, owner tracking (625 lines, 4 examples) + - `probability-impact.md` - Risk scoring methodology (probability × impact matrix, automated classification, dynamic re-assessment, gate integration, 604 lines, 4 examples) + - `test-levels-framework.md` - Test level selection guidance (E2E vs API vs Component vs Unit with decision matrix, characteristics, when to use each, 467 lines, 4 examples) + - `test-priorities-matrix.md` - P0-P3 prioritization criteria (automated priority calculation, risk-based mapping, tagging strategy, time budgets, 389 lines, 2 examples) + +**Halt Condition (Epic-Level only):** If story data or acceptance criteria are missing, check if brownfield exploration is needed. If neither requirements NOR exploration possible, HALT with message: "Epic-level test design requires clear requirements, acceptance criteria, or brownfield app URL for exploration" + +--- + +## Step 1.5: System-Level Testability Review (Phase 3 Only) + +**Skip this step if Epic-Level Mode.** This step only executes in System-Level Mode. + +### Actions + +1. **Review Architecture for Testability** + + **STRUCTURE PRINCIPLE: CONCERNS FIRST, PASSING ITEMS LAST** + + Evaluate architecture against these criteria and structure output as: + 1. **Testability Concerns** (ACTIONABLE - what's broken/missing) + 2. **Testability Assessment Summary** (FYI - what works well) + + **Testability Criteria:** + + **Controllability:** + - Can we control system state for testing? (API seeding, factories, database reset) + - Are external dependencies mockable? (interfaces, dependency injection) + - Can we trigger error conditions? (chaos engineering, fault injection) + + **Observability:** + - Can we inspect system state? (logging, metrics, traces) + - Are test results deterministic? (no race conditions, clear success/failure) + - Can we validate NFRs? (performance metrics, security audit logs) + + **Reliability:** + - Are tests isolated? (parallel-safe, stateless, cleanup discipline) + - Can we reproduce failures? (deterministic waits, HAR capture, seed data) + - Are components loosely coupled? 
(mockable, testable boundaries) + + **In Architecture Doc Output:** + - **Section A: Testability Concerns** (TOP) - List what's BROKEN or MISSING + - Example: "No API for test data seeding → Cannot parallelize tests" + - Example: "Hardcoded DB connection → Cannot test in CI" + - **Section B: Testability Assessment Summary** (BOTTOM) - List what PASSES + - Example: "✅ API-first design supports test isolation" + - Only include if worth mentioning; otherwise omit this section entirely + +2. **Identify Architecturally Significant Requirements (ASRs)** + + **CRITICAL: ASRs must indicate if ACTIONABLE or FYI** + + From PRD NFRs and architecture decisions, identify quality requirements that: + - Drive architecture decisions (e.g., "Must handle 10K concurrent users" → caching architecture) + - Pose testability challenges (e.g., "Sub-second response time" → performance test infrastructure) + - Require special test environments (e.g., "Multi-region deployment" → regional test instances) + + Score each ASR using risk matrix (probability × impact). + + **In Architecture Doc, categorize ASRs:** + - **ACTIONABLE ASRs** (require architecture changes): Include in "Quick Guide" 🚨 or ⚠️ sections + - **FYI ASRs** (already satisfied by architecture): Include in "Quick Guide" 📋 section OR omit if obvious + + **Example:** + - ASR-001 (Score 9): "Multi-region deployment requires region-specific test infrastructure" → **ACTIONABLE** (goes in 🚨 BLOCKERS) + - ASR-002 (Score 4): "OAuth 2.1 authentication already implemented in ADR-5" → **FYI** (goes in 📋 INFO ONLY or omit) + + **Structure Principle:** Actionable ASRs at TOP, FYI ASRs at BOTTOM (or omit) + +3. **Define Test Levels Strategy** + + **IMPORTANT: This section goes in QA doc ONLY, NOT in Architecture doc** + + Based on architecture (mobile, web, API, microservices, monolith): + - Recommend unit/integration/E2E split (e.g., 70/20/10 for API-heavy, 40/30/30 for UI-heavy) + - Identify test environment needs (local, staging, ephemeral, production-like) + - Define testing approach per technology (Playwright for web, Maestro for mobile, k6 for performance) + + **In Architecture doc:** Only mention test level split if it's an ACTIONABLE concern + - Example: "API response time <100ms requires load testing infrastructure" (concern) + - DO NOT include full test level strategy table in Architecture doc + +4. **Assess NFR Requirements (MINIMAL in Architecture Doc)** + + **CRITICAL: NFR testing approach is a RECIPE - belongs in QA doc ONLY** + + **In Architecture Doc:** + - Only mention NFRs if they create testability CONCERNS + - Focus on WHAT architecture must provide, not HOW to test + - Keep it brief - 1-2 sentences per NFR category at most + + **Example - Security NFR in Architecture doc (if there's a concern):** + ✅ CORRECT (concern-focused, brief, WHAT/WHY only): + - "System must prevent cross-customer data access (GDPR requirement). Requires test infrastructure for multi-tenant isolation in Sprint 0." + - "OAuth tokens must expire after 1 hour (ADR-5). Requires test harness for token expiration validation." 
+ + ❌ INCORRECT (too detailed, belongs in QA doc): + - Full table of security test scenarios + - Test scripts with code examples + - Detailed test procedures + - Tool selection (e.g., "use Playwright E2E + OWASP ZAP") + - Specific test approaches (e.g., "Test approach: Playwright E2E for auth/authz") + + **In QA Doc (full NFR testing approach):** + - **Security**: Full test scenarios, tooling (Playwright + OWASP ZAP), test procedures + - **Performance**: Load/stress/spike test scenarios, k6 scripts, SLO thresholds + - **Reliability**: Error handling tests, retry logic validation, circuit breaker tests + - **Maintainability**: Coverage targets, code quality gates, observability validation + + **Rule of Thumb:** + - Architecture doc: "What NFRs exist and what concerns they create" (1-2 sentences) + - QA doc: "How to test those NFRs" (full sections with tables, code, procedures) + +5. **Flag Testability Concerns** + + Identify architecture decisions that harm testability: + - ❌ Tight coupling (no interfaces, hard dependencies) + - ❌ No dependency injection (can't mock external services) + - ❌ Hardcoded configurations (can't test different envs) + - ❌ Missing observability (can't validate NFRs) + - ❌ Stateful designs (can't parallelize tests) + + **Critical:** If testability concerns are blockers (e.g., "Architecture makes performance testing impossible"), document as CONCERNS or FAIL recommendation for gate check. + +6. **Output System-Level Test Design (TWO Documents)** + + **IMPORTANT:** System-level mode produces TWO documents instead of one: + + **Document 1: test-design-architecture.md** (for Architecture/Dev teams) + - Purpose: Architectural concerns, testability gaps, NFR requirements + - Audience: Architects, Backend Devs, Frontend Devs, DevOps, Security Engineers + - Focus: What architecture must deliver for testability + - Template: `test-design-architecture-template.md` + + **Document 2: test-design-qa.md** (for QA team) + - Purpose: Test execution recipe, coverage plan, Sprint 0 setup + - Audience: QA Engineers, Test Automation Engineers, QA Leads + - Focus: How QA will execute tests + - Template: `test-design-qa-template.md` + + **Standard Structures (REQUIRED):** + + **test-design-architecture.md sections (in this order):** + + **STRUCTURE PRINCIPLE: Actionable items FIRST, FYI items LAST** + + 1. Executive Summary (scope, business context, architecture, risk summary) + 2. Quick Guide (🚨 BLOCKERS / ⚠️ HIGH PRIORITY / 📋 INFO ONLY) + 3. Risk Assessment (high/medium/low-priority risks with scoring) - **ACTIONABLE** + 4. Testability Concerns and Architectural Gaps - **ACTIONABLE** (what arch team must do) + - Sub-section: Blockers to Fast Feedback (ACTIONABLE - concerns FIRST) + - Sub-section: Architectural Improvements Needed (ACTIONABLE) + - Sub-section: Testability Assessment Summary (FYI - passing items LAST, only if worth mentioning) + 5. Risk Mitigation Plans (detailed for high-priority risks ≥6) - **ACTIONABLE** + 6. 
Assumptions and Dependencies - **FYI** + + **SECTIONS THAT DO NOT BELONG IN ARCHITECTURE DOC:** + - ❌ Test Levels Strategy (unit/integration/E2E split) - This is a RECIPE, belongs in QA doc ONLY + - ❌ NFR Testing Approach with test examples - This is a RECIPE, belongs in QA doc ONLY + - ❌ Test Environment Requirements - This is a RECIPE, belongs in QA doc ONLY + - ❌ Recommendations for Sprint 0 (test framework setup, factories) - This is a RECIPE, belongs in QA doc ONLY + - ❌ Quality Gate Criteria (pass rates, coverage targets) - This is a RECIPE, belongs in QA doc ONLY + - ❌ Tool Selection (Playwright, k6, etc.) - This is a RECIPE, belongs in QA doc ONLY + + **WHAT BELONGS IN ARCHITECTURE DOC:** + - ✅ Testability CONCERNS (what makes it hard to test) + - ✅ Architecture GAPS (what's missing for testability) + - ✅ What architecture team must DO (blockers, improvements) + - ✅ Risks and mitigation plans + - ✅ ASRs (Architecturally Significant Requirements) - but clarify if FYI or actionable + + **test-design-qa.md sections (in this order):** + 1. Executive Summary (risk summary, coverage summary) + 2. **Dependencies & Test Blockers** (CRITICAL: RIGHT AFTER SUMMARY - what QA needs from other teams) + 3. Risk Assessment (scored risks with categories - reference Arch doc, don't duplicate) + 4. Test Coverage Plan (P0/P1/P2/P3 with detailed scenarios + checkboxes) + 5. **Execution Strategy** (SIMPLE: Organized by TOOL TYPE: PR (Playwright) / Nightly (k6) / Weekly (chaos/manual)) + 6. QA Effort Estimate (QA effort ONLY - no DevOps, Data Eng, Finance, Backend) + 7. Appendices (code examples with playwright-utils, tagging strategy, knowledge base refs) + + **SECTIONS TO EXCLUDE FROM QA DOC:** + - ❌ Quality Gate Criteria (pass/fail thresholds - teams decide for themselves) + - ❌ Follow-on Workflows (bloat - BMAD commands are self-explanatory) + - ❌ Approval section (unnecessary formality) + - ❌ Test Environment Requirements (remove as separate section - integrate into Dependencies if needed) + - ❌ NFR Readiness Summary (bloat - covered in Risk Assessment) + - ❌ Testability Assessment (bloat - covered in Dependencies) + - ❌ Test Levels Strategy (bloat - obvious from test scenarios) + - ❌ Sprint breakdowns (too prescriptive) + - ❌ Infrastructure/DevOps/Data Eng effort tables (out of scope) + - ❌ Mitigation plans for non-QA work (belongs in Arch doc) + + **Content Guidelines:** + + **Architecture doc (DO):** + - ✅ Risk scoring visible (Probability × Impact = Score) + - ✅ Clear ownership (each blocker/ASR has owner + timeline) + - ✅ Testability requirements (what architecture must support) + - ✅ Mitigation plans (for each high-risk item ≥6) + - ✅ Brief conceptual examples ONLY if needed to clarify architecture concerns (5-10 lines max) + - ✅ **Target length**: ~150-200 lines max (focus on actionable concerns only) + - ✅ **Professional tone**: Avoid AI slop (excessive ✅/❌ emojis, "absolutely", "excellent", overly enthusiastic language) + + **Architecture doc (DON'T) - CRITICAL:** + - ❌ NO test scripts or test implementation code AT ALL - This is a communication doc for architects, not a testing guide + - ❌ NO Playwright test examples (e.g., test('...', async ({ request }) => ...)) + - ❌ NO assertion logic (e.g., expect(...).toBe(...)) + - ❌ NO test scenario checklists with checkboxes (belongs in QA doc) + - ❌ NO implementation details about HOW QA will test + - ❌ Focus on CONCERNS, not IMPLEMENTATION + + **QA doc (DO):** + - ✅ Test scenario recipes (clear P0/P1/P2/P3 with checkboxes) + - ✅ Full test 
implementation code samples when helpful + - ✅ **IMPORTANT: If config.tea_use_playwright_utils is true, ALL code samples MUST use @seontechnologies/playwright-utils fixtures and utilities** + - ✅ Import test fixtures from '@seontechnologies/playwright-utils/api-request/fixtures' + - ✅ Import expect from '@playwright/test' (playwright-utils does not re-export expect) + - ✅ Use apiRequest fixture with schema validation, retry logic, and structured responses + - ✅ Dependencies & Test Blockers section RIGHT AFTER Executive Summary (what QA needs from other teams) + - ✅ **QA effort estimates ONLY** (no DevOps, Data Eng, Finance, Backend effort - out of scope) + - ✅ Cross-references to Architecture doc (not duplication) + - ✅ **Professional tone**: Avoid AI slop (excessive ✅/❌ emojis, "absolutely", "excellent", overly enthusiastic language) + + **QA doc (DON'T):** + - ❌ NO architectural theory (just reference Architecture doc) + - ❌ NO ASR explanations (link to Architecture doc instead) + - ❌ NO duplicate risk assessments (reference Architecture doc) + - ❌ NO Quality Gate Criteria section (teams decide pass/fail thresholds for themselves) + - ❌ NO Follow-on Workflows section (bloat - BMAD commands are self-explanatory) + - ❌ NO Approval section (unnecessary formality) + - ❌ NO effort estimates for other teams (DevOps, Backend, Data Eng, Finance - out of scope, QA effort only) + - ❌ NO Sprint breakdowns (too prescriptive - e.g., "Sprint 0: 40 hours, Sprint 1: 48 hours") + - ❌ NO mitigation plans for Backend/Arch/DevOps work (those belong in Architecture doc) + - ❌ NO architectural assumptions or debates (those belong in Architecture doc) + + **Anti-Patterns to Avoid (Cross-Document Redundancy):** + + **CRITICAL: NO BLOAT, NO REPETITION, NO OVERINFO** + + ❌ **DON'T duplicate OAuth requirements:** + - Architecture doc: Explain OAuth 2.1 flow in detail + - QA doc: Re-explain why OAuth 2.1 is required + + ✅ **DO cross-reference instead:** + - Architecture doc: "ASR-1: OAuth 2.1 required (see QA doc for 12 test scenarios)" + - QA doc: "OAuth tests: 12 P0 scenarios (see Architecture doc R-001 for risk details)" + + ❌ **DON'T repeat the same note 10+ times:** + - Example: "Timing is pessimistic until R-005 is fixed" repeated on every P0, P1, P2 section + - This creates bloat and makes docs hard to read + + ✅ **DO consolidate repeated information:** + - Write once at the top: "**Note**: All timing estimates are pessimistic pending R-005 resolution" + - Reference briefly if needed: "(pessimistic timing)" + + ❌ **DON'T include excessive detail that doesn't add value:** + - Long explanations of obvious concepts + - Redundant examples showing the same pattern + - Over-documentation of standard practices + + ✅ **DO focus on what's unique or critical:** + - Document only what's different from standard practice + - Highlight critical decisions and risks + - Keep explanations concise and actionable + + **Markdown Cross-Reference Syntax Examples:** + + ```markdown + # In test-design-architecture.md + + ### 🚨 R-001: Multi-Tenant Isolation (Score: 9) + + **Test Coverage:** 8 P0 tests (see [QA doc - Multi-Tenant Isolation](test-design-qa.md#multi-tenant-isolation-8-tests-security-critical) for detailed scenarios) + + --- + + # In test-design-qa.md + + ## Testability Assessment + + **Prerequisites from Architecture Doc:** + - [ ] R-001: Multi-tenant isolation validated (see [Architecture doc R-001](test-design-architecture.md#r-001-multi-tenant-isolation-score-9) for mitigation plan) + - [ ] R-002: Test customer 
provisioned (see [Architecture doc 🚨 BLOCKERS](test-design-architecture.md#blockers---team-must-decide-cant-proceed-without)) + + ## Sprint 0 Setup Requirements + + **Source:** See [Architecture doc "Quick Guide"](test-design-architecture.md#quick-guide) for detailed mitigation plans + ``` + + **Key Points:** + - Use relative links: `[Link Text](test-design-qa.md#section-anchor)` + - Anchor format: lowercase, hyphens for spaces, remove emojis/special chars + - Example anchor: `### 🚨 R-001: Title` → `#r-001-title` + + ❌ **DON'T put long code examples in Architecture doc:** + - Example: 50+ lines of test implementation + + ✅ **DO keep examples SHORT in Architecture doc:** + - Example: 5-10 lines max showing what architecture must support + - Full implementation goes in QA doc + + **Write Both Documents:** + - Use `test-design-architecture-template.md` for Architecture doc + - Use `test-design-qa-template.md` for QA doc + - Follow standard structures defined above + - Cross-reference between docs (no duplication) + - Validate against checklist.md (System-Level Mode section) + +**Common Over-Engineering to Avoid:** + + **In QA Doc:** + 1. ❌ Quality gate thresholds ("P0 must be 100%, P1 ≥95%") - Let teams decide for themselves + 2. ❌ Effort estimates for other teams - QA doc should only estimate QA effort + 3. ❌ Sprint breakdowns ("Sprint 0: 40 hours, Sprint 1: 48 hours") - Too prescriptive + 4. ❌ Approval sections - Unnecessary formality + 5. ❌ Assumptions about architecture (SLO targets, replication lag) - These are architectural concerns, belong in Arch doc + 6. ❌ Mitigation plans for Backend/Arch/DevOps - Those belong in Arch doc + 7. ❌ Follow-on workflows section - Bloat, BMAD commands are self-explanatory + 8. ❌ NFR Readiness Summary - Bloat, covered in Risk Assessment + + **Test Coverage Numbers Reality Check:** + - With Playwright parallelization, running ALL Playwright tests is as fast as running just P0 + - Don't split Playwright tests by priority into different CI gates - it adds no value + - Tool type matters, not priority labels + - Defer based on infrastructure cost, not importance + +**After System-Level Mode:** Workflow COMPLETE. System-level outputs (test-design-architecture.md + test-design-qa.md) are written in this step. Steps 2-4 are epic-level only - do NOT execute them in system-level mode. + +--- + +## Step 1.6: Exploratory Mode Selection (Epic-Level Only) + +### Actions + +1. **Detect Planning Mode** + + Determine mode based on context: + + **Requirements-Based Mode (DEFAULT)**: + - Have clear story/PRD with acceptance criteria + - Uses: Existing workflow (Steps 2-4) + - Appropriate for: Documented features, greenfield projects + + **Exploratory Mode (OPTIONAL - Brownfield)**: + - Missing/incomplete requirements AND brownfield application exists + - Uses: UI exploration to discover functionality + - Appropriate for: Undocumented brownfield apps, legacy systems + +2. **Requirements-Based Mode (DEFAULT - Skip to Step 2)** + + If requirements are clear: + - Continue with existing workflow (Step 2: Assess and Classify Risks) + - Use loaded requirements from Step 1 + - Proceed with risk assessment based on documented requirements + +3. **Exploratory Mode (OPTIONAL - Brownfield Apps)** + + If exploring brownfield application: + + **A. 
Check MCP Availability** + + If config.tea_use_mcp_enhancements is true AND Playwright MCP tools available: + - Use MCP-assisted exploration (Step 3.B) + + If MCP unavailable OR config.tea_use_mcp_enhancements is false: + - Use manual exploration fallback (Step 3.C) + + **B. MCP-Assisted Exploration (If MCP Tools Available)** + + Use Playwright MCP browser tools to explore UI: + + **Setup:** + + ``` + 1. Use planner_setup_page to initialize browser + 2. Navigate to {exploration_url} + 3. Capture initial state with browser_snapshot + ``` + + **Exploration Process:** + + ``` + 4. Use browser_navigate to explore different pages + 5. Use browser_click to interact with buttons, links, forms + 6. Use browser_hover to reveal hidden menus/tooltips + 7. Capture browser_snapshot at each significant state + 8. Take browser_screenshot for documentation + 9. Monitor browser_console_messages for JavaScript errors + 10. Track browser_network_requests to identify API calls + 11. Map user flows and interactive elements + 12. Document discovered functionality + ``` + + **Discovery Documentation:** + - Create list of discovered features (pages, workflows, forms) + - Identify user journeys (navigation paths) + - Map API endpoints (from network requests) + - Note error states (from console messages) + - Capture screenshots for visual reference + + **Convert to Test Scenarios:** + - Transform discoveries into testable requirements + - Prioritize based on user flow criticality + - Identify risks from discovered functionality + - Continue with Step 2 (Assess and Classify Risks) using discovered requirements + + **C. Manual Exploration Fallback (If MCP Unavailable)** + + If Playwright MCP is not available: + + **Notify User:** + + ```markdown + Exploratory mode enabled but Playwright MCP unavailable. + + **Manual exploration required:** + + 1. Open application at: {exploration_url} + 2. Explore all pages, workflows, and features + 3. Document findings in markdown: + - List of pages/features discovered + - User journeys identified + - API endpoints observed (DevTools Network tab) + - JavaScript errors noted (DevTools Console) + - Critical workflows mapped + + 4. Provide exploration findings to continue workflow + + **Alternative:** Disable exploratory_mode and provide requirements documentation + ``` + + Wait for user to provide exploration findings, then: + - Parse user-provided discovery documentation + - Convert to testable requirements + - Continue with Step 2 (risk assessment) + +4. **Proceed to Risk Assessment** + + After mode selection (Requirements-Based OR Exploratory): + - Continue to Step 2: Assess and Classify Risks + - Use requirements from documentation (Requirements-Based) OR discoveries (Exploratory) + +--- + +## Step 2: Assess and Classify Risks + +### Actions + +1. **Identify Genuine Risks** + + Filter requirements to isolate actual risks (not just features): + - Unresolved technical gaps + - Security vulnerabilities + - Performance bottlenecks + - Data loss or corruption potential + - Business impact failures + - Operational deployment issues + +2. 
**Classify Risks by Category** + + Use these standard risk categories: + + **TECH** (Technical/Architecture): + - Architecture flaws + - Integration failures + - Scalability issues + - Technical debt + + **SEC** (Security): + - Missing access controls + - Authentication bypass + - Data exposure + - Injection vulnerabilities + + **PERF** (Performance): + - SLA violations + - Response time degradation + - Resource exhaustion + - Scalability limits + + **DATA** (Data Integrity): + - Data loss + - Data corruption + - Inconsistent state + - Migration failures + + **BUS** (Business Impact): + - User experience degradation + - Business logic errors + - Revenue impact + - Compliance violations + + **OPS** (Operations): + - Deployment failures + - Configuration errors + - Monitoring gaps + - Rollback issues + +3. **Score Risk Probability** + + Rate likelihood (1-3): + - **1 (Unlikely)**: <10% chance, edge case + - **2 (Possible)**: 10-50% chance, known scenario + - **3 (Likely)**: >50% chance, common occurrence + +4. **Score Risk Impact** + + Rate severity (1-3): + - **1 (Minor)**: Cosmetic, workaround exists, limited users + - **2 (Degraded)**: Feature impaired, workaround difficult, affects many users + - **3 (Critical)**: System failure, data loss, no workaround, blocks usage + +5. **Calculate Risk Score** + + ``` + Risk Score = Probability × Impact + + Scores: + 1-2: Low risk (monitor) + 3-4: Medium risk (plan mitigation) + 6-9: High risk (immediate mitigation required) + ``` + +6. **Highlight High-Priority Risks** + + Flag all risks with score ≥6 for immediate attention. + +7. **Request Clarification** + + If evidence is missing or assumptions required: + - Document assumptions clearly + - Request user clarification + - Do NOT speculate on business impact + +8. 
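+   A minimal TypeScript sketch of the scoring rule above (type and function names are illustrative, not part of the workflow):
+
+   ```typescript
+   // Probability and impact use the 1-3 scales defined in steps 3-4.
+   type Rating = 1 | 2 | 3;
+
+   interface Risk {
+     id: string; // e.g., "R-001"
+     category: 'TECH' | 'SEC' | 'PERF' | 'DATA' | 'BUS' | 'OPS';
+     probability: Rating;
+     impact: Rating;
+   }
+
+   function riskScore(r: Risk): number {
+     return r.probability * r.impact; // possible values: 1, 2, 3, 4, 6, 9
+   }
+
+   function riskBand(score: number): 'low' | 'medium' | 'high' {
+     if (score >= 6) return 'high';   // immediate mitigation required
+     if (score >= 3) return 'medium'; // plan mitigation
+     return 'low';                    // monitor
+   }
+   ```
+
+8. 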
**Plan Mitigations** + + **CRITICAL: Mitigation placement depends on WHO does the work** + + For each high-priority risk: + - Define mitigation strategy + - Assign owner (dev, QA, ops) + - Set timeline + - Update residual risk expectation + + **Mitigation Plan Placement:** + + **Architecture Doc:** + - Mitigations owned by Backend, DevOps, Architecture, Security, Data Eng + - Example: "Add authorization layer for customer-scoped access" (Backend work) + - Example: "Configure AWS Fault Injection Simulator" (DevOps work) + - Example: "Define CloudWatch log schema for backfill events" (Architecture work) + + **QA Doc:** + - Mitigations owned by QA (test development work) + - Example: "Create factories for test data with randomization" (QA work) + - Example: "Implement polling with retry for async validation" (QA test code) + - Brief reference to Architecture doc mitigations (don't duplicate) + + **Rule of Thumb:** + - If mitigation requires production code changes → Architecture doc + - If mitigation is test infrastructure/code → QA doc + - If mitigation involves multiple teams → Architecture doc with QA validation approach + + **Assumptions Placement:** + + **Architecture Doc:** + - Architectural assumptions (SLO targets, replication lag, system design assumptions) + - Example: "P95 <500ms inferred from <2s timeout (requires Product approval)" + - Example: "Multi-region replication lag <1s assumed (ADR doesn't specify SLA)" + - Example: "Recent Cache hit ratio >80% assumed (not in PRD/ADR)" + + **QA Doc:** + - Test execution assumptions (test infrastructure readiness, test data availability) + - Example: "Assumes test factories already created" + - Example: "Assumes CI/CD pipeline configured" + - Brief reference to Architecture doc for architectural assumptions + + **Rule of Thumb:** + - If assumption is about system architecture/design → Architecture doc + - If assumption is about test infrastructure/execution → QA doc + +--- + +## Step 3: Design Test Coverage + +### Actions + +1. **Break Down Acceptance Criteria** + + Convert each acceptance criterion into atomic test scenarios: + - One scenario per testable behavior + - Scenarios are independent + - Scenarios are repeatable + - Scenarios tie back to risk mitigations + +2. **Select Appropriate Test Levels** + + **Knowledge Base Reference**: `test-levels-framework.md` + + Map requirements to optimal test levels (avoid duplication): + + **E2E (End-to-End)**: + - Critical user journeys + - Multi-system integration + - Production-like environment + - Highest confidence, slowest execution + + **API (Integration)**: + - Service contracts + - Business logic validation + - Fast feedback + - Good for complex scenarios + + **Component**: + - UI component behavior + - Interaction testing + - Visual regression + - Fast, isolated + + **Unit**: + - Business logic + - Edge cases + - Error handling + - Fastest, most granular + + **Avoid duplicate coverage**: Don't test same behavior at multiple levels unless necessary. + +3. 
**Assign Priority Levels** + + **CRITICAL: P0/P1/P2/P3 indicates priority and risk level, NOT execution timing** + + **Knowledge Base Reference**: `test-priorities-matrix.md` + + **P0 (Critical)**: + - Blocks core user journey + - High-risk areas (score ≥6) + - Revenue-impacting + - Security-critical + - No workaround exists + - Affects majority of users + + **P1 (High)**: + - Important user features + - Medium-risk areas (score 3-4) + - Common workflows + - Workaround exists but difficult + + **P2 (Medium)**: + - Secondary features + - Low-risk areas (score 1-2) + - Edge cases + - Regression prevention + + **P3 (Low)**: + - Nice-to-have + - Exploratory + - Performance benchmarks + - Documentation validation + + **NOTE:** Priority classification is separate from execution timing. A P1 test might run in PRs if it's fast, or nightly if it requires expensive infrastructure (e.g., k6 performance test). See "Execution Strategy" section for timing guidance. + +4. **Outline Data and Tooling Prerequisites** + + For each test scenario, identify: + - Test data requirements (factories, fixtures) + - External services (mocks, stubs) + - Environment setup + - Tools and dependencies + +5. **Define Execution Strategy** (Keep It Simple) + + **IMPORTANT: Avoid over-engineering execution order** + + **Default Philosophy:** + - Run **everything** in PRs if total duration <15 minutes + - Playwright is fast with parallelization (100s of tests in ~10-15 min) + - Only defer to nightly/weekly if there's significant overhead: + - Performance tests (k6, load testing) - expensive infrastructure + - Chaos engineering - requires special setup (AWS FIS) + - Long-running tests - endurance (4+ hours), disaster recovery + - Manual tests - require human intervention + + **Simple Execution Strategy (Organized by TOOL TYPE):** + + ```markdown + ## Execution Strategy + + **Philosophy**: Run everything in PRs unless significant infrastructure overhead. + Playwright with parallelization is extremely fast (100s of tests in ~10-15 min). + + **Organized by TOOL TYPE:** + + ### Every PR: Playwright Tests (~10-15 min) + All functional tests (from any priority level): + - All E2E, API, integration, unit tests using Playwright + - Parallelized across {N} shards + - Total: ~{N} tests (includes P0, P1, P2, P3) + + ### Nightly: k6 Performance Tests (~30-60 min) + All performance tests (from any priority level): + - Load, stress, spike, endurance + - Reason: Expensive infrastructure, long-running (10-40 min per test) + + ### Weekly: Chaos & Long-Running (~hours) + Special infrastructure tests (from any priority level): + - Multi-region failover, disaster recovery, endurance + - Reason: Very expensive, very long (4+ hours) + ``` + + **KEY INSIGHT: Organize by TOOL TYPE, not priority** + - Playwright (fast, cheap) → PR + - k6 (expensive, long) → Nightly + - Chaos/Manual (very expensive, very long) → Weekly + + **Avoid:** + - ❌ Don't organize by priority (smoke → P0 → P1 → P2 → P3) + - ❌ Don't say "P1 runs on PR to main" (some P1 are Playwright/PR, some are k6/Nightly) + - ❌ Don't create artificial tiers - organize by tool type and infrastructure overhead + +--- + +## Step 4: Generate Deliverables + +### Actions + +1. **Create Risk Assessment Matrix** + + Use template structure: + + ```markdown + | Risk ID | Category | Description | Probability | Impact | Score | Mitigation | + | ------- | -------- | ----------- | ----------- | ------ | ----- | --------------- | + | R-001 | SEC | Auth bypass | 2 | 3 | 6 | Add authz check | + ``` + +2. 
**Create Coverage Matrix** + + ```markdown + | Requirement | Test Level | Priority | Risk Link | Test Count | Owner | + | ----------- | ---------- | -------- | --------- | ---------- | ----- | + | Login flow | E2E | P0 | R-001 | 3 | QA | + ``` + +3. **Document Execution Strategy** (Simple, Not Redundant) + + **IMPORTANT: Keep execution strategy simple and avoid redundancy** + + ```markdown + ## Execution Strategy + + **Default: Run all functional tests in PRs (~10-15 min)** + - All Playwright tests (parallelized across 4 shards) + - Includes E2E, API, integration, unit tests + - Total: ~{N} tests + + **Nightly: Performance & Infrastructure tests** + - k6 load/stress/spike tests (~30-60 min) + - Reason: Expensive infrastructure, long-running + + **Weekly: Chaos & Disaster Recovery** + - Endurance tests (4+ hours) + - Multi-region failover (requires AWS FIS) + - Backup restore validation + - Reason: Special infrastructure, very long-running + ``` + + **DO NOT:** + - ❌ Create redundant smoke/P0/P1/P2/P3 tier structure + - ❌ List all tests again in execution order (already in coverage plan) + - ❌ Split tests by priority unless there's infrastructure overhead + +4. **Include Resource Estimates** + + **IMPORTANT: Use intervals/ranges, not exact numbers** + + Provide rough estimates with intervals to avoid false precision: + + ```markdown + ### Test Effort Estimates + + - P0 scenarios: 15 tests (~1.5-2.5 hours each) = **~25-40 hours** + - P1 scenarios: 25 tests (~0.75-1.5 hours each) = **~20-35 hours** + - P2 scenarios: 40 tests (~0.25-0.75 hours each) = **~10-30 hours** + - **Total:** **~55-105 hours** (~1.5-3 weeks with 1 QA engineer) + ``` + + **Why intervals:** + - Avoids false precision (estimates are never exact) + - Provides flexibility for complexity variations + - Accounts for unknowns and dependencies + - More realistic and less prescriptive + + **Guidelines:** + - P0 tests: 1.5-2.5 hours each (complex setup, security, performance) + - P1 tests: 0.75-1.5 hours each (standard integration, API tests) + - P2 tests: 0.25-0.75 hours each (edge cases, simple validation) + - P3 tests: 0.1-0.5 hours each (exploratory, documentation) + + **Express totals as:** + - Hour ranges: "~55-105 hours" + - Week ranges: "~1.5-3 weeks" + - Avoid: Exact numbers like "75 hours" or "11 days" + +5. **Add Gate Criteria** + + ```markdown + ### Quality Gate Criteria + + - All P0 tests pass (100%) + - P1 tests pass rate ≥95% + - No high-risk (score ≥6) items unmitigated + - Test coverage ≥80% for critical paths + ``` + +6. **Write to Output File** + + Save to `{output_folder}/test-design-epic-{epic_num}.md` using template structure. 
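+
+As a companion to the resource-estimate guidelines above, the interval arithmetic can be expressed as a small helper. This is a minimal sketch only; the `Priority`/`Range` types and the `estimateHours` function are illustrative, not part of the workflow or its templates.
+
+```typescript
+// Projects an effort interval from per-priority test counts, using the
+// per-test hour ranges suggested above (assumed values, tune per project).
+type Priority = 'P0' | 'P1' | 'P2' | 'P3';
+type Range = { min: number; max: number };
+
+const HOURS_PER_TEST: Record<Priority, Range> = {
+  P0: { min: 1.5, max: 2.5 },
+  P1: { min: 0.75, max: 1.5 },
+  P2: { min: 0.25, max: 0.75 },
+  P3: { min: 0.1, max: 0.5 },
+};
+
+function estimateHours(counts: Partial<Record<Priority, number>>): Range {
+  let min = 0;
+  let max = 0;
+  for (const [priority, count] of Object.entries(counts) as [Priority, number][]) {
+    min += count * HOURS_PER_TEST[priority].min;
+    max += count * HOURS_PER_TEST[priority].max;
+  }
+  // Round the bounds; always report the interval, never a point estimate.
+  return { min: Math.round(min), max: Math.round(max) };
+}
+
+// The example above (15 P0 + 25 P1 + 40 P2) yields { min: 51, max: 105 },
+// which the sample estimate rounds to the friendlier "~55-105 hours".
+console.log(estimateHours({ P0: 15, P1: 25, P2: 40 }));
+```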
+ +--- + +## Important Notes + +### Risk Category Definitions + +**TECH** (Technical/Architecture): + +- Architecture flaws or technical debt +- Integration complexity +- Scalability concerns + +**SEC** (Security): + +- Missing security controls +- Authentication/authorization gaps +- Data exposure risks + +**PERF** (Performance): + +- SLA risk or performance degradation +- Resource constraints +- Scalability bottlenecks + +**DATA** (Data Integrity): + +- Data loss or corruption potential +- State consistency issues +- Migration risks + +**BUS** (Business Impact): + +- User experience harm +- Business logic errors +- Revenue or compliance impact + +**OPS** (Operations): + +- Deployment or runtime failures +- Configuration issues +- Monitoring/observability gaps + +### Risk Scoring Methodology + +**Probability × Impact = Risk Score** + +Examples: + +- High likelihood (3) × Critical impact (3) = **Score 9** (highest priority) +- Possible (2) × Critical (3) = **Score 6** (high priority threshold) +- Unlikely (1) × Minor (1) = **Score 1** (low priority) + +**Threshold**: Scores ≥6 require immediate mitigation. + +### Test Level Selection Strategy + +**Avoid duplication:** + +- Don't test same behavior at E2E and API level +- Use E2E for critical paths only +- Use API tests for complex business logic +- Use unit tests for edge cases + +**Tradeoffs:** + +- E2E: High confidence, slow execution, brittle +- API: Good balance, fast, stable +- Unit: Fastest feedback, narrow scope + +### Priority Assignment Guidelines + +**P0 criteria** (all must be true): + +- Blocks core functionality +- High-risk (score ≥6) +- No workaround exists +- Affects majority of users + +**P1 criteria**: + +- Important feature +- Medium risk (score 3-5) +- Workaround exists but difficult + +**P2/P3**: Everything else, prioritized by value + +### Knowledge Base Integration + +**Core Fragments (Auto-loaded in Step 1):** + +- `risk-governance.md` - Risk classification (6 categories), automated scoring, gate decision engine, coverage traceability, owner tracking (625 lines, 4 examples) +- `probability-impact.md` - Probability × impact matrix, automated classification thresholds, dynamic re-assessment, gate integration (604 lines, 4 examples) +- `test-levels-framework.md` - E2E vs API vs Component vs Unit decision framework with characteristics matrix (467 lines, 4 examples) +- `test-priorities-matrix.md` - P0-P3 automated priority calculation, risk-based mapping, tagging strategy, time budgets (389 lines, 2 examples) + +**Reference for Test Planning:** + +- `selective-testing.md` - Execution strategy: tag-based, spec filters, diff-based selection, promotion rules (727 lines, 4 examples) +- `fixture-architecture.md` - Data setup patterns: pure function → fixture → mergeTests, auto-cleanup (406 lines, 5 examples) + +**Manual Reference (Optional):** + +- Use `tea-index.csv` to find additional specialized fragments as needed + +### Evidence-Based Assessment + +**Critical principle:** Base risk assessment on evidence, not speculation. + +**Evidence sources:** + +- PRD and user research +- Architecture documentation +- Historical bug data +- User feedback +- Security audit results + +**Avoid:** + +- Guessing business impact +- Assuming user behavior +- Inventing requirements + +**When uncertain:** Document assumptions and request clarification from user. 
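+
+The scoring methodology above reduces to a few lines of code. A minimal sketch (TypeScript; the `riskScore` and `classify` names are illustrative, not part of the workflow):
+
+```typescript
+// Probability × Impact = Risk Score, with band thresholds as defined above.
+type Probability = 1 | 2 | 3; // 1 = Unlikely (<10%), 2 = Possible (10-50%), 3 = Likely (>50%)
+type Impact = 1 | 2 | 3; // 1 = Minor, 2 = Degraded, 3 = Critical
+type RiskBand = 'Low (monitor)' | 'Medium (plan mitigation)' | 'High (immediate mitigation)';
+
+function riskScore(probability: Probability, impact: Impact): number {
+  return probability * impact; // reachable scores: 1, 2, 3, 4, 6, 9
+}
+
+function classify(score: number): RiskBand {
+  if (score >= 6) return 'High (immediate mitigation)';
+  if (score >= 3) return 'Medium (plan mitigation)';
+  return 'Low (monitor)';
+}
+
+// The worked examples above:
+classify(riskScore(3, 3)); // score 9: 'High (immediate mitigation)'
+classify(riskScore(2, 3)); // score 6: 'High (immediate mitigation)' (the ≥6 threshold)
+classify(riskScore(1, 1)); // score 1: 'Low (monitor)'
+```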
+ +--- + +## Output Summary + +After completing this workflow, provide a summary: + +```markdown +## Test Design Complete + +**Epic**: {epic_num} +**Scope**: {design_level} + +**Risk Assessment**: + +- Total risks identified: {count} +- High-priority risks (≥6): {high_count} +- Categories: {categories} + +**Coverage Plan**: + +- P0 scenarios: {p0_count} ({p0_hours} hours) +- P1 scenarios: {p1_count} ({p1_hours} hours) +- P2/P3 scenarios: {p2p3_count} ({p2p3_hours} hours) +- **Total effort**: {total_hours} hours (~{total_days} days) + +**Test Levels**: + +- E2E: {e2e_count} +- API: {api_count} +- Component: {component_count} +- Unit: {unit_count} + +**Quality Gate Criteria**: + +- P0 pass rate: 100% +- P1 pass rate: ≥95% +- High-risk mitigations: 100% +- Coverage: ≥80% + +**Output File**: {output_file} + +**Next Steps**: + +1. Review risk assessment with team +2. Prioritize mitigation for high-risk items (score ≥6) +3. Run `*atdd` to generate failing tests for P0 scenarios (separate workflow; not auto-run by `*test-design`) +4. Allocate resources per effort estimates +5. Set up test data factories and fixtures +``` + +--- + +## Validation + +After completing all steps, verify: + +- [ ] Risk assessment complete with all categories +- [ ] All risks scored (probability × impact) +- [ ] High-priority risks (≥6) flagged +- [ ] Coverage matrix maps requirements to test levels +- [ ] Priority levels assigned (P0-P3) +- [ ] Execution order defined +- [ ] Resource estimates provided +- [ ] Quality gate criteria defined +- [ ] Output file created and formatted correctly + +Refer to `checklist.md` for comprehensive validation criteria. diff --git a/src/bmm/workflows/testarch/test-design/test-design-architecture-template.md b/src/bmm/workflows/testarch/test-design/test-design-architecture-template.md new file mode 100644 index 00000000..571f6f20 --- /dev/null +++ b/src/bmm/workflows/testarch/test-design/test-design-architecture-template.md @@ -0,0 +1,213 @@ +# Test Design for Architecture: {Feature Name} + +**Purpose:** Architectural concerns, testability gaps, and NFR requirements for review by Architecture/Dev teams. Serves as a contract between QA and Engineering on what must be addressed before test development begins. + +**Date:** {date} +**Author:** {author} +**Status:** Architecture Review Pending +**Project:** {project_name} +**PRD Reference:** {prd_link} +**ADR Reference:** {adr_link} + +--- + +## Executive Summary + +**Scope:** {Brief description of feature scope} + +**Business Context** (from PRD): +- **Revenue/Impact:** {Business metrics if applicable} +- **Problem:** {Problem being solved} +- **GA Launch:** {Target date or timeline} + +**Architecture** (from ADR {adr_number}): +- **Key Decision 1:** {e.g., OAuth 2.1 authentication} +- **Key Decision 2:** {e.g., Centralized MCP Server pattern} +- **Key Decision 3:** {e.g., Stack: TypeScript, SDK v1.x} + +**Expected Scale** (from ADR): +- {RPS, volume, users, etc.} + +**Risk Summary:** +- **Total risks**: {N} +- **High-priority (≥6)**: {N} risks requiring immediate mitigation +- **Test effort**: ~{N} tests (~{X} weeks for 1 QA, ~{Y} weeks for 2 QAs) + +--- + +## Quick Guide + +### 🚨 BLOCKERS - Team Must Decide (Can't Proceed Without) + +**Sprint 0 Critical Path** - These MUST be completed before QA can write integration tests: + +1. **{Blocker ID}: {Blocker Title}** - {What architecture must provide} (recommended owner: {Team/Role}) +2. **{Blocker ID}: {Blocker Title}** - {What architecture must provide} (recommended owner: {Team/Role}) +3. 
**{Blocker ID}: {Blocker Title}** - {What architecture must provide} (recommended owner: {Team/Role}) + +**What we need from team:** Complete these {N} items in Sprint 0 or test development is blocked. + +--- + +### ⚠️ HIGH PRIORITY - Team Should Validate (We Provide Recommendation, You Approve) + +1. **{Risk ID}: {Title}** - {Recommendation + who should approve} (Sprint {N}) +2. **{Risk ID}: {Title}** - {Recommendation + who should approve} (Sprint {N}) +3. **{Risk ID}: {Title}** - {Recommendation + who should approve} (Sprint {N}) + +**What we need from team:** Review recommendations and approve (or suggest changes). + +--- + +### 📋 INFO ONLY - Solutions Provided (Review, No Decisions Needed) + +1. **Test strategy**: {Test level split} ({Rationale}) +2. **Tooling**: {Test frameworks and utilities} +3. **Tiered CI/CD**: {Execution tiers with timing} +4. **Coverage**: ~{N} test scenarios prioritized P0-P3 with risk-based classification +5. **Quality gates**: {Pass criteria} + +**What we need from team:** Just review and acknowledge (we already have the solution). + +--- + +## For Architects and Devs - Open Topics 👷 + +### Risk Assessment + +**Total risks identified**: {N} ({X} high-priority score ≥6, {Y} medium, {Z} low) + +#### High-Priority Risks (Score ≥6) - IMMEDIATE ATTENTION + +| Risk ID | Category | Description | Probability | Impact | Score | Mitigation | Owner | Timeline | +|---------|----------|-------------|-------------|--------|-------|------------|-------|----------| +| **{R-ID}** | **{CAT}** | {Description} | {1-3} | {1-3} | **{Score}** | {Mitigation strategy} | {Owner} | {Date} | + +#### Medium-Priority Risks (Score 3-5) + +| Risk ID | Category | Description | Probability | Impact | Score | Mitigation | Owner | +|---------|----------|-------------|-------------|--------|-------|------------|-------| +| {R-ID} | {CAT} | {Description} | {1-3} | {1-3} | {Score} | {Mitigation} | {Owner} | + +#### Low-Priority Risks (Score 1-2) + +| Risk ID | Category | Description | Probability | Impact | Score | Action | +|---------|----------|-------------|-------------|--------|-------|--------| +| {R-ID} | {CAT} | {Description} | {1-3} | {1-3} | {Score} | Monitor | + +#### Risk Category Legend + +- **TECH**: Technical/Architecture (flaws, integration, scalability) +- **SEC**: Security (access controls, auth, data exposure) +- **PERF**: Performance (SLA violations, degradation, resource limits) +- **DATA**: Data Integrity (loss, corruption, inconsistency) +- **BUS**: Business Impact (UX harm, logic errors, revenue) +- **OPS**: Operations (deployment, config, monitoring) + +--- + +### Testability Concerns and Architectural Gaps + +**🚨 ACTIONABLE CONCERNS - Architecture Team Must Address** + +{If system has critical testability concerns, list them here. If architecture supports testing well, state "No critical testability concerns identified" and skip to Testability Assessment Summary} + +#### 1. Blockers to Fast Feedback (WHAT WE NEED FROM ARCHITECTURE) + +| Concern | Impact | What Architecture Must Provide | Owner | Timeline | +|---------|--------|--------------------------------|-------|----------| +| **{Concern name}** | {Impact on testing} | {Specific architectural change needed} | {Team} | {Sprint} | + +**Example:** +- **No API for test data seeding** → Cannot parallelize tests → Provide POST /test/seed endpoint (Backend, Sprint 0) + +#### 2. Architectural Improvements Needed (WHAT SHOULD BE CHANGED) + +{List specific improvements that would make the system more testable} + +1. 
**{Improvement name}** + - **Current problem**: {What's wrong} + - **Required change**: {What architecture must do} + - **Impact if not fixed**: {Consequences} + - **Owner**: {Team} + - **Timeline**: {Sprint} + +--- + +### Testability Assessment Summary + +**📊 CURRENT STATE - FYI** + +{Only include this section if there are passing items worth mentioning. Otherwise omit.} + +#### What Works Well + +- ✅ {Passing item 1} (e.g., "API-first design supports parallel test execution") +- ✅ {Passing item 2} (e.g., "Feature flags enable test isolation") +- ✅ {Passing item 3} + +#### Accepted Trade-offs (No Action Required) + +For {Feature} Phase 1, the following trade-offs are acceptable: +- **{Trade-off 1}** - {Why acceptable for now} +- **{Trade-off 2}** - {Why acceptable for now} + +{This is technical debt OR acceptable for Phase 1} that {should be revisited post-GA OR maintained as-is} + +--- + +### Risk Mitigation Plans (High-Priority Risks ≥6) + +**Purpose**: Detailed mitigation strategies for all {N} high-priority risks (score ≥6). These risks MUST be addressed before {GA launch date or milestone}. + +#### {R-ID}: {Risk Description} (Score: {Score}) - {CRITICALITY LEVEL} + +**Mitigation Strategy:** +1. {Step 1} +2. {Step 2} +3. {Step 3} + +**Owner:** {Owner} +**Timeline:** {Sprint or date} +**Status:** Planned / In Progress / Complete +**Verification:** {How to verify mitigation is effective} + +--- + +{Repeat for all high-priority risks} + +--- + +### Assumptions and Dependencies + +#### Assumptions + +1. {Assumption about architecture or requirements} +2. {Assumption about team or timeline} +3. {Assumption about scope or constraints} + +#### Dependencies + +1. {Dependency} - Required by {date/sprint} +2. {Dependency} - Required by {date/sprint} + +#### Risks to Plan + +- **Risk**: {Risk to the test plan itself} + - **Impact**: {How it affects testing} + - **Contingency**: {Backup plan} + +--- + +**End of Architecture Document** + +**Next Steps for Architecture Team:** +1. Review Quick Guide (🚨/⚠️/📋) and prioritize blockers +2. Assign owners and timelines for high-priority risks (≥6) +3. Validate assumptions and dependencies +4. Provide feedback to QA on testability gaps + +**Next Steps for QA Team:** +1. Wait for Sprint 0 blockers to be resolved +2. Refer to companion QA doc (test-design-qa.md) for test scenarios +3. Begin test infrastructure setup (factories, fixtures, environments) diff --git a/src/bmm/workflows/testarch/test-design/test-design-qa-template.md b/src/bmm/workflows/testarch/test-design/test-design-qa-template.md new file mode 100644 index 00000000..037856b7 --- /dev/null +++ b/src/bmm/workflows/testarch/test-design/test-design-qa-template.md @@ -0,0 +1,286 @@ +# Test Design for QA: {Feature Name} + +**Purpose:** Test execution recipe for QA team. Defines what to test, how to test it, and what QA needs from other teams. + +**Date:** {date} +**Author:** {author} +**Status:** Draft +**Project:** {project_name} + +**Related:** See Architecture doc (test-design-architecture.md) for testability concerns and architectural blockers. 
+ +--- + +## Executive Summary + +**Scope:** {Brief description of testing scope} + +**Risk Summary:** +- Total Risks: {N} ({X} high-priority score ≥6, {Y} medium, {Z} low) +- Critical Categories: {Categories with most high-priority risks} + +**Coverage Summary:** +- P0 tests: ~{N} (critical paths, security) +- P1 tests: ~{N} (important features, integration) +- P2 tests: ~{N} (edge cases, regression) +- P3 tests: ~{N} (exploratory, benchmarks) +- **Total**: ~{N} tests (~{X}-{Y} weeks with 1 QA) + +--- + +## Dependencies & Test Blockers + +**CRITICAL:** QA cannot proceed without these items from other teams. + +### Backend/Architecture Dependencies (Sprint 0) + +**Source:** See Architecture doc "Quick Guide" for detailed mitigation plans + +1. **{Dependency 1}** - {Team} - {Timeline} + - {What QA needs} + - {Why it blocks testing} + +2. **{Dependency 2}** - {Team} - {Timeline} + - {What QA needs} + - {Why it blocks testing} + +### QA Infrastructure Setup (Sprint 0) + +1. **Test Data Factories** - QA + - {Entity} factory with faker-based randomization + - Auto-cleanup fixtures for parallel safety + +2. **Test Environments** - QA + - Local: {Setup details} + - CI/CD: {Setup details} + - Staging: {Setup details} + +**Example factory pattern:** + +```typescript +import { test } from '@seontechnologies/playwright-utils/api-request/fixtures'; +import { expect } from '@playwright/test'; +import { faker } from '@faker-js/faker'; + +test('example test @P0', async ({ apiRequest }) => { + const testData = { + id: `test-${faker.string.uuid()}`, + email: faker.internet.email(), + }; + + const { status } = await apiRequest({ + method: 'POST', + path: '/api/resource', + body: testData, + }); + + expect(status).toBe(201); +}); +``` + +--- + +## Risk Assessment + +**Note:** Full risk details in Architecture doc. This section summarizes risks relevant to QA test planning. + +### High-Priority Risks (Score ≥6) + +| Risk ID | Category | Description | Score | QA Test Coverage | +|---------|----------|-------------|-------|------------------| +| **{R-ID}** | {CAT} | {Brief description} | **{Score}** | {How QA validates this risk} | + +### Medium/Low-Priority Risks + +| Risk ID | Category | Description | Score | QA Test Coverage | +|---------|----------|-------------|-------|------------------| +| {R-ID} | {CAT} | {Brief description} | {Score} | {How QA validates this risk} | + +--- + +## Test Coverage Plan + +**IMPORTANT:** P0/P1/P2/P3 = **priority and risk level** (what to focus on if time-constrained), NOT execution timing. See "Execution Strategy" for when tests run. 
+ +### P0 (Critical) + +**Criteria:** Blocks core functionality + High risk (≥6) + No workaround + Affects majority of users + +| Test ID | Requirement | Test Level | Risk Link | Notes | +|---------|-------------|------------|-----------|-------| +| **P0-001** | {Requirement} | {Level} | {R-ID} | {Notes} | +| **P0-002** | {Requirement} | {Level} | {R-ID} | {Notes} | + +**Total P0:** ~{N} tests + +--- + +### P1 (High) + +**Criteria:** Important features + Medium risk (3-4) + Common workflows + Workaround exists but difficult + +| Test ID | Requirement | Test Level | Risk Link | Notes | +|---------|-------------|------------|-----------|-------| +| **P1-001** | {Requirement} | {Level} | {R-ID} | {Notes} | +| **P1-002** | {Requirement} | {Level} | {R-ID} | {Notes} | + +**Total P1:** ~{N} tests + +--- + +### P2 (Medium) + +**Criteria:** Secondary features + Low risk (1-2) + Edge cases + Regression prevention + +| Test ID | Requirement | Test Level | Risk Link | Notes | +|---------|-------------|------------|-----------|-------| +| **P2-001** | {Requirement} | {Level} | {R-ID} | {Notes} | + +**Total P2:** ~{N} tests + +--- + +### P3 (Low) + +**Criteria:** Nice-to-have + Exploratory + Performance benchmarks + Documentation validation + +| Test ID | Requirement | Test Level | Notes | +|---------|-------------|------------|-------| +| **P3-001** | {Requirement} | {Level} | {Notes} | + +**Total P3:** ~{N} tests + +--- + +## Execution Strategy + +**Philosophy:** Run everything in PRs unless there's significant infrastructure overhead. Playwright with parallelization is extremely fast (100s of tests in ~10-15 min). + +**Organized by TOOL TYPE:** + +### Every PR: Playwright Tests (~10-15 min) + +**All functional tests** (from any priority level): +- All E2E, API, integration, unit tests using Playwright +- Parallelized across {N} shards +- Total: ~{N} Playwright tests (includes P0, P1, P2, P3) + +**Why run in PRs:** Fast feedback, no expensive infrastructure + +### Nightly: k6 Performance Tests (~30-60 min) + +**All performance tests** (from any priority level): +- Load, stress, spike, endurance tests +- Total: ~{N} k6 tests (may include P0, P1, P2) + +**Why defer to nightly:** Expensive infrastructure (k6 Cloud), long-running (10-40 min per test) + +### Weekly: Chaos & Long-Running (~hours) + +**Special infrastructure tests** (from any priority level): +- Multi-region failover (requires AWS Fault Injection Simulator) +- Disaster recovery (backup restore, 4+ hours) +- Endurance tests (4+ hours runtime) + +**Why defer to weekly:** Very expensive infrastructure, very long-running, infrequent validation sufficient + +**Manual tests** (excluded from automation): +- DevOps validation (deployment, monitoring) +- Finance validation (cost alerts) +- Documentation validation + +--- + +## QA Effort Estimate + +**QA test development effort only** (excludes DevOps, Backend, Data Eng, Finance work): + +| Priority | Count | Effort Range | Notes | +|----------|-------|--------------|-------| +| P0 | ~{N} | ~{X}-{Y} weeks | Complex setup (security, performance, multi-step) | +| P1 | ~{N} | ~{X}-{Y} weeks | Standard coverage (integration, API tests) | +| P2 | ~{N} | ~{X}-{Y} days | Edge cases, simple validation | +| P3 | ~{N} | ~{X}-{Y} days | Exploratory, benchmarks | +| **Total** | ~{N} | **~{X}-{Y} weeks** | **1 QA engineer, full-time** | + +**Assumptions:** +- Includes test design, implementation, debugging, CI integration +- Excludes ongoing maintenance (~10% effort) +- Assumes test infrastructure (factories, 
fixtures) ready + +**Dependencies from other teams:** +- See "Dependencies & Test Blockers" section for what QA needs from Backend, DevOps, Data Eng + +--- + +## Appendix A: Code Examples & Tagging + +**Playwright Tags for Selective Execution:** + +```typescript +import { test } from '@seontechnologies/playwright-utils/api-request/fixtures'; +import { expect } from '@playwright/test'; + +// P0 critical test +test('@P0 @API @Security unauthenticated request returns 401', async ({ apiRequest }) => { + const { status, body } = await apiRequest({ + method: 'POST', + path: '/api/endpoint', + body: { data: 'test' }, + skipAuth: true, + }); + + expect(status).toBe(401); + expect(body.error).toContain('unauthorized'); +}); + +// P1 integration test +test('@P1 @Integration data syncs correctly', async ({ apiRequest }) => { + // Seed data + await apiRequest({ + method: 'POST', + path: '/api/seed', + body: { /* test data */ }, + }); + + // Validate + const { status, body } = await apiRequest({ + method: 'GET', + path: '/api/resource', + }); + + expect(status).toBe(200); + expect(body).toHaveProperty('data'); +}); +``` + +**Run specific tags:** + +```bash +# Run only P0 tests +npx playwright test --grep @P0 + +# Run P0 + P1 tests +npx playwright test --grep "@P0|@P1" + +# Run only security tests +npx playwright test --grep @Security + +# Run all Playwright tests in PR (default) +npx playwright test +``` + +--- + +## Appendix B: Knowledge Base References + +- **Risk Governance**: `risk-governance.md` - Risk scoring methodology +- **Test Priorities Matrix**: `test-priorities-matrix.md` - P0-P3 criteria +- **Test Levels Framework**: `test-levels-framework.md` - E2E vs API vs Unit selection +- **Test Quality**: `test-quality.md` - Definition of Done (no hard waits, <300 lines, <1.5 min) + +--- + +**Generated by:** BMad TEA Agent +**Workflow:** `_bmad/bmm/testarch/test-design` +**Version:** 4.0 (BMad v6) diff --git a/src/bmm/workflows/testarch/test-design/test-design-template.md b/src/bmm/workflows/testarch/test-design/test-design-template.md new file mode 100644 index 00000000..a064fe58 --- /dev/null +++ b/src/bmm/workflows/testarch/test-design/test-design-template.md @@ -0,0 +1,294 @@ +# Test Design: Epic {epic_num} - {epic_title} + +**Date:** {date} +**Author:** {user_name} +**Status:** Draft / Approved + +--- + +## Executive Summary + +**Scope:** {design_level} test design for Epic {epic_num} + +**Risk Summary:** + +- Total risks identified: {total_risks} +- High-priority risks (≥6): {high_priority_count} +- Critical categories: {top_categories} + +**Coverage Summary:** + +- P0 scenarios: {p0_count} ({p0_hours} hours) +- P1 scenarios: {p1_count} ({p1_hours} hours) +- P2/P3 scenarios: {p2p3_count} ({p2p3_hours} hours) +- **Total effort**: {total_hours} hours (~{total_days} days) + +--- + +## Risk Assessment + +### High-Priority Risks (Score ≥6) + +| Risk ID | Category | Description | Probability | Impact | Score | Mitigation | Owner | Timeline | +| ------- | -------- | ------------- | ----------- | ------ | ----- | ------------ | ------- | -------- | +| R-001 | SEC | {description} | 2 | 3 | 6 | {mitigation} | {owner} | {date} | +| R-002 | PERF | {description} | 3 | 2 | 6 | {mitigation} | {owner} | {date} | + +### Medium-Priority Risks (Score 3-4) + +| Risk ID | Category | Description | Probability | Impact | Score | Mitigation | Owner | +| ------- | -------- | ------------- | ----------- | ------ | ----- | ------------ | ------- | +| R-003 | TECH | {description} | 2 | 2 | 4 | {mitigation} | {owner} | +| 
R-004 | DATA | {description} | 1 | 3 | 3 | {mitigation} | {owner} | + +### Low-Priority Risks (Score 1-2) + +| Risk ID | Category | Description | Probability | Impact | Score | Action | +| ------- | -------- | ------------- | ----------- | ------ | ----- | ------- | +| R-005 | OPS | {description} | 1 | 2 | 2 | Monitor | +| R-006 | BUS | {description} | 1 | 1 | 1 | Monitor | + +### Risk Category Legend + +- **TECH**: Technical/Architecture (flaws, integration, scalability) +- **SEC**: Security (access controls, auth, data exposure) +- **PERF**: Performance (SLA violations, degradation, resource limits) +- **DATA**: Data Integrity (loss, corruption, inconsistency) +- **BUS**: Business Impact (UX harm, logic errors, revenue) +- **OPS**: Operations (deployment, config, monitoring) + +--- + +## Test Coverage Plan + +### P0 (Critical) - Run on every commit + +**Criteria**: Blocks core journey + High risk (≥6) + No workaround + +| Requirement | Test Level | Risk Link | Test Count | Owner | Notes | +| ------------- | ---------- | --------- | ---------- | ----- | ------- | +| {requirement} | E2E | R-001 | 3 | QA | {notes} | +| {requirement} | API | R-002 | 5 | QA | {notes} | + +**Total P0**: {p0_count} tests, {p0_hours} hours + +### P1 (High) - Run on PR to main + +**Criteria**: Important features + Medium risk (3-4) + Common workflows + +| Requirement | Test Level | Risk Link | Test Count | Owner | Notes | +| ------------- | ---------- | --------- | ---------- | ----- | ------- | +| {requirement} | API | R-003 | 4 | QA | {notes} | +| {requirement} | Component | - | 6 | DEV | {notes} | + +**Total P1**: {p1_count} tests, {p1_hours} hours + +### P2 (Medium) - Run nightly/weekly + +**Criteria**: Secondary features + Low risk (1-2) + Edge cases + +| Requirement | Test Level | Risk Link | Test Count | Owner | Notes | +| ------------- | ---------- | --------- | ---------- | ----- | ------- | +| {requirement} | API | R-004 | 8 | QA | {notes} | +| {requirement} | Unit | - | 15 | DEV | {notes} | + +**Total P2**: {p2_count} tests, {p2_hours} hours + +### P3 (Low) - Run on-demand + +**Criteria**: Nice-to-have + Exploratory + Performance benchmarks + +| Requirement | Test Level | Test Count | Owner | Notes | +| ------------- | ---------- | ---------- | ----- | ------- | +| {requirement} | E2E | 2 | QA | {notes} | +| {requirement} | Unit | 8 | DEV | {notes} | + +**Total P3**: {p3_count} tests, {p3_hours} hours + +--- + +## Execution Order + +### Smoke Tests (<5 min) + +**Purpose**: Fast feedback, catch build-breaking issues + +- [ ] {scenario} (30s) +- [ ] {scenario} (45s) +- [ ] {scenario} (1min) + +**Total**: {smoke_count} scenarios + +### P0 Tests (<10 min) + +**Purpose**: Critical path validation + +- [ ] {scenario} (E2E) +- [ ] {scenario} (API) +- [ ] {scenario} (API) + +**Total**: {p0_count} scenarios + +### P1 Tests (<30 min) + +**Purpose**: Important feature coverage + +- [ ] {scenario} (API) +- [ ] {scenario} (Component) + +**Total**: {p1_count} scenarios + +### P2/P3 Tests (<60 min) + +**Purpose**: Full regression coverage + +- [ ] {scenario} (Unit) +- [ ] {scenario} (API) + +**Total**: {p2p3_count} scenarios + +--- + +## Resource Estimates + +### Test Development Effort + +| Priority | Count | Hours/Test | Total Hours | Notes | +| --------- | ----------------- | ---------- | ----------------- | ----------------------- | +| P0 | {p0_count} | 2.0 | {p0_hours} | Complex setup, security | +| P1 | {p1_count} | 1.0 | {p1_hours} | Standard coverage | +| P2 | {p2_count} | 0.5 | {p2_hours} | Simple scenarios 
| +| P3 | {p3_count} | 0.25 | {p3_hours} | Exploratory | +| **Total** | **{total_count}** | **-** | **{total_hours}** | **~{total_days} days** | + +### Prerequisites + +**Test Data:** + +- {factory_name} factory (faker-based, auto-cleanup) +- {fixture_name} fixture (setup/teardown) + +**Tooling:** + +- {tool} for {purpose} +- {tool} for {purpose} + +**Environment:** + +- {env_requirement} +- {env_requirement} + +--- + +## Quality Gate Criteria + +### Pass/Fail Thresholds + +- **P0 pass rate**: 100% (no exceptions) +- **P1 pass rate**: ≥95% (waivers required for failures) +- **P2/P3 pass rate**: ≥90% (informational) +- **High-risk mitigations**: 100% complete or approved waivers + +### Coverage Targets + +- **Critical paths**: ≥80% +- **Security scenarios**: 100% +- **Business logic**: ≥70% +- **Edge cases**: ≥50% + +### Non-Negotiable Requirements + +- [ ] All P0 tests pass +- [ ] No high-risk (≥6) items unmitigated +- [ ] Security tests (SEC category) pass 100% +- [ ] Performance targets met (PERF category) + +--- + +## Mitigation Plans + +### R-001: {Risk Description} (Score: 6) + +**Mitigation Strategy:** {detailed_mitigation} +**Owner:** {owner} +**Timeline:** {date} +**Status:** Planned / In Progress / Complete +**Verification:** {how_to_verify} + +### R-002: {Risk Description} (Score: 6) + +**Mitigation Strategy:** {detailed_mitigation} +**Owner:** {owner} +**Timeline:** {date} +**Status:** Planned / In Progress / Complete +**Verification:** {how_to_verify} + +--- + +## Assumptions and Dependencies + +### Assumptions + +1. {assumption} +2. {assumption} +3. {assumption} + +### Dependencies + +1. {dependency} - Required by {date} +2. {dependency} - Required by {date} + +### Risks to Plan + +- **Risk**: {risk_to_plan} + - **Impact**: {impact} + - **Contingency**: {contingency} + +--- + +--- + +## Follow-on Workflows (Manual) + +- Run `*atdd` to generate failing P0 tests (separate workflow; not auto-run). +- Run `*automate` for broader coverage once implementation exists. + +--- + +## Approval + +**Test Design Approved By:** + +- [ ] Product Manager: {name} Date: {date} +- [ ] Tech Lead: {name} Date: {date} +- [ ] QA Lead: {name} Date: {date} + +**Comments:** + +--- + +--- + +--- + +## Appendix + +### Knowledge Base References + +- `risk-governance.md` - Risk classification framework +- `probability-impact.md` - Risk scoring methodology +- `test-levels-framework.md` - Test level selection +- `test-priorities-matrix.md` - P0-P3 prioritization + +### Related Documents + +- PRD: {prd_link} +- Epic: {epic_link} +- Architecture: {arch_link} +- Tech Spec: {tech_spec_link} + +--- + +**Generated by**: BMad TEA Agent - Test Architect Module +**Workflow**: `_bmad/bmm/testarch/test-design` +**Version**: 4.0 (BMad v6) diff --git a/src/bmm/workflows/testarch/test-design/workflow.yaml b/src/bmm/workflows/testarch/test-design/workflow.yaml new file mode 100644 index 00000000..961eff34 --- /dev/null +++ b/src/bmm/workflows/testarch/test-design/workflow.yaml @@ -0,0 +1,71 @@ +# Test Architect workflow: test-design +name: testarch-test-design +description: "Dual-mode workflow: (1) System-level testability review in Solutioning phase, or (2) Epic-level test planning in Implementation phase. Auto-detects mode based on project phase." 
+author: "BMad" + +# Critical variables from config +config_source: "{project-root}/_bmad/bmm/config.yaml" +output_folder: "{config_source}:output_folder" +user_name: "{config_source}:user_name" +communication_language: "{config_source}:communication_language" +document_output_language: "{config_source}:document_output_language" +date: system-generated + +# Workflow components +installed_path: "{project-root}/_bmad/bmm/workflows/testarch/test-design" +instructions: "{installed_path}/instructions.md" +validation: "{installed_path}/checklist.md" +# Note: Template selection is mode-based (see instructions.md Step 1.5): +# - System-level: test-design-architecture-template.md + test-design-qa-template.md +# - Epic-level: test-design-template.md (unchanged) +template: "{installed_path}/test-design-template.md" + +# Variables and inputs +variables: + design_level: "full" # full, targeted, minimal - scope of design effort + mode: "auto-detect" # auto-detect (default), system-level, epic-level + +# Output configuration +# Note: Actual output file determined dynamically based on mode detection +# Declared outputs for new workflow format +outputs: + # System-Level Mode (Phase 3) - TWO documents + - id: test-design-architecture + description: "System-level test architecture: Architectural concerns, testability gaps, NFR requirements for Architecture/Dev teams" + path: "{output_folder}/test-design-architecture.md" + mode: system-level + audience: architecture + + - id: test-design-qa + description: "System-level test design: Test execution recipe, coverage plan, Sprint 0 setup for QA team" + path: "{output_folder}/test-design-qa.md" + mode: system-level + audience: qa + + # Epic-Level Mode (Phase 4) - ONE document (unchanged) + - id: epic-level + description: "Epic-level test plan (Phase 4)" + path: "{output_folder}/test-design-epic-{epic_num}.md" + mode: epic-level +# Note: No default_output_file - mode detection determines which outputs to write + +# Required tools +required_tools: + - read_file # Read PRD, epics, stories, architecture docs + - write_file # Create test design document + - list_files # Find related documentation + - search_repo # Search for existing tests and patterns + +tags: + - qa + - planning + - test-architect + - risk-assessment + - coverage + +execution_hints: + interactive: false # Minimize prompts + autonomous: true # Proceed without user input unless blocked + iterative: true + +web_bundle: false diff --git a/src/bmm/workflows/testarch/test-review/checklist.md b/src/bmm/workflows/testarch/test-review/checklist.md new file mode 100644 index 00000000..f4fca8af --- /dev/null +++ b/src/bmm/workflows/testarch/test-review/checklist.md @@ -0,0 +1,472 @@ +# Test Quality Review - Validation Checklist + +Use this checklist to validate that the test quality review workflow completed successfully and all quality criteria were properly evaluated. + +--- + +## Prerequisites + +Note: `test-review` is optional and only audits existing tests; it does not generate tests. + +### Test File Discovery + +- [ ] Test file(s) identified for review (single/directory/suite scope) +- [ ] Test files exist and are readable +- [ ] Test framework detected (Playwright, Jest, Cypress, Vitest, etc.) +- [ ] Test framework configuration found (playwright.config.ts, jest.config.js, etc.) 
+ +### Knowledge Base Loading + +- [ ] tea-index.csv loaded successfully +- [ ] `test-quality.md` loaded (Definition of Done) +- [ ] `fixture-architecture.md` loaded (Pure function → Fixture patterns) +- [ ] `network-first.md` loaded (Route intercept before navigate) +- [ ] `data-factories.md` loaded (Factory patterns) +- [ ] `test-levels-framework.md` loaded (E2E vs API vs Component vs Unit) +- [ ] All other enabled fragments loaded successfully + +### Context Gathering + +- [ ] Story file discovered or explicitly provided (if available) +- [ ] Test design document discovered or explicitly provided (if available) +- [ ] Acceptance criteria extracted from story (if available) +- [ ] Priority context (P0/P1/P2/P3) extracted from test-design (if available) + +--- + +## Process Steps + +### Step 1: Context Loading + +- [ ] Review scope determined (single/directory/suite) +- [ ] Test file paths collected +- [ ] Related artifacts discovered (story, test-design) +- [ ] Knowledge base fragments loaded successfully +- [ ] Quality criteria flags read from workflow variables + +### Step 2: Test File Parsing + +**For Each Test File:** + +- [ ] File read successfully +- [ ] File size measured (lines, KB) +- [ ] File structure parsed (describe blocks, it blocks) +- [ ] Test IDs extracted (if present) +- [ ] Priority markers extracted (if present) +- [ ] Imports analyzed +- [ ] Dependencies identified + +**Test Structure Analysis:** + +- [ ] Describe block count calculated +- [ ] It/test block count calculated +- [ ] BDD structure identified (Given-When-Then) +- [ ] Fixture usage detected +- [ ] Data factory usage detected +- [ ] Network interception patterns identified +- [ ] Assertions counted +- [ ] Waits and timeouts cataloged +- [ ] Conditionals (if/else) detected +- [ ] Try/catch blocks detected +- [ ] Shared state or globals detected + +### Step 3: Quality Criteria Validation + +**For Each Enabled Criterion:** + +#### BDD Format (if `check_given_when_then: true`) + +- [ ] Given-When-Then structure evaluated +- [ ] Status assigned (PASS/WARN/FAIL) +- [ ] Violations recorded with line numbers +- [ ] Examples of good/bad patterns noted + +#### Test IDs (if `check_test_ids: true`) + +- [ ] Test ID presence validated +- [ ] Test ID format checked (e.g., 1.3-E2E-001) +- [ ] Status assigned (PASS/WARN/FAIL) +- [ ] Missing IDs cataloged + +#### Priority Markers (if `check_priority_markers: true`) + +- [ ] P0/P1/P2/P3 classification validated +- [ ] Status assigned (PASS/WARN/FAIL) +- [ ] Missing priorities cataloged + +#### Hard Waits (if `check_hard_waits: true`) + +- [ ] sleep(), waitForTimeout(), hardcoded delays detected +- [ ] Justification comments checked +- [ ] Status assigned (PASS/WARN/FAIL) +- [ ] Violations recorded with line numbers and recommended fixes + +#### Determinism (if `check_determinism: true`) + +- [ ] Conditionals (if/else/switch) detected +- [ ] Try/catch abuse detected +- [ ] Random values (Math.random, Date.now) detected +- [ ] Status assigned (PASS/WARN/FAIL) +- [ ] Violations recorded with recommended fixes + +#### Isolation (if `check_isolation: true`) + +- [ ] Cleanup hooks (afterEach/afterAll) validated +- [ ] Shared state detected +- [ ] Global variable mutations detected +- [ ] Resource cleanup verified +- [ ] Status assigned (PASS/WARN/FAIL) +- [ ] Violations recorded with recommended fixes + +#### Fixture Patterns (if `check_fixture_patterns: true`) + +- [ ] Fixtures detected (test.extend) +- [ ] Pure functions validated +- [ ] mergeTests usage checked +- [ ] 
beforeEach complexity analyzed +- [ ] Status assigned (PASS/WARN/FAIL) +- [ ] Violations recorded with recommended fixes + +#### Data Factories (if `check_data_factories: true`) + +- [ ] Factory functions detected +- [ ] Hardcoded data (magic strings/numbers) detected +- [ ] Faker.js or similar usage validated +- [ ] API-first setup pattern checked +- [ ] Status assigned (PASS/WARN/FAIL) +- [ ] Violations recorded with recommended fixes + +#### Network-First (if `check_network_first: true`) + +- [ ] page.route() before page.goto() validated +- [ ] Race conditions detected (route after navigate) +- [ ] waitForResponse patterns checked +- [ ] Status assigned (PASS/WARN/FAIL) +- [ ] Violations recorded with recommended fixes + +#### Assertions (if `check_assertions: true`) + +- [ ] Explicit assertions counted +- [ ] Implicit waits without assertions detected +- [ ] Assertion specificity validated +- [ ] Status assigned (PASS/WARN/FAIL) +- [ ] Violations recorded with recommended fixes + +#### Test Length (if `check_test_length: true`) + +- [ ] File line count calculated +- [ ] Threshold comparison (≤300 lines ideal) +- [ ] Status assigned (PASS/WARN/FAIL) +- [ ] Splitting recommendations generated (if >300 lines) + +#### Test Duration (if `check_test_duration: true`) + +- [ ] Test complexity analyzed (as proxy for duration if no execution data) +- [ ] Threshold comparison (≤1.5 min target) +- [ ] Status assigned (PASS/WARN/FAIL) +- [ ] Optimization recommendations generated + +#### Flakiness Patterns (if `check_flakiness_patterns: true`) + +- [ ] Tight timeouts detected (e.g., { timeout: 1000 }) +- [ ] Race conditions detected +- [ ] Timing-dependent assertions detected +- [ ] Retry logic detected +- [ ] Environment-dependent assumptions detected +- [ ] Status assigned (PASS/WARN/FAIL) +- [ ] Violations recorded with recommended fixes + +--- + +### Step 4: Quality Score Calculation + +**Violation Counting:** + +- [ ] Critical (P0) violations counted +- [ ] High (P1) violations counted +- [ ] Medium (P2) violations counted +- [ ] Low (P3) violations counted +- [ ] Violation breakdown by criterion recorded + +**Score Calculation:** + +- [ ] Starting score: 100 +- [ ] Critical violations deducted (-10 each) +- [ ] High violations deducted (-5 each) +- [ ] Medium violations deducted (-2 each) +- [ ] Low violations deducted (-1 each) +- [ ] Bonus points added (max +30): + - [ ] Excellent BDD structure (+5 if applicable) + - [ ] Comprehensive fixtures (+5 if applicable) + - [ ] Comprehensive data factories (+5 if applicable) + - [ ] Network-first pattern (+5 if applicable) + - [ ] Perfect isolation (+5 if applicable) + - [ ] All test IDs present (+5 if applicable) +- [ ] Final score calculated: max(0, min(100, Starting - Violations + Bonus)) + +**Quality Grade:** + +- [ ] Grade assigned based on score: + - 90-100: A+ (Excellent) + - 80-89: A (Good) + - 70-79: B (Acceptable) + - 60-69: C (Needs Improvement) + - <60: F (Critical Issues) + +--- + +### Step 5: Review Report Generation + +**Report Sections Created:** + +- [ ] **Header Section**: + - [ ] Test file(s) reviewed listed + - [ ] Review date recorded + - [ ] Review scope noted (single/directory/suite) + - [ ] Quality score and grade displayed + +- [ ] **Executive Summary**: + - [ ] Overall assessment (Excellent/Good/Needs Improvement/Critical) + - [ ] Key strengths listed (3-5 bullet points) + - [ ] Key weaknesses listed (3-5 bullet points) + - [ ] Recommendation stated (Approve/Approve with comments/Request changes/Block) + +- [ ] **Quality 
Criteria Assessment**: + - [ ] Table with all criteria evaluated + - [ ] Status for each criterion (PASS/WARN/FAIL) + - [ ] Violation count per criterion + +- [ ] **Critical Issues (Must Fix)**: + - [ ] P0/P1 violations listed + - [ ] Code location provided for each (file:line) + - [ ] Issue explanation clear + - [ ] Recommended fix provided with code example + - [ ] Knowledge base reference provided + +- [ ] **Recommendations (Should Fix)**: + - [ ] P2/P3 violations listed + - [ ] Code location provided for each (file:line) + - [ ] Issue explanation clear + - [ ] Recommended improvement provided with code example + - [ ] Knowledge base reference provided + +- [ ] **Best Practices Examples** (if good patterns found): + - [ ] Good patterns highlighted from tests + - [ ] Knowledge base fragments referenced + - [ ] Examples provided for others to follow + +- [ ] **Knowledge Base References**: + - [ ] All fragments consulted listed + - [ ] Links to detailed guidance provided + +--- + +### Step 6: Optional Outputs Generation + +**Inline Comments** (if `generate_inline_comments: true`): + +- [ ] Inline comments generated at violation locations +- [ ] Comment format: `// TODO (TEA Review): [Issue] - See test-review-{filename}.md` +- [ ] Comments added to test files (no logic changes) +- [ ] Test files remain valid and executable + +**Quality Badge** (if `generate_quality_badge: true`): + +- [ ] Badge created with quality score (e.g., "Test Quality: 87/100 (A)") +- [ ] Badge format suitable for README or documentation +- [ ] Badge saved to output folder + +**Story Update** (if `append_to_story: true` and story file exists): + +- [ ] "Test Quality Review" section created +- [ ] Quality score included +- [ ] Critical issues summarized +- [ ] Link to full review report provided +- [ ] Story file updated successfully + +--- + +### Step 7: Save and Notify + +**Outputs Saved:** + +- [ ] Review report saved to `{output_file}` +- [ ] Inline comments written to test files (if enabled) +- [ ] Quality badge saved (if enabled) +- [ ] Story file updated (if enabled) +- [ ] All outputs are valid and readable + +**Summary Message Generated:** + +- [ ] Quality score and grade included +- [ ] Critical issue count stated +- [ ] Recommendation provided (Approve/Request changes/Block) +- [ ] Next steps clarified +- [ ] Message displayed to user + +--- + +## Output Validation + +### Review Report Completeness + +- [ ] All required sections present +- [ ] No placeholder text or TODOs in report +- [ ] All code locations are accurate (file:line) +- [ ] All code examples are valid and demonstrate fix +- [ ] All knowledge base references are correct + +### Review Report Accuracy + +- [ ] Quality score matches violation breakdown +- [ ] Grade matches score range +- [ ] Violations correctly categorized by severity (P0/P1/P2/P3) +- [ ] Violations correctly attributed to quality criteria +- [ ] No false positives (violations are legitimate issues) +- [ ] No false negatives (critical issues not missed) + +### Review Report Clarity + +- [ ] Executive summary is clear and actionable +- [ ] Issue explanations are understandable +- [ ] Recommended fixes are implementable +- [ ] Code examples are correct and runnable +- [ ] Recommendation (Approve/Request changes) is clear + +--- + +## Quality Checks + +### Knowledge-Based Validation + +- [ ] All feedback grounded in knowledge base fragments +- [ ] Recommendations follow proven patterns +- [ ] No arbitrary or opinion-based feedback +- [ ] Knowledge fragment references accurate and 
relevant + +### Actionable Feedback + +- [ ] Every issue includes recommended fix +- [ ] Every fix includes code example +- [ ] Code examples demonstrate correct pattern +- [ ] Fixes reference knowledge base for more detail + +### Severity Classification + +- [ ] Critical (P0) issues are genuinely critical (hard waits, race conditions, no assertions) +- [ ] High (P1) issues impact maintainability/reliability (missing IDs, hardcoded data) +- [ ] Medium (P2) issues are nice-to-have improvements (long files, missing priorities) +- [ ] Low (P3) issues are minor style/preference (verbose tests) + +### Context Awareness + +- [ ] Review considers project context (some patterns may be justified) +- [ ] Violations with justification comments noted as acceptable +- [ ] Edge cases acknowledged +- [ ] Recommendations are pragmatic, not dogmatic + +--- + +## Integration Points + +### Story File Integration + +- [ ] Story file discovered correctly (if available) +- [ ] Acceptance criteria extracted and used for context +- [ ] Test quality section appended to story (if enabled) +- [ ] Link to review report added to story + +### Test Design Integration + +- [ ] Test design document discovered correctly (if available) +- [ ] Priority context (P0/P1/P2/P3) extracted and used +- [ ] Review validates tests align with prioritization +- [ ] Misalignment flagged (e.g., P0 scenario missing tests) + +### Knowledge Base Integration + +- [ ] tea-index.csv loaded successfully +- [ ] All required fragments loaded +- [ ] Fragments applied correctly to validation +- [ ] Fragment references in report are accurate + +--- + +## Edge Cases and Special Situations + +### Empty or Minimal Tests + +- [ ] If test file is empty, report notes "No tests found" +- [ ] If test file has only boilerplate, report notes "No meaningful tests" +- [ ] Score reflects lack of content appropriately + +### Legacy Tests + +- [ ] Legacy tests acknowledged in context +- [ ] Review provides practical recommendations for improvement +- [ ] Recognizes that complete refactor may not be feasible +- [ ] Prioritizes critical issues (flakiness) over style + +### Test Framework Variations + +- [ ] Review adapts to test framework (Playwright vs Jest vs Cypress) +- [ ] Framework-specific patterns recognized (e.g., Playwright fixtures) +- [ ] Framework-specific violations detected (e.g., Cypress anti-patterns) +- [ ] Knowledge fragments applied appropriately for framework + +### Justified Violations + +- [ ] Violations with justification comments in code noted as acceptable +- [ ] Justifications evaluated for legitimacy +- [ ] Report acknowledges justified patterns +- [ ] Score not penalized for justified violations + +--- + +## Final Validation + +### Review Completeness + +- [ ] All enabled quality criteria evaluated +- [ ] All test files in scope reviewed +- [ ] All violations cataloged +- [ ] All recommendations provided +- [ ] Review report is comprehensive + +### Review Accuracy + +- [ ] Quality score is accurate +- [ ] Violations are correct (no false positives) +- [ ] Critical issues not missed (no false negatives) +- [ ] Code locations are correct +- [ ] Knowledge base references are accurate + +### Review Usefulness + +- [ ] Feedback is actionable +- [ ] Recommendations are implementable +- [ ] Code examples are correct +- [ ] Review helps developer improve tests +- [ ] Review educates on best practices + +### Workflow Complete + +- [ ] All checklist items completed +- [ ] All outputs validated and saved +- [ ] User notified with summary +- [ ] 
Review ready for developer consumption +- [ ] Follow-up actions identified (if any) + +--- + +## Notes + +Record any issues, observations, or important context during workflow execution: + +- **Test Framework**: [Playwright, Jest, Cypress, etc.] +- **Review Scope**: [single file, directory, full suite] +- **Quality Score**: [0-100 score, letter grade] +- **Critical Issues**: [Count of P0/P1 violations] +- **Recommendation**: [Approve / Approve with comments / Request changes / Block] +- **Special Considerations**: [Legacy code, justified patterns, edge cases] +- **Follow-up Actions**: [Re-review after fixes, pair programming, etc.] diff --git a/src/bmm/workflows/testarch/test-review/instructions.md b/src/bmm/workflows/testarch/test-review/instructions.md new file mode 100644 index 00000000..d817d2a6 --- /dev/null +++ b/src/bmm/workflows/testarch/test-review/instructions.md @@ -0,0 +1,628 @@ +# Test Quality Review - Instructions v4.0 + +**Workflow:** `testarch-test-review` +**Purpose:** Review test quality using TEA's comprehensive knowledge base and validate against best practices for maintainability, determinism, isolation, and flakiness prevention +**Agent:** Test Architect (TEA) +**Format:** Pure Markdown v4.0 (no XML blocks) + +--- + +## Overview + +This workflow performs comprehensive test quality reviews using TEA's knowledge base of best practices. It validates tests against proven patterns for fixture architecture, network-first safeguards, data factories, determinism, isolation, and flakiness prevention. The review generates actionable feedback with quality scoring. + +**Key Capabilities:** + +- **Knowledge-Based Review**: Applies patterns from tea-index.csv fragments +- **Quality Scoring**: 0-100 score based on violations and best practices +- **Multi-Scope**: Review single file, directory, or entire test suite +- **Pattern Detection**: Identifies flaky patterns, hard waits, race conditions +- **Best Practice Validation**: BDD format, test IDs, priorities, assertions +- **Actionable Feedback**: Critical issues (must fix) vs recommendations (should fix) +- **Integration**: Works with story files, test-design, acceptance criteria + +--- + +## Prerequisites + +**Required:** + +- Test file(s) to review (auto-discovered or explicitly provided) +- Test framework configuration (playwright.config.ts, jest.config.js, etc.) + +**Recommended:** + +- Story file with acceptance criteria (for context) +- Test design document (for priority context) +- Knowledge base fragments available in tea-index.csv + +**Halt Conditions:** + +- If test file path is invalid or file doesn't exist, halt and request correction +- If test_dir is empty (no tests found), halt and notify user + +--- + +## Workflow Steps + +### Step 1: Load Context and Knowledge Base + +**Actions:** + +1. Check playwright-utils flag: + - Read `{config_source}` and check `config.tea_use_playwright_utils` + +2. 
Load relevant knowledge fragments from `{project-root}/_bmad/bmm/testarch/tea-index.csv`: + + **Core Patterns (Always load):** + - `test-quality.md` - Definition of Done (deterministic tests, isolated with cleanup, explicit assertions, <300 lines, <1.5 min, 658 lines, 5 examples) + - `data-factories.md` - Factory functions with faker: overrides, nested factories, API-first setup (498 lines, 5 examples) + - `test-levels-framework.md` - E2E vs API vs Component vs Unit appropriateness with decision matrix (467 lines, 4 examples) + - `selective-testing.md` - Duplicate coverage detection with tag-based, spec filter, diff-based selection (727 lines, 4 examples) + - `test-healing-patterns.md` - Common failure patterns: stale selectors, race conditions, dynamic data, network errors, hard waits (648 lines, 5 examples) + - `selector-resilience.md` - Selector best practices (data-testid > ARIA > text > CSS hierarchy, anti-patterns, 541 lines, 4 examples) + - `timing-debugging.md` - Race condition prevention and async debugging techniques (370 lines, 3 examples) + + **If `config.tea_use_playwright_utils: true` (All Utilities):** + - `overview.md` - Playwright utils best practices + - `api-request.md` - Validate apiRequest usage patterns + - `network-recorder.md` - Review HAR record/playback implementation + - `auth-session.md` - Check auth token management + - `intercept-network-call.md` - Validate network interception + - `recurse.md` - Review polling patterns + - `log.md` - Check logging best practices + - `file-utils.md` - Validate file operation patterns + - `burn-in.md` - Review burn-in configuration + - `network-error-monitor.md` - Check error monitoring setup + - `fixtures-composition.md` - Validate mergeTests usage + + **If `config.tea_use_playwright_utils: false`:** + - `fixture-architecture.md` - Pure function → Fixture → mergeTests composition with auto-cleanup (406 lines, 5 examples) + - `network-first.md` - Route intercept before navigate to prevent race conditions (489 lines, 5 examples) + - `playwright-config.md` - Environment-based configuration with fail-fast validation (722 lines, 5 examples) + - `component-tdd.md` - Red-Green-Refactor patterns with provider isolation (480 lines, 4 examples) + - `ci-burn-in.md` - Flaky test detection with 10-iteration burn-in loop (678 lines, 4 examples) + +3. Determine review scope: + - **single**: Review one test file (`test_file_path` provided) + - **directory**: Review all tests in directory (`test_dir` provided) + - **suite**: Review entire test suite (discover all test files) + +4. Auto-discover related artifacts (if `auto_discover_story: true`): + - Extract test ID from filename (e.g., `1.3-E2E-001.spec.ts` → story 1.3) + - Search for story file (`story-1.3.md`) + - Search for test design (`test-design-story-1.3.md` or `test-design-epic-1.md`) + +5. Read story file for context (if available): + - Extract acceptance criteria + - Extract priority classification + - Extract expected test IDs + +**Output:** Complete knowledge base loaded, review scope determined, context gathered + +--- + +### Step 2: Discover and Parse Test Files + +**Actions:** + +1. **Discover test files** based on scope: + - **single**: Use `test_file_path` variable + - **directory**: Use `glob` to find all test files in `test_dir` (e.g., `*.spec.ts`, `*.test.js`) + - **suite**: Use `glob` to find all test files recursively from project root + +2. 
**Parse test file metadata**:
+   - File path and name
+   - File size (warn if >15 KB or >300 lines)
+   - Test framework detected (Playwright, Jest, Cypress, Vitest, etc.)
+   - Imports and dependencies
+   - Test structure (describe/context/it blocks)
+
+3. **Extract test structure**:
+   - Count of describe blocks (test suites)
+   - Count of it/test blocks (individual tests)
+   - Test IDs (if present, e.g., `test.describe('1.3-E2E-001')`)
+   - Priority markers (if present, e.g., a `@P0` tag or annotation; note that `test.describe.only` focuses tests and is not a priority marker)
+   - BDD structure (Given-When-Then comments or steps)
+
+4. **Identify test patterns**:
+   - Fixtures used
+   - Data factories used
+   - Network interception patterns
+   - Assertions used (expect, assert, toHaveText, etc.)
+   - Waits and timeouts (page.waitFor, sleep, hardcoded delays)
+   - Conditionals (if/else, switch, ternary)
+   - Try/catch blocks
+   - Shared state or globals
+
+**Output:** Complete test file inventory with structure and pattern analysis
+
+---
+
+### Step 3: Validate Against Quality Criteria
+
+**Actions:**
+
+For each test file, validate against quality criteria (configurable via workflow variables):
+
+#### 1. BDD Format Validation (if `check_given_when_then: true`)
+
+- ✅ **PASS**: Tests use Given-When-Then structure (comments or step organization)
+- ⚠️ **WARN**: Tests have some structure but not explicit GWT
+- ❌ **FAIL**: Tests lack clear structure, hard to understand intent
+
+**Knowledge Fragment**: test-quality.md, tdd-cycles.md
+
+---
+
+#### 2. Test ID Conventions (if `check_test_ids: true`)
+
+- ✅ **PASS**: Test IDs present and follow convention (e.g., `1.3-E2E-001`, `2.1-API-005`)
+- ⚠️ **WARN**: Some test IDs missing or inconsistent
+- ❌ **FAIL**: No test IDs, can't trace tests to requirements
+
+**Knowledge Fragment**: traceability.md, test-quality.md
+
+---
+
+#### 3. Priority Markers (if `check_priority_markers: true`)
+
+- ✅ **PASS**: Tests classified as P0/P1/P2/P3 (via markers or test-design reference)
+- ⚠️ **WARN**: Some priority classifications missing
+- ❌ **FAIL**: No priority classification, can't determine criticality
+
+**Knowledge Fragment**: test-priorities.md, risk-governance.md
+
+---
+
+#### 4. Hard Waits Detection (if `check_hard_waits: true`)
+
+- ✅ **PASS**: No hard waits detected (no `sleep()`, `wait(5000)`, hardcoded delays)
+- ⚠️ **WARN**: Some hard waits used but with justification comments
+- ❌ **FAIL**: Hard waits detected without justification (flakiness risk)
+
+**Patterns to detect:**
+
+- `sleep(1000)`, `setTimeout()`, `delay()`
+- `page.waitForTimeout(5000)` without explicit reason
+- `await new Promise(resolve => setTimeout(resolve, 3000))`
+
+**Knowledge Fragment**: test-quality.md, network-first.md
+
+---
+
+#### 5. Determinism Check (if `check_determinism: true`)
+
+- ✅ **PASS**: Tests are deterministic (no conditionals, no try/catch abuse, no random values)
+- ⚠️ **WARN**: Some conditionals but with clear justification
+- ❌ **FAIL**: Tests use if/else, switch, or try/catch to control flow (flakiness risk)
+
+**Patterns to detect:**
+
+- `if (condition) { test logic }` - tests should work deterministically
+- `try { test } catch { fallback }` - tests shouldn't swallow errors
+- `Math.random()`, `Date.now()` without factory abstraction
+
+**Knowledge Fragment**: test-quality.md, data-factories.md
+
+---
+
+#### 6. 
Isolation Validation (if `check_isolation: true`) + +- ✅ **PASS**: Tests clean up resources, no shared state, can run in any order +- ⚠️ **WARN**: Some cleanup missing but isolated enough +- ❌ **FAIL**: Tests share state, depend on execution order, leave resources + +**Patterns to check:** + +- afterEach/afterAll cleanup hooks present +- No global variables mutated +- Database/API state cleaned up after tests +- Test data deleted or marked inactive + +**Knowledge Fragment**: test-quality.md, data-factories.md + +--- + +#### 7. Fixture Patterns (if `check_fixture_patterns: true`) + +- ✅ **PASS**: Uses pure function → Fixture → mergeTests pattern +- ⚠️ **WARN**: Some fixtures used but not consistently +- ❌ **FAIL**: No fixtures, tests repeat setup code (maintainability risk) + +**Patterns to check:** + +- Fixtures defined (e.g., `test.extend({ customFixture: async ({}, use) => { ... }})`) +- Pure functions used for fixture logic +- mergeTests used to combine fixtures +- No beforeEach with complex setup (should be in fixtures) + +**Knowledge Fragment**: fixture-architecture.md + +--- + +#### 8. Data Factories (if `check_data_factories: true`) + +- ✅ **PASS**: Uses factory functions with overrides, API-first setup +- ⚠️ **WARN**: Some factories used but also hardcoded data +- ❌ **FAIL**: Hardcoded test data, magic strings/numbers (maintainability risk) + +**Patterns to check:** + +- Factory functions defined (e.g., `createUser()`, `generateInvoice()`) +- Factories use faker.js or similar for realistic data +- Factories accept overrides (e.g., `createUser({ email: 'custom@example.com' })`) +- API-first setup (create via API, test via UI) + +**Knowledge Fragment**: data-factories.md + +--- + +#### 9. Network-First Pattern (if `check_network_first: true`) + +- ✅ **PASS**: Route interception set up BEFORE navigation (race condition prevention) +- ⚠️ **WARN**: Some routes intercepted correctly, others after navigation +- ❌ **FAIL**: Route interception after navigation (race condition risk) + +**Patterns to check:** + +- `page.route()` called before `page.goto()` +- `page.waitForResponse()` used with explicit URL pattern +- No navigation followed immediately by route setup + +**Knowledge Fragment**: network-first.md + +--- + +#### 10. Assertions (if `check_assertions: true`) + +- ✅ **PASS**: Explicit assertions present (expect, assert, toHaveText) +- ⚠️ **WARN**: Some tests rely on implicit waits instead of assertions +- ❌ **FAIL**: Missing assertions, tests don't verify behavior + +**Patterns to check:** + +- Each test has at least one assertion +- Assertions are specific (not just truthy checks) +- Assertions use framework-provided matchers (toHaveText, toBeVisible) + +**Knowledge Fragment**: test-quality.md + +--- + +#### 11. Test Length (if `check_test_length: true`) + +- ✅ **PASS**: Test file ≤200 lines (ideal), ≤300 lines (acceptable) +- ⚠️ **WARN**: Test file 301-500 lines (consider splitting) +- ❌ **FAIL**: Test file >500 lines (too large, maintainability risk) + +**Knowledge Fragment**: test-quality.md + +--- + +#### 12. Test Duration (if `check_test_duration: true`) + +- ✅ **PASS**: Individual tests ≤1.5 minutes (target: <30 seconds) +- ⚠️ **WARN**: Some tests 1.5-3 minutes (consider optimization) +- ❌ **FAIL**: Tests >3 minutes (too slow, impacts CI/CD) + +**Note:** Duration estimation based on complexity analysis if execution data unavailable + +**Knowledge Fragment**: test-quality.md, selective-testing.md + +--- + +#### 13. 
Flakiness Patterns (if `check_flakiness_patterns: true`) + +- ✅ **PASS**: No known flaky patterns detected +- ⚠️ **WARN**: Some potential flaky patterns (e.g., tight timeouts, race conditions) +- ❌ **FAIL**: Multiple flaky patterns detected (high flakiness risk) + +**Patterns to detect:** + +- Tight timeouts (e.g., `{ timeout: 1000 }`) +- Race conditions (navigation before route interception) +- Timing-dependent assertions (e.g., checking timestamps) +- Retry logic in tests (hides flakiness) +- Environment-dependent assumptions (hardcoded URLs, ports) + +**Knowledge Fragment**: test-quality.md, network-first.md, ci-burn-in.md + +--- + +### Step 4: Calculate Quality Score + +**Actions:** + +1. **Count violations** by severity: + - **Critical (P0)**: Hard waits without justification, no assertions, race conditions, shared state + - **High (P1)**: Missing test IDs, no BDD structure, hardcoded data, missing fixtures + - **Medium (P2)**: Long test files (>300 lines), missing priorities, some conditionals + - **Low (P3)**: Minor style issues, incomplete cleanup, verbose tests + +2. **Calculate quality score** (if `quality_score_enabled: true`): + +``` +Starting Score: 100 + +Critical Violations: -10 points each +High Violations: -5 points each +Medium Violations: -2 points each +Low Violations: -1 point each + +Bonus Points: ++ Excellent BDD structure: +5 ++ Comprehensive fixtures: +5 ++ Comprehensive data factories: +5 ++ Network-first pattern: +5 ++ Perfect isolation: +5 ++ All test IDs present: +5 + +Quality Score: max(0, min(100, Starting Score - Violations + Bonus)) +``` + +3. **Quality Grade**: + - **90-100**: Excellent (A+) + - **80-89**: Good (A) + - **70-79**: Acceptable (B) + - **60-69**: Needs Improvement (C) + - **<60**: Critical Issues (F) + +**Output:** Quality score calculated with violation breakdown + +--- + +### Step 5: Generate Review Report + +**Actions:** + +1. **Create review report** using `test-review-template.md`: + + **Header Section:** + - Test file(s) reviewed + - Review date + - Review scope (single/directory/suite) + - Quality score and grade + + **Executive Summary:** + - Overall assessment (Excellent/Good/Needs Improvement/Critical) + - Key strengths + - Key weaknesses + - Recommendation (Approve/Approve with comments/Request changes) + + **Quality Criteria Assessment:** + - Table with all criteria evaluated + - Status for each (PASS/WARN/FAIL) + - Violation count per criterion + + **Critical Issues (Must Fix):** + - Priority P0/P1 violations + - Code location (file:line) + - Explanation of issue + - Recommended fix + - Knowledge base reference + + **Recommendations (Should Fix):** + - Priority P2/P3 violations + - Code location (file:line) + - Explanation of issue + - Recommended improvement + - Knowledge base reference + + **Best Practices Examples:** + - Highlight good patterns found in tests + - Reference knowledge base fragments + - Provide examples for others to follow + + **Knowledge Base References:** + - List all fragments consulted + - Provide links to detailed guidance + +2. **Generate inline comments** (if `generate_inline_comments: true`): + - Add TODO comments in test files at violation locations + - Format: `// TODO (TEA Review): [Issue description] - See test-review-{filename}.md` + - Never modify test logic, only add comments + +3. **Generate quality badge** (if `generate_quality_badge: true`): + - Create badge with quality score (e.g., "Test Quality: 87/100 (A)") + - Format for inclusion in README or documentation + +4. 
**Append to story file** (if `append_to_story: true` and story file exists): + - Add "Test Quality Review" section to story + - Include quality score and critical issues + - Link to full review report + +**Output:** Comprehensive review report with actionable feedback + +--- + +### Step 6: Save Outputs and Notify + +**Actions:** + +1. **Save review report** to `{output_file}` +2. **Save inline comments** to test files (if enabled) +3. **Save quality badge** to output folder (if enabled) +4. **Update story file** (if enabled) +5. **Generate summary message** for user: + - Quality score and grade + - Critical issue count + - Recommendation + +**Output:** All review artifacts saved and user notified + +--- + +## Quality Criteria Decision Matrix + +| Criterion | PASS | WARN | FAIL | Knowledge Fragment | +| ------------------ | ------------------------- | -------------- | ------------------- | ----------------------- | +| BDD Format | Given-When-Then present | Some structure | No structure | test-quality.md | +| Test IDs | All tests have IDs | Some missing | No IDs | traceability.md | +| Priority Markers | All classified | Some missing | No classification | test-priorities.md | +| Hard Waits | No hard waits | Some justified | Hard waits present | test-quality.md | +| Determinism | No conditionals/random | Some justified | Conditionals/random | test-quality.md | +| Isolation | Clean up, no shared state | Some gaps | Shared state | test-quality.md | +| Fixture Patterns | Pure fn → Fixture | Some fixtures | No fixtures | fixture-architecture.md | +| Data Factories | Factory functions | Some factories | Hardcoded data | data-factories.md | +| Network-First | Intercept before navigate | Some correct | Race conditions | network-first.md | +| Assertions | Explicit assertions | Some implicit | Missing assertions | test-quality.md | +| Test Length | ≤300 lines | 301-500 lines | >500 lines | test-quality.md | +| Test Duration | ≤1.5 min | 1.5-3 min | >3 min | test-quality.md | +| Flakiness Patterns | No flaky patterns | Some potential | Multiple patterns | ci-burn-in.md | + +--- + +## Example Review Summary + +````markdown +# Test Quality Review: auth-login.spec.ts + +**Quality Score**: 78/100 (B - Acceptable) +**Review Date**: 2025-10-14 +**Recommendation**: Approve with Comments + +## Executive Summary + +Overall, the test demonstrates good structure and coverage of the login flow. However, there are several areas for improvement to enhance maintainability and prevent flakiness. + +**Strengths:** + +- Excellent BDD structure with clear Given-When-Then comments +- Good use of test IDs (1.3-E2E-001, 1.3-E2E-002) +- Comprehensive assertions on authentication state + +**Weaknesses:** + +- Hard wait detected (page.waitForTimeout(2000)) - flakiness risk +- Hardcoded test data (email: 'test@example.com') - use factories instead +- Missing fixture for common login setup - DRY violation + +**Recommendation**: Address critical issue (hard wait) before merging. Other improvements can be addressed in follow-up PR. + +## Critical Issues (Must Fix) + +### 1. 
Hard Wait Detected (Line 45)
+
+**Severity**: P0 (Critical)
+**Issue**: `await page.waitForTimeout(2000)` introduces flakiness
+**Fix**: Use explicit wait for element or network request instead
+**Knowledge**: See test-quality.md, network-first.md
+
+```typescript
+// ❌ Bad (current)
+await page.waitForTimeout(2000);
+await expect(page.locator('[data-testid="user-menu"]')).toBeVisible();
+
+// ✅ Good (recommended)
+await expect(page.locator('[data-testid="user-menu"]')).toBeVisible({ timeout: 10000 });
+```
+
+## Recommendations (Should Fix)
+
+### 1. Use Data Factory for Test User (Lines 23, 32, 41)
+
+**Severity**: P1 (High)
+**Issue**: Hardcoded email `test@example.com` - maintainability risk
+**Fix**: Create factory function for test users
+**Knowledge**: See data-factories.md
+
+```typescript
+// ✅ Good (recommended)
+import { createTestUser } from './factories/user-factory';
+
+const testUser = createTestUser({ role: 'admin' });
+await loginPage.login(testUser.email, testUser.password);
+```
+
+### 2. Extract Login Setup to Fixture (Lines 18-28)
+
+**Severity**: P1 (High)
+**Issue**: Login setup repeated across tests - DRY violation
+**Fix**: Create fixture for authenticated state
+**Knowledge**: See fixture-architecture.md
+
+```typescript
+// ✅ Good (recommended)
+import { test as base } from '@playwright/test';
+
+const test = base.extend({
+  authenticatedPage: async ({ page }, use) => {
+    const user = createTestUser();
+    await loginPage.login(user.email, user.password);
+    await use(page);
+  },
+});
+
+test('user can access dashboard', async ({ authenticatedPage }) => {
+  // Test starts already logged in
+});
+```
+
+## Quality Score Breakdown
+
+- Starting Score: 100
+- Critical Violations (1 × -10): -10
+- High Violations (2 × -5): -10
+- Medium Violations (5 × -2): -10
+- Low Violations (2 × -1): -2
+- Bonus (BDD +5, Test IDs +5): +10
+- **Final Score**: 78/100 (B)
+
+````
+
+---
+
+## Integration with Other Workflows
+
+### Before Test Review
+
+- **atdd**: Generate acceptance tests (TEA reviews them for quality)
+- **automate**: Expand regression suite (TEA reviews new tests)
+- **dev story**: Developer writes implementation tests (TEA reviews them)
+
+### After Test Review
+
+- **Developer**: Addresses critical issues, improves based on recommendations
+- **gate**: Test quality review feeds into gate decision (high-quality tests increase confidence)
+
+### Coordinates With
+
+- **Story File**: Review links to acceptance criteria context
+- **Test Design**: Review validates tests align with prioritization
+- **Knowledge Base**: Review references fragments for detailed guidance
+
+---
+
+## Important Notes
+
+1. **Non-Prescriptive**: Review provides guidance, not rigid rules
+2. **Context Matters**: Some violations may be justified for specific scenarios
+3. **Knowledge-Based**: All feedback grounded in proven patterns from tea-index.csv
+4. **Actionable**: Every issue includes recommended fix with code examples
+5. **Quality Score**: Use as indicator, not absolute measure
+6. 
**Continuous Improvement**: Review same tests periodically as patterns evolve + +--- + +## Troubleshooting + +**Problem: No test files found** +- Verify test_dir path is correct +- Check test file extensions match glob pattern +- Ensure test files exist in expected location + +**Problem: Quality score seems too low/high** +- Review violation counts - may need to adjust thresholds +- Consider context - some projects have different standards +- Focus on critical issues first, not just score + +**Problem: Inline comments not generated** +- Check generate_inline_comments: true in variables +- Verify write permissions on test files +- Review append_to_file: false (separate report mode) + +**Problem: Knowledge fragments not loading** +- Verify tea-index.csv exists in testarch/ directory +- Check fragment file paths are correct +- Ensure auto_load_knowledge: true in variables +``` diff --git a/src/bmm/workflows/testarch/test-review/test-review-template.md b/src/bmm/workflows/testarch/test-review/test-review-template.md new file mode 100644 index 00000000..54127a5a --- /dev/null +++ b/src/bmm/workflows/testarch/test-review/test-review-template.md @@ -0,0 +1,390 @@ +# Test Quality Review: {test_filename} + +**Quality Score**: {score}/100 ({grade} - {assessment}) +**Review Date**: {YYYY-MM-DD} +**Review Scope**: {single | directory | suite} +**Reviewer**: {user_name or TEA Agent} + +--- + +Note: This review audits existing tests; it does not generate tests. + +## Executive Summary + +**Overall Assessment**: {Excellent | Good | Acceptable | Needs Improvement | Critical Issues} + +**Recommendation**: {Approve | Approve with Comments | Request Changes | Block} + +### Key Strengths + +✅ {strength_1} +✅ {strength_2} +✅ {strength_3} + +### Key Weaknesses + +❌ {weakness_1} +❌ {weakness_2} +❌ {weakness_3} + +### Summary + +{1-2 paragraph summary of overall test quality, highlighting major findings and recommendation rationale} + +--- + +## Quality Criteria Assessment + +| Criterion | Status | Violations | Notes | +| ------------------------------------ | ------------------------------- | ---------- | ------------ | +| BDD Format (Given-When-Then) | {✅ PASS \| ⚠️ WARN \| ❌ FAIL} | {count} | {brief_note} | +| Test IDs | {✅ PASS \| ⚠️ WARN \| ❌ FAIL} | {count} | {brief_note} | +| Priority Markers (P0/P1/P2/P3) | {✅ PASS \| ⚠️ WARN \| ❌ FAIL} | {count} | {brief_note} | +| Hard Waits (sleep, waitForTimeout) | {✅ PASS \| ⚠️ WARN \| ❌ FAIL} | {count} | {brief_note} | +| Determinism (no conditionals) | {✅ PASS \| ⚠️ WARN \| ❌ FAIL} | {count} | {brief_note} | +| Isolation (cleanup, no shared state) | {✅ PASS \| ⚠️ WARN \| ❌ FAIL} | {count} | {brief_note} | +| Fixture Patterns | {✅ PASS \| ⚠️ WARN \| ❌ FAIL} | {count} | {brief_note} | +| Data Factories | {✅ PASS \| ⚠️ WARN \| ❌ FAIL} | {count} | {brief_note} | +| Network-First Pattern | {✅ PASS \| ⚠️ WARN \| ❌ FAIL} | {count} | {brief_note} | +| Explicit Assertions | {✅ PASS \| ⚠️ WARN \| ❌ FAIL} | {count} | {brief_note} | +| Test Length (≤300 lines) | {✅ PASS \| ⚠️ WARN \| ❌ FAIL} | {lines} | {brief_note} | +| Test Duration (≤1.5 min) | {✅ PASS \| ⚠️ WARN \| ❌ FAIL} | {duration} | {brief_note} | +| Flakiness Patterns | {✅ PASS \| ⚠️ WARN \| ❌ FAIL} | {count} | {brief_note} | + +**Total Violations**: {critical_count} Critical, {high_count} High, {medium_count} Medium, {low_count} Low + +--- + +## Quality Score Breakdown + +``` +Starting Score: 100 +Critical Violations: -{critical_count} × 10 = -{critical_deduction} +High Violations: -{high_count} × 5 = 
-{high_deduction} +Medium Violations: -{medium_count} × 2 = -{medium_deduction} +Low Violations: -{low_count} × 1 = -{low_deduction} + +Bonus Points: + Excellent BDD: +{0|5} + Comprehensive Fixtures: +{0|5} + Data Factories: +{0|5} + Network-First: +{0|5} + Perfect Isolation: +{0|5} + All Test IDs: +{0|5} + -------- +Total Bonus: +{bonus_total} + +Final Score: {final_score}/100 +Grade: {grade} +``` + +--- + +## Critical Issues (Must Fix) + +{If no critical issues: "No critical issues detected. ✅"} + +{For each critical issue:} + +### {issue_number}. {Issue Title} + +**Severity**: P0 (Critical) +**Location**: `{filename}:{line_number}` +**Criterion**: {criterion_name} +**Knowledge Base**: [{fragment_name}]({fragment_path}) + +**Issue Description**: +{Detailed explanation of what the problem is and why it's critical} + +**Current Code**: + +```typescript +// ❌ Bad (current implementation) +{ + code_snippet_showing_problem; +} +``` + +**Recommended Fix**: + +```typescript +// ✅ Good (recommended approach) +{ + code_snippet_showing_solution; +} +``` + +**Why This Matters**: +{Explanation of impact - flakiness risk, maintainability, reliability} + +**Related Violations**: +{If similar issue appears elsewhere, note line numbers} + +--- + +## Recommendations (Should Fix) + +{If no recommendations: "No additional recommendations. Test quality is excellent. ✅"} + +{For each recommendation:} + +### {rec_number}. {Recommendation Title} + +**Severity**: {P1 (High) | P2 (Medium) | P3 (Low)} +**Location**: `{filename}:{line_number}` +**Criterion**: {criterion_name} +**Knowledge Base**: [{fragment_name}]({fragment_path}) + +**Issue Description**: +{Detailed explanation of what could be improved and why} + +**Current Code**: + +```typescript +// ⚠️ Could be improved (current implementation) +{ + code_snippet_showing_current_approach; +} +``` + +**Recommended Improvement**: + +```typescript +// ✅ Better approach (recommended) +{ + code_snippet_showing_improvement; +} +``` + +**Benefits**: +{Explanation of benefits - maintainability, readability, reusability} + +**Priority**: +{Why this is P1/P2/P3 - urgency and impact} + +--- + +## Best Practices Found + +{If good patterns found, highlight them} + +{For each best practice:} + +### {practice_number}. 
{Best Practice Title} + +**Location**: `{filename}:{line_number}` +**Pattern**: {pattern_name} +**Knowledge Base**: [{fragment_name}]({fragment_path}) + +**Why This Is Good**: +{Explanation of why this pattern is excellent} + +**Code Example**: + +```typescript +// ✅ Excellent pattern demonstrated in this test +{ + code_snippet_showing_best_practice; +} +``` + +**Use as Reference**: +{Encourage using this pattern in other tests} + +--- + +## Test File Analysis + +### File Metadata + +- **File Path**: `{relative_path_from_project_root}` +- **File Size**: {line_count} lines, {kb_size} KB +- **Test Framework**: {Playwright | Jest | Cypress | Vitest | Other} +- **Language**: {TypeScript | JavaScript} + +### Test Structure + +- **Describe Blocks**: {describe_count} +- **Test Cases (it/test)**: {test_count} +- **Average Test Length**: {avg_lines_per_test} lines per test +- **Fixtures Used**: {fixture_count} ({fixture_names}) +- **Data Factories Used**: {factory_count} ({factory_names}) + +### Test Coverage Scope + +- **Test IDs**: {test_id_list} +- **Priority Distribution**: + - P0 (Critical): {p0_count} tests + - P1 (High): {p1_count} tests + - P2 (Medium): {p2_count} tests + - P3 (Low): {p3_count} tests + - Unknown: {unknown_count} tests + +### Assertions Analysis + +- **Total Assertions**: {assertion_count} +- **Assertions per Test**: {avg_assertions_per_test} (avg) +- **Assertion Types**: {assertion_types_used} + +--- + +## Context and Integration + +### Related Artifacts + +{If story file found:} + +- **Story File**: [{story_filename}]({story_path}) +- **Acceptance Criteria Mapped**: {ac_mapped}/{ac_total} ({ac_coverage}%) + +{If test-design found:} + +- **Test Design**: [{test_design_filename}]({test_design_path}) +- **Risk Assessment**: {risk_level} +- **Priority Framework**: P0-P3 applied + +### Acceptance Criteria Validation + +{If story file available, map tests to ACs:} + +| Acceptance Criterion | Test ID | Status | Notes | +| -------------------- | --------- | -------------------------- | ------- | +| {AC_1} | {test_id} | {✅ Covered \| ❌ Missing} | {notes} | +| {AC_2} | {test_id} | {✅ Covered \| ❌ Missing} | {notes} | +| {AC_3} | {test_id} | {✅ Covered \| ❌ Missing} | {notes} | + +**Coverage**: {covered_count}/{total_count} criteria covered ({coverage_percentage}%) + +--- + +## Knowledge Base References + +This review consulted the following knowledge base fragments: + +- **[test-quality.md](../../../testarch/knowledge/test-quality.md)** - Definition of Done for tests (no hard waits, <300 lines, <1.5 min, self-cleaning) +- **[fixture-architecture.md](../../../testarch/knowledge/fixture-architecture.md)** - Pure function → Fixture → mergeTests pattern +- **[network-first.md](../../../testarch/knowledge/network-first.md)** - Route intercept before navigate (race condition prevention) +- **[data-factories.md](../../../testarch/knowledge/data-factories.md)** - Factory functions with overrides, API-first setup +- **[test-levels-framework.md](../../../testarch/knowledge/test-levels-framework.md)** - E2E vs API vs Component vs Unit appropriateness +- **[tdd-cycles.md](../../../testarch/knowledge/tdd-cycles.md)** - Red-Green-Refactor patterns +- **[selective-testing.md](../../../testarch/knowledge/selective-testing.md)** - Duplicate coverage detection +- **[ci-burn-in.md](../../../testarch/knowledge/ci-burn-in.md)** - Flakiness detection patterns (10-iteration loop) +- **[test-priorities.md](../../../testarch/knowledge/test-priorities.md)** - P0/P1/P2/P3 classification framework +- 
**[traceability.md](../../../testarch/knowledge/traceability.md)** - Requirements-to-tests mapping + +See [tea-index.csv](../../../testarch/tea-index.csv) for complete knowledge base. + +--- + +## Next Steps + +### Immediate Actions (Before Merge) + +1. **{action_1}** - {description} + - Priority: {P0 | P1 | P2} + - Owner: {team_or_person} + - Estimated Effort: {time_estimate} + +2. **{action_2}** - {description} + - Priority: {P0 | P1 | P2} + - Owner: {team_or_person} + - Estimated Effort: {time_estimate} + +### Follow-up Actions (Future PRs) + +1. **{action_1}** - {description} + - Priority: {P2 | P3} + - Target: {next_sprint | backlog} + +2. **{action_2}** - {description} + - Priority: {P2 | P3} + - Target: {next_sprint | backlog} + +### Re-Review Needed? + +{✅ No re-review needed - approve as-is} +{⚠️ Re-review after critical fixes - request changes, then re-review} +{❌ Major refactor required - block merge, pair programming recommended} + +--- + +## Decision + +**Recommendation**: {Approve | Approve with Comments | Request Changes | Block} + +**Rationale**: +{1-2 paragraph explanation of recommendation based on findings} + +**For Approve**: + +> Test quality is excellent/good with {score}/100 score. {Minor issues noted can be addressed in follow-up PRs.} Tests are production-ready and follow best practices. + +**For Approve with Comments**: + +> Test quality is acceptable with {score}/100 score. {High-priority recommendations should be addressed but don't block merge.} Critical issues resolved, but improvements would enhance maintainability. + +**For Request Changes**: + +> Test quality needs improvement with {score}/100 score. {Critical issues must be fixed before merge.} {X} critical violations detected that pose flakiness/maintainability risks. + +**For Block**: + +> Test quality is insufficient with {score}/100 score. {Multiple critical issues make tests unsuitable for production.} Recommend pairing session with QA engineer to apply patterns from knowledge base. 
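+
+### Score-to-Recommendation Guide
+
+As a rough starting point (illustrative only - unresolved critical issues and project context always override the raw number):
+
+| Score Band | Typical Recommendation |
+| ----------- | ------------------------------- |
+| 90-100 (A+) | Approve |
+| 80-89 (A) | Approve / Approve with Comments |
+| 70-79 (B) | Approve with Comments |
+| 60-69 (C) | Request Changes |
+| <60 (F) | Request Changes / Block |
+
+Any unresolved P0 violation warrants Request Changes or Block regardless of the score.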
+ +--- + +## Appendix + +### Violation Summary by Location + +{Table of all violations sorted by line number:} + +| Line | Severity | Criterion | Issue | Fix | +| ------ | ------------- | ----------- | ------------- | ----------- | +| {line} | {P0/P1/P2/P3} | {criterion} | {brief_issue} | {brief_fix} | +| {line} | {P0/P1/P2/P3} | {criterion} | {brief_issue} | {brief_fix} | + +### Quality Trends + +{If reviewing same file multiple times, show trend:} + +| Review Date | Score | Grade | Critical Issues | Trend | +| ------------ | ------------- | --------- | --------------- | ----------- | +| {YYYY-MM-DD} | {score_1}/100 | {grade_1} | {count_1} | ⬆️ Improved | +| {YYYY-MM-DD} | {score_2}/100 | {grade_2} | {count_2} | ⬇️ Declined | +| {YYYY-MM-DD} | {score_3}/100 | {grade_3} | {count_3} | ➡️ Stable | + +### Related Reviews + +{If reviewing multiple files in directory/suite:} + +| File | Score | Grade | Critical | Status | +| -------- | ----------- | ------- | -------- | ------------------ | +| {file_1} | {score}/100 | {grade} | {count} | {Approved/Blocked} | +| {file_2} | {score}/100 | {grade} | {count} | {Approved/Blocked} | +| {file_3} | {score}/100 | {grade} | {count} | {Approved/Blocked} | + +**Suite Average**: {avg_score}/100 ({avg_grade}) + +--- + +## Review Metadata + +**Generated By**: BMad TEA Agent (Test Architect) +**Workflow**: testarch-test-review v4.0 +**Review ID**: test-review-{filename}-{YYYYMMDD} +**Timestamp**: {YYYY-MM-DD HH:MM:SS} +**Version**: 1.0 + +--- + +## Feedback on This Review + +If you have questions or feedback on this review: + +1. Review patterns in knowledge base: `testarch/knowledge/` +2. Consult tea-index.csv for detailed guidance +3. Request clarification on specific violations +4. Pair with QA engineer to apply patterns + +This review is guidance, not rigid rules. Context matters - if a pattern is justified, document it with a comment. 
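+
+For example, a justified exception might be documented like this (an illustrative sketch - any comment that states the reason and when to revisit it works):
+
+```typescript
+// JUSTIFIED (TEA Review): hard wait is intentional - the third-party widget
+// animates on a fixed 2s timer and exposes no DOM or network signal to await.
+// Revisit if the vendor adds a ready event.
+await page.waitForTimeout(2000);
+```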
diff --git a/src/bmm/workflows/testarch/test-review/workflow.yaml b/src/bmm/workflows/testarch/test-review/workflow.yaml new file mode 100644 index 00000000..58dad5ee --- /dev/null +++ b/src/bmm/workflows/testarch/test-review/workflow.yaml @@ -0,0 +1,48 @@ +# Test Architect workflow: test-review +name: testarch-test-review +description: "Review test quality using comprehensive knowledge base and best practices validation" +author: "BMad" + +# Critical variables from config +config_source: "{project-root}/_bmad/bmm/config.yaml" +output_folder: "{config_source}:output_folder" +user_name: "{config_source}:user_name" +communication_language: "{config_source}:communication_language" +document_output_language: "{config_source}:document_output_language" +date: system-generated + +# Workflow components +installed_path: "{project-root}/_bmad/bmm/workflows/testarch/test-review" +instructions: "{installed_path}/instructions.md" +validation: "{installed_path}/checklist.md" +template: "{installed_path}/test-review-template.md" + +# Variables and inputs +variables: + test_dir: "{project-root}/tests" # Root test directory + review_scope: "single" # single (one file), directory (folder), suite (all tests) + +# Output configuration +default_output_file: "{output_folder}/test-review.md" + +# Required tools +required_tools: + - read_file # Read test files, story, test-design + - write_file # Create review report + - list_files # Discover test files in directory + - search_repo # Find tests by patterns + - glob # Find test files matching patterns + +tags: + - qa + - test-architect + - code-review + - quality + - best-practices + +execution_hints: + interactive: false # Minimize prompts + autonomous: true # Proceed without user input unless blocked + iterative: true # Can review multiple files + +web_bundle: false diff --git a/src/bmm/workflows/testarch/trace/checklist.md b/src/bmm/workflows/testarch/trace/checklist.md new file mode 100644 index 00000000..78f345a1 --- /dev/null +++ b/src/bmm/workflows/testarch/trace/checklist.md @@ -0,0 +1,642 @@ +# Requirements Traceability & Gate Decision - Validation Checklist + +**Workflow:** `testarch-trace` +**Purpose:** Ensure complete traceability matrix with actionable gap analysis AND make deployment readiness decision (PASS/CONCERNS/FAIL/WAIVED) + +This checklist covers **two sequential phases**: + +- **PHASE 1**: Requirements Traceability (always executed) +- **PHASE 2**: Quality Gate Decision (executed if `enable_gate_decision: true`) + +--- + +# PHASE 1: REQUIREMENTS TRACEABILITY + +## Prerequisites Validation + +- [ ] Acceptance criteria are available (from story file OR inline) +- [ ] Test suite exists (or gaps are acknowledged and documented) +- [ ] If tests are missing, recommend `*atdd` (trace does not run it automatically) +- [ ] Test directory path is correct (`test_dir` variable) +- [ ] Story file is accessible (if using BMad mode) +- [ ] Knowledge base is loaded (test-priorities, traceability, risk-governance) + +--- + +## Context Loading + +- [ ] Story file read successfully (if applicable) +- [ ] Acceptance criteria extracted correctly +- [ ] Story ID identified (e.g., 1.3) +- [ ] `test-design.md` loaded (if available) +- [ ] `tech-spec.md` loaded (if available) +- [ ] `PRD.md` loaded (if available) +- [ ] Relevant knowledge fragments loaded from `tea-index.csv` + +--- + +## Test Discovery and Cataloging + +- [ ] Tests auto-discovered using multiple strategies (test IDs, describe blocks, file paths) +- [ ] Tests categorized by level (E2E, API, 
Component, Unit) +- [ ] Test metadata extracted: + - [ ] Test IDs (e.g., 1.3-E2E-001) + - [ ] Describe/context blocks + - [ ] It blocks (individual test cases) + - [ ] Given-When-Then structure (if BDD) + - [ ] Priority markers (P0/P1/P2/P3) +- [ ] All relevant test files found (no tests missed due to naming conventions) + +--- + +## Criteria-to-Test Mapping + +- [ ] Each acceptance criterion mapped to tests (or marked as NONE) +- [ ] Explicit references found (test IDs, describe blocks mentioning criterion) +- [ ] Test level documented (E2E, API, Component, Unit) +- [ ] Given-When-Then narrative verified for alignment +- [ ] Traceability matrix table generated: + - [ ] Criterion ID + - [ ] Description + - [ ] Test ID + - [ ] Test File + - [ ] Test Level + - [ ] Coverage Status + +--- + +## Coverage Classification + +- [ ] Coverage status classified for each criterion: + - [ ] **FULL** - All scenarios validated at appropriate level(s) + - [ ] **PARTIAL** - Some coverage but missing edge cases or levels + - [ ] **NONE** - No test coverage at any level + - [ ] **UNIT-ONLY** - Only unit tests (missing integration/E2E validation) + - [ ] **INTEGRATION-ONLY** - Only API/Component tests (missing unit confidence) +- [ ] Classification justifications provided +- [ ] Edge cases considered in FULL vs PARTIAL determination + +--- + +## Duplicate Coverage Detection + +- [ ] Duplicate coverage checked across test levels +- [ ] Acceptable overlap identified (defense in depth for critical paths) +- [ ] Unacceptable duplication flagged (same validation at multiple levels) +- [ ] Recommendations provided for consolidation +- [ ] Selective testing principles applied + +--- + +## Gap Analysis + +- [ ] Coverage gaps identified: + - [ ] Criteria with NONE status + - [ ] Criteria with PARTIAL status + - [ ] Criteria with UNIT-ONLY status + - [ ] Criteria with INTEGRATION-ONLY status +- [ ] Gaps prioritized by risk level using test-priorities framework: + - [ ] **CRITICAL** - P0 criteria without FULL coverage (BLOCKER) + - [ ] **HIGH** - P1 criteria without FULL coverage (PR blocker) + - [ ] **MEDIUM** - P2 criteria without FULL coverage (nightly gap) + - [ ] **LOW** - P3 criteria without FULL coverage (acceptable) +- [ ] Specific test recommendations provided for each gap: + - [ ] Suggested test level (E2E, API, Component, Unit) + - [ ] Test description (Given-When-Then) + - [ ] Recommended test ID (e.g., 1.3-E2E-004) + - [ ] Explanation of why test is needed + +--- + +## Coverage Metrics + +- [ ] Overall coverage percentage calculated (FULL coverage / total criteria) +- [ ] P0 coverage percentage calculated +- [ ] P1 coverage percentage calculated +- [ ] P2 coverage percentage calculated (if applicable) +- [ ] Coverage by level calculated: + - [ ] E2E coverage % + - [ ] API coverage % + - [ ] Component coverage % + - [ ] Unit coverage % + +--- + +## Test Quality Verification + +For each mapped test, verify: + +- [ ] Explicit assertions are present (not hidden in helpers) +- [ ] Test follows Given-When-Then structure +- [ ] No hard waits or sleeps (deterministic waiting only) +- [ ] Self-cleaning (test cleans up its data) +- [ ] File size < 300 lines +- [ ] Test duration < 90 seconds + +Quality issues flagged: + +- [ ] **BLOCKER** issues identified (missing assertions, hard waits, flaky patterns) +- [ ] **WARNING** issues identified (large files, slow tests, unclear structure) +- [ ] **INFO** issues identified (style inconsistencies, missing documentation) + +Knowledge fragments referenced: + +- [ ] 
`test-quality.md` for Definition of Done +- [ ] `fixture-architecture.md` for self-cleaning patterns +- [ ] `network-first.md` for Playwright best practices +- [ ] `data-factories.md` for test data patterns + +--- + +## Phase 1 Deliverables Generated + +### Traceability Matrix Markdown + +- [ ] File created at `{output_folder}/traceability-matrix.md` +- [ ] Template from `trace-template.md` used +- [ ] Full mapping table included +- [ ] Coverage status section included +- [ ] Gap analysis section included +- [ ] Quality assessment section included +- [ ] Recommendations section included + +### Coverage Badge/Metric (if enabled) + +- [ ] Badge markdown generated +- [ ] Metrics exported to JSON for CI/CD integration + +### Updated Story File (if enabled) + +- [ ] "Traceability" section added to story markdown +- [ ] Link to traceability matrix included +- [ ] Coverage summary included + +--- + +## Phase 1 Quality Assurance + +### Accuracy Checks + +- [ ] All acceptance criteria accounted for (none skipped) +- [ ] Test IDs correctly formatted (e.g., 1.3-E2E-001) +- [ ] File paths are correct and accessible +- [ ] Coverage percentages calculated correctly +- [ ] No false positives (tests incorrectly mapped to criteria) +- [ ] No false negatives (existing tests missed in mapping) + +### Completeness Checks + +- [ ] All test levels considered (E2E, API, Component, Unit) +- [ ] All priorities considered (P0, P1, P2, P3) +- [ ] All coverage statuses used appropriately (FULL, PARTIAL, NONE, UNIT-ONLY, INTEGRATION-ONLY) +- [ ] All gaps have recommendations +- [ ] All quality issues have severity and remediation guidance + +### Actionability Checks + +- [ ] Recommendations are specific (not generic) +- [ ] Test IDs suggested for new tests +- [ ] Given-When-Then provided for recommended tests +- [ ] Impact explained for each gap +- [ ] Priorities clear (CRITICAL, HIGH, MEDIUM, LOW) + +--- + +## Phase 1 Documentation + +- [ ] Traceability matrix is readable and well-formatted +- [ ] Tables render correctly in markdown +- [ ] Code blocks have proper syntax highlighting +- [ ] Links are valid and accessible +- [ ] Recommendations are clear and prioritized + +--- + +# PHASE 2: QUALITY GATE DECISION + +**Note**: Phase 2 executes only if `enable_gate_decision: true` in workflow.yaml + +--- + +## Prerequisites + +### Evidence Gathering + +- [ ] Test execution results obtained (CI/CD pipeline, test framework reports) +- [ ] Story/epic/release file identified and read +- [ ] Test design document discovered or explicitly provided (if available) +- [ ] Traceability matrix discovered or explicitly provided (available from Phase 1) +- [ ] NFR assessment discovered or explicitly provided (if available) +- [ ] Code coverage report discovered or explicitly provided (if available) +- [ ] Burn-in results discovered or explicitly provided (if available) + +### Evidence Validation + +- [ ] Evidence freshness validated (warn if >7 days old, recommend re-running workflows) +- [ ] All required assessments available or user acknowledged gaps +- [ ] Test results are complete (not partial or interrupted runs) +- [ ] Test results match current codebase (not from outdated branch) + +### Knowledge Base Loading + +- [ ] `risk-governance.md` loaded successfully +- [ ] `probability-impact.md` loaded successfully +- [ ] `test-quality.md` loaded successfully +- [ ] `test-priorities.md` loaded successfully +- [ ] `ci-burn-in.md` loaded (if burn-in results available) + +--- + +## Process Steps + +### Step 1: Context Loading + +- [ ] 
Gate type identified (story/epic/release/hotfix) +- [ ] Target ID extracted (story_id, epic_num, or release_version) +- [ ] Decision thresholds loaded from workflow variables +- [ ] Risk tolerance configuration loaded +- [ ] Waiver policy loaded + +### Step 2: Evidence Parsing + +**Test Results:** + +- [ ] Total test count extracted +- [ ] Passed test count extracted +- [ ] Failed test count extracted +- [ ] Skipped test count extracted +- [ ] Test duration extracted +- [ ] P0 test pass rate calculated +- [ ] P1 test pass rate calculated +- [ ] Overall test pass rate calculated + +**Quality Assessments:** + +- [ ] P0/P1/P2/P3 scenarios extracted from test-design.md (if available) +- [ ] Risk scores extracted from test-design.md (if available) +- [ ] Coverage percentages extracted from traceability-matrix.md (available from Phase 1) +- [ ] Coverage gaps extracted from traceability-matrix.md (available from Phase 1) +- [ ] NFR status extracted from nfr-assessment.md (if available) +- [ ] Security issues count extracted from nfr-assessment.md (if available) + +**Code Coverage:** + +- [ ] Line coverage percentage extracted (if available) +- [ ] Branch coverage percentage extracted (if available) +- [ ] Function coverage percentage extracted (if available) +- [ ] Critical path coverage validated (if available) + +**Burn-in Results:** + +- [ ] Burn-in iterations count extracted (if available) +- [ ] Flaky tests count extracted (if available) +- [ ] Stability score calculated (if available) + +### Step 3: Decision Rules Application + +**P0 Criteria Evaluation:** + +- [ ] P0 test pass rate evaluated (must be 100%) +- [ ] P0 acceptance criteria coverage evaluated (must be 100%) +- [ ] Security issues count evaluated (must be 0) +- [ ] Critical NFR failures evaluated (must be 0) +- [ ] Flaky tests evaluated (must be 0 if burn-in enabled) +- [ ] P0 decision recorded: PASS or FAIL + +**P1 Criteria Evaluation:** + +- [ ] P1 test pass rate evaluated (threshold: min_p1_pass_rate) +- [ ] P1 acceptance criteria coverage evaluated (threshold: 95%) +- [ ] Overall test pass rate evaluated (threshold: min_overall_pass_rate) +- [ ] Code coverage evaluated (threshold: min_coverage) +- [ ] P1 decision recorded: PASS or CONCERNS + +**P2/P3 Criteria Evaluation:** + +- [ ] P2 failures tracked (informational, don't block if allow_p2_failures: true) +- [ ] P3 failures tracked (informational, don't block if allow_p3_failures: true) +- [ ] Residual risks documented + +**Final Decision:** + +- [ ] Decision determined: PASS / CONCERNS / FAIL / WAIVED +- [ ] Decision rationale documented +- [ ] Decision is deterministic (follows rules, not arbitrary) + +### Step 4: Documentation + +**Gate Decision Document Created:** + +- [ ] Story/epic/release info section complete (ID, title, description, links) +- [ ] Decision clearly stated (PASS / CONCERNS / FAIL / WAIVED) +- [ ] Decision date recorded +- [ ] Evaluator recorded (user or agent name) + +**Evidence Summary Documented:** + +- [ ] Test results summary complete (total, passed, failed, pass rates) +- [ ] Coverage summary complete (P0/P1 criteria, code coverage) +- [ ] NFR validation summary complete (security, performance, reliability, maintainability) +- [ ] Flakiness summary complete (burn-in iterations, flaky test count) + +**Rationale Documented:** + +- [ ] Decision rationale clearly explained +- [ ] Key evidence highlighted +- [ ] Assumptions and caveats noted (if any) + +**Residual Risks Documented (if CONCERNS or WAIVED):** + +- [ ] Unresolved P1/P2 issues listed +- [ 
] Probability × impact estimated for each risk +- [ ] Mitigations or workarounds described + +**Waivers Documented (if WAIVED):** + +- [ ] Waiver reason documented (business justification) +- [ ] Waiver approver documented (name, role) +- [ ] Waiver expiry date documented +- [ ] Remediation plan documented (fix in next release, due date) +- [ ] Monitoring plan documented + +**Critical Issues Documented (if FAIL or CONCERNS):** + +- [ ] Top 5-10 critical issues listed +- [ ] Priority assigned to each issue (P0/P1/P2) +- [ ] Owner assigned to each issue +- [ ] Due date assigned to each issue + +**Recommendations Documented:** + +- [ ] Next steps clearly stated for decision type +- [ ] Deployment recommendation provided +- [ ] Monitoring recommendations provided (if applicable) +- [ ] Remediation recommendations provided (if applicable) + +### Step 5: Status Updates and Notifications + +**Gate YAML Created:** + +- [ ] Gate YAML snippet generated with decision and criteria +- [ ] Evidence references included in YAML +- [ ] Next steps included in YAML +- [ ] YAML file saved to output folder + +**Stakeholder Notification Generated:** + +- [ ] Notification subject line created +- [ ] Notification body created with summary +- [ ] Recipients identified (PM, SM, DEV lead, stakeholders) +- [ ] Notification ready for delivery (if notify_stakeholders: true) + +**Outputs Saved:** + +- [ ] Gate decision document saved to `{output_file}` +- [ ] Gate YAML saved to `{output_folder}/gate-decision-{target}.yaml` +- [ ] All outputs are valid and readable + +--- + +## Phase 2 Output Validation + +### Gate Decision Document + +**Completeness:** + +- [ ] All required sections present (info, decision, evidence, rationale, next steps) +- [ ] No placeholder text or TODOs left in document +- [ ] All evidence references are accurate and complete +- [ ] All links to artifacts are valid + +**Accuracy:** + +- [ ] Decision matches applied criteria rules +- [ ] Test results match CI/CD pipeline output +- [ ] Coverage percentages match reports +- [ ] NFR status matches assessment document +- [ ] No contradictions or inconsistencies + +**Clarity:** + +- [ ] Decision rationale is clear and unambiguous +- [ ] Technical jargon is explained or avoided +- [ ] Stakeholders can understand next steps +- [ ] Recommendations are actionable + +### Gate YAML + +**Format:** + +- [ ] YAML is valid (no syntax errors) +- [ ] All required fields present (target, decision, date, evaluator, criteria, evidence) +- [ ] Field values are correct data types (numbers, strings, dates) + +**Content:** + +- [ ] Criteria values match decision document +- [ ] Evidence references are accurate +- [ ] Next steps align with decision type + +--- + +## Phase 2 Quality Checks + +### Decision Integrity + +- [ ] Decision is deterministic (follows rules, not arbitrary) +- [ ] P0 failures result in FAIL decision (unless waived) +- [ ] Security issues result in FAIL decision (unless waived - but should never be waived) +- [ ] Waivers have business justification and approver (if WAIVED) +- [ ] Residual risks are documented (if CONCERNS or WAIVED) + +### Evidence-Based + +- [ ] Decision is based on actual test results (not guesses) +- [ ] All claims are supported by evidence +- [ ] No assumptions without documentation +- [ ] Evidence sources are cited (CI run IDs, report URLs) + +### Transparency + +- [ ] Decision rationale is transparent and auditable +- [ ] Criteria evaluation is documented step-by-step +- [ ] Any deviations from standard process are explained +- [ 
] Waiver justifications are clear (if applicable) + +### Consistency + +- [ ] Decision aligns with risk-governance knowledge fragment +- [ ] Priority framework (P0/P1/P2/P3) applied consistently +- [ ] Terminology consistent with test-quality knowledge fragment +- [ ] Decision matrix followed correctly + +--- + +## Phase 2 Integration Points + +### CI/CD Pipeline + +- [ ] Gate YAML is CI/CD-compatible +- [ ] YAML can be parsed by pipeline automation +- [ ] Decision can be used to block/allow deployments +- [ ] Evidence references are accessible to pipeline + +### Stakeholders + +- [ ] Notification message is clear and actionable +- [ ] Decision is explained in non-technical terms +- [ ] Next steps are specific and time-bound +- [ ] Recipients are appropriate for decision type + +--- + +## Phase 2 Compliance and Audit + +### Audit Trail + +- [ ] Decision date and time recorded +- [ ] Evaluator identified (user or agent) +- [ ] All evidence sources cited +- [ ] Decision criteria documented +- [ ] Rationale clearly explained + +### Traceability + +- [ ] Gate decision traceable to story/epic/release +- [ ] Evidence traceable to specific test runs +- [ ] Assessments traceable to workflows that created them +- [ ] Waiver traceable to approver (if applicable) + +### Compliance + +- [ ] Security requirements validated (no unresolved vulnerabilities) +- [ ] Quality standards met or waived with justification +- [ ] Regulatory requirements addressed (if applicable) +- [ ] Documentation sufficient for external audit + +--- + +## Phase 2 Edge Cases and Exceptions + +### Missing Evidence + +- [ ] If test-design.md missing, decision still possible with test results + trace +- [ ] If traceability-matrix.md missing, decision still possible with test results (but Phase 1 should provide it) +- [ ] If nfr-assessment.md missing, NFR validation marked as NOT ASSESSED +- [ ] If code coverage missing, coverage criterion marked as NOT ASSESSED +- [ ] User acknowledged gaps in evidence or provided alternative proof + +### Stale Evidence + +- [ ] Evidence freshness checked (if validate_evidence_freshness: true) +- [ ] Warnings issued for assessments >7 days old +- [ ] User acknowledged stale evidence or re-ran workflows +- [ ] Decision document notes any stale evidence used + +### Conflicting Evidence + +- [ ] Conflicts between test results and assessments resolved +- [ ] Most recent/authoritative source identified +- [ ] Conflict resolution documented in decision rationale +- [ ] User consulted if conflict cannot be resolved + +### Waiver Scenarios + +- [ ] Waiver only used for FAIL decision (not PASS or CONCERNS) +- [ ] Waiver has business justification (not technical convenience) +- [ ] Waiver has named approver with authority (VP/CTO/PO) +- [ ] Waiver has expiry date (does NOT apply to future releases) +- [ ] Waiver has remediation plan with concrete due date +- [ ] Security vulnerabilities are NOT waived (enforced) + +--- + +# FINAL VALIDATION (Both Phases) + +## Non-Prescriptive Validation + +- [ ] Traceability format adapted to team needs (not rigid template) +- [ ] Examples are minimal and focused on patterns +- [ ] Teams can extend with custom classifications +- [ ] Integration with external systems supported (JIRA, Azure DevOps) +- [ ] Compliance requirements considered (if applicable) + +--- + +## Documentation and Communication + +- [ ] All documents are readable and well-formatted +- [ ] Tables render correctly in markdown +- [ ] Code blocks have proper syntax highlighting +- [ ] Links are valid and 
accessible +- [ ] Recommendations are clear and prioritized +- [ ] Gate decision is prominent and unambiguous (Phase 2) + +--- + +## Final Validation + +**Phase 1 (Traceability):** + +- [ ] All prerequisites met +- [ ] All acceptance criteria mapped or gaps documented +- [ ] P0 coverage is 100% OR documented as BLOCKER +- [ ] Gap analysis is complete and prioritized +- [ ] Test quality issues identified and flagged +- [ ] Deliverables generated and saved + +**Phase 2 (Gate Decision):** + +- [ ] All quality evidence gathered +- [ ] Decision criteria applied correctly +- [ ] Decision rationale documented +- [ ] Gate YAML ready for CI/CD integration +- [ ] Status file updated (if enabled) +- [ ] Stakeholders notified (if enabled) + +**Workflow Complete:** + +- [ ] Phase 1 completed successfully +- [ ] Phase 2 completed successfully (if enabled) +- [ ] All outputs validated and saved +- [ ] Ready to proceed based on gate decision + +--- + +## Sign-Off + +**Phase 1 - Traceability Status:** + +- [ ] ✅ PASS - All quality gates met, no critical gaps +- [ ] ⚠️ WARN - P1 gaps exist, address before PR merge +- [ ] ❌ FAIL - P0 gaps exist, BLOCKER for release + +**Phase 2 - Gate Decision Status (if enabled):** + +- [ ] ✅ PASS - Deploy to production +- [ ] ⚠️ CONCERNS - Deploy with monitoring +- [ ] ❌ FAIL - Block deployment, fix issues +- [ ] 🔓 WAIVED - Deploy with business approval and remediation plan + +**Next Actions:** + +- If PASS (both phases): Proceed to deployment +- If WARN/CONCERNS: Address gaps/issues, proceed with monitoring +- If FAIL (either phase): Run `*atdd` for missing tests, fix issues, re-run `*trace` +- If WAIVED: Deploy with approved waiver, schedule remediation + +--- + +## Notes + +Record any issues, deviations, or important observations during workflow execution: + +- **Phase 1 Issues**: [Note any traceability mapping challenges, missing tests, quality concerns] +- **Phase 2 Issues**: [Note any missing, stale, or conflicting evidence] +- **Decision Rationale**: [Document any nuanced reasoning or edge cases] +- **Waiver Details**: [Document waiver negotiations or approvals] +- **Follow-up Actions**: [List any actions required after gate decision] + +--- + + diff --git a/src/bmm/workflows/testarch/trace/instructions.md b/src/bmm/workflows/testarch/trace/instructions.md new file mode 100644 index 00000000..deafb36c --- /dev/null +++ b/src/bmm/workflows/testarch/trace/instructions.md @@ -0,0 +1,1030 @@ +# Test Architect Workflow: Requirements Traceability & Quality Gate Decision + +**Workflow:** `testarch-trace` +**Purpose:** Generate requirements-to-tests traceability matrix, analyze coverage gaps, and make quality gate decisions (PASS/CONCERNS/FAIL/WAIVED) +**Agent:** Test Architect (TEA) +**Format:** Pure Markdown v4.0 (no XML blocks) + +--- + +## Overview + +This workflow operates in two sequential phases to validate test coverage and deployment readiness: + +**PHASE 1 - REQUIREMENTS TRACEABILITY:** Create comprehensive traceability matrix mapping acceptance criteria to implemented tests, identify coverage gaps, and provide actionable recommendations. + +**PHASE 2 - QUALITY GATE DECISION:** Use traceability results combined with test execution evidence to make gate decisions (PASS/CONCERNS/FAIL/WAIVED) that determine deployment readiness. 
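+
+As a mental model, the Phase 2 decision can be sketched as a small deterministic function. This is a minimal illustration, not the implementation: the threshold names mirror the workflow variables (`min_p1_pass_rate`, `min_overall_pass_rate`, `min_coverage`), and the numeric defaults shown are placeholder assumptions:
+
+```typescript
+type Gate = 'PASS' | 'CONCERNS' | 'FAIL' | 'WAIVED';
+
+interface Evidence {
+  p0PassRate: number;          // % of P0 tests passing
+  p0Coverage: number;          // % of P0 acceptance criteria with FULL coverage
+  securityIssues: number;      // count of open security issues
+  criticalNfrFailures: number; // count of critical NFR failures
+  flakyTests: number;          // flaky tests found by burn-in (0 if not run)
+  p1PassRate: number;          // % of P1 tests passing
+  overallPassRate: number;     // % of all tests passing
+  codeCoverage: number;        // % line coverage (100 if NOT ASSESSED)
+  approvedWaiver: boolean;     // business-approved waiver on file
+}
+
+function decideGate(
+  e: Evidence,
+  t = { minP1PassRate: 95, minOverallPassRate: 90, minCoverage: 80 }, // placeholders
+): Gate {
+  // Security issues are never waivable.
+  if (e.securityIssues > 0) return 'FAIL';
+
+  // Any other P0 failure blocks the release unless a waiver is approved.
+  const p0Fail =
+    e.p0PassRate < 100 || e.p0Coverage < 100 || e.criticalNfrFailures > 0 || e.flakyTests > 0;
+  if (p0Fail) return e.approvedWaiver ? 'WAIVED' : 'FAIL';
+
+  // P1 shortfalls degrade the decision to CONCERNS; P2/P3 failures are informational.
+  const p1Concern =
+    e.p1PassRate < t.minP1PassRate ||
+    e.overallPassRate < t.minOverallPassRate ||
+    e.codeCoverage < t.minCoverage;
+  return p1Concern ? 'CONCERNS' : 'PASS';
+}
+```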
+ +**Key Capabilities:** + +- Map acceptance criteria to specific test cases across all levels (E2E, API, Component, Unit) +- Classify coverage status (FULL, PARTIAL, NONE, UNIT-ONLY, INTEGRATION-ONLY) +- Prioritize gaps by risk level (P0/P1/P2/P3) using test-priorities framework +- Apply deterministic decision rules based on coverage and test execution results +- Generate gate decisions with evidence and rationale +- Support waivers for business-approved exceptions +- Update workflow status and notify stakeholders + +--- + +## Prerequisites + +**Required (Phase 1):** + +- Acceptance criteria (from story file OR provided inline) +- Implemented test suite (or acknowledge gaps to be addressed) + +**Required (Phase 2 - if `enable_gate_decision: true`):** + +- Test execution results (CI/CD test reports, pass/fail rates) +- Test design with risk priorities (P0/P1/P2/P3) + +**Recommended:** + +- `test-design.md` (for risk assessment and priority context) +- `nfr-assessment.md` (for release-level gates) +- `tech-spec.md` (for technical implementation context) +- Test framework configuration (playwright.config.ts, jest.config.js, etc.) + +**Halt Conditions:** + +- If story lacks any implemented tests AND no gaps are acknowledged, recommend running `*atdd` workflow first +- If acceptance criteria are completely missing, halt and request them +- If Phase 2 enabled but test execution results missing, warn and skip gate decision + +Note: `*trace` never runs `*atdd` automatically; it only recommends running it when tests are missing. + +--- + +## PHASE 1: REQUIREMENTS TRACEABILITY + +This phase focuses on mapping requirements to tests, analyzing coverage, and identifying gaps. + +--- + +### Step 1: Load Context and Knowledge Base + +**Actions:** + +1. Load relevant knowledge fragments from `{project-root}/_bmad/bmm/testarch/tea-index.csv`: + - `test-priorities-matrix.md` - P0/P1/P2/P3 risk framework with automated priority calculation, risk-based mapping, tagging strategy (389 lines, 2 examples) + - `risk-governance.md` - Risk-based testing approach: 6 categories (TECH, SEC, PERF, DATA, BUS, OPS), automated scoring, gate decision engine, coverage traceability (625 lines, 4 examples) + - `probability-impact.md` - Risk scoring methodology: probability × impact matrix, automated classification, dynamic re-assessment, gate integration (604 lines, 4 examples) + - `test-quality.md` - Definition of Done for tests: deterministic, isolated with cleanup, explicit assertions, length/time limits (658 lines, 5 examples) + - `selective-testing.md` - Duplicate coverage patterns: tag-based, spec filters, diff-based selection, promotion rules (727 lines, 4 examples) + +2. Read story file (if provided): + - Extract acceptance criteria + - Identify story ID (e.g., 1.3) + - Note any existing test design or priority information + +3. Read related BMad artifacts (if available): + - `test-design.md` - Risk assessment and test priorities + - `tech-spec.md` - Technical implementation details + - `PRD.md` - Product requirements context + +**Output:** Complete understanding of requirements, priorities, and existing context + +--- + +### Step 2: Discover and Catalog Tests + +**Actions:** + +1. Auto-discover test files related to the story: + - Search for test IDs (e.g., `1.3-E2E-001`, `1.3-UNIT-005`) + - Search for describe blocks mentioning feature name + - Search for file paths matching feature directory + - Use `glob` to find test files in `{test_dir}` + +2. 
Categorize tests by level: + - **E2E Tests**: Full user journeys through UI + - **API Tests**: HTTP contract and integration tests + - **Component Tests**: UI component behavior in isolation + - **Unit Tests**: Business logic and pure functions + +3. Extract test metadata: + - Test ID (if present) + - Describe/context blocks + - It blocks (individual test cases) + - Given-When-Then structure (if BDD) + - Assertions used + - Priority markers (P0/P1/P2/P3) + +**Output:** Complete catalog of all tests for this feature + +--- + +### Step 3: Map Criteria to Tests + +**Actions:** + +1. For each acceptance criterion: + - Search for explicit references (test IDs, describe blocks mentioning criterion) + - Map to specific test files and it blocks + - Use Given-When-Then narrative to verify alignment + - Document test level (E2E, API, Component, Unit) + +2. Build traceability matrix: + + ``` + | Criterion ID | Description | Test ID | Test File | Test Level | Coverage Status | + | ------------ | ----------- | ----------- | ---------------- | ---------- | --------------- | + | AC-1 | User can... | 1.3-E2E-001 | e2e/auth.spec.ts | E2E | FULL | + ``` + +3. Classify coverage status for each criterion: + - **FULL**: All scenarios validated at appropriate level(s) + - **PARTIAL**: Some coverage but missing edge cases or levels + - **NONE**: No test coverage at any level + - **UNIT-ONLY**: Only unit tests (missing integration/E2E validation) + - **INTEGRATION-ONLY**: Only API/Component tests (missing unit confidence) + +4. Check for duplicate coverage: + - Same behavior tested at multiple levels unnecessarily + - Flag violations of selective testing principles + - Recommend consolidation where appropriate + +**Output:** Complete traceability matrix with coverage classifications + +--- + +### Step 4: Analyze Gaps and Prioritize + +**Actions:** + +1. Identify coverage gaps: + - List criteria with NONE, PARTIAL, UNIT-ONLY, or INTEGRATION-ONLY status + - Assign severity based on test-priorities framework: + - **CRITICAL**: P0 criteria without FULL coverage (blocks release) + - **HIGH**: P1 criteria without FULL coverage (PR blocker) + - **MEDIUM**: P2 criteria without FULL coverage (nightly test gap) + - **LOW**: P3 criteria without FULL coverage (acceptable gap) + +2. Recommend specific tests to add: + - Suggest test level (E2E, API, Component, Unit) + - Provide test description (Given-When-Then) + - Recommend test ID (e.g., `1.3-E2E-004`) + - Explain why this test is needed + +3. Calculate coverage metrics: + - Overall coverage percentage (criteria with FULL coverage / total criteria) + - P0 coverage percentage (critical paths) + - P1 coverage percentage (high priority) + - Coverage by level (E2E%, API%, Component%, Unit%) + +4. Check against quality gates: + - P0 coverage >= 100% (required) + - P1 coverage >= 90% (recommended) + - Overall coverage >= 80% (recommended) + +**Output:** Prioritized gap analysis with actionable recommendations and coverage metrics + +--- + +### Step 5: Verify Test Quality + +**Actions:** + +1. For each mapped test, verify: + - Explicit assertions are present (not hidden in helpers) + - Test follows Given-When-Then structure + - No hard waits or sleeps + - Self-cleaning (test cleans up its data) + - File size < 300 lines + - Test duration < 90 seconds + +2. Flag quality issues: + - **BLOCKER**: Missing assertions, hard waits, flaky patterns + - **WARNING**: Large files, slow tests, unclear structure + - **INFO**: Style inconsistencies, missing documentation + +3. 
Reference knowledge fragments: + - `test-quality.md` for Definition of Done + - `fixture-architecture.md` for self-cleaning patterns + - `network-first.md` for Playwright best practices + - `data-factories.md` for test data patterns + +**Output:** Quality assessment for each test with improvement recommendations + +--- + +### Step 6: Generate Deliverables (Phase 1) + +**Actions:** + +1. Create traceability matrix markdown file: + - Use template from `trace-template.md` + - Include full mapping table + - Add coverage status section + - Add gap analysis section + - Add quality assessment section + - Add recommendations section + - Save to `{output_folder}/traceability-matrix.md` + +2. Generate gate YAML snippet (if enabled): + + ```yaml + traceability: + story_id: '1.3' + coverage: + overall: 85% + p0: 100% + p1: 90% + p2: 75% + gaps: + critical: 0 + high: 1 + medium: 2 + status: 'PASS' # or "FAIL" if P0 < 100% + ``` + +3. Create coverage badge/metric (if enabled): + - Generate badge markdown: `![Coverage](https://img.shields.io/badge/coverage-85%25-green)` + - Export metrics to JSON for CI/CD integration + +4. Update story file (if enabled): + - Add "Traceability" section to story markdown + - Link to traceability matrix + - Include coverage summary + - Add gate status + +**Output:** Complete Phase 1 traceability deliverables + +**Next:** If `enable_gate_decision: true`, proceed to Phase 2. Otherwise, workflow complete. + +--- + +## PHASE 2: QUALITY GATE DECISION + +This phase uses traceability results to make a quality gate decision (PASS/CONCERNS/FAIL/WAIVED) based on evidence and decision rules. + +**When Phase 2 Runs:** Automatically after Phase 1 if `enable_gate_decision: true` (default: true) + +**Skip Conditions:** If test execution results (`test_results`) not provided, warn and skip Phase 2. + +--- + +### Step 7: Gather Quality Evidence + +**Actions:** + +1. **Load Phase 1 traceability results** (inherited context): + - Coverage metrics (P0/P1/overall percentages) + - Gap analysis (missing/partial tests) + - Quality concerns (test quality flags) + - Traceability matrix + +2. **Load test execution results** (if `test_results` provided): + - Read CI/CD test reports (JUnit XML, TAP, JSON) + - Extract pass/fail counts by priority + - Calculate pass rates: + - **P0 pass rate**: `(P0 passed / P0 total) * 100` + - **P1 pass rate**: `(P1 passed / P1 total) * 100` + - **Overall pass rate**: `(All passed / All total) * 100` + - Identify failing tests and map to criteria + +3. **Load NFR assessment** (if `nfr_file` provided): + - Read `nfr-assessment.md` or similar + - Check critical NFR status (performance, security, scalability) + - Flag any critical NFR failures + +4. **Load supporting artifacts**: + - `test-design.md` → Risk priorities, DoD checklist + - `story-*.md` or `Epics.md` → Requirements context + +5. **Validate evidence freshness** (if `validate_evidence_freshness: true`): + - Check timestamps of test-design, traceability, NFR assessments + - Warn if artifacts are >7 days old + +6. **Check prerequisite workflows** (if `check_all_workflows_complete: true`): + - Verify test-design workflow complete + - Verify trace workflow complete (Phase 1) + - Verify nfr-assess workflow complete (if release-level gate) + +**Output:** Consolidated evidence bundle with all quality signals + +--- + +### Step 8: Apply Decision Rules + +**If `decision_mode: "deterministic"`** (rule-based - default): + +**Decision rules** (based on `workflow.yaml` thresholds): + +1. 
**PASS** if ALL of the following are true: + - P0 coverage ≥ `min_p0_coverage` (default: 100%) + - P1 coverage ≥ `min_p1_coverage` (default: 90%) + - Overall coverage ≥ `min_overall_coverage` (default: 80%) + - P0 test pass rate ≥ `min_p0_pass_rate` (default: 100%) + - P1 test pass rate ≥ `min_p1_pass_rate` (default: 95%) + - Overall test pass rate ≥ `min_overall_pass_rate` (default: 90%) + - Critical NFRs passed (if `nfr_file` provided) + - Unresolved security issues ≤ `max_security_issues` (default: 0) + - No test quality red flags (hard waits, no assertions) + +2. **CONCERNS** if ANY of the following are true: + - P1 coverage 80-89% (below threshold but not critical) + - P1 test pass rate 90-94% (below threshold but not critical) + - Overall pass rate 85-89% + - P2 coverage <50% (informational) + - Some non-critical NFRs failing + - Minor test quality concerns (large test files, inferred mappings) + - **Note**: CONCERNS does NOT block deployment but requires acknowledgment + +3. **FAIL** if ANY of the following are true: + - P0 coverage <100% (missing critical tests) + - P0 test pass rate <100% (failing critical tests) + - P1 coverage <80% (significant gap) + - P1 test pass rate <90% (significant failures) + - Overall coverage <80% + - Overall pass rate <85% + - Critical NFRs failing (`max_critical_nfrs_fail` exceeded) + - Unresolved security issues (`max_security_issues` exceeded) + - Major test quality issues (tests with no assertions, pervasive hard waits) + +4. **WAIVED** (only if `allow_waivers: true`): + - Decision would be FAIL based on rules above + - Business stakeholder has approved waiver + - Waiver documented with: + - Justification (time constraint, known limitation, acceptable risk) + - Approver name and date + - Mitigation plan (follow-up stories, manual testing) + - Waiver evidence linked (email, Slack thread, ticket) + +**Risk tolerance adjustments:** + +- If `allow_p2_failures: true` → P2 test failures do NOT affect gate decision +- If `allow_p3_failures: true` → P3 test failures do NOT affect gate decision +- If `escalate_p1_failures: true` → P1 failures require explicit manager/lead approval + +**If `decision_mode: "manual"`:** + +- Present evidence summary to team +- Recommend decision based on rules above +- Team makes final call in meeting/chat +- Document decision with approver names + +**Output:** Gate decision (PASS/CONCERNS/FAIL/WAIVED) with rule-based rationale + +--- + +### Step 9: Document Decision and Evidence + +**Actions:** + +1. **Create gate decision document**: + - Save to `gate_output_file` (default: `{output_folder}/gate-decision-{gate_type}-{story_id}.md`) + - Use structure below + +2.
**Document structure**: + +```markdown +# Quality Gate Decision: {gate_type} {story_id/epic_num/release_version} + +**Decision**: [PASS / CONCERNS / FAIL / WAIVED] +**Date**: {date} +**Decider**: {decision_mode} (deterministic | manual) +**Evidence Date**: {test_results_date} + +--- + +## Summary + +[1-2 sentence summary of decision and key factors] + +--- + +## Decision Criteria + +| Criterion | Threshold | Actual | Status | +| ----------------- | --------- | -------- | ------ | +| P0 Coverage | ≥100% | 100% | ✅ PASS | +| P1 Coverage | ≥90% | 88% | ⚠️ CONCERNS | +| Overall Coverage | ≥80% | 92% | ✅ PASS | +| P0 Pass Rate | 100% | 100% | ✅ PASS | +| P1 Pass Rate | ≥95% | 98% | ✅ PASS | +| Overall Pass Rate | ≥90% | 96% | ✅ PASS | +| Critical NFRs | All Pass | All Pass | ✅ PASS | +| Security Issues | 0 | 0 | ✅ PASS | + +**Overall Status**: 7/8 criteria met → Decision: **CONCERNS** + +--- + +## Evidence Summary + +### Test Coverage (from Phase 1 Traceability) + +- **P0 Coverage**: 100% (5/5 criteria fully covered) +- **P1 Coverage**: 88% (7/8 criteria fully covered) +- **Overall Coverage**: 92% (12/13 criteria covered) +- **Gap**: AC-5 (P1) missing E2E test + +### Test Execution Results + +- **P0 Pass Rate**: 100% (12/12 tests passed) +- **P1 Pass Rate**: 98% (45/46 tests passed) +- **Overall Pass Rate**: 96% (67/70 tests passed) +- **Failures**: 3 P2 tests (non-blocking) + +### Non-Functional Requirements + +- Performance: ✅ PASS (response time <500ms) +- Security: ✅ PASS (no vulnerabilities) +- Scalability: ✅ PASS (handles 10K users) + +### Test Quality + +- All tests have explicit assertions ✅ +- No hard waits detected ✅ +- Test files <300 lines ✅ +- Test IDs follow convention ✅ + +--- + +## Decision Rationale + +**Why CONCERNS (not PASS)**: + +- P1 coverage at 88% is below 90% threshold +- AC-5 (P1 priority) missing E2E test for error handling scenario +- This is a known gap from test-design phase + +**Why CONCERNS (not FAIL)**: + +- P0 coverage is 100% (critical paths validated) +- Overall coverage is 92% (above 80% threshold) +- Test pass rate is excellent (96% overall) +- Gap is isolated to one P1 criterion (not systemic) + +**Recommendation**: + +- Acknowledge gap and proceed with deployment +- Add missing AC-5 E2E test in next sprint +- Create follow-up story: "Add E2E test for AC-5 error handling" + +--- + +## Next Steps + +- [ ] Create follow-up story for AC-5 E2E test +- [ ] Deploy to staging environment +- [ ] Monitor production for edge cases related to AC-5 +- [ ] Update traceability matrix after follow-up test added + +--- + +## References + +- Traceability Matrix: `_bmad/output/traceability-matrix.md` +- Test Design: `_bmad/output/test-design-epic-2.md` +- Test Results: `ci-artifacts/test-report-2025-01-15.xml` +- NFR Assessment: `_bmad/output/nfr-assessment-release-1.2.md` +``` + +3. **Include evidence links** (if `require_evidence: true`): + - Link to traceability matrix + - Link to test execution reports (CI artifacts) + - Link to NFR assessment + - Link to test-design document + - Link to relevant PRs, commits, deployments + +4.
**Waiver documentation** (if decision is WAIVED): + - Approver name and role (e.g., "Jane Doe, Engineering Manager") + - Approval date and method (e.g., "2025-01-15, Slack thread") + - Justification (e.g., "Time-boxed MVP, missing tests will be added in v1.1") + - Mitigation plan (e.g., "Manual testing by QA, follow-up stories created") + - Evidence link (e.g., "Slack: #engineering 2025-01-15 3:42pm") + +**Output:** Complete gate decision document with evidence and rationale + +--- + +### Step 10: Update Status Tracking and Notify + +**Actions:** + +1. **Generate stakeholder notification** (if `notify_stakeholders: true`): + - Create concise summary message for team communication + - Include: Decision, key metrics, action items + - Format for Slack/email/chat: + + ``` + 🚦 Quality Gate Decision: Story 1.3 - User Login + + Decision: ⚠️ CONCERNS + - P0 Coverage: ✅ 100% + - P1 Coverage: ⚠️ 88% (below 90%) + - Test Pass Rate: ✅ 96% + + Action Required: + - Create follow-up story for AC-5 E2E test + - Deploy to staging for validation + + Full Report: _bmad/output/gate-decision-story-1.3.md + ``` + +2. **Request sign-off** (if `require_sign_off: true`): + - Prompt for named approver (tech lead, QA lead, PM) + - Document approver name and timestamp in gate decision + - Block until sign-off received (interactive prompt) + +**Output:** Status tracking updated, stakeholders notified, sign-off obtained (if required) + +**Workflow Complete**: Both Phase 1 (traceability) and Phase 2 (gate decision) deliverables generated. + +--- + +## Decision Matrix (Quick Reference) + +| Scenario | P0 Cov | P1 Cov | Overall Cov | P0 Pass | P1 Pass | Overall Pass | NFRs | Decision | +| --------------- | ----------------- | ------ | ----------- | ------- | ------- | ------------ | ---- | ------------ | +| All green | 100% | ≥90% | ≥80% | 100% | ≥95% | ≥90% | Pass | **PASS** | +| Minor gap | 100% | 80-89% | ≥80% | 100% | 90-94% | 85-89% | Pass | **CONCERNS** | +| Missing P0 | <100% | - | - | - | - | - | - | **FAIL** | +| P0 test fail | 100% | - | - | <100% | - | - | - | **FAIL** | +| P1 gap | 100% | <80% | - | 100% | - | - | - | **FAIL** | +| NFR fail | 100% | ≥90% | ≥80% | 100% | ≥95% | ≥90% | Fail | **FAIL** | +| Security issue | - | - | - | - | - | - | Yes | **FAIL** | +| Business waiver | [FAIL conditions] | - | - | - | - | - | - | **WAIVED** | + +--- + +## Waiver Management + +**When to use waivers:** + +- Time-boxed MVP releases (known gaps, follow-up planned) +- Low-risk P1 gaps with mitigation (manual testing, monitoring) +- Technical debt acknowledged by product/engineering leadership +- External dependencies blocking test automation + +**Waiver approval process:** + +1. Document gap and risk in gate decision +2. Propose mitigation plan (manual testing, follow-up stories, monitoring) +3. Request approval from stakeholder (EM, PM, QA lead) +4. Link approval evidence (email, chat thread, meeting notes) +5. Add waiver to gate decision document +6. Create follow-up stories to close gaps + +**Waiver does NOT apply to:** + +- P0 gaps (always blocking) +- Critical security issues (always blocking) +- Critical NFR failures (performance, data integrity) + +--- + +## Example Gate Decisions + +### Example 1: PASS (All Criteria Met) + +``` +Decision: ✅ PASS + +Summary: All quality criteria met. Story 1.3 is ready for production deployment. 
+ +Evidence: +- P0 Coverage: 100% (5/5 criteria) +- P1 Coverage: 95% (19/20 criteria) +- Overall Coverage: 92% (24/26 criteria) +- P0 Pass Rate: 100% (12/12 tests) +- P1 Pass Rate: 98% (45/46 tests) +- Overall Pass Rate: 96% (67/70 tests) +- NFRs: All pass (performance, security, scalability) + +Action: Deploy to production ✅ +``` + +### Example 2: CONCERNS (Minor Gap, Non-Blocking) + +``` +Decision: ⚠️ CONCERNS + +Summary: P1 coverage slightly below threshold (88% vs 90%). Recommend deploying with follow-up story. + +Evidence: +- P0 Coverage: 100% ✅ +- P1 Coverage: 88% ⚠️ (below 90%) +- Overall Coverage: 92% ✅ +- Test Pass Rate: 96% ✅ +- Gap: AC-5 (P1) missing E2E test + +Action: +- Deploy to staging for validation +- Create follow-up story for AC-5 E2E test +- Monitor production for edge cases related to AC-5 +``` + +### Example 3: FAIL (P0 Gap, Blocking) + +``` +Decision: ❌ FAIL + +Summary: P0 coverage incomplete. Missing critical validation test. BLOCKING deployment. + +Evidence: +- P0 Coverage: 80% ❌ (4/5 criteria, AC-2 missing) +- AC-2: "User cannot login with invalid credentials" (P0 priority) +- No tests validate login security for invalid credentials +- This is a critical security gap + +Action: +- Add P0 test for AC-2: 1.3-E2E-004 (invalid credentials) +- Re-run traceability after test added +- Re-evaluate gate decision after P0 coverage = 100% + +Deployment BLOCKED until P0 gap resolved ❌ +``` + +### Example 4: WAIVED (Business Decision) + +``` +Decision: ⚠️ WAIVED + +Summary: P1 coverage below threshold (75% vs 90%), but waived for MVP launch. + +Evidence: +- P0 Coverage: 100% ✅ +- P1 Coverage: 75% ❌ (below 90%) +- Gap: 5 P1 criteria missing E2E tests (error handling, edge cases) + +Waiver: +- Approver: Jane Doe, Engineering Manager +- Date: 2025-01-15 +- Justification: Time-boxed MVP for investor demo. Core functionality (P0) fully validated. P1 gaps are low-risk edge cases. +- Mitigation: Manual QA testing for P1 scenarios, follow-up stories created for automated tests in v1.1 +- Evidence: Slack #engineering 2025-01-15 3:42pm + +Action: +- Deploy to production with manual QA validation ✅ +- Add 5 E2E tests for P1 gaps in v1.1 sprint +- Monitor production logs for edge case occurrences +``` + +--- + +## Non-Prescriptive Approach + +**Minimal Examples:** This workflow provides principles and patterns, not rigid templates. Teams should adapt the traceability and gate decision formats to their needs. 
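+
+To make the deterministic rules from Step 8 concrete, here is one minimal way a team might encode the default thresholds. This TypeScript sketch is an illustration, not the shipped implementation: the names are assumptions, WAIVED is omitted because it is a documented business override of FAIL rather than a computed outcome, and test-quality red flags are left out for brevity.
+
+```typescript
+// Illustrative only: encodes the default thresholds from Step 8 / workflow.yaml.
+interface GateEvidence {
+  p0Coverage: number; // percent, from Phase 1 traceability
+  p1Coverage: number;
+  overallCoverage: number;
+  p0PassRate: number; // percent, from test execution results
+  p1PassRate: number;
+  overallPassRate: number;
+  securityIssues: number;
+  criticalNfrFailures: number;
+}
+
+function decideGate(e: GateEvidence): 'PASS' | 'CONCERNS' | 'FAIL' {
+  // Any blocking condition forces FAIL.
+  if (
+    e.p0Coverage < 100 || e.p0PassRate < 100 ||
+    e.p1Coverage < 80 || e.p1PassRate < 90 ||
+    e.overallCoverage < 80 || e.overallPassRate < 85 ||
+    e.securityIssues > 0 || e.criticalNfrFailures > 0
+  ) {
+    return 'FAIL';
+  }
+  // PASS only when every remaining threshold is met outright.
+  if (e.p1Coverage >= 90 && e.p1PassRate >= 95 && e.overallPassRate >= 90) {
+    return 'PASS';
+  }
+  // Above the FAIL floor but below a PASS threshold.
+  return 'CONCERNS';
+}
+```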
+ +**Key Patterns to Follow:** + +- Map criteria to tests explicitly (don't rely on inference alone) +- Prioritize by risk (P0 gaps are critical, P3 gaps are acceptable) +- Check coverage at appropriate levels (E2E for journeys, Unit for logic) +- Verify test quality (explicit assertions, no flakiness) +- Apply deterministic gate rules for consistency +- Document gate decisions with clear evidence +- Use waivers judiciously (business approved, mitigation planned) + +**Extend as Needed:** + +- Add custom coverage classifications +- Integrate with code coverage tools (Istanbul, NYC) +- Link to external traceability systems (JIRA, Azure DevOps) +- Add compliance or regulatory requirements +- Customize gate decision thresholds per project +- Add manual approval workflows for gate decisions + +--- + +## Coverage Classification Details + +### FULL Coverage + +- All scenarios validated at appropriate test level(s) +- Edge cases considered +- Both happy path and error paths tested +- Assertions are explicit and complete + +### PARTIAL Coverage + +- Some scenarios validated but missing edge cases +- Only happy path tested (missing error paths) +- Assertions present but incomplete +- Coverage exists but needs enhancement + +### NONE Coverage + +- No tests found for this criterion +- Complete gap requiring new tests +- Critical if P0/P1, acceptable if P3 + +### UNIT-ONLY Coverage + +- Only unit tests exist (business logic validated) +- Missing integration or E2E validation +- Risk: Implementation may not work end-to-end +- Recommendation: Add integration or E2E tests for critical paths + +### INTEGRATION-ONLY Coverage + +- Only API or Component tests exist +- Missing unit test confidence for business logic +- Risk: Logic errors may not be caught quickly +- Recommendation: Add unit tests for complex algorithms or state machines + +--- + +## Duplicate Coverage Detection + +Use selective testing principles from `selective-testing.md`: + +**Acceptable Overlap:** + +- Unit tests for business logic + E2E tests for user journey (different aspects) +- API tests for contract + E2E tests for full workflow (defense in depth for critical paths) + +**Unacceptable Duplication:** + +- Same validation at multiple levels (e.g., E2E testing math logic better suited for unit tests) +- Multiple E2E tests covering identical user path +- Component tests duplicating unit test logic + +**Recommendation Pattern:** + +- Test logic at unit level +- Test integration at API/Component level +- Test user experience at E2E level +- Avoid testing framework behavior at any level + +--- + +## Integration with BMad Artifacts + +### With test-design.md + +- Use risk assessment to prioritize gap remediation +- Reference test priorities (P0/P1/P2/P3) for severity classification and gate decision +- Align traceability with originally planned test coverage + +### With tech-spec.md + +- Understand technical implementation details +- Map criteria to specific code modules +- Verify tests cover technical edge cases + +### With PRD.md + +- Understand full product context +- Verify acceptance criteria align with product goals +- Check for unstated requirements that need coverage + +### With nfr-assessment.md + +- Load non-functional validation results for gate decision +- Check critical NFR status (performance, security, scalability) +- Include NFR pass/fail in gate decision criteria + +--- + +## Quality Gates (Phase 1 Recommendations) + +### P0 Coverage (Critical Paths) + +- **Requirement:** 100% FULL coverage +- **Severity:** BLOCKER if not met 
+- **Action:** Do not release until P0 coverage is complete + +### P1 Coverage (High Priority) + +- **Requirement:** 90% FULL coverage +- **Severity:** HIGH if not met +- **Action:** Block PR merge until addressed + +### P2 Coverage (Medium Priority) + +- **Requirement:** No strict requirement (recommended 80%) +- **Severity:** MEDIUM if gaps exist +- **Action:** Address in nightly test improvements + +### P3 Coverage (Low Priority) + +- **Requirement:** No requirement +- **Severity:** LOW if gaps exist +- **Action:** Optional - add if time permits + +--- + +## Example Traceability Matrix + +````markdown +# Traceability Matrix - Story 1.3 + +**Story:** User Authentication +**Date:** 2025-10-14 +**Status:** 79% Coverage (1 HIGH gap) + +## Coverage Summary + +| Priority | Total Criteria | FULL Coverage | Coverage % | Status | +| --------- | -------------- | ------------- | ---------- | ------ | +| P0 | 3 | 3 | 100% | ✅ PASS | +| P1 | 5 | 4 | 80% | ⚠️ WARN | +| P2 | 4 | 3 | 75% | ✅ PASS | +| P3 | 2 | 1 | 50% | ✅ PASS | +| **Total** | **14** | **11** | **79%** | ⚠️ WARN | + +## Detailed Mapping + +### AC-1: User can login with email and password (P0) + +- **Coverage:** FULL ✅ +- **Tests:** + - `1.3-E2E-001` - tests/e2e/auth.spec.ts:12 + - Given: User has valid credentials + - When: User submits login form + - Then: User is redirected to dashboard + - `1.3-UNIT-001` - tests/unit/auth-service.spec.ts:8 + - Given: Valid email and password hash + - When: validateCredentials is called + - Then: Returns user object + +### AC-2: User sees error for invalid credentials (P0) + +- **Coverage:** FULL ✅ +- **Tests:** + - `1.3-E2E-002` - tests/e2e/auth.spec.ts:28 + - Given: User has invalid password + - When: User submits login form + - Then: Error message is displayed + - `1.3-UNIT-002` - tests/unit/auth-service.spec.ts:18 + - Given: Invalid password hash + - When: validateCredentials is called + - Then: Throws AuthenticationError + +### AC-3: User can reset password via email (P1) + +- **Coverage:** PARTIAL ⚠️ +- **Tests:** + - `1.3-E2E-003` - tests/e2e/auth.spec.ts:44 + - Given: User requests password reset + - When: User clicks reset link + - Then: User can set new password +- **Gaps:** + - Missing: Email delivery validation + - Missing: Expired token handling + - Missing: Unit test for token generation +- **Recommendation:** Add `1.3-API-001` for email service integration and `1.3-UNIT-003` for token logic + +## Gap Analysis + +### Critical Gaps (BLOCKER) + +- None ✅ + +### High Priority Gaps (PR BLOCKER) + +1. **AC-3: Password reset email edge cases** + - Missing tests for expired tokens, invalid tokens, email failures + - Recommend: `1.3-API-001` (email service integration) and `1.3-E2E-004` (error paths) + - Impact: Users may not be able to recover accounts in error scenarios + +### Medium Priority Gaps (Nightly) + +1.
**AC-7: Session timeout handling** - UNIT-ONLY coverage (missing E2E validation) + +## Quality Assessment + +### Tests with Issues + +- `1.3-E2E-001` ⚠️ - 145 seconds (exceeds 90s target) - Optimize fixture setup +- `1.3-UNIT-005` ⚠️ - 320 lines (exceeds 300 line limit) - Split into multiple test files + +### Tests Passing Quality Gates + +- 11/13 tests (85%) meet all quality criteria ✅ + +## Gate YAML Snippet + +```yaml +traceability: + story_id: '1.3' + coverage: + overall: 79% + p0: 100% + p1: 80% + p2: 75% + p3: 50% + gaps: + critical: 0 + high: 1 + medium: 1 + low: 1 + status: 'WARN' # P1 coverage below 90% threshold + recommendations: + - 'Add 1.3-API-001 for email service integration' + - 'Add 1.3-E2E-004 for password reset error paths' + - 'Optimize 1.3-E2E-001 performance (145s → <90s)' +``` + +## Recommendations + +1. **Address High Priority Gap:** Add password reset edge case tests before PR merge +2. **Optimize Slow Test:** Refactor `1.3-E2E-001` to use faster fixture setup +3. **Split Large Test:** Break `1.3-UNIT-005` into focused test files +4. **Enhance P2 Coverage:** Add E2E validation for session timeout (currently UNIT-ONLY) + +```` + +--- + +## Validation Checklist + +Before completing this workflow, verify: + +**Phase 1 (Traceability):** +- ✅ All acceptance criteria are mapped to tests (or gaps are documented) +- ✅ Coverage status is classified (FULL, PARTIAL, NONE, UNIT-ONLY, INTEGRATION-ONLY) +- ✅ Gaps are prioritized by risk level (P0/P1/P2/P3) +- ✅ P0 coverage is 100% or blockers are documented +- ✅ Duplicate coverage is identified and flagged +- ✅ Test quality is assessed (assertions, structure, performance) +- ✅ Traceability matrix is generated and saved + +**Phase 2 (Gate Decision - if enabled):** +- ✅ Test execution results loaded and pass rates calculated +- ✅ NFR assessment results loaded (if applicable) +- ✅ Decision rules applied consistently (PASS/CONCERNS/FAIL/WAIVED) +- ✅ Gate decision document created with evidence +- ✅ Waiver documented if decision is WAIVED (approver, justification, mitigation) +- ✅ Stakeholders notified (if enabled) + +--- + +## Notes + +**Phase 1 (Traceability):** +- **Explicit Mapping:** Require tests to reference criteria explicitly (test IDs, describe blocks) for maintainability +- **Risk-Based Prioritization:** Use test-priorities framework (P0/P1/P2/P3) to determine gap severity +- **Quality Over Quantity:** Better to have fewer high-quality tests with FULL coverage than many low-quality tests with PARTIAL coverage +- **Selective Testing:** Avoid duplicate coverage - test each behavior at the appropriate level only + +**Phase 2 (Gate Decision):** +- **Deterministic Rules:** Use consistent thresholds (P0=100%, P1≥90%, overall≥80%) for objectivity +- **Evidence-Based:** Every decision must cite specific metrics (coverage %, pass rates, NFRs) +- **Waiver Discipline:** Waivers require approver name, justification, mitigation plan, and evidence link +- **Non-Blocking CONCERNS:** Use CONCERNS for minor gaps that don't justify blocking deployment (e.g., P1 at 88% vs 90%) +- **Automate in CI/CD:** Generate YAML snippets that can be consumed by CI/CD pipelines for automated quality gates + +--- + +## Troubleshooting + +### "No tests found for this story" +- Run `*atdd` workflow first to generate failing acceptance tests +- Check test file naming conventions (may not match story ID pattern) +- Verify test directory path is correct + +### "Cannot determine coverage status" +- Tests may lack explicit mapping to criteria (no test IDs,
unclear describe blocks) +- Review test structure and add Given-When-Then narrative +- Add test IDs in format: `{STORY_ID}-{LEVEL}-{SEQ}` (e.g., 1.3-E2E-001) + +### "P0 coverage below 100%" +- This is a **BLOCKER** - do not release +- Identify missing P0 tests in gap analysis +- Run `*atdd` workflow to generate missing tests +- Verify with stakeholders that P0 classification is correct + +### "Duplicate coverage detected" +- Review selective testing principles in `selective-testing.md` +- Determine if overlap is acceptable (defense in depth) or wasteful (same validation at multiple levels) +- Consolidate tests at appropriate level (logic → unit, integration → API, journey → E2E) + +### "Test execution results missing" (Phase 2) +- Phase 2 gate decision requires `test_results` (CI/CD test reports) +- If missing, Phase 2 will be skipped with warning +- Provide JUnit XML, TAP, or JSON test report path via `test_results` variable + +### "Gate decision is FAIL but deployment needed urgently" +- Request business waiver (if `allow_waivers: true`) +- Document approver, justification, mitigation plan +- Create follow-up stories to address gaps +- Use WAIVED decision only for non-P0 gaps + +--- + +## Related Workflows + +**Prerequisites:** +- `testarch-test-design` - Define test priorities (P0/P1/P2/P3) before tracing (required for Phase 2) +- `testarch-atdd` or `testarch-automate` - Generate tests before tracing coverage + +**Complements:** +- `testarch-nfr-assess` - Non-functional requirements validation (recommended for release gates) +- `testarch-test-review` - Review test quality issues flagged in traceability + +**Next Steps:** +- If gate decision is PASS/CONCERNS → Deploy and monitor +- If gate decision is FAIL → Add missing tests, re-run trace workflow +- If gate decision is WAIVED → Deploy with mitigation, create follow-up stories + +--- + + diff --git a/src/bmm/workflows/testarch/trace/trace-template.md b/src/bmm/workflows/testarch/trace/trace-template.md new file mode 100644 index 00000000..ddc74019 --- /dev/null +++ b/src/bmm/workflows/testarch/trace/trace-template.md @@ -0,0 +1,675 @@ +# Traceability Matrix & Gate Decision - Story {STORY_ID} + +**Story:** {STORY_TITLE} +**Date:** {DATE} +**Evaluator:** {user_name or TEA Agent} + +--- + +Note: This workflow does not generate tests. If gaps exist, run `*atdd` or `*automate` to create coverage.
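+
+The percentage placeholders in the tables below ({P0_PCT}, {P1_PCT}, {PCT}, and so on) are plain ratios: criteria with FULL coverage divided by total criteria at that priority. A minimal sketch, assuming TypeScript and an illustrative helper name:
+
+```typescript
+// Illustrative only: one way to compute a {Pn_PCT} placeholder.
+const coveragePct = (fullCount: number, totalCount: number): number =>
+  totalCount === 0 ? 100 : Math.round((fullCount / totalCount) * 100);
+
+coveragePct(11, 14); // 79, matching the 79% total in the example matrix
+```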
+ +## PHASE 1: REQUIREMENTS TRACEABILITY + +### Coverage Summary + +| Priority | Total Criteria | FULL Coverage | Coverage % | Status | +| --------- | -------------- | ------------- | ---------- | ------------ | +| P0 | {P0_TOTAL} | {P0_FULL} | {P0_PCT}% | {P0_STATUS} | +| P1 | {P1_TOTAL} | {P1_FULL} | {P1_PCT}% | {P1_STATUS} | +| P2 | {P2_TOTAL} | {P2_FULL} | {P2_PCT}% | {P2_STATUS} | +| P3 | {P3_TOTAL} | {P3_FULL} | {P3_PCT}% | {P3_STATUS} | +| **Total** | **{TOTAL}** | **{FULL}** | **{PCT}%** | **{STATUS}** | + +**Legend:** + +- ✅ PASS - Coverage meets quality gate threshold +- ⚠️ WARN - Coverage below threshold but not critical +- ❌ FAIL - Coverage below minimum threshold (blocker) + +--- + +### Detailed Mapping + +#### {CRITERION_ID}: {CRITERION_DESCRIPTION} ({PRIORITY}) + +- **Coverage:** {COVERAGE_STATUS} {STATUS_ICON} +- **Tests:** + - `{TEST_ID}` - {TEST_FILE}:{LINE} + - **Given:** {GIVEN} + - **When:** {WHEN} + - **Then:** {THEN} + - `{TEST_ID_2}` - {TEST_FILE_2}:{LINE} + - **Given:** {GIVEN_2} + - **When:** {WHEN_2} + - **Then:** {THEN_2} + +- **Gaps:** (if PARTIAL or UNIT-ONLY or INTEGRATION-ONLY) + - Missing: {MISSING_SCENARIO_1} + - Missing: {MISSING_SCENARIO_2} + +- **Recommendation:** {RECOMMENDATION_TEXT} + +--- + +#### Example: AC-1: User can login with email and password (P0) + +- **Coverage:** FULL ✅ +- **Tests:** + - `1.3-E2E-001` - tests/e2e/auth.spec.ts:12 + - **Given:** User has valid credentials + - **When:** User submits login form + - **Then:** User is redirected to dashboard + - `1.3-UNIT-001` - tests/unit/auth-service.spec.ts:8 + - **Given:** Valid email and password hash + - **When:** validateCredentials is called + - **Then:** Returns user object + +--- + +#### Example: AC-3: User can reset password via email (P1) + +- **Coverage:** PARTIAL ⚠️ +- **Tests:** + - `1.3-E2E-003` - tests/e2e/auth.spec.ts:44 + - **Given:** User requests password reset + - **When:** User clicks reset link in email + - **Then:** User can set new password + +- **Gaps:** + - Missing: Email delivery validation + - Missing: Expired token handling (error path) + - Missing: Invalid token handling (security test) + - Missing: Unit test for token generation logic + +- **Recommendation:** Add `1.3-API-001` for email service integration testing and `1.3-UNIT-003` for token generation logic. Add `1.3-E2E-004` for error path validation (expired/invalid tokens). + +--- + +### Gap Analysis + +#### Critical Gaps (BLOCKER) ❌ + +{CRITICAL_GAP_COUNT} gaps found. **Do not release until resolved.** + +1. **{CRITERION_ID}: {CRITERION_DESCRIPTION}** (P0) + - Current Coverage: {COVERAGE_STATUS} + - Missing Tests: {MISSING_TEST_DESCRIPTION} + - Recommend: {RECOMMENDED_TEST_ID} ({RECOMMENDED_TEST_LEVEL}) + - Impact: {IMPACT_DESCRIPTION} + +--- + +#### High Priority Gaps (PR BLOCKER) ⚠️ + +{HIGH_GAP_COUNT} gaps found. **Address before PR merge.** + +1. **{CRITERION_ID}: {CRITERION_DESCRIPTION}** (P1) + - Current Coverage: {COVERAGE_STATUS} + - Missing Tests: {MISSING_TEST_DESCRIPTION} + - Recommend: {RECOMMENDED_TEST_ID} ({RECOMMENDED_TEST_LEVEL}) + - Impact: {IMPACT_DESCRIPTION} + +--- + +#### Medium Priority Gaps (Nightly) ⚠️ + +{MEDIUM_GAP_COUNT} gaps found. **Address in nightly test improvements.** + +1. **{CRITERION_ID}: {CRITERION_DESCRIPTION}** (P2) + - Current Coverage: {COVERAGE_STATUS} + - Recommend: {RECOMMENDED_TEST_ID} ({RECOMMENDED_TEST_LEVEL}) + +--- + +#### Low Priority Gaps (Optional) ℹ️ + +{LOW_GAP_COUNT} gaps found. **Optional - add if time permits.** + +1. 
**{CRITERION_ID}: {CRITERION_DESCRIPTION}** (P3) + - Current Coverage: {COVERAGE_STATUS} + +--- + +### Quality Assessment + +#### Tests with Issues + +**BLOCKER Issues** ❌ + +- `{TEST_ID}` - {ISSUE_DESCRIPTION} - {REMEDIATION} + +**WARNING Issues** ⚠️ + +- `{TEST_ID}` - {ISSUE_DESCRIPTION} - {REMEDIATION} + +**INFO Issues** ℹ️ + +- `{TEST_ID}` - {ISSUE_DESCRIPTION} - {REMEDIATION} + +--- + +#### Example Quality Issues + +**WARNING Issues** ⚠️ + +- `1.3-E2E-001` - 145 seconds (exceeds 90s target) - Optimize fixture setup to reduce test duration +- `1.3-UNIT-005` - 320 lines (exceeds 300 line limit) - Split into multiple focused test files + +**INFO Issues** ℹ️ + +- `1.3-E2E-002` - Missing Given-When-Then structure - Refactor describe block to use BDD format + +--- + +#### Tests Passing Quality Gates + +**{PASSING_TEST_COUNT}/{TOTAL_TEST_COUNT} tests ({PASSING_PCT}%) meet all quality criteria** ✅ + +--- + +### Duplicate Coverage Analysis + +#### Acceptable Overlap (Defense in Depth) + +- {CRITERION_ID}: Tested at unit (business logic) and E2E (user journey) ✅ + +#### Unacceptable Duplication ⚠️ + +- {CRITERION_ID}: Same validation at E2E and Component level + - Recommendation: Remove {TEST_ID} or consolidate with {OTHER_TEST_ID} + +--- + +### Coverage by Test Level + +| Test Level | Tests | Criteria Covered | Coverage % | +| ---------- | ----------------- | -------------------- | ---------------- | +| E2E | {E2E_COUNT} | {E2E_CRITERIA} | {E2E_PCT}% | +| API | {API_COUNT} | {API_CRITERIA} | {API_PCT}% | +| Component | {COMP_COUNT} | {COMP_CRITERIA} | {COMP_PCT}% | +| Unit | {UNIT_COUNT} | {UNIT_CRITERIA} | {UNIT_PCT}% | +| **Total** | **{TOTAL_TESTS}** | **{TOTAL_CRITERIA}** | **{TOTAL_PCT}%** | + +--- + +### Traceability Recommendations + +#### Immediate Actions (Before PR Merge) + +1. **{ACTION_1}** - {DESCRIPTION} +2. **{ACTION_2}** - {DESCRIPTION} + +#### Short-term Actions (This Sprint) + +1. **{ACTION_1}** - {DESCRIPTION} +2. **{ACTION_2}** - {DESCRIPTION} + +#### Long-term Actions (Backlog) + +1. **{ACTION_1}** - {DESCRIPTION} + +--- + +#### Example Recommendations + +**Immediate Actions (Before PR Merge)** + +1. **Add P1 Password Reset Tests** - Implement `1.3-API-001` for email service integration and `1.3-E2E-004` for error path validation. P1 coverage currently at 80%, target is 90%. +2. **Optimize Slow E2E Test** - Refactor `1.3-E2E-001` to use faster fixture setup. Currently 145s, target is <90s. + +**Short-term Actions (This Sprint)** + +1. **Enhance P2 Coverage** - Add E2E validation for session timeout (`1.3-E2E-005`). Currently UNIT-ONLY coverage. +2. **Split Large Test File** - Break `1.3-UNIT-005` (320 lines) into multiple focused test files (<300 lines each). + +**Long-term Actions (Backlog)** + +1. **Enrich P3 Coverage** - Add tests for edge cases in P3 criteria if time permits. 
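+
+The Phase 2 sections below lean on per-priority pass rates ({p0_pass_rate}, {p1_pass_rate}, {overall_pass_rate}). Per the workflow instructions these are simply passed divided by total, per priority; here is a minimal sketch with illustrative names, including one assumed convention for empty pools:
+
+```typescript
+// Illustrative only: deriving per-priority pass rates from execution results.
+interface TestResult {
+  id: string; // e.g., "1.3-E2E-001"
+  priority: 'P0' | 'P1' | 'P2' | 'P3';
+  passed: boolean;
+}
+
+function passRate(results: TestResult[], priority?: TestResult['priority']): number {
+  const pool = priority ? results.filter((r) => r.priority === priority) : results;
+  if (pool.length === 0) return 100; // assumption: no tests at a priority counts as vacuously passing
+  return Math.round((pool.filter((r) => r.passed).length / pool.length) * 100);
+}
+```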
+ +--- + +## PHASE 2: QUALITY GATE DECISION + +**Gate Type:** {story | epic | release | hotfix} +**Decision Mode:** {deterministic | manual} + +--- + +### Evidence Summary + +#### Test Execution Results + +- **Total Tests**: {total_count} +- **Passed**: {passed_count} ({pass_percentage}%) +- **Failed**: {failed_count} ({fail_percentage}%) +- **Skipped**: {skipped_count} ({skip_percentage}%) +- **Duration**: {total_duration} + +**Priority Breakdown:** + +- **P0 Tests**: {p0_passed}/{p0_total} passed ({p0_pass_rate}%) {✅ | ❌} +- **P1 Tests**: {p1_passed}/{p1_total} passed ({p1_pass_rate}%) {✅ | ⚠️ | ❌} +- **P2 Tests**: {p2_passed}/{p2_total} passed ({p2_pass_rate}%) {informational} +- **P3 Tests**: {p3_passed}/{p3_total} passed ({p3_pass_rate}%) {informational} + +**Overall Pass Rate**: {overall_pass_rate}% {✅ | ⚠️ | ❌} + +**Test Results Source**: {CI_run_id | test_report_url | local_run} + +--- + +#### Coverage Summary (from Phase 1) + +**Requirements Coverage:** + +- **P0 Acceptance Criteria**: {p0_covered}/{p0_total} covered ({p0_coverage}%) {✅ | ❌} +- **P1 Acceptance Criteria**: {p1_covered}/{p1_total} covered ({p1_coverage}%) {✅ | ⚠️ | ❌} +- **P2 Acceptance Criteria**: {p2_covered}/{p2_total} covered ({p2_coverage}%) {informational} +- **Overall Coverage**: {overall_coverage}% + +**Code Coverage** (if available): + +- **Line Coverage**: {line_coverage}% {✅ | ⚠️ | ❌} +- **Branch Coverage**: {branch_coverage}% {✅ | ⚠️ | ❌} +- **Function Coverage**: {function_coverage}% {✅ | ⚠️ | ❌} + +**Coverage Source**: {coverage_report_url | coverage_file_path} + +--- + +#### Non-Functional Requirements (NFRs) + +**Security**: {PASS | CONCERNS | FAIL | NOT_ASSESSED} {✅ | ⚠️ | ❌} + +- Security Issues: {security_issue_count} +- {details_if_issues} + +**Performance**: {PASS | CONCERNS | FAIL | NOT_ASSESSED} {✅ | ⚠️ | ❌} + +- {performance_metrics_summary} + +**Reliability**: {PASS | CONCERNS | FAIL | NOT_ASSESSED} {✅ | ⚠️ | ❌} + +- {reliability_metrics_summary} + +**Maintainability**: {PASS | CONCERNS | FAIL | NOT_ASSESSED} {✅ | ⚠️ | ❌} + +- {maintainability_metrics_summary} + +**NFR Source**: {nfr_assessment_file_path | not_assessed} + +--- + +#### Flakiness Validation + +**Burn-in Results** (if available): + +- **Burn-in Iterations**: {iteration_count} (e.g., 10) +- **Flaky Tests Detected**: {flaky_test_count} {✅ if 0 | ❌ if >0} +- **Stability Score**: {stability_percentage}% + +**Flaky Tests List** (if any): + +- {flaky_test_1_name} - {failure_rate} +- {flaky_test_2_name} - {failure_rate} + +**Burn-in Source**: {CI_burn_in_run_id | not_available} + +--- + +### Decision Criteria Evaluation + +#### P0 Criteria (Must ALL Pass) + +| Criterion | Threshold | Actual | Status | +| --------------------- | --------- | ------------------------- | -------------------- | +| P0 Coverage | 100% | {p0_coverage}% | {✅ PASS \| ❌ FAIL} | +| P0 Test Pass Rate | 100% | {p0_pass_rate}% | {✅ PASS \| ❌ FAIL} | +| Security Issues | 0 | {security_issue_count} | {✅ PASS \| ❌ FAIL} | +| Critical NFR Failures | 0 | {critical_nfr_fail_count} | {✅ PASS \| ❌ FAIL} | +| Flaky Tests | 0 | {flaky_test_count} | {✅ PASS \| ❌ FAIL} | + +**P0 Evaluation**: {✅ ALL PASS | ❌ ONE OR MORE FAILED} + +--- + +#### P1 Criteria (Required for PASS, May Accept for CONCERNS) + +| Criterion | Threshold | Actual | Status | +| ---------------------- | ------------------------- | -------------------- | ----------------------------------- | +| P1 Coverage | ≥{min_p1_coverage}% | {p1_coverage}% | {✅ PASS \| ⚠️ CONCERNS \| ❌ FAIL} | +| P1 Test Pass Rate | ≥{min_p1_pass_rate}% | {p1_pass_rate}% | {✅ PASS \| ⚠️ CONCERNS \| ❌ FAIL} | +| Overall Test Pass Rate | ≥{min_overall_pass_rate}% | {overall_pass_rate}% | {✅ PASS \| ⚠️ CONCERNS \| ❌ FAIL} | +| Overall Coverage | ≥{min_coverage}% | {overall_coverage}% | {✅ PASS \| ⚠️ CONCERNS \| ❌ FAIL} | + +**P1 Evaluation**: {✅ ALL PASS | ⚠️ SOME CONCERNS | ❌ FAILED} + +--- + +#### P2/P3 Criteria (Informational, Don't Block) + +| Criterion | Actual | Notes | +| ----------------- | --------------- | ------------------------------------------------------------ | +| P2 Test Pass Rate | {p2_pass_rate}% | {allow_p2_failures ? "Tracked, doesn't block" : "Evaluated"} | +| P3 Test Pass Rate | {p3_pass_rate}% | {allow_p3_failures ? "Tracked, doesn't block" : "Evaluated"} | + +--- + +### GATE DECISION: {PASS | CONCERNS | FAIL | WAIVED} + +--- + +### Rationale + +{Explain decision based on criteria evaluation} + +{Highlight key evidence that drove decision} + +{Note any assumptions or caveats} + +**Example (PASS):** + +> All P0 criteria met with 100% coverage and pass rates across critical tests. All P1 criteria exceeded thresholds with 98% overall pass rate and 92% coverage. No security issues detected. No flaky tests in validation. Feature is ready for production deployment with standard monitoring. + +**Example (CONCERNS):** + +> All P0 criteria met, ensuring critical user journeys are protected. However, P1 coverage (88%) falls below threshold (90%) due to missing E2E test for AC-5 edge case. Overall pass rate (96%) is excellent. Issues are non-critical and have acceptable workarounds. Risk is low enough to deploy with enhanced monitoring. + +**Example (FAIL):** + +> CRITICAL BLOCKERS DETECTED: +> +> 1. P0 coverage incomplete (80%) - AC-2 security validation missing +> 2. P0 test failures (75% pass rate) in core search functionality +> 3. Unresolved SQL injection vulnerability in search filter (CRITICAL) +> +> Release MUST BE BLOCKED until P0 issues are resolved. Security vulnerability cannot be waived. + +**Example (WAIVED):** + +> Original decision was FAIL due to P1 test failure in legacy Excel 2007 export module (affects <1% of users). However, release contains critical GDPR compliance features required by regulatory deadline (Oct 15). Business has approved waiver given: +> +> - Regulatory priority overrides legacy module risk +> - Workaround available (use Excel 2010+) +> - Issue will be fixed in v2.4.1 hotfix (due Oct 20) +> - Enhanced monitoring in place + +--- + +### {Section: Delete if not applicable} + +#### Residual Risks (For CONCERNS or WAIVED) + +List unresolved P1/P2 issues that don't block release but should be tracked: + +1.
**{Risk Description}** + - **Priority**: P1 | P2 + - **Probability**: Low | Medium | High + - **Impact**: Low | Medium | High + - **Risk Score**: {probability × impact} + - **Mitigation**: {workaround or monitoring plan} + - **Remediation**: {fix in next sprint/release} + +**Overall Residual Risk**: {LOW | MEDIUM | HIGH} + +--- + +#### Waiver Details (For WAIVED only) + +**Original Decision**: ❌ FAIL + +**Reason for Failure**: + +- {list_of_blocking_issues} + +**Waiver Information**: + +- **Waiver Reason**: {business_justification} +- **Waiver Approver**: {name}, {role} (e.g., Jane Doe, VP Engineering) +- **Approval Date**: {YYYY-MM-DD} +- **Waiver Expiry**: {YYYY-MM-DD} (**NOTE**: Does NOT apply to next release) + +**Monitoring Plan**: + +- {enhanced_monitoring_1} +- {enhanced_monitoring_2} +- {escalation_criteria} + +**Remediation Plan**: + +- **Fix Target**: {next_release_version} (e.g., v2.4.1 hotfix) +- **Due Date**: {YYYY-MM-DD} +- **Owner**: {team_or_person} +- **Verification**: {how_fix_will_be_verified} + +**Business Justification**: +{detailed_explanation_of_why_waiver_is_acceptable} + +--- + +#### Critical Issues (For FAIL or CONCERNS) + +Top blockers requiring immediate attention: + +| Priority | Issue | Description | Owner | Due Date | Status | +| -------- | ------------- | ------------------- | ------------ | ------------ | ------------------ | +| P0 | {issue_title} | {brief_description} | {owner_name} | {YYYY-MM-DD} | {OPEN/IN_PROGRESS} | +| P0 | {issue_title} | {brief_description} | {owner_name} | {YYYY-MM-DD} | {OPEN/IN_PROGRESS} | +| P1 | {issue_title} | {brief_description} | {owner_name} | {YYYY-MM-DD} | {OPEN/IN_PROGRESS} | + +**Blocking Issues Count**: {p0_blocker_count} P0 blockers, {p1_blocker_count} P1 issues + +--- + +### Gate Recommendations + +#### For PASS Decision ✅ + +1. **Proceed to deployment** + - Deploy to staging environment + - Validate with smoke tests + - Monitor key metrics for 24-48 hours + - Deploy to production with standard monitoring + +2. **Post-Deployment Monitoring** + - {metric_1_to_monitor} + - {metric_2_to_monitor} + - {alert_thresholds} + +3. **Success Criteria** + - {success_criterion_1} + - {success_criterion_2} + +--- + +#### For CONCERNS Decision ⚠️ + +1. **Deploy with Enhanced Monitoring** + - Deploy to staging with extended validation period + - Enable enhanced logging/monitoring for known risk areas: + - {risk_area_1} + - {risk_area_2} + - Set aggressive alerts for potential issues + - Deploy to production with caution + +2. **Create Remediation Backlog** + - Create story: "{fix_title_1}" (Priority: {priority}) + - Create story: "{fix_title_2}" (Priority: {priority}) + - Target sprint: {next_sprint} + +3. **Post-Deployment Actions** + - Monitor {specific_areas} closely for {time_period} + - Weekly status updates on remediation progress + - Re-assess after fixes deployed + +--- + +#### For FAIL Decision ❌ + +1. **Block Deployment Immediately** + - Do NOT deploy to any environment + - Notify stakeholders of blocking issues + - Escalate to tech lead and PM + +2. **Fix Critical Issues** + - Address P0 blockers listed in Critical Issues section + - Owner assignments confirmed + - Due dates agreed upon + - Daily standup on blocker resolution + +3. **Re-Run Gate After Fixes** + - Re-run full test suite after fixes + - Re-run `bmad tea *trace` workflow + - Verify decision is PASS before deploying + +--- + +#### For WAIVED Decision 🔓 + +1. 
**Deploy with Business Approval** + - Confirm waiver approver has signed off + - Document waiver in release notes + - Notify all stakeholders of waived risks + +2. **Aggressive Monitoring** + - {enhanced_monitoring_plan} + - {escalation_procedures} + - Daily checks on waived risk areas + +3. **Mandatory Remediation** + - Fix MUST be completed by {due_date} + - Issue CANNOT be waived in next release + - Track remediation progress weekly + - Verify fix in next gate + +--- + +### Next Steps + +**Immediate Actions** (next 24-48 hours): + +1. {action_1} +2. {action_2} +3. {action_3} + +**Follow-up Actions** (next sprint/release): + +1. {action_1} +2. {action_2} +3. {action_3} + +**Stakeholder Communication**: + +- Notify PM: {decision_summary} +- Notify SM: {decision_summary} +- Notify DEV lead: {decision_summary} + +--- + +## Integrated YAML Snippet (CI/CD) + +```yaml +traceability_and_gate: + # Phase 1: Traceability + traceability: + story_id: "{STORY_ID}" + date: "{DATE}" + coverage: + overall: {OVERALL_PCT}% + p0: {P0_PCT}% + p1: {P1_PCT}% + p2: {P2_PCT}% + p3: {P3_PCT}% + gaps: + critical: {CRITICAL_COUNT} + high: {HIGH_COUNT} + medium: {MEDIUM_COUNT} + low: {LOW_COUNT} + quality: + passing_tests: {PASSING_COUNT} + total_tests: {TOTAL_TESTS} + blocker_issues: {BLOCKER_COUNT} + warning_issues: {WARNING_COUNT} + recommendations: + - "{RECOMMENDATION_1}" + - "{RECOMMENDATION_2}" + + # Phase 2: Gate Decision + gate_decision: + decision: "{PASS | CONCERNS | FAIL | WAIVED}" + gate_type: "{story | epic | release | hotfix}" + decision_mode: "{deterministic | manual}" + criteria: + p0_coverage: {p0_coverage}% + p0_pass_rate: {p0_pass_rate}% + p1_coverage: {p1_coverage}% + p1_pass_rate: {p1_pass_rate}% + overall_pass_rate: {overall_pass_rate}% + overall_coverage: {overall_coverage}% + security_issues: {security_issue_count} + critical_nfrs_fail: {critical_nfr_fail_count} + flaky_tests: {flaky_test_count} + thresholds: + min_p0_coverage: 100 + min_p0_pass_rate: 100 + min_p1_coverage: {min_p1_coverage} + min_p1_pass_rate: {min_p1_pass_rate} + min_overall_pass_rate: {min_overall_pass_rate} + min_coverage: {min_coverage} + evidence: + test_results: "{CI_run_id | test_report_url}" + traceability: "{trace_file_path}" + nfr_assessment: "{nfr_file_path}" + code_coverage: "{coverage_report_url}" + next_steps: "{brief_summary_of_recommendations}" + waiver: # Only if WAIVED + reason: "{business_justification}" + approver: "{name}, {role}" + expiry: "{YYYY-MM-DD}" + remediation_due: "{YYYY-MM-DD}" +``` + +--- + +## Related Artifacts + +- **Story File:** {STORY_FILE_PATH} +- **Test Design:** {TEST_DESIGN_PATH} (if available) +- **Tech Spec:** {TECH_SPEC_PATH} (if available) +- **Test Results:** {TEST_RESULTS_PATH} +- **NFR Assessment:** {NFR_FILE_PATH} (if available) +- **Test Files:** {TEST_DIR_PATH} + +--- + +## Sign-Off + +**Phase 1 - Traceability Assessment:** + +- Overall Coverage: {OVERALL_PCT}% +- P0 Coverage: {P0_PCT}% {P0_STATUS} +- P1 Coverage: {P1_PCT}% {P1_STATUS} +- Critical Gaps: {CRITICAL_COUNT} +- High Priority Gaps: {HIGH_COUNT} + +**Phase 2 - Gate Decision:** + +- **Decision**: {PASS | CONCERNS | FAIL | WAIVED} {STATUS_ICON} +- **P0 Evaluation**: {✅ ALL PASS | ❌ ONE OR MORE FAILED} +- **P1 Evaluation**: {✅ ALL PASS | ⚠️ SOME CONCERNS | ❌ FAILED} + +**Overall Status:** {STATUS} {STATUS_ICON} + +**Next Steps:** + +- If PASS ✅: Proceed to deployment +- If CONCERNS ⚠️: Deploy with monitoring, create remediation backlog +- If FAIL ❌: Block deployment, fix critical issues, re-run workflow +- If 
WAIVED 🔓: Deploy with business approval and aggressive monitoring + +**Generated:** {DATE} +**Workflow:** testarch-trace v4.0 (Enhanced with Gate Decision) + +--- + + diff --git a/src/bmm/workflows/testarch/trace/workflow.yaml b/src/bmm/workflows/testarch/trace/workflow.yaml new file mode 100644 index 00000000..fc5193ef --- /dev/null +++ b/src/bmm/workflows/testarch/trace/workflow.yaml @@ -0,0 +1,57 @@ +# Test Architect workflow: trace (enhanced with gate decision) +name: testarch-trace +description: "Generate requirements-to-tests traceability matrix, analyze coverage, and make quality gate decision (PASS/CONCERNS/FAIL/WAIVED)" +author: "BMad" + +# Critical variables from config +config_source: "{project-root}/_bmad/bmm/config.yaml" +output_folder: "{config_source}:output_folder" +user_name: "{config_source}:user_name" +communication_language: "{config_source}:communication_language" +document_output_language: "{config_source}:document_output_language" +date: system-generated + +# Workflow components +installed_path: "{project-root}/_bmad/bmm/workflows/testarch/trace" +instructions: "{installed_path}/instructions.md" +validation: "{installed_path}/checklist.md" +template: "{installed_path}/trace-template.md" + +# Variables and inputs +variables: + # Directory paths + test_dir: "{project-root}/tests" # Root test directory + source_dir: "{project-root}/src" # Source code directory + + # Workflow behavior + coverage_levels: "e2e,api,component,unit" # Which test levels to trace + gate_type: "story" # story | epic | release | hotfix - determines gate scope + decision_mode: "deterministic" # deterministic (rule-based) | manual (team decision) + +# Output configuration +default_output_file: "{output_folder}/traceability-matrix.md" + +# Required tools +required_tools: + - read_file # Read story, test files, BMad artifacts + - write_file # Create traceability matrix, gate YAML + - list_files # Discover test files + - search_repo # Find tests by test ID, describe blocks + - glob # Find test files matching patterns + +tags: + - qa + - traceability + - test-architect + - coverage + - requirements + - gate + - decision + - release + +execution_hints: + interactive: false # Minimize prompts + autonomous: true # Proceed without user input unless blocked + iterative: true + +web_bundle: false diff --git a/src/modules/bmgd/workflows/4-production/gap-analysis/instructions.xml b/src/modules/bmgd/workflows/4-production/gap-analysis/instructions.xml deleted file mode 100644 index 4af6db50..00000000 --- a/src/modules/bmgd/workflows/4-production/gap-analysis/instructions.xml +++ /dev/null @@ -1,367 +0,0 @@ - - The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml - You MUST have already loaded and processed: {installed_path}/workflow.yaml - Communicate all responses in {communication_language} - - - - Use {{story_file}} directly - Read COMPLETE story file - Extract story_key from filename or metadata - - - - - 🔍 **Gap Analysis - Story Task Validation** - - This workflow validates story tasks against your actual codebase. 
- - **Use Cases:** - - Audit "done" stories to verify they match reality - - Validate story tasks before starting development - - Check if completed work was actually implemented - - **Provide story to validate:** - - - Enter story file path, story key (e.g., "1-2-auth"), or status to scan (e.g., "done", "review", "in-progress"): - - - Use provided file path as {{story_file}} - Read COMPLETE story file - Extract story_key from filename - - - - - Search {story_dir} for file matching pattern {{story_key}}.md - Set {{story_file}} to found file path - Read COMPLETE story file - - - - - 🔎 Scanning sprint-status.yaml for stories with status: {{user_input}}... - - - Load the FULL file: {{sprint_status}} - Parse development_status section - Find all stories where status equals {{user_input}} - - - 📋 No stories found with status: {{user_input}} - - Available statuses: backlog, ready-for-dev, in-progress, review, done - - HALT - - - - Found {{count}} stories with status {{user_input}}: - - {{list_of_stories}} - - Which story would you like to validate? [Enter story key or 'all']: - - - Set {{batch_mode}} = true - Store list of all story keys to validate - Set {{story_file}} to first story in list - Read COMPLETE story file - - - - - Set {{story_file}} to selected story path - Read COMPLETE story file - - - - - - Set {{story_file}} to found story path - Read COMPLETE story file - - - - - - ⚠️ No sprint-status.yaml found. Please provide direct story file path. - HALT - - - - - - - - 🔍 CODEBASE REALITY CHECK - Validate tasks against actual code! - - 📊 **Analyzing Story: {{story_key}}** - - Scanning codebase to validate tasks... - - - - Parse story sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Status - Extract all tasks and subtasks from story file - Identify technical areas mentioned in tasks (files, classes, functions, services, components) - - - Determine scan targets from task descriptions: - - For "Create X" tasks: Check if X already exists - - For "Implement Y" tasks: Search for Y functionality - - For "Add Z" tasks: Verify Z is missing - - For test tasks: Check for existing test files - - Use Glob to find relevant files matching patterns from tasks (e.g., **/*.ts, **/*.tsx, **/*.test.ts) - Use Grep to search for specific classes, functions, or components mentioned in tasks - Use Read to verify implementation details and functionality in key discovered files - - - Document scan results: - - **CODEBASE REALITY:** - ✅ What Exists: - - List verified files, classes, functions, services found - - Note implementation completeness (partial vs full) - - Identify code that tasks claim to create but already exists - - - ❌ What's Missing: - - List features mentioned in tasks but NOT found in codebase - - Identify claimed implementations that don't exist - - Note tasks marked complete but code missing - - - - For each task in the story, determine: - - ACCURATE: Task matches reality (code exists if task is checked, missing if unchecked) - - FALSE POSITIVE: Task checked [x] but code doesn't exist (BS detection!) 
- - FALSE NEGATIVE: Task unchecked [ ] but code already exists - - NEEDS UPDATE: Task description doesn't match current implementation - - Generate validation report with: - - Tasks that are accurate - - Tasks that are false positives (marked done but not implemented) ⚠️ - - Tasks that are false negatives (not marked but already exist) - - Recommended task updates - - - - 📋 SHOW TRUTH - Compare story claims vs codebase reality - - - 📊 **Gap Analysis Results: {{story_key}}** - - **Story Status:** {{story_status}} - - --- - - **Codebase Scan Results:** - - ✅ **What Actually Exists:** - {{list_of_existing_files_features_with_details}} - - ❌ **What's Actually Missing:** - {{list_of_missing_elements_despite_claims}} - - --- - - **Task Validation:** - - {{if_any_accurate_tasks}} - ✅ **Accurate Tasks** ({{count}}): - {{list_tasks_that_match_reality}} - {{endif}} - - {{if_any_false_positives}} - ⚠️ **FALSE POSITIVES** ({{count}}) - Marked done but NOT implemented: - {{list_tasks_marked_complete_but_code_missing}} - **WARNING:** These tasks claim completion but code doesn't exist! - {{endif}} - - {{if_any_false_negatives}} - ℹ️ **FALSE NEGATIVES** ({{count}}) - Not marked but ALREADY exist: - {{list_tasks_unchecked_but_code_exists}} - {{endif}} - - {{if_any_needs_update}} - 🔄 **NEEDS UPDATE** ({{count}}) - Task description doesn't match implementation: - {{list_tasks_needing_description_updates}} - {{endif}} - - --- - - 📝 **Proposed Story Updates:** - - {{if_false_positives_found}} - **CRITICAL - Uncheck false positives:** - {{list_tasks_to_uncheck_with_reasoning}} - {{endif}} - - {{if_false_negatives_found}} - **Check completed work:** - {{list_tasks_to_check_with_verification}} - {{endif}} - - {{if_task_updates_needed}} - **Update task descriptions:** - {{list_task_description_updates}} - {{endif}} - - {{if_gap_analysis_section_missing}} - **Add Gap Analysis section** documenting findings - {{endif}} - - --- - - **Story Accuracy Score:** {{percentage_of_accurate_tasks}}% ({{accurate_count}}/{{total_count}}) - - - - - - 🚨 **WARNING:** This story is marked {{story_status}} but has FALSE POSITIVES! - - {{count}} task(s) claim completion but code doesn't exist. - This story may have been prematurely marked complete. - - **Recommendation:** Update story status to 'in-progress' and complete missing work. - - - - - - - **What would you like to do?** - - Options: - [U] Update - Apply proposed changes to story file - [A] Audit Report - Save findings to report file without updating story - [N] No Changes - Just show me the findings - [R] Review Details - Show me more details about specific findings - [C] Continue to Next - Move to next story (batch mode only) - [Q] Quit - Exit gap analysis - - - - - Update story file with proposed changes: - - Uncheck false positive tasks - - Check false negative tasks - - Update task descriptions as needed - - Add or update "Gap Analysis" section with findings - - Add Change Log entry: "Gap analysis performed - tasks validated against codebase ({{date}})" - - - Story has false positives. Update status to 'in-progress'? [Y/n]: - - Update story Status to 'in-progress' - - Update sprint-status.yaml status for this story to 'in-progress' - - - - - ✅ Story file updated with gap analysis findings. - - - {{changes_count}} task(s) updated - - Gap Analysis section added/updated - - Accuracy score: {{accuracy_percentage}}% - - **File:** {{story_file}} - - - - Continue to next story? 
[Y/n]: - - Load next story from batch list - Analyze next story - - - - HALT - Gap analysis complete - - - - - Generate audit report file: {{story_dir}}/gap-analysis-report-{{story_key}}-{{date}}.md - Include full findings, accuracy scores, recommendations - 📄 Audit report saved: {{report_file}} - - This report can be shared with team for review. - Story file was NOT modified. - - - - Continue to next story? [Y/n]: - - Load next story from batch list - Analyze next story - - - - HALT - Gap analysis complete - - - - - ℹ️ Findings displayed only. No files modified. - HALT - Gap analysis complete - - - - - Which findings would you like more details about? (specify task numbers, file names, or areas): - Provide detailed analysis of requested areas using Read tool for deeper code inspection - After review, re-present the decision options - Continue based on user's subsequent choice - - - - - Load next story from batch list - Analyze next story - - - - ⚠️ Not in batch mode. Only one story to validate. - HALT - - - - - 👋 Gap analysis session ended. - - {{if batch_mode}}Processed {{processed_count}}/{{total_count}} stories.{{endif}} - - HALT - - - - - ✅ **Gap Analysis Complete, {user_name}!** - - {{if_single_story}} - **Story Analyzed:** {{story_key}} - **Accuracy Score:** {{accuracy_percentage}}% - **Actions Taken:** {{actions_summary}} - {{endif}} - - {{if_batch_mode}} - **Batch Analysis Summary:** - - Stories analyzed: {{processed_count}} - - Average accuracy: {{avg_accuracy}}% - - False positives found: {{total_false_positives}} - - Stories updated: {{updated_count}} - {{endif}} - - **Next Steps:** - - Review updated stories - - Address any false positives found - - Run dev-story for stories needing work - - - - diff --git a/src/modules/bmgd/workflows/4-production/gap-analysis/workflow.yaml b/src/modules/bmgd/workflows/4-production/gap-analysis/workflow.yaml deleted file mode 100644 index 5175a5da..00000000 --- a/src/modules/bmgd/workflows/4-production/gap-analysis/workflow.yaml +++ /dev/null @@ -1,23 +0,0 @@ -name: gap-analysis -description: "Validate story tasks against actual codebase - audit completed stories or validate before development" -author: "BMad" - -# Critical variables from config -config_source: "{project-root}/_bmad/bmgd/config.yaml" -user_name: "{config_source}:user_name" -communication_language: "{config_source}:communication_language" -implementation_artifacts: "{config_source}:implementation_artifacts" -story_dir: "{implementation_artifacts}" - -# Workflow components -installed_path: "{project-root}/_bmad/bmgd/workflows/4-production/gap-analysis" -instructions: "{installed_path}/instructions.xml" - -# Variables -story_file: "" # User provides story file path or auto-discover -sprint_status: "{implementation_artifacts}/sprint-status.yaml" -project_context: "**/project-context.md" - -standalone: true - -web_bundle: false diff --git a/src/modules/bmgd/workflows/4-production/push-all/instructions.xml b/src/modules/bmgd/workflows/4-production/push-all/instructions.xml deleted file mode 100644 index 31a614a1..00000000 --- a/src/modules/bmgd/workflows/4-production/push-all/instructions.xml +++ /dev/null @@ -1,518 +0,0 @@ - - The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml - You MUST have already loaded and processed: {installed_path}/workflow.yaml - Communicate all responses in {communication_language} - 📝 PUSH-ALL - Stage, commit, and push all changes with comprehensive safety validation - ⚠️ Use with caution - commits ALL repository 
changes - - - 🔄 **Analyzing Repository Changes** - - Scanning for changes to commit and push... - - - - Run git commands in parallel: - - git status - Show modified/added/deleted/untracked files - - git diff --stat - Show change statistics - - git log -1 --oneline - Show recent commit for message style - - git branch --show-current - Confirm current branch - - Parse git status output to identify: - - Modified files - - Added files - - Deleted files - - Untracked files - - Total insertion/deletion counts - - - - ℹ️ **No Changes to Commit** - - Working directory is clean. - Nothing to push. - - HALT - No work to do - - - - - 🔒 SAFETY CHECKS - Validate changes before committing - - Scan all changed files for dangerous patterns: - - **Secret Detection:** - Check for files matching secret patterns: - - .env*, *.key, *.pem, credentials.json, secrets.yaml - - id_rsa, *.p12, *.pfx, *.cer - - Any file containing: _API_KEY=, _SECRET=, _TOKEN= with real values (not placeholders) - - - Validate API keys are placeholders only: - ✅ Acceptable placeholders: - - API_KEY=your-api-key-here - - SECRET=placeholder - - TOKEN=xxx - - API_KEY=${{YOUR_KEY}} - - SECRET_KEY=<your-key> - - - ❌ BLOCK real keys: - - OPENAI_API_KEY=sk-proj-xxxxx (real OpenAI key) - - AWS_SECRET_KEY=AKIA... (real AWS key) - - STRIPE_API_KEY=sk_live_... (real Stripe key) - - Any key with recognizable provider prefix + actual value - - - **File Size Check:** - Check for files >10MB without Git LFS configuration - - **Build Artifacts:** - Check for unwanted directories/files that should be gitignored: - - node_modules/, dist/, build/, .next/, __pycache__/, *.pyc, .venv/ - - .DS_Store, Thumbs.db, *.swp, *.tmp, *.log (in root) - - *.class, target/, bin/ (Java) - - vendor/ (unless dependency managed) - - - **Git State:** - Verify: - - .gitignore exists and properly configured - - No unresolved merge conflicts - - Git repository initialized - - - - - 🚨 **DANGER: Secrets Detected!** - - The following sensitive data was found: - {{list_detected_secrets_with_files}} - - ❌ **BLOCKED:** Cannot commit secrets to version control. - - **Actions Required:** - 1. Move secrets to .env file (add to .gitignore) - 2. Use environment variables: process.env.API_KEY - 3. Remove secrets from tracked files: git rm --cached [file] - 4. Update code to load from environment - - **Example:** - ``` - // Before (UNSAFE): - const apiKey = 'sk-proj-xxxxx'; - - // After (SAFE): - const apiKey = process.env.OPENAI_API_KEY; - ``` - - Halting workflow for safety. - - HALT - Cannot proceed with secrets - - - - ⚠️ **Warning: Large Files Detected** - - Files >10MB found: - {{list_large_files_with_sizes}} - - **Recommendation:** Set up Git LFS - ``` - git lfs install - git lfs track "*.{file_extension}" - git add .gitattributes - ``` - - - Proceed with large files anyway? [y/n]: - - - Halting. Please configure Git LFS first. - HALT - - - - - ⚠️ **Warning: Build Artifacts Detected** - - These files should be in .gitignore: - {{list_build_artifacts}} - - **Update .gitignore:** - ``` - node_modules/ - dist/ - build/ - .DS_Store - ``` - - - Commit build artifacts anyway? [y/n]: - - - Halting. Update .gitignore and git rm --cached [files] - HALT - - - - - ⚠️ **Warning: Pushing to {{branch_name}}** - - You're committing directly to {{branch_name}}. - - **Recommendation:** Use feature branch workflow: - 1. git checkout -b feature/my-changes - 2. Make and commit changes - 3. git push -u origin feature/my-changes - 4. Create PR for review - - - Push directly to {{branch_name}}? 
[y/n]: - - - Halting. Create a feature branch instead. - HALT - - - - ✅ **Safety Checks Passed** - - All validations completed successfully. - - - - - - 📊 **Changes Summary** - - **Files:** - - Modified: {{modified_count}} - - Added: {{added_count}} - - Deleted: {{deleted_count}} - - Untracked: {{untracked_count}} - **Total:** {{total_file_count}} files - - **Changes:** - - Insertions: +{{insertion_count}} lines - - Deletions: -{{deletion_count}} lines - - **Safety:** - {{if_all_safe}} - ✅ No secrets detected - ✅ No large files (or approved) - ✅ No build artifacts (or approved) - ✅ .gitignore configured - {{endif}} - - {{if_warnings_approved}} - ⚠️ Warnings acknowledged and approved - {{endif}} - - **Git:** - - Branch: {{current_branch}} - - Remote: origin/{{current_branch}} - - Last commit: {{last_commit_message}} - - --- - - **I will execute:** - 1. `git add .` - Stage all changes - 2. `git commit -m "[generated message]"` - Create commit - 3. `git push` - Push to remote - - - - **Proceed with commit and push?** - - Options: - [yes] - Proceed with commit and push - [no] - Cancel (leave changes unstaged) - [review] - Show detailed diff first - - - - Execute: git diff --stat - Execute: git diff | head -100 (show first 100 lines of changes) - - {{diff_output}} - - (Use 'git diff' to see full changes) - - After reviewing, proceed with commit and push? [yes/no]: - - - - ❌ **Push-All Cancelled** - - Changes remain unstaged. No git operations performed. - - You can: - - Review changes: git status, git diff - - Commit manually: git add [files] && git commit - - Discard changes: git checkout -- [files] - - HALT - User cancelled - - - - - Execute: git add . - Execute: git status - - ✅ **All Changes Staged** - - Ready for commit: - {{list_staged_files}} - - - - - 📝 COMMIT MESSAGE - Generate conventional commit format - - Analyze changes to determine commit type: - - feat: New features (new files with functionality) - - fix: Bug fixes (fixing broken functionality) - - docs: Documentation only (*.md, comments) - - style: Formatting, missing semicolons (no code change) - - refactor: Code restructuring (no feature/fix) - - test: Adding/updating tests - - chore: Tooling, configs, dependencies - - perf: Performance improvements - - Determine scope (optional): - - Component/feature name if changes focused on one area - - Omit if changes span multiple areas - - - Generate message summary (max 72 chars): - - Use imperative mood: "add feature" not "added feature" - - Lowercase except proper nouns - - No period at end - - - Generate message body (if changes >5 files): - - List key changes as bullet points - - Max 3-5 bullets - - Keep concise - - - Reference recent commits for style consistency - - 📝 **Generated Commit Message:** - - ``` - {{generated_commit_message}} - ``` - - Based on: - - {{commit_type}} commit type - - {{file_count}} files changed - - {{change_summary}} - - - **Use this commit message?** - - Options: - [yes] - Use generated message - [edit] - Let me write custom message - [cancel] - Cancel push-all (leave staged) - - - - Enter your commit message (use conventional commit format if possible): - Store user input as {{commit_message}} - ✅ Using custom commit message - - - - ❌ Push-all cancelled - - Changes remain staged. 
- Run: git reset to unstage - - HALT - - - - Use {{generated_commit_message}} as {{commit_message}} - - - - - Execute git commit with heredoc for multi-line message safety: - git commit -m "$(cat <<'EOF' -{{commit_message}} -EOF -)" - - - - ❌ **Commit Failed** - - Error: {{commit_error}} - - **Common Causes:** - - Pre-commit hooks failing (linting, tests) - - Missing git config (user.name, user.email) - - Locked files or permissions - - Empty commit (no actual changes) - - **Fix and try again:** - - Check pre-commit output - - Set git config: git config user.name "Your Name" - - Verify file permissions - - HALT - Fix errors before proceeding - - - Parse commit output for hash - ✅ **Commit Created** - - Commit: {{commit_hash}} - Message: {{commit_subject}} - - - - - 🚀 **Pushing to Remote** - - Pushing {{current_branch}} to origin... - - - Execute: git push - - - - ⚠️ **Push Rejected - Remote Has New Commits** - - Remote branch has commits you don't have locally. - Attempting to rebase and retry... - - - Execute: git pull --rebase - - - ❌ **Merge Conflicts During Rebase** - - Conflicts found: - {{list_conflicted_files}} - - **Manual resolution required:** - 1. Resolve conflicts in listed files - 2. git add [resolved files] - 3. git rebase --continue - 4. git push - - Halting for manual conflict resolution. - - HALT - Resolve conflicts manually - - - Execute: git push - - - - ℹ️ **No Upstream Branch Set** - - First push to origin for this branch. - Setting upstream... - - - Execute: git push -u origin {{current_branch}} - - - - ❌ **Push to Protected Branch Blocked** - - Branch {{current_branch}} is protected on remote. - - **Use PR workflow instead:** - 1. Ensure you're on a feature branch - 2. Push feature branch: git push -u origin feature-branch - 3. Create PR for review - - Changes are committed locally but not pushed. - - HALT - Use PR workflow for protected branches - - - - ❌ **Authentication Failed** - - Git push requires authentication. - - **Fix authentication:** - - GitHub: Set up SSH key or Personal Access Token - - Check: git remote -v (verify remote URL) - - Docs: https://docs.github.com/authentication - - Changes are committed locally but not pushed. - - HALT - Fix authentication - - - - ❌ **Push Failed** - - Error: {{push_error}} - - Your changes are committed locally but not pushed to remote. - - **Troubleshoot:** - - Check network connection - - Verify remote exists: git remote -v - - Check permissions on remote repository - - Try manual push: git push - - Halting for manual resolution. - - HALT - Manual push required - - - - - ✅ **Successfully Pushed to Remote!** - - **Commit:** {{commit_hash}} - {{commit_subject}} - **Branch:** {{current_branch}} → origin/{{current_branch}} - **Files changed:** {{file_count}} (+{{insertions}}, -{{deletions}}) - - --- - - Your changes are now on the remote repository. 
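
The rejected-push recovery above is essentially a rebase-and-retry loop. A compact sketch of that behaviour in Node.js, assuming the rebase applies cleanly — a conflicting rebase throws here and still needs the manual resolution described above:

```js
const { execSync } = require('node:child_process');

// execSync throws on a non-zero exit code, which is what drives the retry.
const run = (cmd) => execSync(cmd, { stdio: 'inherit' });

try {
  run('git push');
} catch {
  // Push rejected: the remote has commits we don't have locally.
  // Replay our commits on top of them, then retry the push once.
  run('git pull --rebase');
  run('git push');
}
```

Note this sketch only covers the rejected-push case; the missing-upstream, protected-branch, and authentication failures are distinct branches and are not retried.
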
- - - Execute: git log -1 --oneline --decorate - - **Latest commit:** {{git_log_output}} - - - - - - 🎉 **Push-All Complete, {user_name}!** - - **Summary:** - - ✅ {{file_count}} files committed - - ✅ Pushed to origin/{{current_branch}} - - ✅ All safety checks passed - - **Commit Details:** - - Hash: {{commit_hash}} - - Message: {{commit_subject}} - - Changes: +{{insertions}}, -{{deletions}} - - **Next Steps:** - - Verify on remote (GitHub/GitLab/etc) - - Create PR if working on feature branch - - Notify team if appropriate - - **Git State:** - - Working directory: clean - - Branch: {{current_branch}} - - In sync with remote - - - - diff --git a/src/modules/bmgd/workflows/4-production/push-all/workflow.yaml b/src/modules/bmgd/workflows/4-production/push-all/workflow.yaml deleted file mode 100644 index c9467652..00000000 --- a/src/modules/bmgd/workflows/4-production/push-all/workflow.yaml +++ /dev/null @@ -1,16 +0,0 @@ -name: push-all -description: "Stage all changes, create commit with safety checks, and push to remote - use with caution" -author: "BMad" - -# Critical variables from config -config_source: "{project-root}/_bmad/bmgd/config.yaml" -user_name: "{config_source}:user_name" -communication_language: "{config_source}:communication_language" - -# Workflow components -installed_path: "{project-root}/_bmad/bmgd/workflows/4-production/push-all" -instructions: "{installed_path}/instructions.xml" - -standalone: true - -web_bundle: false diff --git a/src/modules/bmgd/workflows/4-production/super-dev-story/README.md b/src/modules/bmgd/workflows/4-production/super-dev-story/README.md deleted file mode 100644 index 03dcc245..00000000 --- a/src/modules/bmgd/workflows/4-production/super-dev-story/README.md +++ /dev/null @@ -1,283 +0,0 @@ -# Super-Dev-Story Workflow - -**Enhanced story development with comprehensive quality validation** - -## What It Does - -Super-dev-story is `/dev-story` on steroids - it includes ALL standard development steps PLUS additional quality gates: - -``` -Standard dev-story: - 1-8. Development cycle → Mark "review" - -Super-dev-story: - 1-8. Development cycle - 9.5. Post-dev gap analysis (verify work complete) - 9.6. Automated code review (catch issues) - → Fix issues if found (loop back to step 5) - 9. Mark "review" (only after all validation passes) -``` - -## When to Use - -### Use `/super-dev-story` for: - -- ✅ Security-critical features (auth, payments, PII handling) -- ✅ Complex business logic with many edge cases -- ✅ Stories you want bulletproof before human review -- ✅ High-stakes features (production releases, customer-facing) -- ✅ When you want to minimize review cycles - -### Use standard `/dev-story` for: - -- Documentation updates -- Simple UI tweaks -- Configuration changes -- Low-risk experimental features -- When speed matters more than extra validation - -## Cost vs Benefit - -| Aspect | dev-story | super-dev-story | -|--------|-----------|-----------------| -| **Tokens** | 50K-100K | 80K-150K (+30-50%) | -| **Time** | Normal | +20-30% | -| **Quality** | Good | Excellent | -| **Review cycles** | 1-3 iterations | 0-1 iterations | -| **False completions** | Possible | Prevented | - -**ROI:** Extra 30K tokens (~$0.09) prevents hours of rework and multiple review cycles - -## What Gets Validated - -### Step 9.5: Post-Dev Gap Analysis - -**Checks:** -- Tasks marked [x] → Code actually exists and works? -- Required files → Actually created? -- Claimed tests → Actually exist and pass? -- Partial implementations → Marked complete prematurely? 
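
One way to picture the "claimed tests actually exist and pass" check — a hedged sketch assuming a Jest setup; the file name and test command are placeholders, not what the workflow hard-codes:

```js
const fs = require('node:fs');
const { execSync } = require('node:child_process');

// Hypothetical claim pulled from a task marked [x].
const claimedTestFile = 'tests/auth.test.js';

if (!fs.existsSync(claimedTestFile)) {
  console.log(`Claimed test file is missing: ${claimedTestFile}`);
} else {
  try {
    // Run just the claimed file so a failure maps back to a single task.
    execSync(`npx jest ${claimedTestFile}`, { stdio: 'inherit' });
    console.log('Claimed tests exist and pass');
  } catch {
    console.log('Claimed tests exist but FAIL - the task should be unchecked');
  }
}
```
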
- -**Catches:** -- ❌ "Created auth service" → File doesn't exist -- ❌ "Added tests with 90% coverage" → Only 60% actual -- ❌ "Implemented login" → Function exists but incomplete - -**Actions if issues found:** -- Unchecks false positive tasks -- Adds tasks for missing work -- Loops back to implementation - -### Step 9.6: Automated Code Review - -**Reviews:** -- ✅ Correctness (logic errors, edge cases) -- ✅ Security (vulnerabilities, input validation) -- ✅ Architecture (pattern compliance, SOLID principles) -- ✅ Performance (inefficiencies, optimization opportunities) -- ✅ Testing (coverage gaps, test quality) -- ✅ Code Quality (readability, maintainability) - -**Actions if issues found:** -- Adds review findings as tasks -- Loops back to implementation -- Continues until issues resolved - -## Usage - -### Basic Usage - -```bash -# Load any BMAD agent -/super-dev-story - -# Follows same flow as dev-story, with extra validation -``` - -### Specify Story - -```bash -/super-dev-story docs/sprint-artifacts/1-2-auth.md -``` - -### Expected Flow - -``` -1. Pre-dev gap analysis - ├─ "Approve task updates? [Y/A/n/e/s/r]" - └─ Select option - -2. Development (standard TDD cycle) - └─ Implements all tasks - -3. Post-dev gap analysis - ├─ Scans codebase - ├─ If gaps: adds tasks, loops back - └─ If clean: proceeds - -4. Code review - ├─ Analyzes all changes - ├─ If issues: adds tasks, loops back - └─ If clean: proceeds - -5. Story marked "review" - └─ Truly complete! -``` - -## Fix Iteration Safety - -Super-dev has a **max iteration limit** (default: 3) to prevent infinite loops: - -```yaml -# workflow.yaml -super_dev_settings: - max_fix_iterations: 3 # Stop after 3 fix cycles - fail_on_critical_issues: true # HALT if critical security issues -``` - -If exceeded: -``` -🛑 Maximum Fix Iterations Reached - -Attempted 3 fix cycles. -Manual intervention required. - -Issues remaining: -- [List of unresolved issues] -``` - -## Examples - -### Example 1: Perfect First Try - -``` -/super-dev-story - -Pre-gap: ✅ Tasks accurate -Development: ✅ 8 tasks completed -Post-gap: ✅ All work verified -Code review: ✅ No issues - -→ Story complete! (45 minutes, 85K tokens) -``` - -### Example 2: Post-Dev Catches Incomplete Work - -``` -/super-dev-story - -Pre-gap: ✅ Tasks accurate -Development: ✅ 8 tasks completed -Post-gap: ⚠️ Tests claim 90% coverage, actual 65% - -→ Adds task: "Increase test coverage to 90%" -→ Implements missing tests -→ Post-gap: ✅ Now 92% coverage -→ Code review: ✅ No issues - -→ Story complete! (52 minutes, 95K tokens) -``` - -### Example 3: Code Review Finds Security Issue - -``` -/super-dev-story - -Pre-gap: ✅ Tasks accurate -Development: ✅ 10 tasks completed -Post-gap: ✅ All work verified -Code review: 🚨 CRITICAL - SQL injection vulnerability - -→ Adds task: "Fix SQL injection in user search" -→ Implements parameterized queries -→ Post-gap: ✅ Verified -→ Code review: ✅ Security issue resolved - -→ Story complete! 
(58 minutes, 110K tokens) -``` - -## Comparison to Standard Workflow - -### Standard Flow (dev-story) - -``` -Day 1: Develop story (30 min) -Day 2: Human review finds 3 issues -Day 3: Fix issues (20 min) -Day 4: Human review again -Day 5: Approved - -Total: 5 days, 2 review cycles -``` - -### Super-Dev Flow - -``` -Day 1: Super-dev-story - - Development (30 min) - - Post-gap finds 1 issue (auto-fix 5 min) - - Code review finds 2 issues (auto-fix 15 min) - - Complete (50 min total) - -Day 2: Human review -Day 3: Approved (minimal/no changes needed) - -Total: 3 days, 1 review cycle -``` - -**Savings:** 2 days, 1 fewer review cycle, higher initial quality - -## Troubleshooting - -### "Super-dev keeps looping forever" - -**Cause:** Each validation finds new issues -**Solution:** This indicates quality problems. Review max_fix_iterations setting or manually intervene. - -### "Post-dev gap analysis keeps failing" - -**Cause:** Dev agent marking tasks complete prematurely -**Solution:** This is expected! Super-dev catches this. The loop ensures actual completion. - -### "Code review too strict" - -**Cause:** Reviewing for issues standard dev-story would miss -**Solution:** This is intentional. For less strict review, use standard dev-story. - -### "Too many tokens/too slow" - -**Cause:** Multi-stage validation adds overhead -**Solution:** Use standard dev-story for non-critical stories. Reserve super-dev for important work. - -## Best Practices - -1. **Reserve for important stories** - Don't use for trivial changes -2. **Trust the process** - Fix iterations mean it's working correctly -3. **Review limits** - Adjust max_fix_iterations if stories are complex -4. **Monitor costs** - Track token usage vs review cycle savings -5. **Learn patterns** - Code review findings inform future architecture - -## Configuration Reference - -```yaml -# _bmad/bmgd/config.yaml or _bmad/bmgd/config.yaml - -# Per-project settings -super_dev_settings: - post_dev_gap_analysis: true # Enable post-dev validation - auto_code_review: true # Enable automatic code review - fail_on_critical_issues: true # HALT on security vulnerabilities - max_fix_iterations: 3 # Maximum fix cycles before manual intervention - auto_fix_minor_issues: false # Auto-fix LOW severity without asking -``` - -## See Also - -- [dev-story workflow](../dev-story/) - Standard development workflow -- [gap-analysis workflow](../gap-analysis/) - Standalone audit tool -- [Gap Analysis Guide](../../../../docs/gap-analysis.md) - Complete documentation -- [Super-Dev Mode Concept](../../../../docs/super-dev-mode.md) - Vision and roadmap - ---- - -**Super-Dev-Story: Because "done" should mean DONE** ✅ diff --git a/src/modules/bmgd/workflows/4-production/super-dev-story/instructions.xml b/src/modules/bmgd/workflows/4-production/super-dev-story/instructions.xml deleted file mode 100644 index 121027a2..00000000 --- a/src/modules/bmgd/workflows/4-production/super-dev-story/instructions.xml +++ /dev/null @@ -1,283 +0,0 @@ - - The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml - You MUST have already loaded and processed: {installed_path}/workflow.yaml - Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level} - Generate all documents in {document_output_language} - 🚀 SUPER-DEV MODE: Enhanced quality workflow with post-implementation validation and automated code review - This workflow orchestrates existing workflows with additional validation steps - - - - - - - 🎯 RUN DEV-STORY - 
Complete all standard development steps - This includes: story loading, pre-dev gap analysis, development, testing, and task completion - - 🚀 **Super-Dev-Story: Enhanced Quality Workflow** - - Running standard dev-story workflow (Steps 1-8)... - - This includes: - ✅ Story loading and validation - ✅ Pre-dev gap analysis - ✅ TDD implementation cycle - ✅ Comprehensive testing - ✅ Task completion validation - - After dev-story completes, super-dev will add: - ✅ Post-dev gap analysis - ✅ Automated code review - ✅ Auto push-all - - - - - Pass through any user-provided story file path - - - - ✅ Dev-story complete - all tasks implemented and tested - - Proceeding to super-dev enhancements... - - - - - ❌ Dev-story did not complete successfully - - Cannot proceed with super-dev enhancements. - Fix issues and retry. - - HALT - dev-story must complete first - - - - - - - - - 🔍 POST-DEV VALIDATION - Verify all work actually completed! - This catches incomplete implementations that were prematurely marked done - - - 🔎 **Post-Development Gap Analysis** - - All tasks marked complete. Verifying against codebase reality... - - - - Re-read story file to get requirements and tasks - Extract all tasks marked [x] complete - For each completed task, identify what should exist in codebase - - - Use Glob to find files that should have been created - Use Grep to search for functions/classes that should exist - Use Read to verify implementation completeness (not just existence) - Run tests to verify claimed test coverage actually exists and passes - - - Compare claimed work vs actual implementation: - - **POST-DEV VERIFICATION:** - ✅ Verified Complete: - - List tasks where code fully exists and works - - Confirm tests exist and pass - - Verify implementation matches requirements - - - ❌ False Positives Detected: - - List tasks marked [x] but code missing or incomplete - - Identify claimed tests that don't exist or fail - - Note partial implementations marked as complete - - - - - - ⚠️ **Post-Dev Gaps Detected!** - - **Tasks marked complete but implementation incomplete:** - {{list_false_positives_with_details}} - - These issues must be addressed before story can be marked complete. - - - Uncheck false positive tasks in story file - Add new tasks for missing work - Update Gap Analysis section with post-dev findings - - 🔄 Re-invoking dev-story to complete missing work... - - - - Resume with added tasks for missing work - - - ✅ Missing work completed. Proceeding to code review... - - - - ✅ **Post-Dev Validation Passed** - - All tasks verified complete against codebase. - Proceeding to code review... - - Update Gap Analysis section with post-dev verification results - - - - - - - - - 👀 AUTO CODE REVIEW - Independent quality validation - - - 🔍 **Running Automated Code Review** - - Analyzing implementation for issues... - - - - - Run code review on completed story - - - Parse code review results from story file "Code Review" section - Extract issues by severity (Critical, High, Medium, Low) - Count total issues found - - - 🚨 **Code Review Found Issues Requiring Fixes** - - Issues found: {{total_issue_count}} - - Critical: {{critical_count}} - - High: {{high_count}} - - Medium: {{medium_count}} - - Low: {{low_count}} - - Adding review findings to story tasks and re-running dev-story... - - - Add code review findings as tasks in story file - - - - Fix code review issues - - - ✅ Code review issues resolved. Proceeding to push... 
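
The branching around this step is a small severity gate. A minimal sketch of the decision rule the surrounding checks implement — the function name and shape are illustrative, not the workflow's actual API:

```js
// counts: severity totals parsed from the story's "Code Review" section.
function reviewGate(counts) {
  // Critical or High findings force another dev-story fix cycle.
  if (counts.critical > 0 || counts.high > 0) return 'fix';
  // Only Medium/Low findings: offer an optional auto-fix.
  if (counts.medium > 0 || counts.low > 0) return 'ask';
  // Clean report: proceed straight to push.
  return 'pass';
}

console.log(reviewGate({ critical: 0, high: 1, medium: 2, low: 0 })); // "fix"
console.log(reviewGate({ critical: 0, high: 0, medium: 0, low: 2 })); // "ask"
```
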
- - - - ℹ️ **Code Review Found Minor Issues** - - - Medium: {{medium_count}} - - Low: {{low_count}} - - - Auto-fix these minor issues? [Y/n/skip]: - - - Add review findings as tasks - - - - - - - Document issues in story file - ℹ️ Minor issues documented. Proceeding to push... - - - - - ✅ **Code Review Passed** - - No issues found. Implementation meets quality standards. - Proceeding to push... - - - - - - - - - - 📝 PUSH-ALL - Stage, commit, and push with safety validation - - 📝 **Committing and Pushing Story Changes** - - Running push-all workflow with safety checks... - - - - Commit and push all story changes - - - - ✅ Changes pushed to remote successfully - - - - ⚠️ Push failed but story is complete locally - - You can push manually when ready. - - - - - - - - - - 🎉 **SUPER-DEV STORY COMPLETE, {user_name}!** - - **Quality Gates Passed:** - ✅ Pre-dev gap analysis - Tasks validated before work - ✅ Development - All tasks completed with TDD - ✅ Post-dev gap analysis - Implementation verified - ✅ Code review - Quality and security validated - ✅ Pushed to remote - Changes backed up - - **Story File:** {{story_file}} - **Status:** review (ready for human review) - - --- - - **What Super-Dev Validated:** - 1. 🔍 Tasks matched codebase reality before starting - 2. 💻 Implementation completed per requirements - 3. ✅ No false positive completions (all work verified) - 4. 👀 Code quality and security validated - 5. 📝 Changes committed and pushed to remote - - **Next Steps:** - - Review the completed story - - Verify business requirements met - - Merge when approved - - **Note:** This story went through enhanced quality validation. - It should require minimal human review. - - - Based on {user_skill_level}, ask if user needs explanations about implementation, decisions, or findings - - - Provide clear, contextual explanations - - - 💡 **Tip:** This story was developed with super-dev-story for enhanced quality. - - For faster development, use standard `dev-story` workflow. - For maximum quality, continue using `super-dev-story`. - - - - diff --git a/tools/build-docs.js b/tools/build-docs.js index 8946d70b..18e991de 100644 --- a/tools/build-docs.js +++ b/tools/build-docs.js @@ -166,7 +166,6 @@ function generateLlmsTxt(outputDir) { '', `- **[BMM - Method](${SITE_URL}/docs/bmm/quick-start)** - Core methodology module`, `- **[BMB - Builder](${SITE_URL}/docs/modules/bmb/)** - Agent and workflow builder`, - `- **[BMGD - Game Dev](${SITE_URL}/docs/modules/bmgd/quick-start)** - Game development module`, '', '---', '', diff --git a/tools/cli/installers/lib/modules/manager.js b/tools/cli/installers/lib/modules/manager.js index efbddcf0..60355087 100644 --- a/tools/cli/installers/lib/modules/manager.js +++ b/tools/cli/installers/lib/modules/manager.js @@ -1166,7 +1166,7 @@ class ModuleManager { // Parse INSTALL workflow path // Handle_bmad - // Example: {project-root}/_bmad/bmgd/workflows/4-production/create-story/workflow.yaml + // Example: {project-root}/_bmad/bmm/workflows/4-implementation/create-story/workflow.yaml const installMatch = installWorkflowPath.match(/\{project-root\}\/(_bmad)\/([^/]+)\/workflows\/(.+)/); if (!installMatch) { console.warn(chalk.yellow(` Could not parse workflow-install path: ${installWorkflowPath}`));
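
For readers tracing the installer change: the parse step splits an install-workflow path into the `_bmad` root, the module id, and the workflow-relative remainder. A quick illustration of what that regex captures, using the sample path from the comment above:

```js
const installWorkflowPath = '{project-root}/_bmad/bmm/workflows/4-implementation/create-story/workflow.yaml';
const installMatch = installWorkflowPath.match(/\{project-root\}\/(_bmad)\/([^/]+)\/workflows\/(.+)/);

console.log(installMatch[1]); // "_bmad"
console.log(installMatch[2]); // "bmm" (the module id)
console.log(installMatch[3]); // "4-implementation/create-story/workflow.yaml"
```
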