diff --git a/docs/technical-decisions-template.md b/docs/technical-decisions-template.md
new file mode 100644
index 00000000..5f813239
--- /dev/null
+++ b/docs/technical-decisions-template.md
@@ -0,0 +1,30 @@
+# Technical Decisions Log
+
+_Auto-updated during discovery and planning sessions - you can also add information here yourself_
+
+## Purpose
+
+This document captures technical decisions, preferences, and constraints discovered during project discussions. It serves as input for solution-architecture.md and solution design documents.
+
+## Confirmed Decisions
+
+
+
+## Preferences
+
+
+
+## Constraints
+
+
+
+## To Investigate
+
+
+
+## Notes
+
+- This file is automatically updated when technical information is mentioned
+- Decisions here are inputs, not final architecture
+- Final technical decisions belong in solution-architecture.md
+- Implementation details belong in `solutions/*.md` and in story context or dev notes.
diff --git a/src/modules/bmm/README.md b/src/modules/bmm/README.md
index 4e2a9ec8..566b0a7b 100644
--- a/src/modules/bmm/README.md
+++ b/src/modules/bmm/README.md
@@ -17,7 +17,7 @@ Specialized AI agents for different development roles:
- **Architect** - Technical architecture and design
- **SM** (Scrum Master) - Sprint and story management
- **DEV** (Developer) - Code implementation
-- **SR** (Senior Reviewer) - Code review and quality
+- **TEA** (Test Architect) - Test architecture and quality assurance
- **UX** - User experience design
- And more specialized roles
@@ -65,17 +65,9 @@ Test architecture and quality assurance components.
## Quick Start
```bash
-# Run a planning workflow
-bmad pm plan-project
-
-# Create a new story
-bmad sm create-story
-
-# Run development workflow
-bmad dev develop
-
-# Review implementation
-bmad sr review-story
+# Load the PM agent via slash command, drag and drop, or by @-mentioning the agent file.
+# Once loaded, the agent greets you and offers a menu of options. You can then enter:
+*plan-project
```
## Key Concepts
diff --git a/src/modules/bmm/agents/analyst.agent.yaml b/src/modules/bmm/agents/analyst.agent.yaml
index 54f38a56..4fec587d 100644
--- a/src/modules/bmm/agents/analyst.agent.yaml
+++ b/src/modules/bmm/agents/analyst.agent.yaml
@@ -26,6 +26,10 @@ agent:
workflow: "{project-root}/bmad/bmm/workflows/1-analysis/product-brief/workflow.yaml"
description: Produce Project Brief
+ - trigger: document-project
+ workflow: "{project-root}/bmad/bmm/workflows/1-analysis/document-project/workflow.yaml"
+ description: Generate comprehensive documentation of an existing project
+
- trigger: research
workflow: "{project-root}/bmad/bmm/workflows/1-analysis/research/workflow.yaml"
description: Guide me through Research
diff --git a/src/modules/bmm/agents/dev.agent.yaml b/src/modules/bmm/agents/dev.agent.yaml
index c48d5732..42f9becf 100644
--- a/src/modules/bmm/agents/dev.agent.yaml
+++ b/src/modules/bmm/agents/dev.agent.yaml
@@ -16,18 +16,19 @@ agent:
- I treat the Story Context XML as the single source of truth, trusting it over any training priors while refusing to invent solutions when information is missing.
- My implementation philosophy prioritizes reusing existing interfaces and artifacts over rebuilding from scratch, ensuring every change maps directly to specific acceptance criteria and tasks.
- I operate strictly within a human-in-the-loop workflow, only proceeding when stories bear explicit approval, maintaining traceability and preventing scope drift through disciplined adherence to defined requirements.
+ - I implement and execute tests ensuring complete coverage of all acceptance criteria; I never cheat or lie about test results, always run the tests without exception, and only declare a story complete when all tests pass 100%.
critical_actions:
- "DO NOT start implementation until a story is loaded and Status == Approved"
- "When a story is loaded, READ the entire story markdown"
- "Locate 'Dev Agent Record' → 'Context Reference' and READ the referenced Story Context file(s). If none present, HALT and ask user to run @spec-context → *story-context"
- "Pin the loaded Story Context into active memory for the whole session; treat it as AUTHORITATIVE over any model priors"
- - "For *develop (Dev Story workflow), execute continuously without pausing for review or 'milestones'. Only halt for explicit blocker conditions (e.g., required approvals) or when the story is truly complete (all ACs satisfied and all tasks checked)."
+ - "For *develop (Dev Story workflow), execute continuously without pausing for review or 'milestones'. Only halt for explicit blocker conditions (e.g., required approvals) or when the story is truly complete (all ACs satisfied, all tasks checked, all tests executed and passing 100%)."
menu:
- trigger: develop
workflow: "{project-root}/bmad/bmm/workflows/4-implementation/dev-story/workflow.yaml"
- description: Execute Dev Story workflow (implements tasks, tests, validates, updates story)
+ description: "Execute Dev Story workflow, implementing tasks and tests, or performing updates to the story"
- trigger: development-status
exec: "{project-root}/bmad/bmm/tasks/development-status.xml"
@@ -35,4 +36,4 @@ agent:
- trigger: review
workflow: "{project-root}/bmad/bmm/workflows/4-implementation/review-story/workflow.yaml"
- description: Perform Senior Developer Review on a story flagged Ready for Review (loads context/tech-spec, checks ACs/tests/architecture/security, appends review notes)
+ description: "Perform a thorough clean context review on a story flagged Ready for Review, and appends review notes to story file"
diff --git a/src/modules/bmm/workflows/1-analysis/document-project/README.md b/src/modules/bmm/workflows/1-analysis/document-project/README.md
new file mode 100644
index 00000000..0d76a2a1
--- /dev/null
+++ b/src/modules/bmm/workflows/1-analysis/document-project/README.md
@@ -0,0 +1,445 @@
+# Document Project Workflow
+
+**Version:** 1.2.0
+**Module:** BMM (BMAD Method Module)
+**Type:** Action Workflow (Documentation Generator)
+
+## Purpose
+
+Analyzes and documents brownfield projects by scanning the codebase, architecture, and patterns to create comprehensive reference documentation for AI-assisted development. Generates a master index and multiple documentation files tailored to project structure and type.
+
+**NEW in v1.2.0:** Context-safe architecture with scan levels, resumability, and write-as-you-go pattern to prevent context exhaustion.
+
+## Key Features
+
+- **Multi-Project Type Support**: Handles web, backend, mobile, CLI, game, embedded, data, infra, library, desktop, and extension projects
+- **Multi-Part Detection**: Automatically detects and documents projects with separate client/server or multiple services
+- **Three Scan Levels** (NEW v1.2.0): Quick (2-5 min), Deep (10-30 min), Exhaustive (30-120 min)
+- **Resumability** (NEW v1.2.0): Interrupt and resume workflows without losing progress
+- **Write-as-you-go** (NEW v1.2.0): Documents written immediately to prevent context exhaustion
+- **Intelligent Batching** (NEW v1.2.0): Subfolder-based processing for deep/exhaustive scans
+- **Data-Driven Analysis**: Uses CSV-based project type detection and documentation requirements
+- **Comprehensive Scanning**: Analyzes APIs, data models, UI components, configuration, security patterns, and more
+- **Architecture Matching**: Matches projects to 170+ architecture templates from the solutioning registry
+- **Brownfield PRD Ready**: Generates documentation specifically designed for AI agents planning new features
+
+## How to Invoke
+
+```bash
+workflow document-project
+```
+
+Or from BMAD CLI:
+
+```bash
+/bmad:bmm:workflows:document-project
+```
+
+## Scan Levels (NEW in v1.2.0)
+
+Choose the right scan depth for your needs:
+
+### 1. Quick Scan (Default)
+
+**Duration:** 2-5 minutes
+**What it does:** Pattern-based analysis without reading source files
+**Reads:** Config files, package manifests, directory structure, README
+**Use when:**
+
+- You need a fast project overview
+- Initial understanding of project structure
+- Planning next steps before deeper analysis
+
+**Does NOT read:** Source code files (`*.js`, `*.ts`, `*.py`, `*.go`, etc.)
+
+### 2. Deep Scan
+
+**Duration:** 10-30 minutes
+**What it does:** Reads files in critical directories based on project type
+**Reads:** Files in critical paths defined by documentation requirements
+**Use when:**
+
+- Creating comprehensive documentation for brownfield PRD
+- Need detailed analysis of key areas
+- Want balance between depth and speed
+
+**Example:** For a web app, reads controllers/, models/, components/, but not every utility file
+
+### 3. Exhaustive Scan
+
+**Duration:** 30-120 minutes
+**What it does:** Reads ALL source files in project
+**Reads:** Every source file (excludes node_modules, dist, build, .git)
+**Use when:**
+
+- Complete project analysis needed
+- Migration planning requires full understanding
+- Detailed audit of entire codebase
+- Deep technical debt assessment
+
+**Note:** Deep-dive mode ALWAYS uses exhaustive scan (no choice)
+
+## Resumability (NEW in v1.2.0)
+
+The workflow can be interrupted and resumed without losing progress:
+
+- **State Tracking:** Progress saved in `project-scan-report.json`
+- **Auto-Detection:** Workflow detects incomplete runs (<24 hours old)
+- **Resume Prompt:** Choose to resume or start fresh
+- **Step-by-Step:** Resume from exact step where interrupted
+- **Archiving:** Old state files automatically archived
+
+**Example Resume Flow:**
+
+```
+> workflow document-project
+
+I found an in-progress workflow state from 2025-10-11 14:32:15.
+
+Current Progress:
+- Mode: initial_scan
+- Scan Level: deep
+- Completed Steps: 5/12
+- Last Step: step_5
+
+Would you like to:
+1. Resume from where we left off - Continue from step 6
+2. Start fresh - Archive old state and begin new scan
+3. Cancel - Exit without changes
+
+Your choice [1/2/3]:
+```
+
+## What It Does
+
+### Step-by-Step Process
+
+1. **Detects Project Structure** - Identifies if project is single-part or multi-part (client/server/etc.)
+2. **Classifies Project Type** - Matches against 12 project types (web, backend, mobile, etc.)
+3. **Discovers Documentation** - Finds existing README, CONTRIBUTING, ARCHITECTURE files
+4. **Analyzes Tech Stack** - Parses package files, identifies frameworks, versions, dependencies
+5. **Conditional Scanning** - Performs targeted analysis based on project type requirements:
+ - API routes and endpoints
+ - Database models and schemas
+ - State management patterns
+ - UI component libraries
+ - Configuration and security
+ - CI/CD and deployment configs
+6. **Generates Source Tree** - Creates annotated directory structure with critical paths
+7. **Extracts Dev Instructions** - Documents setup, build, run, and test commands
+8. **Creates Architecture Docs** - Generates detailed architecture using matched templates
+9. **Builds Master Index** - Creates comprehensive index.md as primary AI retrieval source
+10. **Validates Output** - Runs a 140+ point checklist to ensure completeness
+
+### Output Files
+
+**Single-Part Projects:**
+
+- `index.md` - Master index
+- `project-overview.md` - Executive summary
+- `architecture.md` - Detailed architecture
+- `source-tree-analysis.md` - Annotated directory tree
+- `component-inventory.md` - Component catalog (if applicable)
+- `development-guide.md` - Local dev instructions
+- `api-contracts.md` - API documentation (if applicable)
+- `data-models.md` - Database schema (if applicable)
+- `deployment-guide.md` - Deployment process (optional)
+- `contribution-guide.md` - Contributing guidelines (optional)
+- `project-scan-report.json` - State file for resumability (NEW v1.2.0)
+
+**Multi-Part Projects (e.g., client + server):**
+
+- `index.md` - Master index with part navigation
+- `project-overview.md` - Multi-part summary
+- `architecture-{part_id}.md` - Per-part architecture docs
+- `source-tree-analysis.md` - Full tree with part annotations
+- `component-inventory-{part_id}.md` - Per-part components
+- `development-guide-{part_id}.md` - Per-part dev guides
+- `integration-architecture.md` - How parts communicate
+- `project-parts.json` - Machine-readable metadata
+- `project-scan-report.json` - State file for resumability (NEW v1.2.0)
+- Additional conditional files per part (API, data models, etc.)
+
+## Data Files
+
+The workflow uses three CSV files:
+
+1. **project-types.csv** - Project type detection and classification
+ - Location: `/bmad/bmm/workflows/3-solutioning/project-types/project-types.csv`
+ - 12 project types with detection keywords
+
+2. **registry.csv** - Architecture template matching
+ - Location: `/bmad/bmm/workflows/3-solutioning/templates/registry.csv`
+ - 170+ architecture patterns
+
+3. **documentation-requirements.csv** - Scanning requirements per project type
+ - Location: `/bmad/bmm/workflows/1-analysis/document-project/documentation-requirements.csv`
+ - 24 columns of analysis patterns and requirements
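+
+Because pattern fields within a row use `;` (not `,`) as their internal separator, each row splits cleanly on commas. A quick, illustrative way to inspect one project type's 24 columns:
+
+```bash
+# Print the 'web' row from documentation-requirements.csv, one column per line
+grep '^web,' documentation-requirements.csv | tr ',' '\n'
+```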
+
+## Use Cases
+
+### Primary Use Case: Brownfield PRD Creation
+
+After running this workflow, use the generated `index.md` as input to brownfield PRD workflows:
+
+```
+User: "I want to add a new dashboard feature"
+PRD Workflow: Loads docs/index.md
+→ Understands existing architecture
+→ Identifies reusable components
+→ Plans integration with existing APIs
+→ Creates contextual PRD with epics and stories
+```
+
+### Other Use Cases
+
+- **Onboarding New Developers** - Comprehensive project documentation
+- **Architecture Review** - Structured analysis of existing system
+- **Technical Debt Assessment** - Identify patterns and anti-patterns
+- **Migration Planning** - Understand current state before refactoring
+
+## Requirements
+
+### Recommended Inputs (Optional)
+
+- Project root directory (defaults to current directory)
+- README.md or similar docs (auto-discovered if present)
+- User guidance on key areas to focus (workflow will ask)
+
+### Tools Used
+
+- File system scanning (Glob, Read, Grep)
+- Code analysis
+- Git repository analysis (optional)
+
+## Configuration
+
+### Default Output Location
+
+Files are saved to: `{output_folder}` (from config.yaml)
+
+Default: `/docs/` folder in project root
+
+### Customization
+
+- Modify `documentation-requirements.csv` to adjust scanning patterns for project types (see the example row below)
+- Add new project types to `project-types.csv`
+- Add new architecture templates to `registry.csv`
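+
+For example, registering a hypothetical `ml-pipeline` project type would mean adding a detection row to `project-types.csv` plus a matching 24-column row here; the values below are purely illustrative, not part of the shipped CSVs:
+
+```csv
+ml-pipeline,false,true,false,false,true,requirements.txt;pyproject.toml;Pipfile,notebooks/;models/;training/;data/,N/A,test_*.py;*_test.py,.env*;config/*,N/A,migrations/**;schemas/**,train.py;main.py;__init__.py,shared/**;common/**;utils/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml,N/A,N/A,*.proto;*.avro;schemas/**,N/A,false,false
+```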
+
+## Example: Multi-Part Web App
+
+**Input:**
+
+```
+my-app/
+├── client/   # React frontend
+├── server/   # Express backend
+└── README.md
+```
+
+**Detection Result:**
+
+- Repository Type: Monorepo
+- Part 1: client (web/React)
+- Part 2: server (backend/Express)
+
+**Output (10+ files):**
+
+```
+docs/
+├── index.md
+├── project-overview.md
+├── architecture-client.md
+├── architecture-server.md
+├── source-tree-analysis.md
+├── component-inventory-client.md
+├── development-guide-client.md
+├── development-guide-server.md
+├── api-contracts-server.md
+├── data-models-server.md
+├── integration-architecture.md
+└── project-parts.json
+```
+
+## Example: Simple CLI Tool
+
+**Input:**
+
+```
+hello-cli/
+├── main.go
+├── go.mod
+└── README.md
+```
+
+**Detection Result:**
+
+- Repository Type: Monolith
+- Part 1: main (cli/Go)
+
+**Output (4 files):**
+
+```
+docs/
+├── index.md
+├── project-overview.md
+├── architecture.md
+└── source-tree-analysis.md
+```
+
+## Deep-Dive Mode
+
+### What is Deep-Dive Mode?
+
+When you run the workflow on a project that already has documentation, you'll be offered a choice:
+
+1. **Rescan entire project** - Update all documentation with latest changes
+2. **Deep-dive into specific area** - Generate EXHAUSTIVE documentation for a particular feature/module/folder
+3. **Cancel** - Keep existing documentation
+
+Deep-dive mode performs **comprehensive, file-by-file analysis** of a specific area, reading EVERY file completely and documenting:
+
+- All exports with complete signatures
+- All imports and dependencies
+- Dependency graphs and data flow
+- Code patterns and implementations
+- Testing coverage and strategies
+- Integration points
+- Reuse opportunities
+
+### When to Use Deep-Dive Mode
+
+- **Before implementing a feature** - Deep-dive the area you'll be modifying
+- **During architecture review** - Deep-dive complex modules
+- **For code understanding** - Deep-dive unfamiliar parts of codebase
+- **When creating PRDs** - Deep-dive areas affected by new features
+
+### Deep-Dive Process
+
+1. Workflow detects existing `index.md`
+2. Offers deep-dive option
+3. Suggests areas based on project structure:
+ - API route groups
+ - Feature modules
+ - UI component areas
+ - Services/business logic
+4. You select area or specify custom path
+5. Workflow reads EVERY file in that area
+6. Generates `deep-dive-{area-name}.md` with complete analysis
+7. Updates `index.md` with link to deep-dive doc
+8. Offers to deep-dive another area or finish
+
+### Deep-Dive Output Example
+
+**docs/deep-dive-dashboard-feature.md:**
+
+- Complete file inventory (47 files analyzed)
+- Every export with signatures
+- Dependency graph
+- Data flow analysis
+- Integration points
+- Testing coverage
+- Related code references
+- Implementation guidance
+- ~3,000 LOC documented in detail
+
+### Incremental Deep-Diving
+
+You can deep-dive multiple areas over time:
+
+- First run: Scan entire project → generates index.md
+- Second run: Deep-dive dashboard feature
+- Third run: Deep-dive API layer
+- Fourth run: Deep-dive authentication system
+
+All deep-dive docs are linked from the master index.
+
+## Validation
+
+The workflow includes a comprehensive 140+ point checklist covering:
+
+- Project detection accuracy
+- Technology stack completeness
+- Codebase scanning thoroughness
+- Architecture documentation quality
+- Multi-part handling (if applicable)
+- Brownfield PRD readiness
+- Deep-dive completeness (if applicable)
+
+## Next Steps After Completion
+
+1. **Review** `docs/index.md` - Your master documentation index
+2. **Validate** - Check generated docs for accuracy
+3. **Use for PRD** - Point brownfield PRD workflow to index.md
+4. **Maintain** - Re-run workflow when architecture changes significantly
+
+## File Structure
+
+```
+document-project/
+├── workflow.yaml                    # Workflow configuration
+├── instructions.md                  # Step-by-step workflow logic
+├── checklist.md                     # Validation criteria
+├── documentation-requirements.csv   # Project type scanning patterns
+├── templates/                       # Output templates
+│   ├── index-template.md
+│   ├── project-overview-template.md
+│   └── source-tree-template.md
+└── README.md                        # This file
+```
+
+## Troubleshooting
+
+**Issue: Project type not detected correctly**
+
+- Solution: Workflow will ask for confirmation; manually select correct type
+
+**Issue: Missing critical information**
+
+- Solution: Provide additional context when prompted; re-run specific analysis steps
+
+**Issue: Multi-part detection missed a part**
+
+- Solution: When asked to confirm parts, specify the missing part and its path
+
+**Issue: Architecture template doesn't match well**
+
+- Solution: Check registry.csv; may need to add new template or adjust matching criteria
+
+## Architecture Improvements in v1.2.0
+
+### Context-Safe Design
+
+The workflow now uses a write-as-you-go architecture:
+
+- Documents written immediately to disk (not accumulated in memory)
+- Detailed findings purged after writing (only summaries kept)
+- State tracking enables resumption from any step
+- Batching strategy prevents context exhaustion on large projects
+
+### Batching Strategy
+
+For deep/exhaustive scans:
+
+- Process ONE subfolder at a time
+- Read files → Extract info → Write output → Validate → Purge context
+- Primary concern is file SIZE (not count)
+- Track batches in state file for resumability
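+
+Based on the `batches_completed` array called out in the validation checklist, the state file might track batch progress like this (the exact field shape is an assumption; entries follow the one-subfolder-per-batch rule):
+
+```json
+{"scan_level": "deep", "batches_completed": ["src/components", "src/pages"]}
+```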
+
+### State File Format
+
+Optimized JSON (no pretty-printing):
+
+```json
+{
+ "workflow_version": "1.2.0",
+ "timestamps": {...},
+ "mode": "initial_scan",
+ "scan_level": "deep",
+ "completed_steps": [...],
+ "current_step": "step_6",
+ "findings": {"summary": "only"},
+ "outputs_generated": [...],
+ "resume_instructions": "..."
+}
+```
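+
+For illustration, a deep scan interrupted mid-run might persist a state file like this (pretty-printed here for readability; the real file is minified, and every value below is an example only):
+
+```json
+{
+  "workflow_version": "1.2.0",
+  "timestamps": {
+    "started": "2025-10-11T14:02:03Z",
+    "last_updated": "2025-10-11T14:32:15Z"
+  },
+  "mode": "initial_scan",
+  "scan_level": "deep",
+  "completed_steps": ["step_1", "step_2", "step_3", "step_4", "step_5"],
+  "current_step": "step_6",
+  "findings": {"summary": "Monorepo with a React client and an Express server"},
+  "outputs_generated": ["project-overview.md", "development-guide.md"],
+  "resume_instructions": "Resume at step_6 and continue the deep scan"
+}
+```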
diff --git a/src/modules/bmm/workflows/1-analysis/document-project/checklist.md b/src/modules/bmm/workflows/1-analysis/document-project/checklist.md
new file mode 100644
index 00000000..7515cfe7
--- /dev/null
+++ b/src/modules/bmm/workflows/1-analysis/document-project/checklist.md
@@ -0,0 +1,245 @@
+# Document Project Workflow - Validation Checklist
+
+## Scan Level and Resumability (v1.2.0)
+
+- [ ] Scan level selection offered (quick/deep/exhaustive) for initial_scan and full_rescan modes
+- [ ] Deep-dive mode automatically uses exhaustive scan (no choice given)
+- [ ] Quick scan does NOT read source files (only patterns, configs, manifests)
+- [ ] Deep scan reads files in critical directories per project type
+- [ ] Exhaustive scan reads ALL source files (excluding node_modules, dist, build)
+- [ ] State file (project-scan-report.json) created at workflow start
+- [ ] State file updated after each step completion
+- [ ] State file contains all required fields per schema
+- [ ] Resumability prompt shown if state file exists and is <24 hours old
+- [ ] Old state files (>24 hours) automatically archived
+- [ ] Resume functionality loads previous state correctly
+- [ ] Workflow can jump to correct step when resuming
+
+## Write-as-you-go Architecture
+
+- [ ] Each document written to disk IMMEDIATELY after generation
+- [ ] Document validation performed right after writing (section-level)
+- [ ] State file updated after each document is written
+- [ ] Detailed findings purged from context after writing (only summaries kept)
+- [ ] Context contains only high-level summaries (1-2 sentences per section)
+- [ ] No accumulation of full project analysis in memory
+
+## Batching Strategy (Deep/Exhaustive Scans)
+
+- [ ] Batching applied for deep and exhaustive scan levels
+- [ ] Batches organized by SUBFOLDER (not arbitrary file count)
+- [ ] Large files (>5000 LOC) handled with appropriate judgment
+- [ ] Each batch: read files, extract info, write output, validate, purge context
+- [ ] Batch completion tracked in state file (batches_completed array)
+- [ ] Batch summaries kept in context (1-2 sentences max)
+
+## Project Detection and Classification
+
+- [ ] Project type correctly identified and matches actual technology stack
+- [ ] Multi-part vs single-part structure accurately detected
+- [ ] All project parts identified if multi-part (no missing client/server/etc.)
+- [ ] Documentation requirements loaded for each part type
+- [ ] Architecture registry match is appropriate for detected stack
+
+## Technology Stack Analysis
+
+- [ ] All major technologies identified (framework, language, database, etc.)
+- [ ] Versions captured where available
+- [ ] Technology decision table is complete and accurate
+- [ ] Dependencies and libraries documented
+- [ ] Build tools and package managers identified
+
+## Codebase Scanning Completeness
+
+- [ ] All critical directories scanned based on project type
+- [ ] API endpoints documented (if requires_api_scan = true)
+- [ ] Data models captured (if requires_data_models = true)
+- [ ] State management patterns identified (if requires_state_management = true)
+- [ ] UI components inventoried (if requires_ui_components = true)
+- [ ] Configuration files located and documented
+- [ ] Authentication/security patterns identified
+- [ ] Entry points correctly identified
+- [ ] Integration points mapped (for multi-part projects)
+- [ ] Test files and patterns documented
+
+## Source Tree Analysis
+
+- [ ] Complete directory tree generated with no major omissions
+- [ ] Critical folders highlighted and described
+- [ ] Entry points clearly marked
+- [ ] Integration paths noted (for multi-part)
+- [ ] Asset locations identified (if applicable)
+- [ ] File organization patterns explained
+
+## Architecture Documentation Quality
+
+- [ ] Architecture document uses appropriate template from registry
+- [ ] All template sections filled with relevant information (no placeholders)
+- [ ] Technology stack section is comprehensive
+- [ ] Architecture pattern clearly explained
+- [ ] Data architecture documented (if applicable)
+- [ ] API design documented (if applicable)
+- [ ] Component structure explained (if applicable)
+- [ ] Source tree included and annotated
+- [ ] Testing strategy documented
+- [ ] Deployment architecture captured (if config found)
+
+## Development and Operations Documentation
+
+- [ ] Prerequisites clearly listed
+- [ ] Installation steps documented
+- [ ] Environment setup instructions provided
+- [ ] Local run commands specified
+- [ ] Build process documented
+- [ ] Test commands and approach explained
+- [ ] Deployment process documented (if applicable)
+- [ ] CI/CD pipeline details captured (if found)
+- [ ] Contribution guidelines extracted (if found)
+
+## Multi-Part Project Specific (if applicable)
+
+- [ ] Each part documented separately
+- [ ] Part-specific architecture files created (architecture-{part_id}.md)
+- [ ] Part-specific component inventories created (if applicable)
+- [ ] Part-specific development guides created
+- [ ] Integration architecture document created
+- [ ] Integration points clearly defined with type and details
+- [ ] Data flow between parts explained
+- [ ] project-parts.json metadata file created
+
+## Index and Navigation
+
+- [ ] index.md created as master entry point
+- [ ] Project structure clearly summarized in index
+- [ ] Quick reference section complete and accurate
+- [ ] All generated docs linked from index
+- [ ] All existing docs linked from index (if found)
+- [ ] Getting started section provides clear next steps
+- [ ] AI-assisted development guidance included
+- [ ] Navigation structure matches project complexity (simple for single-part, detailed for multi-part)
+
+## File Completeness
+
+- [ ] index.md generated
+- [ ] project-overview.md generated
+- [ ] source-tree-analysis.md generated
+- [ ] architecture.md (or per-part) generated
+- [ ] component-inventory.md (or per-part) generated if UI components exist
+- [ ] development-guide.md (or per-part) generated
+- [ ] api-contracts.md (or per-part) generated if APIs documented
+- [ ] data-models.md (or per-part) generated if data models found
+- [ ] deployment-guide.md generated if deployment config found
+- [ ] contribution-guide.md generated if guidelines found
+- [ ] integration-architecture.md generated if multi-part
+- [ ] project-parts.json generated if multi-part
+
+## Content Quality
+
+- [ ] Technical information is accurate and specific
+- [ ] No generic placeholders or "TODO" items remain
+- [ ] Examples and code snippets are relevant to actual project
+- [ ] File paths and directory references are correct
+- [ ] Technology names and versions are accurate
+- [ ] Terminology is consistent across all documents
+- [ ] Descriptions are clear and actionable
+
+## Brownfield PRD Readiness
+
+- [ ] Documentation provides enough context for AI to understand existing system
+- [ ] Integration points are clear for planning new features
+- [ ] Reusable components are identified for leveraging in new work
+- [ ] Data models are documented for schema extension planning
+- [ ] API contracts are documented for endpoint expansion
+- [ ] Code conventions and patterns are captured for consistency
+- [ ] Architecture constraints are clear for informed decision-making
+
+## Output Validation
+
+- [ ] All files saved to correct output folder
+- [ ] File naming follows convention (no part suffix for single-part, with suffix for multi-part)
+- [ ] No broken internal links between documents
+- [ ] Markdown formatting is correct and renders properly
+- [ ] JSON files are valid (project-parts.json if applicable)
+
+## Final Validation
+
+- [ ] User confirmed project classification is accurate
+- [ ] User provided any additional context needed
+- [ ] All requested areas of focus addressed
+- [ ] Documentation is immediately usable for brownfield PRD workflow
+- [ ] No critical information gaps identified
+
+## Issues Found
+
+### Critical Issues (must fix before completion)
+
+-
+
+### Minor Issues (can be addressed later)
+
+-
+
+### Missing Information (to note for user)
+
+-
+
+---
+
+## Deep-Dive Mode Validation (if deep-dive was performed)
+
+- [ ] Deep-dive target area correctly identified and scoped
+- [ ] All files in target area read completely (no skipped files)
+- [ ] File inventory includes all exports with complete signatures
+- [ ] Dependencies mapped for all files
+- [ ] Dependents identified (who imports each file)
+- [ ] Code snippets included for key implementation details
+- [ ] Patterns and design approaches documented
+- [ ] State management strategy explained
+- [ ] Side effects documented (API calls, DB queries, etc.)
+- [ ] Error handling approaches captured
+- [ ] Testing files and coverage documented
+- [ ] TODOs and comments extracted
+- [ ] Dependency graph created showing relationships
+- [ ] Data flow traced through the scanned area
+- [ ] Integration points with rest of codebase identified
+- [ ] Related code and similar patterns found outside scanned area
+- [ ] Reuse opportunities documented
+- [ ] Implementation guidance provided
+- [ ] Modification instructions clear
+- [ ] Index.md updated with deep-dive link
+- [ ] Deep-dive documentation is immediately useful for implementation
+
+---
+
+## State File Quality
+
+- [ ] State file is valid JSON (no syntax errors)
+- [ ] State file is optimized (no pretty-printing, minimal whitespace)
+- [ ] State file contains all completed steps with timestamps
+- [ ] State file outputs_generated list is accurate and complete
+- [ ] State file resume_instructions are clear and actionable
+- [ ] State file findings contain only high-level summaries (not detailed data)
+- [ ] State file can be successfully loaded for resumption
+
+## Completion Criteria
+
+All items in the following sections must be checked:
+
+- ✓ Scan Level and Resumability (v1.2.0)
+- ✓ Write-as-you-go Architecture
+- ✓ Batching Strategy (if deep/exhaustive scan)
+- ✓ Project Detection and Classification
+- ✓ Technology Stack Analysis
+- ✓ Architecture Documentation Quality
+- ✓ Index and Navigation
+- ✓ File Completeness
+- ✓ Brownfield PRD Readiness
+- ✓ State File Quality
+- ✓ Deep-Dive Mode Validation (if applicable)
+
+The workflow is complete when:
+
+1. All critical checklist items are satisfied
+2. No critical issues remain
+3. User has reviewed and approved the documentation
+4. Generated docs are ready for use in brownfield PRD workflow
+5. Deep-dive docs (if any) are comprehensive and implementation-ready
+6. State file is valid and can enable resumption if interrupted
diff --git a/src/modules/bmm/workflows/1-analysis/document-project/documentation-requirements.csv b/src/modules/bmm/workflows/1-analysis/document-project/documentation-requirements.csv
new file mode 100644
index 00000000..9f773ab0
--- /dev/null
+++ b/src/modules/bmm/workflows/1-analysis/document-project/documentation-requirements.csv
@@ -0,0 +1,12 @@
+project_type_id,requires_api_scan,requires_data_models,requires_state_management,requires_ui_components,requires_deployment_config,key_file_patterns,critical_directories,integration_scan_patterns,test_file_patterns,config_patterns,auth_security_patterns,schema_migration_patterns,entry_point_patterns,shared_code_patterns,monorepo_workspace_patterns,async_event_patterns,ci_cd_patterns,asset_patterns,hardware_interface_patterns,protocol_schema_patterns,localization_patterns,requires_hardware_docs,requires_asset_inventory
+web,true,true,true,true,true,package.json;tsconfig.json;*.config.js;*.config.ts;vite.config.*;webpack.config.*;next.config.*;nuxt.config.*,src/;app/;pages/;components/;api/;lib/;styles/;public/;static/,*client.ts;*service.ts;*api.ts;fetch*.ts;axios*.ts;*http*.ts,*.test.ts;*.spec.ts;*.test.tsx;*.spec.tsx;**/__tests__/**;**/*.test.*;**/*.spec.*,.env*;config/*;*.config.*;.config/;settings/,*auth*.ts;*session*.ts;middleware/auth*;*.guard.ts;*authenticat*;*permission*;guards/,migrations/**;prisma/**;*.prisma;alembic/**;knex/**;*migration*.sql;*migration*.ts,main.ts;index.ts;app.ts;server.ts;_app.tsx;_app.ts;layout.tsx,shared/**;common/**;utils/**;lib/**;helpers/**;@*/**;packages/**,pnpm-workspace.yaml;lerna.json;nx.json;turbo.json;workspace.json;rush.json,*event*.ts;*queue*.ts;*subscriber*.ts;*consumer*.ts;*producer*.ts;*worker*.ts;jobs/**,.github/workflows/**;.gitlab-ci.yml;Jenkinsfile;.circleci/**;azure-pipelines.yml;bitbucket-pipelines.yml;.drone.yml,public/**;static/**;assets/**;images/**;media/**,N/A,*.proto;*.graphql;graphql/**;schema.graphql;*.avro;openapi.*;swagger.*,i18n/**;locales/**;lang/**;translations/**;messages/**;*.po;*.pot,false,false
+mobile,true,true,true,true,true,package.json;pubspec.yaml;Podfile;build.gradle;app.json;capacitor.config.*;ionic.config.json,src/;app/;screens/;components/;services/;models/;assets/;ios/;android/,*client.ts;*service.ts;*api.ts;fetch*.ts;axios*.ts;*http*.ts,*.test.ts;*.test.tsx;*_test.dart;*.test.dart;**/__tests__/**,.env*;config/*;app.json;capacitor.config.*;google-services.json;GoogleService-Info.plist,*auth*.ts;*session*.ts;*authenticat*;*permission*;*biometric*;secure-store*,migrations/**;realm/**;*.realm;watermelondb/**;sqlite/**,main.ts;index.ts;App.tsx;App.ts;main.dart,shared/**;common/**;utils/**;lib/**;components/shared/**;@*/**,pnpm-workspace.yaml;lerna.json;nx.json;turbo.json,*event*.ts;*notification*.ts;*push*.ts;background-fetch*,fastlane/**;.github/workflows/**;.gitlab-ci.yml;bitbucket-pipelines.yml;appcenter-*,assets/**;Resources/**;res/**;*.xcassets;drawable*/;mipmap*/;images/**,N/A,*.proto;graphql/**;*.graphql,i18n/**;locales/**;translations/**;*.strings;*.xml,false,true
+backend,true,true,false,false,true,package.json;requirements.txt;go.mod;Gemfile;pom.xml;build.gradle;Cargo.toml;*.csproj,src/;api/;services/;models/;routes/;controllers/;middleware/;handlers/;repositories/;domain/,*client.ts;*repository.ts;*service.ts;*connector*.ts;*adapter*.ts,*.test.ts;*.spec.ts;*_test.go;test_*.py;*Test.java;*_test.rs,.env*;config/*;*.config.*;application*.yml;application*.yaml;appsettings*.json;settings.py,*auth*.ts;*session*.ts;*authenticat*;*authorization*;middleware/auth*;guards/;*jwt*;*oauth*,migrations/**;alembic/**;flyway/**;liquibase/**;prisma/**;*.prisma;*migration*.sql;*migration*.ts;db/migrate,main.ts;index.ts;server.ts;app.ts;main.go;main.py;Program.cs;__init__.py,shared/**;common/**;utils/**;lib/**;core/**;@*/**;pkg/**,pnpm-workspace.yaml;lerna.json;nx.json;go.work,*event*.ts;*queue*.ts;*subscriber*.ts;*consumer*.ts;*producer*.ts;*worker*.ts;*handler*.ts;jobs/**;workers/**,.github/workflows/**;.gitlab-ci.yml;Jenkinsfile;.circleci/**;azure-pipelines.yml;.drone.yml,N/A,N/A,*.proto;*.graphql;graphql/**;*.avro;*.thrift;openapi.*;swagger.*;schema/**,N/A,false,false
+cli,false,false,false,false,false,package.json;go.mod;Cargo.toml;setup.py;pyproject.toml;*.gemspec,src/;cmd/;cli/;bin/;lib/;commands/,N/A,*.test.ts;*_test.go;test_*.py;*.spec.ts;*_spec.rb,.env*;config/*;*.config.*;.*.rc;.*rc,N/A,N/A,main.ts;index.ts;cli.ts;main.go;main.py;__main__.py;bin/*,shared/**;common/**;utils/**;lib/**;helpers/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml;goreleaser.yml,N/A,N/A,N/A,N/A,false,false
+library,false,false,false,false,false,package.json;setup.py;Cargo.toml;go.mod;*.gemspec;*.csproj;pom.xml,src/;lib/;dist/;pkg/;build/;target/,N/A,*.test.ts;*_test.go;test_*.py;*.spec.ts;*Test.java;*_test.rs,.*.rc;tsconfig.json;rollup.config.*;vite.config.*;webpack.config.*,N/A,N/A,index.ts;index.js;lib.rs;main.go;__init__.py,src/**;lib/**;core/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml;.circleci/**,N/A,N/A,N/A,N/A,false,false
+desktop,false,false,true,true,true,package.json;Cargo.toml;*.csproj;CMakeLists.txt;tauri.conf.json;electron-builder.yml;wails.json,src/;app/;components/;main/;renderer/;resources/;assets/;build/,*service.ts;ipc*.ts;*bridge*.ts;*native*.ts;invoke*,*.test.ts;*.spec.ts;*_test.rs;*.spec.tsx,.env*;config/*;*.config.*;app.config.*;forge.config.*;builder.config.*,*auth*.ts;*session*.ts;keychain*;secure-storage*,N/A,main.ts;index.ts;main.js;src-tauri/main.rs;electron.ts,shared/**;common/**;utils/**;lib/**;components/shared/**,N/A,*event*.ts;*ipc*.ts;*message*.ts,.github/workflows/**;.gitlab-ci.yml;.circleci/**,resources/**;assets/**;icons/**;static/**;build/resources,N/A,N/A,i18n/**;locales/**;translations/**;lang/**,false,true
+game,false,false,true,false,false,*.unity;*.godot;*.uproject;package.json;project.godot,Assets/;Scenes/;Scripts/;Prefabs/;Resources/;Content/;Source/;src/;scenes/;scripts/,N/A,*Test.cs;*_test.gd;*Test.cpp;*.test.ts,.env*;config/*;*.ini;settings/;GameSettings/,N/A,N/A,main.gd;Main.cs;GameManager.cs;main.cpp;index.ts,shared/**;common/**;utils/**;Core/**;Framework/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml,Assets/**;Scenes/**;Prefabs/**;Materials/**;Textures/**;Audio/**;Models/**;*.fbx;*.blend;*.shader;*.hlsl;*.glsl;Shaders/**;VFX/**,N/A,N/A,Localization/**;Languages/**;i18n/**,false,true
+data,false,true,false,false,true,requirements.txt;pyproject.toml;dbt_project.yml;airflow.cfg;setup.py;Pipfile,dags/;pipelines/;models/;transformations/;notebooks/;sql/;etl/;jobs/,N/A,test_*.py;*_test.py;tests/**,.env*;config/*;profiles.yml;dbt_project.yml;airflow.cfg,N/A,migrations/**;dbt/models/**;*.sql;schemas/**,main.py;__init__.py;pipeline.py;dag.py,shared/**;common/**;utils/**;lib/**;helpers/**,N/A,*event*.py;*consumer*.py;*producer*.py;*worker*.py;jobs/**;tasks/**,.github/workflows/**;.gitlab-ci.yml;airflow/dags/**,N/A,N/A,*.proto;*.avro;schemas/**;*.parquet,N/A,false,false
+extension,true,false,true,true,false,manifest.json;package.json;wxt.config.ts,src/;popup/;content/;background/;assets/;components/,*message.ts;*runtime.ts;*storage.ts;*tabs.ts,*.test.ts;*.spec.ts;*.test.tsx,.env*;wxt.config.*;webpack.config.*;vite.config.*,*auth*.ts;*session*.ts;*permission*,N/A,index.ts;popup.ts;background.ts;content.ts,shared/**;common/**;utils/**;lib/**,N/A,*message*.ts;*event*.ts;chrome.runtime*;browser.runtime*,.github/workflows/**,assets/**;icons/**;images/**;static/**,N/A,N/A,_locales/**;locales/**;i18n/**,false,false
+infra,false,false,false,false,true,*.tf;*.tfvars;pulumi.yaml;cdk.json;*.yml;*.yaml;Dockerfile;docker-compose*.yml,terraform/;modules/;k8s/;charts/;playbooks/;roles/;policies/;stacks/,N/A,*_test.go;test_*.py;*_test.tf;*_spec.rb,.env*;*.tfvars;config/*;vars/;group_vars/;host_vars/,N/A,N/A,main.tf;index.ts;__main__.py;playbook.yml,modules/**;shared/**;common/**;lib/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml;.circleci/**,N/A,N/A,N/A,N/A,false,false
+embedded,false,false,false,false,false,platformio.ini;CMakeLists.txt;*.ino;Makefile;*.ioc;mbed-os.lib,src/;lib/;include/;firmware/;drivers/;hal/;bsp/;components/,N/A,test_*.c;*_test.cpp;*_test.c;tests/**,.env*;config/*;sdkconfig;*.json;settings/,N/A,N/A,main.c;main.cpp;main.ino;app_main.c,lib/**;shared/**;common/**;drivers/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml,N/A,*.h;*.hpp;drivers/**;hal/**;bsp/**;pinout.*;peripheral*;gpio*;*.fzz;schematics/**,*.proto;mqtt*;coap*;modbus*,N/A,true,false
diff --git a/src/modules/bmm/workflows/1-analysis/document-project/instructions.md b/src/modules/bmm/workflows/1-analysis/document-project/instructions.md
new file mode 100644
index 00000000..af9b8a50
--- /dev/null
+++ b/src/modules/bmm/workflows/1-analysis/document-project/instructions.md
@@ -0,0 +1,128 @@
+# Document Project Workflow Router
+
+<workflow>
+
+<critical>The workflow execution engine is governed by: {project-root}/bmad/core/tasks/workflow.xml</critical>
+<critical>You MUST have already loaded and processed: {project-root}/bmad/bmm/workflows/1-analysis/document-project/workflow.yaml</critical>
+<critical>This router determines workflow mode and delegates to specialized sub-workflows</critical>
+
+<critical>SMART LOADING STRATEGY: Check state file FIRST before loading any CSV files</critical>
+
+<step n="0" goal="Check for resumable state">
+<action>Check for existing state file at: {output_folder}/project-scan-report.json</action>
+
+<check if="state file exists AND is less than 24 hours old">
+  <action>Read state file and extract: timestamps, mode, scan_level, current_step, completed_steps, project_classification</action>
+  <action>Extract cached project_type_id(s) from state file if present</action>
+  <action>Calculate age of state file (current time - last_updated)</action>
+
+  <ask>I found an in-progress workflow state from {{last_updated}}.
+
+**Current Progress:**
+
+- Mode: {{mode}}
+- Scan Level: {{scan_level}}
+- Completed Steps: {{completed_steps_count}}/{{total_steps}}
+- Last Step: {{current_step}}
+- Project Type(s): {{cached_project_types}}
+
+Would you like to:
+
+1. **Resume from where we left off** - Continue from step {{current_step}}
+2. **Start fresh** - Archive old state and begin new scan
+3. **Cancel** - Exit without changes
+
+Your choice [1/2/3]:</ask>
+
+  <check if="user chooses 1 (resume)">
+    <action>Set resume_mode = true</action>
+    <action>Set workflow_mode = {{mode}}</action>
+    <action>Load findings summaries from state file</action>
+    <action>Load cached project_type_id(s) from state file</action>
+    <critical>CONDITIONAL CSV LOADING FOR RESUME:</critical>
+    <action>For each cached project_type_id, load ONLY the corresponding row from: {documentation_requirements_csv}</action>
+    <action>Skip loading project-types.csv and architecture_registry.csv (not needed on resume)</action>
+    <action>Store loaded doc requirements for use in remaining steps</action>
+    <action>Display: "Resuming {{workflow_mode}} from {{current_step}} with cached project type(s): {{cached_project_types}}"</action>
+
+    <check if="workflow_mode is deep_dive">
+      <action>Load and execute: {installed_path}/workflows/deep-dive-instructions.md with resume context</action>
+    </check>
+
+    <check if="workflow_mode is initial_scan or full_rescan">
+      <action>Load and execute: {installed_path}/workflows/full-scan-instructions.md with resume context</action>
+    </check>
+  </check>
+
+  <check if="user chooses 2 (start fresh)">
+    <action>Create archive directory: {output_folder}/.archive/</action>
+    <action>Move old state file to: {output_folder}/.archive/project-scan-report-{{timestamp}}.json</action>
+    <action>Set resume_mode = false</action>
+    <action>Continue to Step 0.5</action>
+  </check>
+
+  <check if="user chooses 3 (cancel)">
+    <action>Display: "Exiting workflow without changes."</action>
+    <action>Exit workflow</action>
+  </check>
+</check>
+
+<check if="state file exists AND is more than 24 hours old">
+  <action>Display: "Found old state file (>24 hours). Starting fresh scan."</action>
+  <action>Archive old state file to: {output_folder}/.archive/project-scan-report-{{timestamp}}.json</action>
+  <action>Set resume_mode = false</action>
+  <action>Continue to Step 0.5</action>
+</check>
+</step>
+
+<step n="0.5" goal="Determine workflow mode">
+<action>Check if {output_folder}/index.md exists</action>
+
+<check if="index.md exists">
+  <action>Read existing index.md to extract metadata (date, project structure, parts count)</action>
+  <action>Store as {{existing_doc_date}}, {{existing_structure}}</action>
+
+  <ask>I found existing documentation generated on {{existing_doc_date}}.
+
+What would you like to do?
+
+1. **Re-scan entire project** - Update all documentation with latest changes
+2. **Deep-dive into specific area** - Generate detailed documentation for a particular feature/module/folder
+3. **Cancel** - Keep existing documentation as-is
+
+Your choice [1/2/3]:</ask>
+
+  <check if="user chooses 1 (re-scan)">
+    <action>Set workflow_mode = "full_rescan"</action>
+    <action>Display: "Starting full project rescan..."</action>
+    <action>Load and execute: {installed_path}/workflows/full-scan-instructions.md</action>
+  </check>
+
+  <check if="user chooses 2 (deep-dive)">
+    <action>Set workflow_mode = "deep_dive"</action>
+    <action>Set scan_level = "exhaustive"</action>
+    <action>Display: "Starting deep-dive documentation mode..."</action>
+    <action>Load and execute: {installed_path}/workflows/deep-dive-instructions.md</action>
+  </check>
+
+  <check if="user chooses 3 (cancel)">
+    <action>Display message: "Keeping existing documentation. Exiting workflow."</action>
+    <action>Exit workflow</action>
+  </check>
+</check>
+
+<check if="index.md does not exist">
+  <action>Set workflow_mode = "initial_scan"</action>
+  <action>Display: "No existing documentation found. Starting initial project scan..."</action>
+  <action>Load and execute: {installed_path}/workflows/full-scan-instructions.md</action>
+</check>
+</step>
+
+</workflow>
diff --git a/src/modules/bmm/workflows/1-analysis/document-project/templates/README.md b/src/modules/bmm/workflows/1-analysis/document-project/templates/README.md
new file mode 100644
index 00000000..dc55fcf4
--- /dev/null
+++ b/src/modules/bmm/workflows/1-analysis/document-project/templates/README.md
@@ -0,0 +1,38 @@
+# Document Project Workflow Templates
+
+This directory contains template files for the `document-project` workflow.
+
+## Template Files
+
+- **index-template.md** - Master index template (adapts for single/multi-part projects)
+- **project-overview-template.md** - Executive summary and high-level overview
+- **source-tree-template.md** - Annotated directory structure
+
+## Template Usage
+
+The workflow dynamically selects and populates templates based on:
+
+1. **Project structure** (single part vs multi-part)
+2. **Project type** (web, backend, mobile, etc.)
+3. **Documentation requirements** (from documentation-requirements.csv)
+
+## Variable Naming Convention
+
+Templates use Handlebars-style variables, combined in the short example after this list:
+
+- `{{variable_name}}` - Simple substitution
+- `{{#if condition}}...{{/if}}` - Conditional blocks
+- `{{#each collection}}...{{/each}}` - Iteration
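+
+A minimal fragment combining all three, using variables that appear in `index-template.md`:
+
+```
+{{#if is_multi_part}}
+{{#each project_parts}}
+- **{{part_name}}** ({{part_id}}) - {{tech_stack_summary}}
+{{/each}}
+{{/if}}
+```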
+
+## Additional Templates
+
+Architecture-specific templates are dynamically loaded from:
+`/bmad/bmm/workflows/3-solutioning/templates/`
+
+Based on the matched architecture type from the registry.
+
+## Notes
+
+- Templates support both simple and complex project structures
+- Multi-part projects get part-specific file naming (e.g., `architecture-{part_id}.md`)
+- Single-part projects use simplified naming (e.g., `architecture.md`)
diff --git a/src/modules/bmm/workflows/1-analysis/document-project/templates/deep-dive-template.md b/src/modules/bmm/workflows/1-analysis/document-project/templates/deep-dive-template.md
new file mode 100644
index 00000000..c1285cdc
--- /dev/null
+++ b/src/modules/bmm/workflows/1-analysis/document-project/templates/deep-dive-template.md
@@ -0,0 +1,345 @@
+# {{target_name}} - Deep Dive Documentation
+
+**Generated:** {{date}}
+**Scope:** {{target_path}}
+**Files Analyzed:** {{file_count}}
+**Lines of Code:** {{total_loc}}
+**Workflow Mode:** Exhaustive Deep-Dive
+
+## Overview
+
+{{target_description}}
+
+**Purpose:** {{target_purpose}}
+**Key Responsibilities:** {{responsibilities}}
+**Integration Points:** {{integration_summary}}
+
+## Complete File Inventory
+
+{{#each files_in_inventory}}
+
+### {{file_path}}
+
+**Purpose:** {{purpose}}
+**Lines of Code:** {{loc}}
+**File Type:** {{file_type}}
+
+**What Future Contributors Must Know:** {{contributor_note}}
+
+**Exports:**
+{{#each exports}}
+
+- `{{signature}}` - {{description}}
+ {{/each}}
+
+**Dependencies:**
+{{#each imports}}
+
+- `{{import_path}}` - {{reason}}
+ {{/each}}
+
+**Used By:**
+{{#each dependents}}
+
+- `{{dependent_path}}`
+ {{/each}}
+
+**Key Implementation Details:**
+
+```{{language}}
+{{key_code_snippet}}
+```
+
+{{implementation_notes}}
+
+**Patterns Used:**
+{{#each patterns}}
+
+- {{pattern_name}}: {{pattern_description}}
+ {{/each}}
+
+**State Management:** {{state_approach}}
+
+**Side Effects:**
+{{#each side_effects}}
+
+- {{effect_type}}: {{effect_description}}
+ {{/each}}
+
+**Error Handling:** {{error_handling_approach}}
+
+**Testing:**
+
+- Test File: {{test_file_path}}
+- Coverage: {{coverage_percentage}}%
+- Test Approach: {{test_approach}}
+
+**Comments/TODOs:**
+{{#each todos}}
+
+- Line {{line_number}}: {{todo_text}}
+ {{/each}}
+
+---
+
+{{/each}}
+
+## Contributor Checklist
+
+- **Risks & Gotchas:** {{risks_notes}}
+- **Pre-change Verification Steps:** {{verification_steps}}
+- **Suggested Tests Before PR:** {{suggested_tests}}
+
+## Architecture & Design Patterns
+
+### Code Organization
+
+{{organization_approach}}
+
+### Design Patterns
+
+{{#each design_patterns}}
+
+- **{{pattern_name}}**: {{usage_description}}
+ {{/each}}
+
+### State Management Strategy
+
+{{state_management_details}}
+
+### Error Handling Philosophy
+
+{{error_handling_philosophy}}
+
+### Testing Strategy
+
+{{testing_strategy}}
+
+## Data Flow
+
+{{data_flow_diagram}}
+
+### Data Entry Points
+
+{{#each entry_points}}
+
+- **{{entry_name}}**: {{entry_description}}
+ {{/each}}
+
+### Data Transformations
+
+{{#each transformations}}
+
+- **{{transformation_name}}**: {{transformation_description}}
+ {{/each}}
+
+### Data Exit Points
+
+{{#each exit_points}}
+
+- **{{exit_name}}**: {{exit_description}}
+ {{/each}}
+
+## Integration Points
+
+### APIs Consumed
+
+{{#each apis_consumed}}
+
+- **{{api_endpoint}}**: {{api_description}}
+ - Method: {{method}}
+ - Authentication: {{auth_requirement}}
+ - Response: {{response_schema}}
+ {{/each}}
+
+### APIs Exposed
+
+{{#each apis_exposed}}
+
+- **{{api_endpoint}}**: {{api_description}}
+ - Method: {{method}}
+ - Request: {{request_schema}}
+ - Response: {{response_schema}}
+ {{/each}}
+
+### Shared State
+
+{{#each shared_state}}
+
+- **{{state_name}}**: {{state_description}}
+ - Type: {{state_type}}
+ - Accessed By: {{accessors}}
+ {{/each}}
+
+### Events
+
+{{#each events}}
+
+- **{{event_name}}**: {{event_description}}
+ - Type: {{publish_or_subscribe}}
+ - Payload: {{payload_schema}}
+ {{/each}}
+
+### Database Access
+
+{{#each database_operations}}
+
+- **{{table_name}}**: {{operation_type}}
+ - Queries: {{query_patterns}}
+ - Indexes Used: {{indexes}}
+ {{/each}}
+
+## Dependency Graph
+
+{{dependency_graph_visualization}}
+
+### Entry Points (Not Imported by Others in Scope)
+
+{{#each entry_point_files}}
+
+- {{file_path}}
+ {{/each}}
+
+### Leaf Nodes (Don't Import Others in Scope)
+
+{{#each leaf_files}}
+
+- {{file_path}}
+ {{/each}}
+
+### Circular Dependencies
+
+{{#if has_circular_dependencies}}
+⚠️ Circular dependencies detected:
+{{#each circular_deps}}
+
+- {{cycle_description}}
+ {{/each}}
+ {{else}}
+ ✓ No circular dependencies detected
+ {{/if}}
+
+## Testing Analysis
+
+### Test Coverage Summary
+
+- **Statements:** {{statements_coverage}}%
+- **Branches:** {{branches_coverage}}%
+- **Functions:** {{functions_coverage}}%
+- **Lines:** {{lines_coverage}}%
+
+### Test Files
+
+{{#each test_files}}
+
+- **{{test_file_path}}**
+ - Tests: {{test_count}}
+ - Approach: {{test_approach}}
+ - Mocking Strategy: {{mocking_strategy}}
+ {{/each}}
+
+### Test Utilities Available
+
+{{#each test_utilities}}
+
+- `{{utility_name}}`: {{utility_description}}
+ {{/each}}
+
+### Testing Gaps
+
+{{#each testing_gaps}}
+
+- {{gap_description}}
+ {{/each}}
+
+## Related Code & Reuse Opportunities
+
+### Similar Features Elsewhere
+
+{{#each similar_features}}
+
+- **{{feature_name}}** (`{{feature_path}}`)
+ - Similarity: {{similarity_description}}
+ - Can Reference For: {{reference_use_case}}
+ {{/each}}
+
+### Reusable Utilities Available
+
+{{#each reusable_utilities}}
+
+- **{{utility_name}}** (`{{utility_path}}`)
+ - Purpose: {{utility_purpose}}
+ - How to Use: {{usage_example}}
+ {{/each}}
+
+### Patterns to Follow
+
+{{#each patterns_to_follow}}
+
+- **{{pattern_name}}**: Reference `{{reference_file}}` for implementation
+ {{/each}}
+
+## Implementation Notes
+
+### Code Quality Observations
+
+{{#each quality_observations}}
+
+- {{observation}}
+ {{/each}}
+
+### TODOs and Future Work
+
+{{#each all_todos}}
+
+- **{{file_path}}:{{line_number}}**: {{todo_text}}
+ {{/each}}
+
+### Known Issues
+
+{{#each known_issues}}
+
+- {{issue_description}}
+ {{/each}}
+
+### Optimization Opportunities
+
+{{#each optimizations}}
+
+- {{optimization_suggestion}}
+ {{/each}}
+
+### Technical Debt
+
+{{#each tech_debt_items}}
+
+- {{debt_description}}
+ {{/each}}
+
+## Modification Guidance
+
+### To Add New Functionality
+
+{{modification_guidance_add}}
+
+### To Modify Existing Functionality
+
+{{modification_guidance_modify}}
+
+### To Remove/Deprecate
+
+{{modification_guidance_remove}}
+
+### Testing Checklist for Changes
+
+{{#each testing_checklist_items}}
+
+- [ ] {{checklist_item}}
+ {{/each}}
+
+---
+
+_Generated by `document-project` workflow (deep-dive mode)_
+_Base Documentation: docs/index.md_
+_Scan Date: {{date}}_
+_Analysis Mode: Exhaustive_
diff --git a/src/modules/bmm/workflows/1-analysis/document-project/templates/index-template.md b/src/modules/bmm/workflows/1-analysis/document-project/templates/index-template.md
new file mode 100644
index 00000000..0340a35a
--- /dev/null
+++ b/src/modules/bmm/workflows/1-analysis/document-project/templates/index-template.md
@@ -0,0 +1,169 @@
+# {{project_name}} Documentation Index
+
+**Type:** {{repository_type}}{{#if is_multi_part}} with {{parts_count}} parts{{/if}}
+**Primary Language:** {{primary_language}}
+**Architecture:** {{architecture_type}}
+**Last Updated:** {{date}}
+
+## Project Overview
+
+{{project_description}}
+
+{{#if is_multi_part}}
+
+## Project Structure
+
+This project consists of {{parts_count}} parts:
+
+{{#each project_parts}}
+
+### {{part_name}} ({{part_id}})
+
+- **Type:** {{project_type}}
+- **Location:** `{{root_path}}`
+- **Tech Stack:** {{tech_stack_summary}}
+- **Entry Point:** {{entry_point}}
+ {{/each}}
+
+## Cross-Part Integration
+
+{{integration_summary}}
+
+{{/if}}
+
+## Quick Reference
+
+{{#if is_single_part}}
+
+- **Tech Stack:** {{tech_stack_summary}}
+- **Entry Point:** {{entry_point}}
+- **Architecture Pattern:** {{architecture_pattern}}
+- **Database:** {{database}}
+- **Deployment:** {{deployment_platform}}
+ {{else}}
+ {{#each project_parts}}
+
+### {{part_name}} Quick Ref
+
+- **Stack:** {{tech_stack_summary}}
+- **Entry:** {{entry_point}}
+- **Pattern:** {{architecture_pattern}}
+ {{/each}}
+ {{/if}}
+
+## Generated Documentation
+
+### Core Documentation
+
+- [Project Overview](./project-overview.md) - Executive summary and high-level architecture
+- [Source Tree Analysis](./source-tree-analysis.md) - Annotated directory structure
+
+{{#if is_single_part}}
+
+- [Architecture](./architecture.md) - Detailed technical architecture
+- [Component Inventory](./component-inventory.md) - Catalog of major components{{#if has_ui_components}} and UI elements{{/if}}
+- [Development Guide](./development-guide.md) - Local setup and development workflow
+ {{#if has_api_docs}}- [API Contracts](./api-contracts.md) - API endpoints and schemas{{/if}}
+ {{#if has_data_models}}- [Data Models](./data-models.md) - Database schema and models{{/if}}
+ {{else}}
+
+### Part-Specific Documentation
+
+{{#each project_parts}}
+
+#### {{part_name}} ({{part_id}})
+
+- [Architecture](./architecture-{{part_id}}.md) - Technical architecture for {{part_name}}
+ {{#if has_components}}- [Components](./component-inventory-{{part_id}}.md) - Component catalog{{/if}}
+- [Development Guide](./development-guide-{{part_id}}.md) - Setup and dev workflow
+ {{#if has_api}}- [API Contracts](./api-contracts-{{part_id}}.md) - API documentation{{/if}}
+ {{#if has_data}}- [Data Models](./data-models-{{part_id}}.md) - Data architecture{{/if}}
+ {{/each}}
+
+### Integration
+
+- [Integration Architecture](./integration-architecture.md) - How parts communicate
+- [Project Parts Metadata](./project-parts.json) - Machine-readable structure
+ {{/if}}
+
+### Optional Documentation
+
+{{#if has_deployment_guide}}- [Deployment Guide](./deployment-guide.md) - Deployment process and infrastructure{{/if}}
+{{#if has_contribution_guide}}- [Contribution Guide](./contribution-guide.md) - Contributing guidelines and standards{{/if}}
+
+## Existing Documentation
+
+{{#if has_existing_docs}}
+{{#each existing_docs}}
+
+- [{{title}}]({{path}}) - {{description}}
+ {{/each}}
+ {{else}}
+ No existing documentation files were found in the project.
+ {{/if}}
+
+## Getting Started
+
+{{#if is_single_part}}
+
+### Prerequisites
+
+{{prerequisites}}
+
+### Setup
+
+```bash
+{{setup_commands}}
+```
+
+### Run Locally
+
+```bash
+{{run_commands}}
+```
+
+### Run Tests
+
+```bash
+{{test_commands}}
+```
+
+{{else}}
+{{#each project_parts}}
+
+### {{part_name}} Setup
+
+**Prerequisites:** {{prerequisites}}
+
+**Install & Run:**
+
+```bash
+cd {{root_path}}
+{{setup_command}}
+{{run_command}}
+```
+
+{{/each}}
+{{/if}}
+
+## For AI-Assisted Development
+
+This documentation was generated specifically to enable AI agents to understand and extend this codebase.
+
+### When Planning New Features:
+
+**UI-only features:**
+{{#if is_multi_part}}→ Reference: `architecture-{{ui_part_id}}.md`, `component-inventory-{{ui_part_id}}.md`{{else}}→ Reference: `architecture.md`, `component-inventory.md`{{/if}}
+
+**API/Backend features:**
+{{#if is_multi_part}}→ Reference: `architecture-{{api_part_id}}.md`, `api-contracts-{{api_part_id}}.md`, `data-models-{{api_part_id}}.md`{{else}}→ Reference: `architecture.md`{{#if has_api_docs}}, `api-contracts.md`{{/if}}{{#if has_data_models}}, `data-models.md`{{/if}}{{/if}}
+
+**Full-stack features:**
+→ Reference: All architecture docs{{#if is_multi_part}} + `integration-architecture.md`{{/if}}
+
+**Deployment changes:**
+{{#if has_deployment_guide}}→ Reference: `deployment-guide.md`{{else}}→ Review CI/CD configs in project{{/if}}
+
+---
+
+_Documentation generated by BMAD Method `document-project` workflow_
diff --git a/src/modules/bmm/workflows/1-analysis/document-project/templates/project-overview-template.md b/src/modules/bmm/workflows/1-analysis/document-project/templates/project-overview-template.md
new file mode 100644
index 00000000..3bbb0d24
--- /dev/null
+++ b/src/modules/bmm/workflows/1-analysis/document-project/templates/project-overview-template.md
@@ -0,0 +1,103 @@
+# {{project_name}} - Project Overview
+
+**Date:** {{date}}
+**Type:** {{project_type}}
+**Architecture:** {{architecture_type}}
+
+## Executive Summary
+
+{{executive_summary}}
+
+## Project Classification
+
+- **Repository Type:** {{repository_type}}
+- **Project Type(s):** {{project_types_list}}
+- **Primary Language(s):** {{primary_languages}}
+- **Architecture Pattern:** {{architecture_pattern}}
+
+{{#if is_multi_part}}
+
+## Multi-Part Structure
+
+This project consists of {{parts_count}} distinct parts:
+
+{{#each project_parts}}
+
+### {{part_name}}
+
+- **Type:** {{project_type}}
+- **Location:** `{{root_path}}`
+- **Purpose:** {{purpose}}
+- **Tech Stack:** {{tech_stack}}
+ {{/each}}
+
+### How Parts Integrate
+
+{{integration_description}}
+{{/if}}
+
+## Technology Stack Summary
+
+{{#if is_single_part}}
+{{technology_table}}
+{{else}}
+{{#each project_parts}}
+
+### {{part_name}} Stack
+
+{{technology_table}}
+{{/each}}
+{{/if}}
+
+## Key Features
+
+{{key_features}}
+
+## Architecture Highlights
+
+{{architecture_highlights}}
+
+## Development Overview
+
+### Prerequisites
+
+{{prerequisites}}
+
+### Getting Started
+
+{{getting_started_summary}}
+
+### Key Commands
+
+{{#if is_single_part}}
+
+- **Install:** `{{install_command}}`
+- **Dev:** `{{dev_command}}`
+- **Build:** `{{build_command}}`
+- **Test:** `{{test_command}}`
+ {{else}}
+ {{#each project_parts}}
+
+#### {{part_name}}
+
+- **Install:** `{{install_command}}`
+- **Dev:** `{{dev_command}}`
+ {{/each}}
+ {{/if}}
+
+## Repository Structure
+
+{{repository_structure_summary}}
+
+## Documentation Map
+
+For detailed information, see:
+
+- [index.md](./index.md) - Master documentation index
+- [architecture.md](./architecture{{#if is_multi_part}}-{part_id}{{/if}}.md) - Detailed architecture
+- [source-tree-analysis.md](./source-tree-analysis.md) - Directory structure
+- [development-guide.md](./development-guide{{#if is_multi_part}}-{part_id}{{/if}}.md) - Development workflow
+
+---
+
+_Generated using BMAD Method `document-project` workflow_
diff --git a/src/modules/bmm/workflows/1-analysis/document-project/templates/project-scan-report-schema.json b/src/modules/bmm/workflows/1-analysis/document-project/templates/project-scan-report-schema.json
new file mode 100644
index 00000000..8133e15f
--- /dev/null
+++ b/src/modules/bmm/workflows/1-analysis/document-project/templates/project-scan-report-schema.json
@@ -0,0 +1,160 @@
+{
+ "$schema": "http://json-schema.org/draft-07/schema#",
+ "title": "Project Scan Report Schema",
+ "description": "State tracking file for document-project workflow resumability",
+ "type": "object",
+ "required": ["workflow_version", "timestamps", "mode", "scan_level", "completed_steps", "current_step"],
+ "properties": {
+ "workflow_version": {
+ "type": "string",
+ "description": "Version of document-project workflow",
+ "example": "1.2.0"
+ },
+ "timestamps": {
+ "type": "object",
+ "required": ["started", "last_updated"],
+ "properties": {
+ "started": {
+ "type": "string",
+ "format": "date-time",
+ "description": "ISO 8601 timestamp when workflow started"
+ },
+ "last_updated": {
+ "type": "string",
+ "format": "date-time",
+ "description": "ISO 8601 timestamp of last state update"
+ },
+ "completed": {
+ "type": "string",
+ "format": "date-time",
+ "description": "ISO 8601 timestamp when workflow completed (if finished)"
+ }
+ }
+ },
+ "mode": {
+ "type": "string",
+ "enum": ["initial_scan", "full_rescan", "deep_dive"],
+ "description": "Workflow execution mode"
+ },
+ "scan_level": {
+ "type": "string",
+ "enum": ["quick", "deep", "exhaustive"],
+ "description": "Scan depth level (deep_dive mode always uses exhaustive)"
+ },
+ "project_root": {
+ "type": "string",
+ "description": "Absolute path to project root directory"
+ },
+ "output_folder": {
+ "type": "string",
+ "description": "Absolute path to output folder"
+ },
+ "completed_steps": {
+ "type": "array",
+ "items": {
+ "type": "object",
+ "required": ["step", "status"],
+ "properties": {
+ "step": {
+ "type": "string",
+ "description": "Step identifier (e.g., 'step_1', 'step_2')"
+ },
+ "status": {
+ "type": "string",
+ "enum": ["completed", "partial", "failed"]
+ },
+ "timestamp": {
+ "type": "string",
+ "format": "date-time"
+ },
+ "outputs": {
+ "type": "array",
+ "items": { "type": "string" },
+ "description": "Files written during this step"
+ },
+ "summary": {
+ "type": "string",
+ "description": "1-2 sentence summary of step outcome"
+ }
+ }
+ }
+ },
+ "current_step": {
+ "type": "string",
+ "description": "Current step identifier for resumption"
+ },
+ "findings": {
+ "type": "object",
+ "description": "High-level summaries only (detailed findings purged after writing)",
+ "properties": {
+ "project_classification": {
+ "type": "object",
+ "properties": {
+ "repository_type": { "type": "string" },
+ "parts_count": { "type": "integer" },
+ "primary_language": { "type": "string" },
+ "architecture_type": { "type": "string" }
+ }
+ },
+ "technology_stack": {
+ "type": "array",
+ "items": {
+ "type": "object",
+ "properties": {
+ "part_id": { "type": "string" },
+ "tech_summary": { "type": "string" }
+ }
+ }
+ },
+ "batches_completed": {
+ "type": "array",
+ "description": "For deep/exhaustive scans: subfolders processed",
+ "items": {
+ "type": "object",
+ "properties": {
+ "path": { "type": "string" },
+ "files_scanned": { "type": "integer" },
+ "summary": { "type": "string" }
+ }
+ }
+ }
+ }
+ },
+ "outputs_generated": {
+ "type": "array",
+ "items": { "type": "string" },
+ "description": "List of all output files generated"
+ },
+ "resume_instructions": {
+ "type": "string",
+ "description": "Instructions for resuming from current_step"
+ },
+ "validation_status": {
+ "type": "object",
+ "properties": {
+ "last_validated": {
+ "type": "string",
+ "format": "date-time"
+ },
+ "validation_errors": {
+ "type": "array",
+ "items": { "type": "string" }
+ }
+ }
+ },
+ "deep_dive_targets": {
+ "type": "array",
+ "description": "Track deep-dive areas analyzed (for deep_dive mode)",
+ "items": {
+ "type": "object",
+ "properties": {
+ "target_name": { "type": "string" },
+ "target_path": { "type": "string" },
+ "files_analyzed": { "type": "integer" },
+ "output_file": { "type": "string" },
+ "timestamp": { "type": "string", "format": "date-time" }
+ }
+ }
+ }
+ }
+}
diff --git a/src/modules/bmm/workflows/1-analysis/document-project/templates/source-tree-template.md b/src/modules/bmm/workflows/1-analysis/document-project/templates/source-tree-template.md
new file mode 100644
index 00000000..20306217
--- /dev/null
+++ b/src/modules/bmm/workflows/1-analysis/document-project/templates/source-tree-template.md
@@ -0,0 +1,135 @@
+# {{project_name}} - Source Tree Analysis
+
+**Date:** {{date}}
+
+## Overview
+
+{{source_tree_overview}}
+
+{{#if is_multi_part}}
+
+## Multi-Part Structure
+
+This project is organized into {{parts_count}} distinct parts:
+
+{{#each project_parts}}
+
+- **{{part_name}}** (`{{root_path}}`): {{purpose}}
+ {{/each}}
+ {{/if}}
+
+## Complete Directory Structure
+
+```
+{{complete_source_tree}}
+```
+
+## Critical Directories
+
+{{#each critical_folders}}
+
+### `{{folder_path}}`
+
+{{description}}
+
+**Purpose:** {{purpose}}
+**Contains:** {{contents_summary}}
+{{#if entry_points}}**Entry Points:** {{entry_points}}{{/if}}
+{{#if integration_note}}**Integration:** {{integration_note}}{{/if}}
+
+{{/each}}
+
+{{#if is_multi_part}}
+
+## Part-Specific Trees
+
+{{#each project_parts}}
+
+### {{part_name}} Structure
+
+```
+{{source_tree}}
+```
+
+**Key Directories:**
+{{#each critical_directories}}
+
+- **`{{path}}`**: {{description}}
+ {{/each}}
+
+{{/each}}
+
+## Integration Points
+
+{{#each integration_points}}
+
+### {{from_part}} → {{to_part}}
+
+- **Location:** `{{integration_path}}`
+- **Type:** {{integration_type}}
+- **Details:** {{details}}
+ {{/each}}
+
+{{/if}}
+
+## Entry Points
+
+{{#if is_single_part}}
+
+- **Main Entry:** `{{main_entry_point}}`
+ {{#if additional_entry_points}}
+- **Additional:**
+ {{#each additional_entry_points}}
+ - `{{path}}`: {{description}}
+ {{/each}}
+ {{/if}}
+ {{else}}
+ {{#each project_parts}}
+
+### {{part_name}}
+
+- **Entry Point:** `{{entry_point}}`
+- **Bootstrap:** {{bootstrap_description}}
+ {{/each}}
+ {{/if}}
+
+## File Organization Patterns
+
+{{file_organization_patterns}}
+
+## Key File Types
+
+{{#each file_type_patterns}}
+
+### {{file_type}}
+
+- **Pattern:** `{{pattern}}`
+- **Purpose:** {{purpose}}
+- **Examples:** {{examples}}
+ {{/each}}
+
+## Asset Locations
+
+{{#if has_assets}}
+{{#each asset_locations}}
+
+- **{{asset_type}}**: `{{location}}` ({{file_count}} files, {{total_size}})
+ {{/each}}
+ {{else}}
+ No significant assets detected.
+ {{/if}}
+
+## Configuration Files
+
+{{#each config_files}}
+
+- **`{{path}}`**: {{description}}
+ {{/each}}
+
+## Notes for Development
+
+{{development_notes}}
+
+---
+
+_Generated using BMAD Method `document-project` workflow_
diff --git a/src/modules/bmm/workflows/1-analysis/document-project/workflow.yaml b/src/modules/bmm/workflows/1-analysis/document-project/workflow.yaml
new file mode 100644
index 00000000..fc3591ea
--- /dev/null
+++ b/src/modules/bmm/workflows/1-analysis/document-project/workflow.yaml
@@ -0,0 +1,98 @@
+# Document Project Workflow Configuration
+name: "document-project"
+version: "1.2.0"
+description: "Analyzes and documents brownfield projects by scanning codebase, architecture, and patterns to create comprehensive reference documentation for AI-assisted development"
+author: "BMad"
+
+# Critical variables
+config_source: "{project-root}/bmad/bmm/config.yaml"
+output_folder: "{config_source}:output_folder"
+user_name: "{config_source}:user_name"
+communication_language: "{config_source}:communication_language"
+date: system-generated
+
+# Module path and component files
+installed_path: "{project-root}/bmad/bmm/workflows/document-project"
+template: false # This is an action workflow with multiple output files
+instructions: "{installed_path}/instructions.md"
+validation: "{installed_path}/checklist.md"
+
+# Required data files - CRITICAL for project type detection and documentation requirements
+project_types_csv: "{project-root}/bmad/bmm/workflows/3-solutioning/project-types/project-types.csv"
+architecture_registry_csv: "{project-root}/bmad/bmm/workflows/3-solutioning/templates/registry.csv"
+documentation_requirements_csv: "{installed_path}/documentation-requirements.csv"
+
+# Architecture template references
+architecture_templates_path: "{project-root}/bmad/bmm/workflows/3-solutioning/templates"
+
+# Optional input - project root to scan (defaults to current working directory)
+recommended_inputs:
+ - project_root: "User will specify or use current directory"
+ - existing_readme: "README.md at project root (if exists)"
+ - project_config: "package.json, go.mod, requirements.txt, etc. (auto-detected)"
+
+# Output configuration - Multiple files generated in output folder
+# File naming depends on project structure (simple vs multi-part)
+# Simple projects: index.md, architecture.md, etc.
+# Multi-part projects: index.md, architecture-{part_id}.md, etc.
+
+default_output_files:
+ - index: "{output_folder}/index.md"
+ - project_overview: "{output_folder}/project-overview.md"
+ - architecture: "{output_folder}/architecture.md" # or architecture-{part_id}.md for multi-part
+ - source_tree: "{output_folder}/source-tree-analysis.md"
+ - component_inventory: "{output_folder}/component-inventory.md" # or component-inventory-{part_id}.md
+ - development_guide: "{output_folder}/development-guide.md" # or development-guide-{part_id}.md
+ - deployment_guide: "{output_folder}/deployment-guide.md" # optional, if deployment config found
+ - contribution_guide: "{output_folder}/contribution-guide.md" # optional, if CONTRIBUTING.md found
+ - api_contracts: "{output_folder}/api-contracts.md" # optional, per part if needed
+ - data_models: "{output_folder}/data-models.md" # optional, per part if needed
+ - integration_architecture: "{output_folder}/integration-architecture.md" # only for multi-part
+ - project_parts: "{output_folder}/project-parts.json" # metadata for multi-part projects
+ - deep_dive: "{output_folder}/deep-dive-{sanitized_target_name}.md" # deep-dive mode output
+ - project_scan_report: "{output_folder}/project-scan-report.json" # state tracking for resumability
+
+# Runtime variables (generated during workflow execution)
+runtime_variables:
+ - workflow_mode: "initial_scan | full_rescan | deep_dive"
+ - scan_level: "quick | deep | exhaustive (default: quick)"
+ - project_type: "Detected project type (web, backend, cli, etc.)"
+ - project_parts: "Array of project parts for multi-part projects"
+ - architecture_match: "Matched architecture from registry"
+ - doc_requirements: "Documentation requirements for project type"
+ - tech_stack: "Detected technology stack"
+ - existing_docs: "Discovered existing documentation"
+ - deep_dive_target: "Target area for deep-dive analysis (if deep-dive mode)"
+ - deep_dive_count: "Number of deep-dive docs generated"
+ - resume_point: "Step to resume from (if resuming interrupted workflow)"
+ - state_file: "Path to project-scan-report.json for state tracking"
+
+# Scan Level Definitions
+scan_levels:
+ quick:
+ description: "Pattern-based scanning without reading source files"
+ duration: "2-5 minutes"
+ reads: "Config files, package manifests, directory structure only"
+ use_case: "Quick project overview, initial understanding"
+ default: true
+ deep:
+ description: "Reads files in critical directories per project type"
+ duration: "10-30 minutes"
+ reads: "Critical files based on documentation_requirements.csv patterns"
+ use_case: "Comprehensive documentation for brownfield PRD"
+ default: false
+ exhaustive:
+ description: "Reads ALL source files in project"
+ duration: "30-120 minutes"
+ reads: "Every source file (excluding node_modules, dist, build)"
+ use_case: "Complete analysis, migration planning, detailed audit"
+ default: false
+
+# Resumability Settings
+resumability:
+ enabled: true
+ state_file_location: "{output_folder}/project-scan-report.json"
+ state_file_max_age: "24 hours"
+ auto_prompt_resume: true
+ archive_old_state: true
+ archive_location: "{output_folder}/.archive/"
diff --git a/src/modules/bmm/workflows/1-analysis/document-project/workflows/deep-dive-instructions.md b/src/modules/bmm/workflows/1-analysis/document-project/workflows/deep-dive-instructions.md
new file mode 100644
index 00000000..57de41b4
--- /dev/null
+++ b/src/modules/bmm/workflows/1-analysis/document-project/workflows/deep-dive-instructions.md
@@ -0,0 +1,298 @@
+# Deep-Dive Documentation Instructions
+
+
+
+This workflow performs exhaustive deep-dive documentation of specific areas
+Called by: document-project/instructions.md router
+Handles: deep_dive mode only
+
+
+Deep-dive mode requires literal full-file review. Sampling, guessing, or relying solely on tooling output is FORBIDDEN.
+Load existing project structure from index.md and project-parts.json (if exists)
+Load source tree analysis to understand available areas
+
+
+ Analyze existing documentation to suggest deep-dive options
+
+What area would you like to deep-dive into?
+
+**Suggested Areas Based on Project Structure:**
+
+{{#if has_api_routes}}
+
+### API Routes ({{api_route_count}} endpoints found)
+
+{{#each api_route_groups}}
+{{group_index}}. {{group_name}} - {{endpoint_count}} endpoints in `{{path}}`
+{{/each}}
+{{/if}}
+
+{{#if has_feature_modules}}
+
+### Feature Modules ({{feature_count}} features)
+
+{{#each feature_modules}}
+{{module_index}}. {{module_name}} - {{file_count}} files in `{{path}}`
+{{/each}}
+{{/if}}
+
+{{#if has_ui_components}}
+
+### UI Component Areas
+
+{{#each component_groups}}
+{{group_index}}. {{group_name}} - {{component_count}} components in `{{path}}`
+{{/each}}
+{{/if}}
+
+{{#if has_services}}
+
+### Services/Business Logic
+
+{{#each service_groups}}
+{{service_index}}. {{service_name}} - `{{path}}`
+{{/each}}
+{{/if}}
+
+**Or specify custom:**
+
+- Folder path (e.g., "client/src/features/dashboard")
+- File path (e.g., "server/src/api/users.ts")
+- Feature name (e.g., "authentication system")
+
+Enter your choice (number or custom path):
+
+
+Parse user input to determine:
+
+- target_type: "folder" | "file" | "feature" | "api_group" | "component_group"
+- target_path: Absolute path to scan
+- target_name: Human-readable name for documentation
+- target_scope: List of all files to analyze
+
+
+Store as {{deep_dive_target}}
+
+Display confirmation:
+Target: {{target_name}}
+Type: {{target_type}}
+Path: {{target_path}}
+Estimated files to analyze: {{estimated_file_count}}
+
+This will read EVERY file in this area. Proceed? [y/n]
+
+
+Return to Step 13a (select different area)
+
+
+
+ Set scan_mode = "exhaustive"
+ Initialize file_inventory = []
+ You must read every line of every file in scope and capture a plain-language explanation (what the file does, side effects, why it matters) that future developer agents can act on. No shortcuts.
+
+
+ Get complete recursive file list from {{target_path}}
+ Filter out: node_modules/, .git/, dist/, build/, coverage/, *.min.js, *.map
+ For EVERY remaining file in folder:
+ - Read complete file contents (all lines)
+ - Extract all exports (functions, classes, types, interfaces, constants)
+ - Extract all imports (dependencies)
+ - Identify purpose from comments and code structure
+ - Write 1-2 sentences (minimum) in natural language describing behaviour, side effects, assumptions, and anything a developer must know before modifying the file
+ - Extract function signatures with parameter types and return types
+ - Note any TODOs, FIXMEs, or comments
+ - Identify patterns (hooks, components, services, controllers, etc.)
+ - Capture per-file contributor guidance: `contributor_note`, `risks`, `verification_steps`, `suggested_tests`
+ - Store in file_inventory
+
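+An illustrative sketch of the folder walk described above (the inventory fields beyond `path` and `loc` are placeholders the agent fills in; the exclusion list mirrors the filter step):
+
+```python
+# Minimal sketch of the exhaustive folder scan. Export/import extraction
+# is language-specific and therefore stubbed here.
+from pathlib import Path
+
+EXCLUDED_DIRS = {"node_modules", ".git", "dist", "build", "coverage"}
+
+def walk_target(target_path: str) -> list[dict]:
+    inventory = []
+    for path in Path(target_path).rglob("*"):
+        if not path.is_file():
+            continue
+        if any(part in EXCLUDED_DIRS for part in path.parts):
+            continue
+        if path.name.endswith((".min.js", ".map")):
+            continue
+        text = path.read_text(errors="replace")  # whole file - no sampling
+        inventory.append({
+            "path": str(path),
+            "loc": text.count("\n") + 1,
+            "purpose": "",           # 1-2 sentence summary, filled by the agent
+            "exports": [],           # functions, classes, types, constants
+            "imports": [],           # dependencies
+            "contributor_note": "",  # plus risks / verification_steps / suggested_tests
+        })
+    return inventory
+```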
+
+
+
+ Read complete file at {{target_path}}
+ Extract all information as above
+ Read all files it imports (follow import chain 1 level deep)
+ Find all files that import this file (dependents via grep)
+ Store all in file_inventory
+
+
+
+ Identify all route/controller files in API group
+ Read all route handlers completely
+ Read associated middleware, controllers, services
+ Read data models and schemas used
+ Extract complete request/response schemas
+ Document authentication and authorization requirements
+ Store all in file_inventory
+
+
+
+ Search codebase for all files related to feature name
+ Include: UI components, API endpoints, models, services, tests
+ Read each file completely
+ Store all in file_inventory
+
+
+
+ Get all component files in group
+ Read each component completely
+ Extract: Props interfaces, hooks used, child components, state management
+ Store all in file_inventory
+
+
+For each file in file_inventory, document:
+
+- **File Path:** Full path
+- **Purpose:** What this file does (1-2 sentences)
+- **Lines of Code:** Total LOC
+- **Exports:** Complete list with signatures
+  - Functions: `functionName(param: Type): ReturnType` - Description
+  - Classes: `ClassName` - Description with key methods
+  - Types/Interfaces: `TypeName` - Description
+  - Constants: `CONSTANT_NAME: Type` - Description
+- **Imports/Dependencies:** What it uses and why
+- **Used By:** Files that import this (dependents)
+- **Key Implementation Details:** Important logic, algorithms, patterns
+- **State Management:** If applicable (Redux, Context, local state)
+- **Side Effects:** API calls, database queries, file I/O, external services
+- **Error Handling:** Try/catch blocks, error boundaries, validation
+- **Testing:** Associated test files and coverage
+- **Comments/TODOs:** Any inline documentation or planned work
+
+
+comprehensive_file_inventory
+
+
+
+ Build dependency graph for scanned area:
+ - Create graph with files as nodes
+ - Add edges for import relationships
+ - Identify circular dependencies if any
+ - Find entry points (files not imported by others in scope)
+ - Find leaf nodes (files that don't import others in scope)
+
+
+Trace data flow through the system:
+
+- Follow function calls and data transformations
+- Track API calls and their responses
+- Document state updates and propagation
+- Map database queries and mutations
+
+
+Identify integration points:
+
+- External APIs consumed
+- Internal APIs/services called
+- Shared state accessed
+- Events published/subscribed
+- Database tables accessed
+
+
+dependency_graph
+data_flow_analysis
+integration_points
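+
+As a sketch of the graph analysis, assuming each file_inventory entry carries a resolved "imports" list (an assumption, since import resolution is language-specific):
+
+```python
+# Files are nodes, imports are edges; entry points and leaf nodes
+# fall out of the in/out degrees within the scanned scope.
+def build_dependency_graph(file_inventory: list[dict]) -> dict:
+    in_scope = {f["path"] for f in file_inventory}
+    edges = {
+        f["path"]: [imp for imp in f["imports"] if imp in in_scope]
+        for f in file_inventory
+    }
+    imported = {dep for deps in edges.values() for dep in deps}
+    return {
+        "edges": edges,
+        "entry_points": sorted(in_scope - imported),  # imported by nothing in scope
+        "leaf_nodes": sorted(p for p, deps in edges.items() if not deps),
+    }
+```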
+
+
+
+ Search codebase OUTSIDE scanned area for:
+ - Similar file/folder naming patterns
+ - Similar function signatures
+ - Similar component structures
+ - Similar API patterns
+ - Reusable utilities that could be used
+
+
+Identify code reuse opportunities:
+
+- Shared utilities available
+- Design patterns used elsewhere
+- Component libraries available
+- Helper functions that could apply
+
+
+Find reference implementations:
+
+- Similar features in other parts of codebase
+- Established patterns to follow
+- Testing approaches used elsewhere
+
+
+related_code_references
+reuse_opportunities
+
+
+
+ Create documentation filename: deep-dive-{{sanitized_target_name}}.md
+ Aggregate contributor insights across files:
+ - Combine unique risk/gotcha notes into {{risks_notes}}
+ - Combine verification steps developers should run before changes into {{verification_steps}}
+ - Combine recommended test commands into {{suggested_tests}}
+
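+One plausible sanitization for {{sanitized_target_name}} is sketched below; the exact rules are not specified by this workflow, so treat this as an assumption:
+
+```python
+# Lowercase, collapse every non-alphanumeric run to a hyphen, trim ends.
+import re
+
+def sanitize_target_name(name: str) -> str:
+    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
+
+assert sanitize_target_name("client/src/features/Dashboard") == "client-src-features-dashboard"
+```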
+
+Load complete deep-dive template from: {installed_path}/templates/deep-dive-template.md
+Fill template with all collected data from steps 13b-13d
+Write filled template to: {output_folder}/deep-dive-{{sanitized_target_name}}.md
+Validate deep-dive document completeness
+
+deep_dive_documentation
+
+Update state file:
+
+- Add to deep_dive_targets array: {"target_name": "{{target_name}}", "target_path": "{{target_path}}", "files_analyzed": {{file_count}}, "output_file": "deep-dive-{{sanitized_target_name}}.md", "timestamp": "{{now}}"}
+- Add output to outputs_generated
+- Update last_updated timestamp
+
+
+
+
+ Read existing index.md
+
+Check if "Deep-Dive Documentation" section exists
+
+
+ Add new section after "Generated Documentation":
+
+## Deep-Dive Documentation
+
+Detailed exhaustive analysis of specific areas:
+
+
+
+
+
+Add link to new deep-dive doc:
+
+- [{{target_name}} Deep-Dive](./deep-dive-{{sanitized_target_name}}.md) - Comprehensive analysis of {{target_description}} ({{file_count}} files, {{total_loc}} LOC) - Generated {{date}}
+
+
+ Update index metadata:
+ Last Updated: {{date}}
+ Deep-Dives: {{deep_dive_count}}
+
+
+ Save updated index.md
+
+ updated_index
+
+
+
+ Display summary:
+
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+## Deep-Dive Documentation Complete! ✓
+
+**Generated:** {output_folder}/deep-dive-{{sanitized_target_name}}.md
+**Files Analyzed:** {{file_count}}
+**Lines of Code Scanned:** {{total_loc}}
+**Time Taken:** ~{{duration}}
+
+**Documentation Includes:**
+
+- Complete file inventory with all exports
+- Dependency graph and data flow
+- Integration points and API contracts
+- Testing analysis and coverage
+- Related code and reuse opportunities
+- Implementation guidance
+
+**Index Updated:** {output_folder}/index.md now includes link to this deep-dive
+
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+
+Would you like to:
+
+1. **Deep-dive another area** - Analyze another feature/module/folder
+2. **Finish** - Complete workflow
+
+Your choice [1/2]:
+
+
+
+ Clear current deep_dive_target
+ Go to Step 13a (select new area)
+
+
+
+ Display final message:
+
+All deep-dive documentation complete!
+
+**Master Index:** {output_folder}/index.md
+**Deep-Dives Generated:** {{deep_dive_count}}
+
+These comprehensive docs are now ready for:
+
+- Architecture review
+- Implementation planning
+- Code understanding
+- Brownfield PRD creation
+
+Thank you for using the document-project workflow!
+
+Exit workflow
+
+
+
+
+
diff --git a/src/modules/bmm/workflows/1-analysis/document-project/workflows/deep-dive.yaml b/src/modules/bmm/workflows/1-analysis/document-project/workflows/deep-dive.yaml
new file mode 100644
index 00000000..2ad5c71c
--- /dev/null
+++ b/src/modules/bmm/workflows/1-analysis/document-project/workflows/deep-dive.yaml
@@ -0,0 +1,31 @@
+# Deep-Dive Documentation Workflow Configuration
+name: "document-project-deep-dive"
+description: "Exhaustive deep-dive documentation of specific project areas"
+author: "BMad"
+
+# This is a sub-workflow called by document-project/workflow.yaml
+parent_workflow: "{project-root}/bmad/bmm/workflows/1-analysis/document-project/workflow.yaml"
+
+# Critical variables inherited from parent
+config_source: "{project-root}/bmad/bmm/config.yaml"
+output_folder: "{config_source}:output_folder"
+user_name: "{config_source}:user_name"
+date: system-generated
+
+# Module path and component files
+installed_path: "{project-root}/src/modules/bmm/workflows/1-analysis/document-project/workflows"
+template: false # Action workflow
+instructions: "{installed_path}/deep-dive-instructions.md"
+validation: "{project-root}/src/modules/bmm/workflows/1-analysis/document-project/checklist.md"
+
+# Templates
+deep_dive_template: "{project-root}/src/modules/bmm/workflows/1-analysis/document-project/templates/deep-dive-template.md"
+
+# Runtime inputs (passed from parent workflow)
+workflow_mode: "deep_dive"
+scan_level: "exhaustive" # Deep-dive always uses exhaustive scan
+project_root_path: ""
+existing_index_path: "" # Path to existing index.md
+
+# Configuration
+autonomous: false # Requires user input to select target area
diff --git a/src/modules/bmm/workflows/1-analysis/document-project/workflows/full-scan-instructions.md b/src/modules/bmm/workflows/1-analysis/document-project/workflows/full-scan-instructions.md
new file mode 100644
index 00000000..176c51fc
--- /dev/null
+++ b/src/modules/bmm/workflows/1-analysis/document-project/workflows/full-scan-instructions.md
@@ -0,0 +1,1119 @@
+# Full Project Scan Instructions
+
+
+
+This workflow performs complete project documentation (Steps 1-12)
+Called by: document-project/instructions.md router
+Handles: initial_scan and full_rescan modes
+
+
+CSV LOADING STRATEGY - Understanding the Documentation Requirements System:
+
+Display explanation to user:
+
+**How Project Type Detection Works:**
+
+This workflow uses 3 CSV files to intelligently document your project:
+
+1. **project-types.csv** ({project_types_csv})
+   - Contains 12 project types (e.g., web, mobile, backend, cli, library, desktop, game, data, extension, infra, embedded)
+ - Each type has detection_keywords used to identify project type from codebase
+ - Used ONLY during initial project classification (Step 1)
+
+2. **documentation-requirements.csv** ({documentation_requirements_csv})
+ - 24-column schema that defines what to look for in each project type
+ - Columns include: requires_api_scan, requires_data_models, requires_ui_components, etc.
+ - Contains file patterns (key_file_patterns, critical_directories, test_file_patterns, etc.)
+ - Acts as a "scan guide" - tells the workflow WHERE to look and WHAT to document
+ - Example: For project_type_id="web", requires_api_scan=true, so workflow scans api/ folder
+
+3. **architecture-registry.csv** ({architecture_registry_csv})
+ - Maps detected tech stacks to architecture templates
+ - Used to select appropriate architecture document template
+ - Only loaded when generating architecture documentation (Step 8)
+
+**When Each CSV is Loaded:**
+
+- **Fresh Start (initial_scan)**: Load project-types.csv → detect type → load corresponding doc requirements row
+- **Resume**: Load ONLY the doc requirements row(s) for cached project_type_id(s)
+- **Full Rescan**: Same as fresh start (may re-detect project type)
+- **Deep Dive**: Load ONLY doc requirements for the part being deep-dived
+
+
+Now loading CSV files for fresh start...
+Load project-types.csv from: {project_types_csv}
+Store all 12 project types with their detection_keywords for use in Step 1
+Display: "Loaded 12 project type definitions"
+
+Load documentation-requirements.csv from: {documentation_requirements_csv}
+Store all rows indexed by project_type_id for later lookup
+Display: "Loaded documentation requirements for 12 project types"
+
+Load architecture-registry.csv from: {architecture_registry_csv}
+Store architecture templates for later matching in Step 3
+Display: "Loaded architecture template registry"
+
+Display: "✓ CSV data files loaded successfully. Ready to begin project analysis."
+
+
+
+Check if {output_folder}/index.md exists
+
+
+ Read existing index.md to extract metadata (date, project structure, parts count)
+ Store as {{existing_doc_date}}, {{existing_structure}}
+
+I found existing documentation generated on {{existing_doc_date}}.
+
+What would you like to do?
+
+1. **Re-scan entire project** - Update all documentation with latest changes
+2. **Deep-dive into specific area** - Generate detailed documentation for a particular feature/module/folder
+3. **Cancel** - Keep existing documentation as-is
+
+Your choice [1/2/3]:
+
+
+
+ Set workflow_mode = "full_rescan"
+ Continue to scan level selection below
+
+
+
+ Set workflow_mode = "deep_dive"
+ Set scan_level = "exhaustive"
+ Initialize state file with mode=deep_dive, scan_level=exhaustive
+ Jump to Step 13
+
+
+
+ Display message: "Keeping existing documentation. Exiting workflow."
+ Exit workflow
+
+
+
+
+ Set workflow_mode = "initial_scan"
+ Continue to scan level selection below
+
+
+Select Scan Level
+
+
+ Choose your scan depth level:
+
+**1. Quick Scan** (2-5 minutes) [DEFAULT]
+
+- Pattern-based analysis without reading source files
+- Scans: Config files, package manifests, directory structure
+- Best for: Quick project overview, initial understanding
+- File reading: Minimal (configs, README, package.json, etc.)
+
+**2. Deep Scan** (10-30 minutes)
+
+- Reads files in critical directories based on project type
+- Scans: All critical paths from documentation requirements
+- Best for: Comprehensive documentation for brownfield PRD
+- File reading: Selective (key files in critical directories)
+
+**3. Exhaustive Scan** (30-120 minutes)
+
+- Reads ALL source files in project
+- Scans: Every source file (excludes node_modules, dist, build)
+- Best for: Complete analysis, migration planning, detailed audit
+- File reading: Complete (all source files)
+
+Your choice [1/2/3] (default: 1):
+
+
+
+ Set scan_level = "quick"
+ Display: "Using Quick Scan (pattern-based, no source file reading)"
+
+
+
+ Set scan_level = "deep"
+ Display: "Using Deep Scan (reading critical files per project type)"
+
+
+
+ Set scan_level = "exhaustive"
+ Display: "Using Exhaustive Scan (reading all source files)"
+
+
+Initialize state file: {output_folder}/project-scan-report.json
+Every time you touch the state file, record: step id, human-readable summary (what you actually did), precise timestamp, and any outputs written. Vague phrases are unacceptable.
+Write initial state:
+{
+"workflow_version": "1.2.0",
+"timestamps": {"started": "{{current_timestamp}}", "last_updated": "{{current_timestamp}}"},
+"mode": "{{workflow_mode}}",
+"scan_level": "{{scan_level}}",
+"project_root": "{{project_root_path}}",
+"output_folder": "{{output_folder}}",
+"completed_steps": [],
+"current_step": "step_1",
+"findings": {},
+"outputs_generated": ["project-scan-report.json"],
+"resume_instructions": "Starting from step 1"
+}
+
+Continue with standard workflow from Step 1
+
+
+
+
+Ask user: "What is the root directory of the project to document?" (default: current working directory)
+Store as {{project_root_path}}
+
+Scan {{project_root_path}} for key indicators:
+
+- Directory structure (presence of client/, server/, api/, src/, app/, etc.)
+- Key files (package.json, go.mod, requirements.txt, etc.)
+- Technology markers matching detection_keywords from project-types.csv
+
+
+Detect if project is:
+
+- **Monolith**: Single cohesive codebase
+- **Monorepo**: Multiple parts in one repository
+- **Multi-part**: Separate client/server or similar architecture
+
+
+
+ List detected parts with their paths
+ I detected multiple parts in this project:
+ {{detected_parts_list}}
+
+Is this correct? Should I document each part separately? [y/n]
+
+
+Set repository_type = "monorepo" or "multi-part"
+For each detected part:
+
+- Identify root path
+- Run project type detection against project-types.csv
+- Store as part in project_parts array
+
+
+Ask user to specify correct parts and their paths
+
+
+
+ Set repository_type = "monolith"
+ Create single part in project_parts array with root_path = {{project_root_path}}
+ Run project type detection against project-types.csv
+
+
+For each part, match detected technologies and keywords against project-types.csv
+Assign project_type_id to each part
+Load corresponding documentation_requirements row for each part
+
+I've classified this project:
+{{project_classification_summary}}
+
+Does this look correct? [y/n/edit]
+
+
+project_structure
+project_parts_metadata
+
+IMMEDIATELY update state file with step completion:
+
+- Add to completed_steps: {"step": "step_1", "status": "completed", "timestamp": "{{now}}", "summary": "Classified as {{repository_type}} with {{parts_count}} parts"}
+- Update current_step = "step_2"
+- Update findings.project_classification with high-level summary only
+- **CACHE project_type_id(s)**: Add project_types array: [{"part_id": "{{part_id}}", "project_type_id": "{{project_type_id}}", "display_name": "{{display_name}}"}]
+- This cached data prevents reloading all CSV files on resume - we can load just the needed documentation_requirements row(s)
+- Update last_updated timestamp
+- Write state file
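+
+Since this update pattern recurs after every step, here is a minimal sketch of it (field names follow project-scan-report-schema.json; the helper itself is illustrative):
+
+```python
+# Append a completed step with a concrete summary, then advance current_step.
+import json
+from datetime import datetime, timezone
+from pathlib import Path
+
+def complete_step(state_path: str, step: str, next_step: str, summary: str, outputs=()):
+    path = Path(state_path)
+    state = json.loads(path.read_text())
+    now = datetime.now(timezone.utc).isoformat()
+    state["completed_steps"].append({
+        "step": step, "status": "completed", "timestamp": now,
+        "summary": summary, "outputs": list(outputs),
+    })
+    state["current_step"] = next_step
+    state["timestamps"]["last_updated"] = now
+    state["outputs_generated"].extend(outputs)
+    path.write_text(json.dumps(state, indent=2))
+```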
+
+
+PURGE detailed scan results from memory, keep only summary: "{{repository_type}}, {{parts_count}} parts, {{primary_tech}}"
+
+
+
+For each part, scan for existing documentation using patterns:
+- README.md, README.rst, README.txt
+- CONTRIBUTING.md, CONTRIBUTING.rst
+- ARCHITECTURE.md, ARCHITECTURE.txt, docs/architecture/
+- DEPLOYMENT.md, DEPLOY.md, docs/deployment/
+- API.md, docs/api/
+- Any files in docs/, documentation/, .github/ folders
+
+
+Create inventory of existing_docs with:
+
+- File path
+- File type (readme, architecture, api, etc.)
+- Which part it belongs to (if multi-part)
+
+
+I found these existing documentation files:
+{{existing_docs_list}}
+
+Are there any other important documents or key areas I should focus on while analyzing this project? [Provide paths or guidance, or type 'none']
+
+
+Store user guidance as {{user_context}}
+
+existing_documentation_inventory
+user_provided_context
+
+Update state file:
+
+- Add to completed_steps: {"step": "step_2", "status": "completed", "timestamp": "{{now}}", "summary": "Found {{existing_docs_count}} existing docs"}
+- Update current_step = "step_3"
+- Update last_updated timestamp
+
+
+PURGE detailed doc contents from memory, keep only: "{{existing_docs_count}} docs found"
+
+
+
+For each part in project_parts:
+ - Load key_file_patterns from documentation_requirements
+ - Scan part root for these patterns
+ - Parse technology manifest files (package.json, go.mod, requirements.txt, etc.)
+ - Extract: framework, language, version, database, dependencies
+ - Build technology_table with columns: Category, Technology, Version, Justification
+
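+As a concrete illustration for one ecosystem, a hedged sketch of turning package.json dependencies into technology_table rows (the Category and Justification values are placeholders; other manifests such as go.mod or requirements.txt need their own parsers):
+
+```python
+# Sketch only: maps package.json dependencies to table rows.
+import json
+
+def tech_rows_from_package_json(path: str) -> list[dict]:
+    with open(path) as f:
+        pkg = json.load(f)
+    return [
+        {
+            "Category": "Runtime dependency",   # placeholder categorization
+            "Technology": name,
+            "Version": version,
+            "Justification": "Declared in package.json",
+        }
+        for name, version in pkg.get("dependencies", {}).items()
+    ]
+```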
+
+Match detected tech stack against architecture_registry_csv:
+
+- Use project_type_id + languages + architecture_style tags
+- Find closest matching architecture template
+- Store as {{architecture_match}} for each part
+
+
+technology_stack
+architecture_template_matches
+
+Update state file:
+
+- Add to completed_steps: {"step": "step_3", "status": "completed", "timestamp": "{{now}}", "summary": "Tech stack: {{primary_framework}}"}
+- Update current_step = "step_4"
+- Update findings.technology_stack with summary per part
+- Update last_updated timestamp
+
+
+PURGE detailed tech analysis from memory, keep only: "{{framework}} on {{language}}"
+
+
+
+
+BATCHING STRATEGY FOR DEEP/EXHAUSTIVE SCANS
+
+
+ This step requires file reading. Apply batching strategy:
+
+Identify subfolders to process based on:
+
+- scan_level == "deep": Use critical_directories from documentation_requirements
+- scan_level == "exhaustive": Get ALL subfolders recursively (excluding node_modules, .git, dist, build, coverage)
+
+
+For each subfolder to scan:
+
+1. Read all files in subfolder (consider file size - use judgment for files >5000 LOC)
+2. Extract required information based on conditional flags below
+3. IMMEDIATELY write findings to appropriate output file
+4. Validate written document (section-level validation)
+5. Update state file with batch completion
+6. PURGE detailed findings from context, keep only 1-2 sentence summary
+7. Move to next subfolder
+
+
+Track batches in state file:
+findings.batches_completed: [
+{"path": "{{subfolder_path}}", "files_scanned": {{count}}, "summary": "{{brief_summary}}"}
+]
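+
+A minimal sketch of this batch loop (document extraction and writing are elided; the state fields match the batch-tracking shape above):
+
+```python
+# Process one subfolder at a time and persist state after each batch so
+# an interrupted run can resume without rescanning finished folders.
+import json
+from pathlib import Path
+
+def process_batches(subfolders: list[str], state_path: str) -> None:
+    state = json.loads(Path(state_path).read_text())
+    batches = state.setdefault("findings", {}).setdefault("batches_completed", [])
+    for folder in subfolders:
+        files = [p for p in Path(folder).rglob("*") if p.is_file()]
+        # ... read `files`, extract findings, and write the output doc here ...
+        batches.append({
+            "path": folder,
+            "files_scanned": len(files),
+            "summary": f"Scanned {len(files)} files in {folder}",  # keep it short
+        })
+        Path(state_path).write_text(json.dumps(state, indent=2))  # persist per batch
+```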
+
+
+
+
+ Use pattern matching only - do NOT read source files
+ Use glob/grep to identify file locations and patterns
+ Extract information from filenames, directory structure, and config files only
+
+
+For each part, check documentation_requirements boolean flags and execute corresponding scans:
+
+
+ Scan for API routes and endpoints using integration_scan_patterns
+ Look for: controllers/, routes/, api/, handlers/, endpoints/
+
+
+ Use glob to find route files, extract patterns from filenames and folder structure
+
+
+
+ Read files in batches (one subfolder at a time)
+ Extract: HTTP methods, paths, request/response types from actual code
+
+
+Build API contracts catalog
+IMMEDIATELY write to: {output_folder}/api-contracts-{part_id}.md
+Validate document has all required sections
+Update state file with output generated
+PURGE detailed API data, keep only: "{{api_count}} endpoints documented"
+api_contracts_{part_id}
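+
+For deep/exhaustive scans, route extraction might look like the following sketch. It only handles Express-style `router.<method>(...)` calls, which is an assumption; real projects need framework-aware parsing:
+
+```python
+# Hypothetical regex-based route extractor for a single handlers file.
+import re
+from pathlib import Path
+
+ROUTE_RE = re.compile(r"""\brouter\.(get|post|put|patch|delete)\(\s*['"]([^'"]+)['"]""")
+
+def extract_routes(file_path: str) -> list[dict]:
+    text = Path(file_path).read_text(errors="replace")
+    return [
+        {"method": method.upper(), "path": route}
+        for method, route in ROUTE_RE.findall(text)
+    ]
+```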
+
+
+
+ Scan for data models using schema_migration_patterns
+ Look for: models/, schemas/, entities/, migrations/, prisma/, ORM configs
+
+
+ Identify schema files via glob, parse migration file names for table discovery
+
+
+
+ Read model files in batches (one subfolder at a time)
+ Extract: table names, fields, relationships, constraints from actual code
+
+
+Build database schema documentation
+IMMEDIATELY write to: {output_folder}/data-models-{part_id}.md
+Validate document completeness
+Update state file with output generated
+PURGE detailed schema data, keep only: "{{table_count}} tables documented"
+data_models_{part_id}
+
+
+
+ Analyze state management patterns
+ Look for: Redux, Context API, MobX, Vuex, Pinia, Provider patterns
+ Identify: stores, reducers, actions, state structure
+ state_management_patterns_{part_id}
+
+
+
+ Inventory UI component library
+ Scan: components/, ui/, widgets/, views/ folders
+ Categorize: Layout, Form, Display, Navigation, etc.
+ Identify: Design system, component patterns, reusable elements
+ ui_component_inventory_{part_id}
+
+
+
+ Look for hardware schematics using hardware_interface_patterns
+ This appears to be an embedded/hardware project. Do you have:
+ - Pinout diagrams
+ - Hardware schematics
+ - PCB layouts
+ - Hardware documentation
+
+If yes, please provide paths or links. [Provide paths or type 'none']
+
+Store hardware docs references
+hardware_documentation_{part_id}
+
+
+
+ Scan and catalog assets using asset_patterns
+ Categorize by: Images, Audio, 3D Models, Sprites, Textures, etc.
+ Calculate: Total size, file counts, formats used
+ asset_inventory_{part_id}
+
+
+Scan for additional patterns based on doc requirements:
+
+- config_patterns → Configuration management
+- auth_security_patterns → Authentication/authorization approach
+- entry_point_patterns → Application entry points and bootstrap
+- shared_code_patterns → Shared libraries and utilities
+- async_event_patterns → Event-driven architecture
+- ci_cd_patterns → CI/CD pipeline details
+- localization_patterns → i18n/l10n support
+
+
+Apply scan_level strategy to each pattern scan (quick=glob only, deep/exhaustive=read files)
+
+comprehensive_analysis_{part_id}
+
+Update state file:
+
+- Add to completed_steps: {"step": "step_4", "status": "completed", "timestamp": "{{now}}", "summary": "Conditional analysis complete, {{files_generated}} files written"}
+- Update current_step = "step_5"
+- Update last_updated timestamp
+- List all outputs_generated
+
+
+PURGE all detailed scan results from context. Keep only summaries:
+
+- "APIs: {{api_count}} endpoints"
+- "Data: {{table_count}} tables"
+- "Components: {{component_count}} components"
+
+
+
+
+For each part, generate complete directory tree using critical_directories from doc requirements
+
+Annotate the tree with:
+
+- Purpose of each critical directory
+- Entry points marked
+- Key file locations highlighted
+- Integration points noted (for multi-part projects)
+
+
+Show how parts are organized and where they interface
+
+Create formatted source tree with descriptions:
+
+```
+project-root/
+├── client/ # React frontend (Part: client)
+│ ├── src/
+│ │ ├── components/ # Reusable UI components
+│ │ ├── pages/ # Route-based pages
+│ │ └── api/ # API client layer → Calls server/
+├── server/ # Express API backend (Part: api)
+│ ├── src/
+│ │ ├── routes/ # REST API endpoints
+│ │ ├── models/ # Database models
+│ │ └── services/ # Business logic
+```
+
+
+
+source_tree_analysis
+critical_folders_summary
+
+IMMEDIATELY write source-tree-analysis.md to disk
+Validate document structure
+Update state file:
+
+- Add to completed_steps: {"step": "step_5", "status": "completed", "timestamp": "{{now}}", "summary": "Source tree documented"}
+- Update current_step = "step_6"
+- Add output: "source-tree-analysis.md"
+
+ PURGE detailed tree from context, keep only: "Source tree with {{folder_count}} critical folders"
+
+
+
+Scan for development setup using key_file_patterns and existing docs:
+- Prerequisites (Node version, Python version, etc.)
+- Installation steps (npm install, etc.)
+- Environment setup (.env files, config)
+- Build commands (npm run build, make, etc.)
+- Run commands (npm start, go run, etc.)
+- Test commands using test_file_patterns
+
+
+Look for deployment configuration using ci_cd_patterns:
+
+- Dockerfile, docker-compose.yml
+- Kubernetes configs (k8s/, helm/)
+- CI/CD pipelines (.github/workflows/, .gitlab-ci.yml)
+- Deployment scripts
+- Infrastructure as Code (terraform/, pulumi/)
+
+
+
+ Extract contribution guidelines:
+ - Code style rules
+ - PR process
+ - Commit conventions
+ - Testing requirements
+
+
+
+development_instructions
+deployment_configuration
+contribution_guidelines
+
+Update state file:
+
+- Add to completed_steps: {"step": "step_6", "status": "completed", "timestamp": "{{now}}", "summary": "Dev/deployment guides written"}
+- Update current_step = "step_7"
+- Add generated outputs to list
+
+ PURGE detailed instructions, keep only: "Dev setup and deployment documented"
+
+
+
+Analyze how parts communicate:
+- Scan integration_scan_patterns across parts
+- Identify: REST calls, GraphQL queries, gRPC, message queues, shared databases
+- Document: API contracts between parts, data flow, authentication flow
+
+
+Create integration_points array with:
+
+- from: source part
+- to: target part
+- type: REST API, GraphQL, gRPC, Event Bus, etc.
+- details: Endpoints, protocols, data formats
+
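+
+For reference, one illustrative entry (the values are made-up examples, not detected from any real project):
+
+```python
+# Example shape of a single integration_points entry.
+integration_point = {
+    "from": "client",
+    "to": "api",
+    "type": "REST API",
+    "details": "client/src/api/* calls /api/v1/* over HTTPS with JWT auth",
+}
+```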
+
+IMMEDIATELY write integration-architecture.md to disk
+Validate document completeness
+
+integration_architecture
+
+Update state file:
+
+- Add to completed_steps: {"step": "step_7", "status": "completed", "timestamp": "{{now}}", "summary": "Integration architecture documented"}
+- Update current_step = "step_8"
+
+ PURGE integration details, keep only: "{{integration_count}} integration points"
+
+
+
+For each part in project_parts:
+ - Use matched architecture template from Step 3 as base structure
+ - Fill in all sections with discovered information:
+ * Executive Summary
+ * Technology Stack (from Step 3)
+ * Architecture Pattern (from registry match)
+ * Data Architecture (from Step 4 data models scan)
+ * API Design (from Step 4 API scan if applicable)
+ * Component Overview (from Step 4 component scan if applicable)
+ * Source Tree (from Step 5)
+ * Development Workflow (from Step 6)
+ * Deployment Architecture (from Step 6)
+ * Testing Strategy (from test patterns)
+
+
+
+ - Generate: architecture.md (no part suffix)
+
+
+
+ - Generate: architecture-{part_id}.md for each part
+
+
+For each architecture file generated:
+
+- IMMEDIATELY write architecture file to disk
+- Validate against architecture template schema
+- Update state file with output
+- PURGE detailed architecture from context, keep only: "Architecture for {{part_id}} written"
+
+
+architecture_document
+
+Update state file:
+
+- Add to completed_steps: {"step": "step_8", "status": "completed", "timestamp": "{{now}}", "summary": "Architecture docs written for {{parts_count}} parts"}
+- Update current_step = "step_9"
+
+
+
+
+Generate project-overview.md with:
+- Project name and purpose (from README or user input)
+- Executive summary
+- Tech stack summary table
+- Architecture type classification
+- Repository structure (monolith/monorepo/multi-part)
+- Links to detailed docs
+
+
+Generate source-tree-analysis.md with:
+
+- Full annotated directory tree from Step 5
+- Critical folders explained
+- Entry points documented
+- Multi-part structure (if applicable)
+
+
+IMMEDIATELY write project-overview.md to disk
+Validate document sections
+
+Generate source-tree-analysis.md (if not already written in Step 5)
+IMMEDIATELY write to disk and validate
+
+Generate component-inventory.md (or per-part versions) with:
+
+- All discovered components from Step 4
+- Categorized by type
+- Reusable vs specific components
+- Design system elements (if found)
+
+ IMMEDIATELY write each component inventory to disk and validate
+
+Generate development-guide.md (or per-part versions) with:
+
+- Prerequisites and dependencies
+- Environment setup instructions
+- Local development commands
+- Build process
+- Testing approach and commands
+- Common development tasks
+
+ IMMEDIATELY write each development guide to disk and validate
+
+
+ Generate deployment-guide.md with:
+ - Infrastructure requirements
+ - Deployment process
+ - Environment configuration
+ - CI/CD pipeline details
+
+ IMMEDIATELY write to disk and validate
+
+
+
+ Generate contribution-guide.md with:
+ - Code style and conventions
+ - PR process
+ - Testing requirements
+ - Documentation standards
+
+ IMMEDIATELY write to disk and validate
+
+
+
+ Generate api-contracts.md (or per-part) with:
+ - All API endpoints
+ - Request/response schemas
+ - Authentication requirements
+ - Example requests
+
+ IMMEDIATELY write to disk and validate
+
+
+
+ Generate data-models.md (or per-part) with:
+ - Database schema
+ - Table relationships
+ - Data models and entities
+ - Migration strategy
+
+ IMMEDIATELY write to disk and validate
+
+
+
+ Generate integration-architecture.md with:
+ - How parts communicate
+ - Integration points diagram/description
+ - Data flow between parts
+ - Shared dependencies
+
+ IMMEDIATELY write to disk and validate
+
+Generate project-parts.json metadata file:
+
+```json
+{
+  "repository_type": "monorepo",
+  "parts": [ ... ],
+  "integration_points": [ ... ]
+}
+```
+
+IMMEDIATELY write to disk
+
+
+supporting_documentation
+
+Update state file:
+
+- Add to completed_steps: {"step": "step_9", "status": "completed", "timestamp": "{{now}}", "summary": "All supporting docs written"}
+- Update current_step = "step_10"
+- List all newly generated outputs
+
+
+PURGE all document contents from context, keep only list of files generated
+
+
+
+
+INCOMPLETE DOCUMENTATION MARKER CONVENTION:
+When a document SHOULD be generated but wasn't (due to quick scan, missing data, conditional requirements not met):
+
+- Use EXACTLY this marker: _(To be generated)_
+- Place it at the end of the markdown link line
+- Example: - [API Contracts - Server](./api-contracts-server.md) _(To be generated)_
+- This allows Step 11 to detect and offer to complete these items
+- ALWAYS use this exact format for consistency and automated detection
+
+
+Create index.md with intelligent navigation based on project structure
+
+
+ Generate simple index with:
+ - Project name and type
+ - Quick reference (tech stack, architecture type)
+ - Links to all generated docs
+ - Links to discovered existing docs
+ - Getting started section
+
+
+
+
+ Generate comprehensive index with:
+ - Project overview and structure summary
+ - Part-based navigation section
+ - Quick reference by part
+ - Cross-part integration links
+ - Links to all generated and existing docs
+ - Getting started per part
+
+
+
+Include in index.md:
+
+## Project Documentation Index
+
+### Project Overview
+
+- **Type:** {{repository_type}} {{#if multi-part}}with {{parts.length}} parts{{/if}}
+- **Primary Language:** {{primary_language}}
+- **Architecture:** {{architecture_type}}
+
+### Quick Reference
+
+{{#if single_part}}
+
+- **Tech Stack:** {{tech_stack_summary}}
+- **Entry Point:** {{entry_point}}
+- **Architecture Pattern:** {{architecture_pattern}}
+ {{else}}
+ {{#each parts}}
+
+#### {{part_name}} ({{part_id}})
+
+- **Type:** {{project_type}}
+- **Tech Stack:** {{tech_stack}}
+- **Root:** {{root_path}}
+ {{/each}}
+ {{/if}}
+
+### Generated Documentation
+
+- [Project Overview](./project-overview.md)
+- [Architecture](./architecture{{#if multi-part}}-{part_id}{{/if}}.md){{#unless architecture_file_exists}} _(To be generated)_{{/unless}}
+- [Source Tree Analysis](./source-tree-analysis.md)
+- [Component Inventory](./component-inventory{{#if multi-part}}-{part_id}{{/if}}.md){{#unless component_inventory_exists}} _(To be generated)_{{/unless}}
+- [Development Guide](./development-guide{{#if multi-part}}-{part_id}{{/if}}.md){{#unless dev_guide_exists}} _(To be generated)_{{/unless}}
+  {{#if deployment_found}}- [Deployment Guide](./deployment-guide.md){{#unless deployment_guide_exists}} _(To be generated)_{{/unless}}{{/if}}
+  {{#if contribution_found}}- [Contribution Guide](./contribution-guide.md){{/if}}
+  {{#if api_documented}}- [API Contracts](./api-contracts{{#if multi-part}}-{part_id}{{/if}}.md){{#unless api_contracts_exists}} _(To be generated)_{{/unless}}{{/if}}
+  {{#if data_models_documented}}- [Data Models](./data-models{{#if multi-part}}-{part_id}{{/if}}.md){{#unless data_models_exists}} _(To be generated)_{{/unless}}{{/if}}
+  {{#if multi-part}}- [Integration Architecture](./integration-architecture.md){{#unless integration_arch_exists}} _(To be generated)_{{/unless}}{{/if}}
+
+### Existing Documentation
+
+{{#each existing_docs}}
+
+- [{{title}}]({{relative_path}}) - {{description}}
+ {{/each}}
+
+### Getting Started
+
+{{getting_started_instructions}}
+
+
+Before writing index.md, check which expected files actually exist:
+
+- For each document that should have been generated, check if file exists on disk
+- Set existence flags: architecture_file_exists, component_inventory_exists, dev_guide_exists, etc.
+- These flags determine whether to add the _(To be generated)_ marker
+- Track which files are missing in {{missing_docs_list}} for reporting
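+
+A minimal sketch of that existence check (the expected-file list shown is illustrative):
+
+```python
+# Map expected docs to existence flags and collect the missing ones,
+# which feed {{missing_docs_list}} and the _(To be generated)_ markers.
+from pathlib import Path
+
+def check_expected_docs(output_folder: str, expected: list[str]):
+    flags, missing = {}, []
+    for name in expected:
+        exists = (Path(output_folder) / name).is_file()
+        flags[name] = exists
+        if not exists:
+            missing.append(name)
+    return flags, missing
+
+flags, missing = check_expected_docs("docs", ["architecture.md", "development-guide.md"])
+```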
+
+
+IMMEDIATELY write index.md to disk with appropriate _(To be generated)_ markers for missing files
+Validate index has all required sections and links are valid
+
+index
+
+Update state file:
+
+- Add to completed_steps: {"step": "step_10", "status": "completed", "timestamp": "{{now}}", "summary": "Master index generated"}
+- Update current_step = "step_11"
+- Add output: "index.md"
+
+
+PURGE index content from context
+
+
+
+Show summary of all generated files:
+Generated in {{output_folder}}/:
+{{file_list_with_sizes}}
+
+
+Run validation checklist from {validation}
+
+INCOMPLETE DOCUMENTATION DETECTION:
+
+1. PRIMARY SCAN: Look for exact marker: _(To be generated)_
+2. FALLBACK SCAN: Look for fuzzy patterns (in case agent was lazy):
+ - _(TBD)_
+ - _(TODO)_
+ - _(Coming soon)_
+ - _(Not yet generated)_
+ - _(Pending)_
+3. Extract document metadata from each match for user selection
+
+
+Read {output_folder}/index.md
+
+Scan for incomplete documentation markers:
+Step 1: Search for exact pattern "_(To be generated)_" (case-sensitive)
+Step 2: For each match found, extract the entire line
+Step 3: Parse line to extract:
+
+- Document title (text within [brackets] or **bold**)
+- File path (from markdown link or inferable from title)
+- Document type (infer from filename: architecture, api-contracts, data-models, component-inventory, development-guide, deployment-guide, integration-architecture)
+- Part ID if applicable (extract from filename like "architecture-server.md" → part_id: "server")
+ Step 4: Add to {{incomplete_docs_strict}} array
+
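+A sketch of the strict scan and line parsing (the doc-type lookup mirrors the list in Step 3; the path-based parsing is a simplification of "inferable from title"):
+
+```python
+# Find lines tagged with the exact marker and parse their metadata.
+import re
+
+MARKER = "_(To be generated)_"
+LINK_RE = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")
+DOC_TYPES = ("api-contracts", "data-models", "component-inventory", "development-guide",
+             "deployment-guide", "integration-architecture", "architecture")
+
+def scan_for_incomplete(index_text: str) -> list[dict]:
+    items = []
+    for line in index_text.splitlines():
+        if MARKER not in line:
+            continue
+        match = LINK_RE.search(line)
+        if not match:
+            continue  # the workflow would fall back to inferring the path from the title
+        title, path = match.groups()
+        stem = path.rsplit("/", 1)[-1].removesuffix(".md")
+        doc_type, part_id = stem, None
+        for known in DOC_TYPES:
+            if stem == known or stem.startswith(known + "-"):
+                doc_type = known
+                part_id = stem[len(known) + 1:] or None  # "architecture-server" -> "server"
+                break
+        items.append({"title": title, "file_path": path, "doc_type": doc_type,
+                      "part_id": part_id, "line_text": line.strip(), "fuzzy_match": False})
+    return items
+```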
+
+Fallback fuzzy scan for alternate markers:
+Search for patterns: _(TBD)_, _(TODO)_, _(Coming soon)_, _(Not yet generated)_, _(Pending)_
+For each fuzzy match:
+
+- Extract same metadata as strict scan
+- Add to {{incomplete_docs_fuzzy}} array with fuzzy_match flag
+
+
+Combine results:
+Set {{incomplete_docs_list}} = {{incomplete_docs_strict}} + {{incomplete_docs_fuzzy}}
+For each item store structure:
+{
+"title": "Architecture – Server",
+"file*path": "./architecture-server.md",
+"doc_type": "architecture",
+"part_id": "server",
+"line_text": "- [Architecture – Server](./architecture-server.md) *(To be generated)\_",
+"fuzzy_match": false
+}
+
+
+Documentation generation complete!
+
+Summary:
+
+- Project Type: {{project_type_summary}}
+- Parts Documented: {{parts_count}}
+- Files Generated: {{files_count}}
+- Total Lines: {{total_lines}}
+
+{{#if incomplete_docs_list.length > 0}}
+⚠️ **Incomplete Documentation Detected:**
+
+I found {{incomplete_docs_list.length}} item(s) marked as incomplete:
+
+{{#each incomplete_docs_list}}
+{{@index + 1}}. **{{title}}** ({{doc_type}}{{#if part_id}} for {{part_id}}{{/if}}){{#if fuzzy_match}} ⚠️ [non-standard marker]{{/if}}
+{{/each}}
+
+{{/if}}
+
+Would you like to:
+
+{{#if incomplete_docs_list.length > 0}}
+
+1. **Generate incomplete documentation** - Complete any of the {{incomplete_docs_list.length}} items above
+2. Review any specific section [type section name]
+3. Add more detail to any area [type area name]
+4. Generate additional custom documentation [describe what]
+5. Finalize and complete [type 'done']
+ {{else}}
+1. Review any specific section [type section name]
+2. Add more detail to any area [type area name]
+3. Generate additional documentation [describe what]
+4. Finalize and complete [type 'done']
+ {{/if}}
+
+Your choice:
+
+
+
+ Which incomplete items would you like to generate?
+
+{{#each incomplete_docs_list}}
+{{@index + 1}}. {{title}} ({{doc_type}}{{#if part_id}} - {{part_id}}{{/if}})
+{{/each}}
+{{incomplete_docs_list.length + 1}}. All of them
+
+Enter number(s) separated by commas (e.g., "1,3,5"), or type 'all':
+
+
+Parse user selection:
+
+- If "all", set {{selected_items}} = all items in {{incomplete_docs_list}}
+- If comma-separated numbers, extract selected items by index
+- Store result in {{selected_items}} array
+
+
+ Display: "Generating {{selected_items.length}} document(s)..."
+
+ For each item in {{selected_items}}:
+
+1. **Identify the part and requirements:**
+ - Extract part_id from item (if exists)
+ - Look up part data in project_parts array from state file
+ - Load documentation_requirements for that part's project_type_id
+
+2. **Route to appropriate generation substep based on doc_type:**
+
+ **If doc_type == "architecture":**
+ - Display: "Generating architecture documentation for {{part_id}}..."
+ - Load architecture_match for this part from state file (Step 3 cache)
+ - Re-run Step 8 architecture generation logic ONLY for this specific part
+ - Use matched template and fill with cached data from state file
+ - Write architecture-{{part_id}}.md to disk
+ - Validate completeness
+
+ **If doc_type == "api-contracts":**
+ - Display: "Generating API contracts for {{part_id}}..."
+ - Load part data and documentation_requirements
+ - Re-run Step 4 API scan substep targeting ONLY this part
+ - Use scan_level from state file (quick/deep/exhaustive)
+ - Generate api-contracts-{{part_id}}.md
+ - Validate document structure
+
+ **If doc_type == "data-models":**
+ - Display: "Generating data models documentation for {{part_id}}..."
+ - Re-run Step 4 data models scan substep targeting ONLY this part
+ - Use schema_migration_patterns from documentation_requirements
+ - Generate data-models-{{part_id}}.md
+ - Validate completeness
+
+ **If doc_type == "component-inventory":**
+ - Display: "Generating component inventory for {{part_id}}..."
+ - Re-run Step 9 component inventory generation for this specific part
+ - Scan components/, ui/, widgets/ folders
+ - Generate component-inventory-{{part_id}}.md
+ - Validate structure
+
+ **If doc_type == "development-guide":**
+ - Display: "Generating development guide for {{part_id}}..."
+ - Re-run Step 9 development guide generation for this specific part
+ - Use key_file_patterns and test_file_patterns from documentation_requirements
+ - Generate development-guide-{{part_id}}.md
+ - Validate completeness
+
+ **If doc_type == "deployment-guide":**
+ - Display: "Generating deployment guide..."
+ - Re-run Step 6 deployment configuration scan
+ - Re-run Step 9 deployment guide generation
+ - Generate deployment-guide.md
+ - Validate structure
+
+ **If doc_type == "integration-architecture":**
+ - Display: "Generating integration architecture..."
+ - Re-run Step 7 integration analysis for all parts
+ - Generate integration-architecture.md
+ - Validate completeness
+
+3. **Post-generation actions:**
+ - Confirm file was written successfully
+ - Update state file with newly generated output
+ - Add to {{newly_generated_docs}} tracking list
+ - Display: "✓ Generated: {{file_path}}"
+
+4. **Handle errors:**
+ - If generation fails, log error and continue with next item
+ - Track failed items in {{failed_generations}} list
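+
+Taken together, items 1-4 form a dispatch-and-recover loop over {{selected_items}}. A minimal sketch in JavaScript (the `generators` map and its runner functions are hypothetical stand-ins for the substeps above):
+
+```javascript
+// Sketch: route each selected item to its generator; keep going on failure.
+async function generateSelected(selectedItems, generators) {
+  const newlyGeneratedDocs = [];
+  const failedGenerations = [];
+  for (const item of selectedItems) {
+    try {
+      const generate = generators[item.doc_type];
+      if (!generate) throw new Error(`No generator for doc_type "${item.doc_type}"`);
+      const filePath = await generate(item); // re-runs only the relevant substep
+      newlyGeneratedDocs.push({ title: item.title, file_path: filePath });
+      console.log(`✓ Generated: ${filePath}`);
+    } catch (err) {
+      failedGenerations.push({ title: item.title, error_message: err.message });
+    }
+  }
+  return { newlyGeneratedDocs, failedGenerations };
+}
+```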
+
+
+After all selected items are processed:
+
+**Update index.md to remove markers:**
+
+1. Read current index.md content
+2. For each item in {{newly_generated_docs}}:
+ - Find the line containing the file link and marker
+ - Remove the _(To be generated)_ or fuzzy marker text
+ - Leave the markdown link intact
+3. Write updated index.md back to disk
+4. Update state file to record index.md modification
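+
+A minimal sketch of the marker cleanup (illustrative JavaScript; the regex covers the standard marker, while non-standard fuzzy markers would be matched via the `line_text` recorded in the state file):
+
+```javascript
+const fs = require('node:fs');
+
+// Sketch: strip the _(To be generated)_ marker from lines that link
+// to newly generated files, leaving the markdown links intact.
+function removeIncompleteMarkers(indexPath, newlyGeneratedDocs) {
+  const lines = fs.readFileSync(indexPath, 'utf8').split('\n');
+  const updated = lines.map((line) =>
+    newlyGeneratedDocs.some((doc) => line.includes(doc.file_path))
+      ? line.replace(/\s*_\(To be generated\)_/, '')
+      : line,
+  );
+  fs.writeFileSync(indexPath, updated.join('\n'));
+}
+```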
+
+
+Display generation summary:
+
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+✓ **Documentation Generation Complete!**
+
+**Successfully Generated:**
+
+{{#each newly_generated_docs}}
+- {{title}} → {{file_path}}
+{{/each}}
+
+{{#if failed_generations.length > 0}}
+**Failed to Generate:**
+
+{{#each failed_generations}}
+- {{title}} ({{error_message}})
+{{/each}}
+{{/if}}
+
+**Updated:** index.md (removed incomplete markers)
+
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+
+Update state file with all generation activities
+
+Return to Step 11 menu (loop back to check for any remaining incomplete items)
+
+
+Make requested modifications and regenerate affected files
+Proceed to Step 12 completion
+
+
+ Update state file:
+- Add to completed_steps: {"step": "step_11_iteration", "status": "completed", "timestamp": "{{now}}", "summary": "Review iteration complete"}
+- Keep current_step = "step_11" (for loop back)
+- Update last_updated timestamp
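+
+The same bookkeeping, sketched in JavaScript (field names follow the entry above; the helper itself is illustrative):
+
+```javascript
+// Sketch: append the iteration record without advancing current_step.
+function recordIteration(state, now) {
+  state.completed_steps.push({
+    step: 'step_11_iteration',
+    status: 'completed',
+    timestamp: now,
+    summary: 'Review iteration complete',
+  });
+  state.current_step = 'step_11'; // stay on Step 11 for the loop back
+  state.last_updated = now;
+  return state;
+}
+```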
+
+ Loop back to beginning of Step 11 (re-scan for remaining incomplete docs)
+
+
+
+ Update state file:
+- Add to completed_steps: {"step": "step_11", "status": "completed", "timestamp": "{{now}}", "summary": "Validation and review complete"}
+- Update current_step = "step_12"
+
+ Proceed to Step 12
+
+
+
+
+Create final summary report
+Compile verification recap variables:
+ - Set {{verification_summary}} to the concrete tests, validations, or scripts you executed (or "none run").
+ - Set {{open_risks}} to any remaining risks or TODO follow-ups (or "none").
+ - Set {{next_checks}} to recommended actions before merging/deploying (or "none").
+
+
+Display completion message:
+
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+## Project Documentation Complete! ✓
+
+**Location:** {{output_folder}}/
+
+**Master Index:** {{output_folder}}/index.md
+👆 This is your primary entry point for AI-assisted development
+
+**Generated Documentation:**
+{{generated_files_list}}
+
+**Next Steps:**
+
+1. Review the index.md to familiarize yourself with the documentation structure
+2. When creating a brownfield PRD, point the PRD workflow to: {{output_folder}}/index.md
+3. For UI-only features: Reference {{output_folder}}/architecture-{{ui_part_id}}.md
+4. For API-only features: Reference {{output_folder}}/architecture-{{api_part_id}}.md
+5. For full-stack features: Reference both part architectures + integration-architecture.md
+
+**Verification Recap:**
+
+- Tests/validations executed: {{verification_summary}}
+- Outstanding risks or follow-ups: {{open_risks}}
+- Recommended next checks before PR: {{next_checks}}
+
+**Brownfield PRD Command:**
+When ready to plan new features, run the PRD workflow and provide this index as input.
+
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+
+FINALIZE state file:
+
+- Add to completed_steps: {"step": "step_12", "status": "completed", "timestamp": "{{now}}", "summary": "Workflow complete"}
+- Update timestamps.completed = "{{now}}"
+- Update current_step = "completed"
+- Write final state file
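+
+For orientation, the finalized state file might look roughly like this (a sketch showing only the fields named in this workflow; timestamps are placeholders for "{{now}}"):
+
+```javascript
+// Sketch: shape of project-scan-report.json after Step 12 finalization.
+const finalState = {
+  current_step: 'completed',
+  timestamps: { completed: '2025-06-01T12:00:00Z' },
+  completed_steps: [
+    // ...records from earlier steps...
+    { step: 'step_12', status: 'completed', timestamp: '2025-06-01T12:00:00Z', summary: 'Workflow complete' },
+  ],
+};
+```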
+
+
+Display: "State file saved: {{output_folder}}/project-scan-report.json"
+
+
diff --git a/src/modules/bmm/workflows/1-analysis/document-project/workflows/full-scan.yaml b/src/modules/bmm/workflows/1-analysis/document-project/workflows/full-scan.yaml
new file mode 100644
index 00000000..64d6861a
--- /dev/null
+++ b/src/modules/bmm/workflows/1-analysis/document-project/workflows/full-scan.yaml
@@ -0,0 +1,33 @@
+# Full Project Scan Workflow Configuration
+name: "document-project-full-scan"
+description: "Complete project documentation workflow (initial scan or full rescan)"
+author: "BMad"
+
+# This is a sub-workflow called by document-project/workflow.yaml
+parent_workflow: "{project-root}/src/modules/bmm/workflows/1-analysis/document-project/workflow.yaml"
+
+# Critical variables inherited from parent
+config_source: "{project-root}/bmad/bmb/config.yaml"
+output_folder: "{config_source}:output_folder"
+user_name: "{config_source}:user_name"
+date: system-generated
+
+# Data files
+project_types_csv: "{project-root}/src/modules/bmm/workflows/1-analysis/document-project/data/project-types.csv"
+documentation_requirements_csv: "{project-root}/src/modules/bmm/workflows/1-analysis/document-project/data/documentation-requirements.csv"
+architecture_registry_csv: "{project-root}/src/modules/bmm/workflows/1-analysis/document-project/data/architecture-registry.csv"
+
+# Module path and component files
+installed_path: "{project-root}/src/modules/bmm/workflows/1-analysis/document-project/workflows"
+template: false # Action workflow
+instructions: "{installed_path}/full-scan-instructions.md"
+validation: "{project-root}/src/modules/bmm/workflows/1-analysis/document-project/checklist.md"
+
+# Runtime inputs (passed from parent workflow)
+workflow_mode: "" # "initial_scan" or "full_rescan"
+scan_level: "" # "quick", "deep", or "exhaustive"
+resume_mode: false
+project_root_path: ""
+
+# Configuration
+autonomous: false # Requires user input at key decision points
diff --git a/src/utility/models/fragments/activation-steps.xml b/src/utility/models/fragments/activation-steps.xml
index 7e4dd8d3..040c0e7b 100644
--- a/src/utility/models/fragments/activation-steps.xml
+++ b/src/utility/models/fragments/activation-steps.xml
@@ -1,6 +1,6 @@
Load persona from this current agent file (already in context)
🚨 IMMEDIATE ACTION REQUIRED - BEFORE ANY OUTPUT:
- - Use Read tool to load {project-root}/bmad/{{module}}/config.yaml NOW
+ - Load and read {project-root}/bmad/{{module}}/config.yaml NOW
- Store ALL fields as session variables: {user_name}, {communication_language}, {output_folder}
- VERIFY: If config not loaded, STOP and report error to user
- DO NOT PROCEED to step 3 until config is successfully loaded and variables stored
diff --git a/tools/cli/lib/activation-builder.js b/tools/cli/lib/activation-builder.js
index 9b6a2941..b3aead2f 100644
--- a/tools/cli/lib/activation-builder.js
+++ b/tools/cli/lib/activation-builder.js
@@ -51,13 +51,15 @@ class ActivationBuilder {
// 2. Build menu handlers section with dynamic handlers
const menuHandlers = await this.loadFragment('menu-handlers.xml');
- // Build extract list (comma-separated list of used attributes)
- const extractList = profile.usedAttributes.join(', ');
-
// Build handlers (load only needed handlers)
const handlers = await this.buildHandlers(profile);
- const processedHandlers = menuHandlers.replace('{DYNAMIC_EXTRACT_LIST}', extractList).replace('{DYNAMIC_HANDLERS}', handlers);
+ // Remove the extract line from the final output - it's just build metadata
+ // The extract list tells us which attributes to look for during processing
+ // but shouldn't appear in the final agent file
+ const processedHandlers = menuHandlers
+ .replace('{DYNAMIC_EXTRACT_LIST}\n', '') // Remove the entire extract line
+ .replace('{DYNAMIC_HANDLERS}', handlers);
activation += '\n' + this.indent(processedHandlers, 2) + '\n';
diff --git a/v6-open-items.md b/v6-open-items.md
index 379b489b..ea770dfb 100644
--- a/v6-open-items.md
+++ b/v6-open-items.md
@@ -4,12 +4,29 @@
Aside from stability and bug fixes found during the alpha period - the main focus will be on the following:
+- In Progress - Brownfield v6 integrated into the workflow.
+- In Progress - Full workflow single file tracking.
+- In Progress - Codex improvements.
+  - Advanced Elicitation is not working well with Codex
+  - Brainstorming is somewhat OK with Codex, but could be improved
+- Validate Gemini CLI - is it able to function at all for any workflows?
+- BoMB Tooling included with module install
+- Better Test Architect integration into workflows
+- Document new agent workflows.
+- Need to segregate game dev workflows and potentially add them as an installation choice
+- The workflow runner needs to become a series of targeted workflow injections at install time, so workflows can be run directly without the bloated intermediary.
+- All project levels (0 through 4) manual flows validated through workflow phases 1-4 for greenfield and brownfield
+- NPX installer
+- GitHub pipelines, branch protection, vulnerability scanners
+- Subagent injections re-enabled
+- BMM existing-project scanning and integration with workflow phase 0-4 improvements
+- Additional custom sections for architecture project types
- DONE: Single Agent web bundler finalized - run `npm run bundle`
- DONE: v4->v6 upgrade installer fixed.
- DONE: v6->v6 updates will no longer remove custom content; for example, if you created a new agent anywhere under the bmad folder, updates will no longer remove it.
- DONE: If you modify an installed file and then upgrade, the file will be saved as a .bak file and the installer will inform you.
- DONE: Game Agents comms style WAY too over the top - reduced a bit.
-- need to nest subagents for better organization.
+- DONE: need to nest subagents for better organization.
- DONE: Quick note on BMM v6 Flow
- DONE: CC SubAgents installed to sub-folders now.
- DONE: Qwen TOML update.
@@ -19,24 +36,13 @@ Aside from stability and bug fixes found during the alpha period - the main focu
- DONE: Agent improvement to loading instruction insertion and customization system overhaul
- DONE: Standalone agents now install to bmad/agents and can also be compiled by the installer
- bmm `testarch` integrated into the BMM workflows once aligned with the rest of the BMad Method flow.
-- Document new agent workflows.
-- need to segregate game dev workflows and potentially add as an installation choice
-- the workflow runner needs to become a series of targeted workflow injections at install time so workflows can be run directly without the bloated intermediary.
-- All project levels (0 through 4) manual flows validated through workflow phase 1-4
- - level 0 (simple addition or update to existing project) workflow is super streamlined from explanation of issue through code implementation
- - simple spec file -> context -> implementation
- - level 1 (simple update to existing, or a very simple oneshot tool or project)
-- NPX installer
-- github pipelines, branch protection, vulnerability scanners
-- improved subagent injections
-- bmm existing project scanning and integration with workflow phase 0-4 improvements
-- BTA Module coming soon!
## Needed before Beta → v0 release
Once the alpha is stabilized and we switch to beta, work on v4.x will freeze and the beta will merge to main. The NPX installer will still install v4 by default, but people will be able to npm install the beta version also.
- Orchestration tracking works consistently across all workflow phases on the BMM module
+- Single Reference Architecture
- Module repository and submission process defined
- Final polished documentation and user guide for each module
- Final polished documentation for overall project architecture
@@ -49,5 +55,6 @@ Once the alpha is stabilized and we switch to beta, work on v4.x will freeze and
- Installer offers installation of vetted community modules
- DevOps Module
- Security Module
-- BoMB improvements
+- Further BoMB improvements
- 2-3 functional Reference Architecture Project Scaffolds and community contribution process defined