Merge branch 'v6-alpha' into v6-alpha

This commit is contained in:
麻绛 2025-10-13 10:34:49 +08:00 committed by GitHub
commit 176fe5b7a6
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
22 changed files with 3429 additions and 34 deletions

View File

@ -0,0 +1,30 @@
# Technical Decisions Log
_Auto-updated during discovery and planning sessions - you can also add information here yourself_
## Purpose
This document captures technical decisions, preferences, and constraints discovered during project discussions. It serves as input for solution-architecture.md and solution design documents.
## Confirmed Decisions
<!-- Technical choices explicitly confirmed by the team/user -->
## Preferences
<!-- Non-binding preferences mentioned during discussions -->
## Constraints
<!-- Hard requirements from infrastructure, compliance, or integration needs -->
## To Investigate
<!-- Technical questions that need research or architect input -->
## Notes
- This file is automatically updated when technical information is mentioned
- Decisions here are inputs, not final architecture
- Final technical decisions belong in solution-architecture.md
- Implementation details belong in solutions/\*.md and story context or dev notes.

View File

@ -17,7 +17,7 @@ Specialized AI agents for different development roles:
- **Architect** - Technical architecture and design
- **SM** (Scrum Master) - Sprint and story management
- **DEV** (Developer) - Code implementation
- **SR** (Senior Reviewer) - Code review and quality
- **TEA** (Test Architect) - Test architecture and quality strategy
- **UX** - User experience design
- And more specialized roles
@ -65,17 +65,9 @@ Test architecture and quality assurance components.
## Quick Start
```bash
# Run a planning workflow
bmad pm plan-project
# Create a new story
bmad sm create-story
# Run development workflow
bmad dev develop
# Review implementation
bmad sr review-story
# Load the PM agent - via slash command, drag and drop, or by @-mentioning the agent file.
# Once loaded, the agent should greet you and offer a menu of options. You can enter:
`*plan-project`
```
## Key Concepts

View File

@ -26,6 +26,10 @@ agent:
workflow: "{project-root}/bmad/bmm/workflows/1-analysis/product-brief/workflow.yaml"
description: Produce Project Brief
- trigger: document-project
workflow: "{project-root}/bmad/bmm/workflows/1-analysis/document-project/workflow.yaml"
description: Generate comprehensive documentation of an existing Project
- trigger: research
workflow: "{project-root}/bmad/bmm/workflows/1-analysis/research/workflow.yaml"
description: Guide me through Research

View File

@ -16,18 +16,19 @@ agent:
- I treat the Story Context XML as the single source of truth, trusting it over any training priors while refusing to invent solutions when information is missing.
- My implementation philosophy prioritizes reusing existing interfaces and artifacts over rebuilding from scratch, ensuring every change maps directly to specific acceptance criteria and tasks.
- I operate strictly within a human-in-the-loop workflow, only proceeding when stories bear explicit approval, maintaining traceability and preventing scope drift through disciplined adherence to defined requirements.
- I implement and execute tests that ensure complete coverage of all acceptance criteria; I do not cheat or lie about tests, I always run tests without exception, and I only declare a story complete when all tests pass 100%.
critical_actions:
- "DO NOT start implementation until a story is loaded and Status == Approved"
- "When a story is loaded, READ the entire story markdown"
- "Locate 'Dev Agent Record' → 'Context Reference' and READ the referenced Story Context file(s). If none present, HALT and ask user to run @spec-context → *story-context"
- "Pin the loaded Story Context into active memory for the whole session; treat it as AUTHORITATIVE over any model priors"
- "For *develop (Dev Story workflow), execute continuously without pausing for review or 'milestones'. Only halt for explicit blocker conditions (e.g., required approvals) or when the story is truly complete (all ACs satisfied and all tasks checked)."
- "For *develop (Dev Story workflow), execute continuously without pausing for review or 'milestones'. Only halt for explicit blocker conditions (e.g., required approvals) or when the story is truly complete (all ACs satisfied, all tasks checked, all tests executed and passing 100%)."
menu:
- trigger: develop
workflow: "{project-root}/bmad/bmm/workflows/4-implementation/dev-story/workflow.yaml"
description: Execute Dev Story workflow (implements tasks, tests, validates, updates story)
description: "Execute Dev Story workflow, implementing tasks and tests, or performing updates to the story"
- trigger: development-status
exec: "{project-root}/bmad/bmm/tasks/development-status.xml"
@ -35,4 +36,4 @@ agent:
- trigger: review
workflow: "{project-root}/bmad/bmm/workflows/4-implementation/review-story/workflow.yaml"
description: Perform Senior Developer Review on a story flagged Ready for Review (loads context/tech-spec, checks ACs/tests/architecture/security, appends review notes)
description: "Perform a thorough clean context review on a story flagged Ready for Review, and appends review notes to story file"

View File

@ -0,0 +1,445 @@
# Document Project Workflow
**Version:** 1.2.0
**Module:** BMM (BMAD Method Module)
**Type:** Action Workflow (Documentation Generator)
## Purpose
Analyzes and documents brownfield projects by scanning the codebase, architecture, and patterns to create comprehensive reference documentation for AI-assisted development. Generates a master index and multiple documentation files tailored to the project's structure and type.
**NEW in v1.2.0:** Context-safe architecture with scan levels, resumability, and write-as-you-go pattern to prevent context exhaustion.
## Key Features
- **Multi-Project Type Support**: Handles web, backend, mobile, CLI, game, embedded, data, infra, library, desktop, and extension projects
- **Multi-Part Detection**: Automatically detects and documents projects with separate client/server or multiple services
- **Three Scan Levels** (NEW v1.2.0): Quick (2-5 min), Deep (10-30 min), Exhaustive (30-120 min)
- **Resumability** (NEW v1.2.0): Interrupt and resume workflows without losing progress
- **Write-as-you-go** (NEW v1.2.0): Documents written immediately to prevent context exhaustion
- **Intelligent Batching** (NEW v1.2.0): Subfolder-based processing for deep/exhaustive scans
- **Data-Driven Analysis**: Uses CSV-based project type detection and documentation requirements
- **Comprehensive Scanning**: Analyzes APIs, data models, UI components, configuration, security patterns, and more
- **Architecture Matching**: Matches projects to 170+ architecture templates from the solutioning registry
- **Brownfield PRD Ready**: Generates documentation specifically designed for AI agents planning new features
## How to Invoke
```bash
workflow document-project
```
Or from BMAD CLI:
```bash
/bmad:bmm:workflows:document-project
```
## Scan Levels (NEW in v1.2.0)
Choose the right scan depth for your needs:
### 1. Quick Scan (Default)
**Duration:** 2-5 minutes
**What it does:** Pattern-based analysis without reading source files
**Reads:** Config files, package manifests, directory structure, README
**Use when:**
- You need a fast project overview
- Initial understanding of project structure
- Planning next steps before deeper analysis
**Does NOT read:** Source code files (`*.js`, `*.ts`, `*.py`, `*.go`, etc.)
### 2. Deep Scan
**Duration:** 10-30 minutes
**What it does:** Reads files in critical directories based on project type
**Reads:** Files in critical paths defined by documentation requirements
**Use when:**
- Creating comprehensive documentation for brownfield PRD
- Need detailed analysis of key areas
- Want balance between depth and speed
**Example:** For a web app, reads controllers/, models/, components/, but not every utility file
### 3. Exhaustive Scan
**Duration:** 30-120 minutes
**What it does:** Reads ALL source files in project
**Reads:** Every source file (excludes node_modules, dist, build, .git)
**Use when:**
- Complete project analysis needed
- Migration planning requires full understanding
- Detailed audit of entire codebase
- Deep technical debt assessment
**Note:** Deep-dive mode ALWAYS uses exhaustive scan (no choice)
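For a concrete sense of how the three levels differ, here is a minimal TypeScript sketch of the file-selection rules each level implies. It is illustrative only and assumes the `fast-glob` package and hand-picked patterns; the real workflow derives critical directories from `documentation-requirements.csv` rather than hard-coding them.
```typescript
import fg from "fast-glob";

type ScanLevel = "quick" | "deep" | "exhaustive";

// Standard exclusions applied at every level.
const ALWAYS_EXCLUDE = ["**/node_modules/**", "**/dist/**", "**/build/**", "**/.git/**"];

async function selectFiles(level: ScanLevel, criticalDirs: string[]): Promise<string[]> {
  if (level === "quick") {
    // Quick scan: configs, manifests, and docs only - no source files are read.
    return fg(["package.json", "*.config.*", "README*", "docs/**/*.md"], { ignore: ALWAYS_EXCLUDE });
  }
  if (level === "deep") {
    // Deep scan: only files under the critical directories for the detected project type.
    return fg(criticalDirs.map((dir) => `${dir.replace(/\/$/, "")}/**/*`), { ignore: ALWAYS_EXCLUDE });
  }
  // Exhaustive scan: every source file, minus the standard exclusions.
  return fg(["**/*"], { ignore: ALWAYS_EXCLUDE });
}
```
Under these assumptions, `selectFiles("deep", ["src/", "api/"])` would return only files under `src/` and `api/`, which is the balance the deep scan aims for.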
## Resumability (NEW in v1.2.0)
The workflow can be interrupted and resumed without losing progress:
- **State Tracking:** Progress saved in `project-scan-report.json`
- **Auto-Detection:** Workflow detects incomplete runs (<24 hours old)
- **Resume Prompt:** Choose to resume or start fresh
- **Step-by-Step:** Resume from exact step where interrupted
- **Archiving:** Old state files automatically archived
**Example Resume Flow:**
```
> workflow document-project
I found an in-progress workflow state from 2025-10-11 14:32:15.
Current Progress:
- Mode: initial_scan
- Scan Level: deep
- Completed Steps: 5/12
- Last Step: step_5
Would you like to:
1. Resume from where we left off - Continue from step 6
2. Start fresh - Archive old state and begin new scan
3. Cancel - Exit without changes
Your choice [1/2/3]:
```
## What It Does
### Step-by-Step Process
1. **Detects Project Structure** - Identifies if project is single-part or multi-part (client/server/etc.)
2. **Classifies Project Type** - Matches against 12 project types (web, backend, mobile, etc.)
3. **Discovers Documentation** - Finds existing README, CONTRIBUTING, ARCHITECTURE files
4. **Analyzes Tech Stack** - Parses package files, identifies frameworks, versions, dependencies
5. **Conditional Scanning** - Performs targeted analysis based on project type requirements:
- API routes and endpoints
- Database models and schemas
- State management patterns
- UI component libraries
- Configuration and security
- CI/CD and deployment configs
6. **Generates Source Tree** - Creates annotated directory structure with critical paths
7. **Extracts Dev Instructions** - Documents setup, build, run, and test commands
8. **Creates Architecture Docs** - Generates detailed architecture using matched templates
9. **Builds Master Index** - Creates comprehensive index.md as primary AI retrieval source
10. **Validates Output** - Runs 140+ point checklist to ensure completeness
### Output Files
**Single-Part Projects:**
- `index.md` - Master index
- `project-overview.md` - Executive summary
- `architecture.md` - Detailed architecture
- `source-tree-analysis.md` - Annotated directory tree
- `component-inventory.md` - Component catalog (if applicable)
- `development-guide.md` - Local dev instructions
- `api-contracts.md` - API documentation (if applicable)
- `data-models.md` - Database schema (if applicable)
- `deployment-guide.md` - Deployment process (optional)
- `contribution-guide.md` - Contributing guidelines (optional)
- `project-scan-report.json` - State file for resumability (NEW v1.2.0)
**Multi-Part Projects (e.g., client + server):**
- `index.md` - Master index with part navigation
- `project-overview.md` - Multi-part summary
- `architecture-{part_id}.md` - Per-part architecture docs
- `source-tree-analysis.md` - Full tree with part annotations
- `component-inventory-{part_id}.md` - Per-part components
- `development-guide-{part_id}.md` - Per-part dev guides
- `integration-architecture.md` - How parts communicate
- `project-parts.json` - Machine-readable metadata
- `project-scan-report.json` - State file for resumability (NEW v1.2.0)
- Additional conditional files per part (API, data models, etc.)
## Data Files
The workflow uses three CSV files:
1. **project-types.csv** - Project type detection and classification
- Location: `/bmad/bmm/workflows/3-solutioning/project-types/project-types.csv`
- 12 project types with detection keywords
2. **registry.csv** - Architecture template matching
- Location: `/bmad/bmm/workflows/3-solutioning/templates/registry.csv`
- 170+ architecture patterns
3. **documentation-requirements.csv** - Scanning requirements per project type
- Location: `/bmad/bmm/workflows/document-project/documentation-requirements.csv`
- 24 columns of analysis patterns and requirements
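As a rough illustration of how these CSVs are consumed, the TypeScript sketch below loads `documentation-requirements.csv` and pulls the row for a detected project type. The `loadRequirements` helper is hypothetical and the parsing is deliberately naive (this CSV uses `;` inside cells and `,` only as the column separator), but it shows how the 24 columns drive later scanning steps.
```typescript
import { readFileSync } from "node:fs";

interface DocRequirements {
  project_type_id: string;
  requires_api_scan: boolean;
  critical_directories: string[];
  // ...the remaining columns are omitted here for brevity
}

function loadRequirements(csvPath: string, projectTypeId: string): DocRequirements | undefined {
  const [header, ...rows] = readFileSync(csvPath, "utf8").trim().split("\n");
  const columns = header.split(",");
  const fields = rows.map((row) => row.split(",")).find((f) => f[0] === projectTypeId);
  if (!fields) return undefined;

  // Zip the header columns with the matching row's fields.
  const record: Record<string, string> = {};
  columns.forEach((column, i) => { record[column] = fields[i] ?? ""; });

  return {
    project_type_id: record.project_type_id,
    requires_api_scan: record.requires_api_scan === "true",
    // Multi-value cells use ";" as the internal separator.
    critical_directories: record.critical_directories.split(";"),
  };
}
```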
## Use Cases
### Primary Use Case: Brownfield PRD Creation
After running this workflow, use the generated `index.md` as input to brownfield PRD workflows:
```
User: "I want to add a new dashboard feature"
PRD Workflow: Loads docs/index.md
→ Understands existing architecture
→ Identifies reusable components
→ Plans integration with existing APIs
→ Creates contextual PRD with epics and stories
```
### Other Use Cases
- **Onboarding New Developers** - Comprehensive project documentation
- **Architecture Review** - Structured analysis of existing system
- **Technical Debt Assessment** - Identify patterns and anti-patterns
- **Migration Planning** - Understand current state before refactoring
## Requirements
### Recommended Inputs (Optional)
- Project root directory (defaults to current directory)
- README.md or similar docs (auto-discovered if present)
- User guidance on key areas to focus on (the workflow will ask)
### Tools Used
- File system scanning (Glob, Read, Grep)
- Code analysis
- Git repository analysis (optional)
## Configuration
### Default Output Location
Files are saved to: `{output_folder}` (from config.yaml)
Default: `/docs/` folder in project root
### Customization
- Modify `documentation-requirements.csv` to adjust scanning patterns for project types
- Add new project types to `project-types.csv`
- Add new architecture templates to `registry.csv`
## Example: Multi-Part Web App
**Input:**
```
my-app/
├── client/ # React frontend
├── server/ # Express backend
└── README.md
```
**Detection Result:**
- Repository Type: Monorepo
- Part 1: client (web/React)
- Part 2: server (backend/Express)
**Output (10+ files):**
```
docs/
├── index.md
├── project-overview.md
├── architecture-client.md
├── architecture-server.md
├── source-tree-analysis.md
├── component-inventory-client.md
├── development-guide-client.md
├── development-guide-server.md
├── api-contracts-server.md
├── data-models-server.md
├── integration-architecture.md
└── project-parts.json
```
## Example: Simple CLI Tool
**Input:**
```
hello-cli/
├── main.go
├── go.mod
└── README.md
```
**Detection Result:**
- Repository Type: Monolith
- Part 1: main (cli/Go)
**Output (4 files):**
```
docs/
├── index.md
├── project-overview.md
├── architecture.md
└── source-tree-analysis.md
```
## Deep-Dive Mode
### What is Deep-Dive Mode?
When you run the workflow on a project that already has documentation, you'll be offered a choice:
1. **Rescan entire project** - Update all documentation with latest changes
2. **Deep-dive into specific area** - Generate EXHAUSTIVE documentation for a particular feature/module/folder
3. **Cancel** - Keep existing documentation
Deep-dive mode performs **comprehensive, file-by-file analysis** of a specific area, reading EVERY file completely and documenting:
- All exports with complete signatures
- All imports and dependencies
- Dependency graphs and data flow
- Code patterns and implementations
- Testing coverage and strategies
- Integration points
- Reuse opportunities
### When to Use Deep-Dive Mode
- **Before implementing a feature** - Deep-dive the area you'll be modifying
- **During architecture review** - Deep-dive complex modules
- **For code understanding** - Deep-dive unfamiliar parts of codebase
- **When creating PRDs** - Deep-dive areas affected by new features
### Deep-Dive Process
1. Workflow detects existing `index.md`
2. Offers deep-dive option
3. Suggests areas based on project structure:
- API route groups
- Feature modules
- UI component areas
- Services/business logic
4. You select area or specify custom path
5. Workflow reads EVERY file in that area
6. Generates `deep-dive-{area-name}.md` with complete analysis
7. Updates `index.md` with link to deep-dive doc
8. Offers to deep-dive another area or finish
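The per-file pass in step 5 can be pictured with the small sketch below: read every file in the target area and record rough facts about size, exports, and imports. The regex-based extraction and the `fast-glob` dependency are assumptions made for illustration; in the actual workflow the agent reads and interprets each file rather than pattern-matching it.
```typescript
import fg from "fast-glob";
import { readFileSync } from "node:fs";

interface FileFacts {
  path: string;
  loc: number;
  exports: string[];
  imports: string[];
}

async function deepDiveArea(areaPath: string): Promise<FileFacts[]> {
  const files = await fg([`${areaPath}/**/*.{ts,tsx,js}`], { ignore: ["**/node_modules/**"] });
  return files.map((path) => {
    const source = readFileSync(path, "utf8");
    return {
      path,
      loc: source.split("\n").length,
      // Very rough heuristics - a real analysis would use a proper parser.
      exports: [...source.matchAll(/^export\s+(?:async\s+)?(?:function|const|class)\s+(\w+)/gm)].map((m) => m[1]),
      imports: [...source.matchAll(/^import\s+.*?from\s+["']([^"']+)["']/gm)].map((m) => m[1]),
    };
  });
}
```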
### Deep-Dive Output Example
**docs/deep-dive-dashboard-feature.md:**
- Complete file inventory (47 files analyzed)
- Every export with signatures
- Dependency graph
- Data flow analysis
- Integration points
- Testing coverage
- Related code references
- Implementation guidance
- ~3,000 LOC documented in detail
### Incremental Deep-Diving
You can deep-dive multiple areas over time:
- First run: Scan entire project → generates index.md
- Second run: Deep-dive dashboard feature
- Third run: Deep-dive API layer
- Fourth run: Deep-dive authentication system
All deep-dive docs are linked from the master index.
## Validation
The workflow includes a comprehensive 160+ point checklist covering:
- Project detection accuracy
- Technology stack completeness
- Codebase scanning thoroughness
- Architecture documentation quality
- Multi-part handling (if applicable)
- Brownfield PRD readiness
- Deep-dive completeness (if applicable)
## Next Steps After Completion
1. **Review** `docs/index.md` - Your master documentation index
2. **Validate** - Check generated docs for accuracy
3. **Use for PRD** - Point brownfield PRD workflow to index.md
4. **Maintain** - Re-run workflow when architecture changes significantly
## File Structure
```
document-project/
├── workflow.yaml # Workflow configuration
├── instructions.md # Step-by-step workflow logic
├── checklist.md # Validation criteria
├── documentation-requirements.csv # Project type scanning patterns
├── templates/ # Output templates
│ ├── index-template.md
│ ├── project-overview-template.md
│ └── source-tree-template.md
└── README.md # This file
```
## Troubleshooting
**Issue: Project type not detected correctly**
- Solution: Workflow will ask for confirmation; manually select correct type
**Issue: Missing critical information**
- Solution: Provide additional context when prompted; re-run specific analysis steps
**Issue: Multi-part detection missed a part**
- Solution: When asked to confirm parts, specify the missing part and its path
**Issue: Architecture template doesn't match well**
- Solution: Check registry.csv; may need to add new template or adjust matching criteria
## Architecture Improvements in v1.2.0
### Context-Safe Design
The workflow now uses a write-as-you-go architecture:
- Documents written immediately to disk (not accumulated in memory)
- Detailed findings purged after writing (only summaries kept)
- State tracking enables resumption from any step
- Batching strategy prevents context exhaustion on large projects
### Batching Strategy
For deep/exhaustive scans:
- Process ONE subfolder at a time
- Read files → Extract info → Write output → Validate → Purge context
- Primary concern is file SIZE (not count)
- Track batches in state file for resumability
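A minimal sketch of that read, write, purge loop is shown below. The `analyzeFiles`, `writeDoc`, `validateDoc`, and `saveState` helpers are stand-ins for workflow steps carried out by the agent, passed in only so the sketch is self-contained.
```typescript
import fg from "fast-glob";

interface ScanState {
  batches_completed: { subfolder: string; summary: string }[];
}

// Placeholder helpers representing workflow steps, not real APIs.
interface Helpers {
  analyzeFiles(files: string[]): Promise<{ summary: string; detail: unknown }>;
  writeDoc(subfolder: string, findings: { detail: unknown }): Promise<string>;
  validateDoc(docPath: string): Promise<void>;
  saveState(state: ScanState): Promise<void>;
}

async function runBatches(subfolders: string[], state: ScanState, h: Helpers): Promise<void> {
  for (const subfolder of subfolders) {
    // Skip batches already finished in a previous, interrupted run.
    if (state.batches_completed.some((b) => b.subfolder === subfolder)) continue;

    const files = await fg([`${subfolder}/**/*`], {
      ignore: ["**/node_modules/**", "**/dist/**", "**/build/**"],
    });
    const findings = await h.analyzeFiles(files); // read files + extract info
    const docPath = await h.writeDoc(subfolder, findings); // write output to disk immediately
    await h.validateDoc(docPath); // section-level validation

    // Purge detail from working context: only a short summary goes into the state file.
    state.batches_completed.push({ subfolder, summary: findings.summary });
    await h.saveState(state);
  }
}
```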
### State File Format
Optimized JSON (no pretty-printing):
```json
{
"workflow_version": "1.2.0",
"timestamps": {...},
"mode": "initial_scan",
"scan_level": "deep",
"completed_steps": [...],
"current_step": "step_6",
"findings": {"summary": "only"},
"outputs_generated": [...],
"resume_instructions": "..."
}
```
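To make the resume behavior concrete, here is a hedged sketch of the startup check: load `project-scan-report.json` if it exists, archive it when it is 24 hours or older, and otherwise return it so the user can be offered the resume prompt. The `timestamps.last_updated` field name is an assumption; the file's exact schema is defined by the workflow instructions, not by this code.
```typescript
import { existsSync, mkdirSync, readFileSync, renameSync } from "node:fs";
import { join } from "node:path";

interface ScanState {
  timestamps: { last_updated: string }; // assumed field name
  current_step: string;
  completed_steps: string[];
}

function checkResume(outputFolder: string): ScanState | null {
  const statePath = join(outputFolder, "project-scan-report.json");
  if (!existsSync(statePath)) return null;

  const state: ScanState = JSON.parse(readFileSync(statePath, "utf8"));
  const ageHours = (Date.now() - Date.parse(state.timestamps.last_updated)) / 3_600_000;

  if (ageHours >= 24) {
    // Old state: archive it and start a fresh scan.
    const archiveDir = join(outputFolder, ".archive");
    mkdirSync(archiveDir, { recursive: true });
    renameSync(statePath, join(archiveDir, `project-scan-report-${Date.now()}.json`));
    return null;
  }
  return state; // Fresh enough: caller prompts the user to resume or start over.
}
```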

View File

@ -0,0 +1,245 @@
# Document Project Workflow - Validation Checklist
## Scan Level and Resumability (v1.2.0)
- [ ] Scan level selection offered (quick/deep/exhaustive) for initial_scan and full_rescan modes
- [ ] Deep-dive mode automatically uses exhaustive scan (no choice given)
- [ ] Quick scan does NOT read source files (only patterns, configs, manifests)
- [ ] Deep scan reads files in critical directories per project type
- [ ] Exhaustive scan reads ALL source files (excluding node_modules, dist, build)
- [ ] State file (project-scan-report.json) created at workflow start
- [ ] State file updated after each step completion
- [ ] State file contains all required fields per schema
- [ ] Resumability prompt shown if state file exists and is <24 hours old
- [ ] Old state files (>24 hours) automatically archived
- [ ] Resume functionality loads previous state correctly
- [ ] Workflow can jump to correct step when resuming
## Write-as-you-go Architecture
- [ ] Each document written to disk IMMEDIATELY after generation
- [ ] Document validation performed right after writing (section-level)
- [ ] State file updated after each document is written
- [ ] Detailed findings purged from context after writing (only summaries kept)
- [ ] Context contains only high-level summaries (1-2 sentences per section)
- [ ] No accumulation of full project analysis in memory
## Batching Strategy (Deep/Exhaustive Scans)
- [ ] Batching applied for deep and exhaustive scan levels
- [ ] Batches organized by SUBFOLDER (not arbitrary file count)
- [ ] Large files (>5000 LOC) handled with appropriate judgment
- [ ] Each batch: read files, extract info, write output, validate, purge context
- [ ] Batch completion tracked in state file (batches_completed array)
- [ ] Batch summaries kept in context (1-2 sentences max)
## Project Detection and Classification
- [ ] Project type correctly identified and matches actual technology stack
- [ ] Multi-part vs single-part structure accurately detected
- [ ] All project parts identified if multi-part (no missing client/server/etc.)
- [ ] Documentation requirements loaded for each part type
- [ ] Architecture registry match is appropriate for detected stack
## Technology Stack Analysis
- [ ] All major technologies identified (framework, language, database, etc.)
- [ ] Versions captured where available
- [ ] Technology decision table is complete and accurate
- [ ] Dependencies and libraries documented
- [ ] Build tools and package managers identified
## Codebase Scanning Completeness
- [ ] All critical directories scanned based on project type
- [ ] API endpoints documented (if requires_api_scan = true)
- [ ] Data models captured (if requires_data_models = true)
- [ ] State management patterns identified (if requires_state_management = true)
- [ ] UI components inventoried (if requires_ui_components = true)
- [ ] Configuration files located and documented
- [ ] Authentication/security patterns identified
- [ ] Entry points correctly identified
- [ ] Integration points mapped (for multi-part projects)
- [ ] Test files and patterns documented
## Source Tree Analysis
- [ ] Complete directory tree generated with no major omissions
- [ ] Critical folders highlighted and described
- [ ] Entry points clearly marked
- [ ] Integration paths noted (for multi-part)
- [ ] Asset locations identified (if applicable)
- [ ] File organization patterns explained
## Architecture Documentation Quality
- [ ] Architecture document uses appropriate template from registry
- [ ] All template sections filled with relevant information (no placeholders)
- [ ] Technology stack section is comprehensive
- [ ] Architecture pattern clearly explained
- [ ] Data architecture documented (if applicable)
- [ ] API design documented (if applicable)
- [ ] Component structure explained (if applicable)
- [ ] Source tree included and annotated
- [ ] Testing strategy documented
- [ ] Deployment architecture captured (if config found)
## Development and Operations Documentation
- [ ] Prerequisites clearly listed
- [ ] Installation steps documented
- [ ] Environment setup instructions provided
- [ ] Local run commands specified
- [ ] Build process documented
- [ ] Test commands and approach explained
- [ ] Deployment process documented (if applicable)
- [ ] CI/CD pipeline details captured (if found)
- [ ] Contribution guidelines extracted (if found)
## Multi-Part Project Specific (if applicable)
- [ ] Each part documented separately
- [ ] Part-specific architecture files created (architecture-{part_id}.md)
- [ ] Part-specific component inventories created (if applicable)
- [ ] Part-specific development guides created
- [ ] Integration architecture document created
- [ ] Integration points clearly defined with type and details
- [ ] Data flow between parts explained
- [ ] project-parts.json metadata file created
## Index and Navigation
- [ ] index.md created as master entry point
- [ ] Project structure clearly summarized in index
- [ ] Quick reference section complete and accurate
- [ ] All generated docs linked from index
- [ ] All existing docs linked from index (if found)
- [ ] Getting started section provides clear next steps
- [ ] AI-assisted development guidance included
- [ ] Navigation structure matches project complexity (simple for single-part, detailed for multi-part)
## File Completeness
- [ ] index.md generated
- [ ] project-overview.md generated
- [ ] source-tree-analysis.md generated
- [ ] architecture.md (or per-part) generated
- [ ] component-inventory.md (or per-part) generated if UI components exist
- [ ] development-guide.md (or per-part) generated
- [ ] api-contracts.md (or per-part) generated if APIs documented
- [ ] data-models.md (or per-part) generated if data models found
- [ ] deployment-guide.md generated if deployment config found
- [ ] contribution-guide.md generated if guidelines found
- [ ] integration-architecture.md generated if multi-part
- [ ] project-parts.json generated if multi-part
## Content Quality
- [ ] Technical information is accurate and specific
- [ ] No generic placeholders or "TODO" items remain
- [ ] Examples and code snippets are relevant to actual project
- [ ] File paths and directory references are correct
- [ ] Technology names and versions are accurate
- [ ] Terminology is consistent across all documents
- [ ] Descriptions are clear and actionable
## Brownfield PRD Readiness
- [ ] Documentation provides enough context for AI to understand existing system
- [ ] Integration points are clear for planning new features
- [ ] Reusable components are identified for leveraging in new work
- [ ] Data models are documented for schema extension planning
- [ ] API contracts are documented for endpoint expansion
- [ ] Code conventions and patterns are captured for consistency
- [ ] Architecture constraints are clear for informed decision-making
## Output Validation
- [ ] All files saved to correct output folder
- [ ] File naming follows convention (no part suffix for single-part, with suffix for multi-part)
- [ ] No broken internal links between documents
- [ ] Markdown formatting is correct and renders properly
- [ ] JSON files are valid (project-parts.json if applicable)
## Final Validation
- [ ] User confirmed project classification is accurate
- [ ] User provided any additional context needed
- [ ] All requested areas of focus addressed
- [ ] Documentation is immediately usable for brownfield PRD workflow
- [ ] No critical information gaps identified
## Issues Found
### Critical Issues (must fix before completion)
-
### Minor Issues (can be addressed later)
-
### Missing Information (to note for user)
-

***
## Deep-Dive Mode Validation (if deep-dive was performed)
- [ ] Deep-dive target area correctly identified and scoped
- [ ] All files in target area read completely (no skipped files)
- [ ] File inventory includes all exports with complete signatures
- [ ] Dependencies mapped for all files
- [ ] Dependents identified (who imports each file)
- [ ] Code snippets included for key implementation details
- [ ] Patterns and design approaches documented
- [ ] State management strategy explained
- [ ] Side effects documented (API calls, DB queries, etc.)
- [ ] Error handling approaches captured
- [ ] Testing files and coverage documented
- [ ] TODOs and comments extracted
- [ ] Dependency graph created showing relationships
- [ ] Data flow traced through the scanned area
- [ ] Integration points with rest of codebase identified
- [ ] Related code and similar patterns found outside scanned area
- [ ] Reuse opportunities documented
- [ ] Implementation guidance provided
- [ ] Modification instructions clear
- [ ] Index.md updated with deep-dive link
- [ ] Deep-dive documentation is immediately useful for implementation
---
## State File Quality
- [ ] State file is valid JSON (no syntax errors)
- [ ] State file is optimized (no pretty-printing, minimal whitespace)
- [ ] State file contains all completed steps with timestamps
- [ ] State file outputs_generated list is accurate and complete
- [ ] State file resume_instructions are clear and actionable
- [ ] State file findings contain only high-level summaries (not detailed data)
- [ ] State file can be successfully loaded for resumption
## Completion Criteria
All items in the following sections must be checked:
- ✓ Scan Level and Resumability (v1.2.0)
- ✓ Write-as-you-go Architecture
- ✓ Batching Strategy (if deep/exhaustive scan)
- ✓ Project Detection and Classification
- ✓ Technology Stack Analysis
- ✓ Architecture Documentation Quality
- ✓ Index and Navigation
- ✓ File Completeness
- ✓ Brownfield PRD Readiness
- ✓ State File Quality
- ✓ Deep-Dive Mode Validation (if applicable)
The workflow is complete when:
1. All critical checklist items are satisfied
2. No critical issues remain
3. User has reviewed and approved the documentation
4. Generated docs are ready for use in brownfield PRD workflow
5. Deep-dive docs (if any) are comprehensive and implementation-ready
6. State file is valid and can enable resumption if interrupted

View File

@ -0,0 +1,12 @@
project_type_id,requires_api_scan,requires_data_models,requires_state_management,requires_ui_components,requires_deployment_config,key_file_patterns,critical_directories,integration_scan_patterns,test_file_patterns,config_patterns,auth_security_patterns,schema_migration_patterns,entry_point_patterns,shared_code_patterns,monorepo_workspace_patterns,async_event_patterns,ci_cd_patterns,asset_patterns,hardware_interface_patterns,protocol_schema_patterns,localization_patterns,requires_hardware_docs,requires_asset_inventory
web,true,true,true,true,true,package.json;tsconfig.json;*.config.js;*.config.ts;vite.config.*;webpack.config.*;next.config.*;nuxt.config.*,src/;app/;pages/;components/;api/;lib/;styles/;public/;static/,*client.ts;*service.ts;*api.ts;fetch*.ts;axios*.ts;*http*.ts,*.test.ts;*.spec.ts;*.test.tsx;*.spec.tsx;**/__tests__/**;**/*.test.*;**/*.spec.*,.env*;config/*;*.config.*;.config/;settings/,*auth*.ts;*session*.ts;middleware/auth*;*.guard.ts;*authenticat*;*permission*;guards/,migrations/**;prisma/**;*.prisma;alembic/**;knex/**;*migration*.sql;*migration*.ts,main.ts;index.ts;app.ts;server.ts;_app.tsx;_app.ts;layout.tsx,shared/**;common/**;utils/**;lib/**;helpers/**;@*/**;packages/**,pnpm-workspace.yaml;lerna.json;nx.json;turbo.json;workspace.json;rush.json,*event*.ts;*queue*.ts;*subscriber*.ts;*consumer*.ts;*producer*.ts;*worker*.ts;jobs/**,.github/workflows/**;.gitlab-ci.yml;Jenkinsfile;.circleci/**;azure-pipelines.yml;bitbucket-pipelines.yml;.drone.yml,public/**;static/**;assets/**;images/**;media/**,N/A,*.proto;*.graphql;graphql/**;schema.graphql;*.avro;openapi.*;swagger.*,i18n/**;locales/**;lang/**;translations/**;messages/**;*.po;*.pot,false,false
mobile,true,true,true,true,true,package.json;pubspec.yaml;Podfile;build.gradle;app.json;capacitor.config.*;ionic.config.json,src/;app/;screens/;components/;services/;models/;assets/;ios/;android/,*client.ts;*service.ts;*api.ts;fetch*.ts;axios*.ts;*http*.ts,*.test.ts;*.test.tsx;*_test.dart;*.test.dart;**/__tests__/**,.env*;config/*;app.json;capacitor.config.*;google-services.json;GoogleService-Info.plist,*auth*.ts;*session*.ts;*authenticat*;*permission*;*biometric*;secure-store*,migrations/**;realm/**;*.realm;watermelondb/**;sqlite/**,main.ts;index.ts;App.tsx;App.ts;main.dart,shared/**;common/**;utils/**;lib/**;components/shared/**;@*/**,pnpm-workspace.yaml;lerna.json;nx.json;turbo.json,*event*.ts;*notification*.ts;*push*.ts;background-fetch*,fastlane/**;.github/workflows/**;.gitlab-ci.yml;bitbucket-pipelines.yml;appcenter-*,assets/**;Resources/**;res/**;*.xcassets;drawable*/;mipmap*/;images/**,N/A,*.proto;graphql/**;*.graphql,i18n/**;locales/**;translations/**;*.strings;*.xml,false,true
backend,true,true,false,false,true,package.json;requirements.txt;go.mod;Gemfile;pom.xml;build.gradle;Cargo.toml;*.csproj,src/;api/;services/;models/;routes/;controllers/;middleware/;handlers/;repositories/;domain/,*client.ts;*repository.ts;*service.ts;*connector*.ts;*adapter*.ts,*.test.ts;*.spec.ts;*_test.go;test_*.py;*Test.java;*_test.rs,.env*;config/*;*.config.*;application*.yml;application*.yaml;appsettings*.json;settings.py,*auth*.ts;*session*.ts;*authenticat*;*authorization*;middleware/auth*;guards/;*jwt*;*oauth*,migrations/**;alembic/**;flyway/**;liquibase/**;prisma/**;*.prisma;*migration*.sql;*migration*.ts;db/migrate,main.ts;index.ts;server.ts;app.ts;main.go;main.py;Program.cs;__init__.py,shared/**;common/**;utils/**;lib/**;core/**;@*/**;pkg/**,pnpm-workspace.yaml;lerna.json;nx.json;go.work,*event*.ts;*queue*.ts;*subscriber*.ts;*consumer*.ts;*producer*.ts;*worker*.ts;*handler*.ts;jobs/**;workers/**,.github/workflows/**;.gitlab-ci.yml;Jenkinsfile;.circleci/**;azure-pipelines.yml;.drone.yml,N/A,N/A,*.proto;*.graphql;graphql/**;*.avro;*.thrift;openapi.*;swagger.*;schema/**,N/A,false,false
cli,false,false,false,false,false,package.json;go.mod;Cargo.toml;setup.py;pyproject.toml;*.gemspec,src/;cmd/;cli/;bin/;lib/;commands/,N/A,*.test.ts;*_test.go;test_*.py;*.spec.ts;*_spec.rb,.env*;config/*;*.config.*;.*.rc;.*rc,N/A,N/A,main.ts;index.ts;cli.ts;main.go;main.py;__main__.py;bin/*,shared/**;common/**;utils/**;lib/**;helpers/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml;goreleaser.yml,N/A,N/A,N/A,N/A,false,false
library,false,false,false,false,false,package.json;setup.py;Cargo.toml;go.mod;*.gemspec;*.csproj;pom.xml,src/;lib/;dist/;pkg/;build/;target/,N/A,*.test.ts;*_test.go;test_*.py;*.spec.ts;*Test.java;*_test.rs,.*.rc;tsconfig.json;rollup.config.*;vite.config.*;webpack.config.*,N/A,N/A,index.ts;index.js;lib.rs;main.go;__init__.py,src/**;lib/**;core/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml;.circleci/**,N/A,N/A,N/A,N/A,false,false
desktop,false,false,true,true,true,package.json;Cargo.toml;*.csproj;CMakeLists.txt;tauri.conf.json;electron-builder.yml;wails.json,src/;app/;components/;main/;renderer/;resources/;assets/;build/,*service.ts;ipc*.ts;*bridge*.ts;*native*.ts;invoke*,*.test.ts;*.spec.ts;*_test.rs;*.spec.tsx,.env*;config/*;*.config.*;app.config.*;forge.config.*;builder.config.*,*auth*.ts;*session*.ts;keychain*;secure-storage*,N/A,main.ts;index.ts;main.js;src-tauri/main.rs;electron.ts,shared/**;common/**;utils/**;lib/**;components/shared/**,N/A,*event*.ts;*ipc*.ts;*message*.ts,.github/workflows/**;.gitlab-ci.yml;.circleci/**,resources/**;assets/**;icons/**;static/**;build/resources,N/A,N/A,i18n/**;locales/**;translations/**;lang/**,false,true
game,false,false,true,false,false,*.unity;*.godot;*.uproject;package.json;project.godot,Assets/;Scenes/;Scripts/;Prefabs/;Resources/;Content/;Source/;src/;scenes/;scripts/,N/A,*Test.cs;*_test.gd;*Test.cpp;*.test.ts,.env*;config/*;*.ini;settings/;GameSettings/,N/A,N/A,main.gd;Main.cs;GameManager.cs;main.cpp;index.ts,shared/**;common/**;utils/**;Core/**;Framework/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml,Assets/**;Scenes/**;Prefabs/**;Materials/**;Textures/**;Audio/**;Models/**;*.fbx;*.blend;*.shader;*.hlsl;*.glsl;Shaders/**;VFX/**,N/A,N/A,Localization/**;Languages/**;i18n/**,false,true
data,false,true,false,false,true,requirements.txt;pyproject.toml;dbt_project.yml;airflow.cfg;setup.py;Pipfile,dags/;pipelines/;models/;transformations/;notebooks/;sql/;etl/;jobs/,N/A,test_*.py;*_test.py;tests/**,.env*;config/*;profiles.yml;dbt_project.yml;airflow.cfg,N/A,migrations/**;dbt/models/**;*.sql;schemas/**,main.py;__init__.py;pipeline.py;dag.py,shared/**;common/**;utils/**;lib/**;helpers/**,N/A,*event*.py;*consumer*.py;*producer*.py;*worker*.py;jobs/**;tasks/**,.github/workflows/**;.gitlab-ci.yml;airflow/dags/**,N/A,N/A,*.proto;*.avro;schemas/**;*.parquet,N/A,false,false
extension,true,false,true,true,false,manifest.json;package.json;wxt.config.ts,src/;popup/;content/;background/;assets/;components/,*message.ts;*runtime.ts;*storage.ts;*tabs.ts,*.test.ts;*.spec.ts;*.test.tsx,.env*;wxt.config.*;webpack.config.*;vite.config.*,*auth*.ts;*session*.ts;*permission*,N/A,index.ts;popup.ts;background.ts;content.ts,shared/**;common/**;utils/**;lib/**,N/A,*message*.ts;*event*.ts;chrome.runtime*;browser.runtime*,.github/workflows/**,assets/**;icons/**;images/**;static/**,N/A,N/A,_locales/**;locales/**;i18n/**,false,false
infra,false,false,false,false,true,*.tf;*.tfvars;pulumi.yaml;cdk.json;*.yml;*.yaml;Dockerfile;docker-compose*.yml,terraform/;modules/;k8s/;charts/;playbooks/;roles/;policies/;stacks/,N/A,*_test.go;test_*.py;*_test.tf;*_spec.rb,.env*;*.tfvars;config/*;vars/;group_vars/;host_vars/,N/A,N/A,main.tf;index.ts;__main__.py;playbook.yml,modules/**;shared/**;common/**;lib/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml;.circleci/**,N/A,N/A,N/A,N/A,false,false
embedded,false,false,false,false,false,platformio.ini;CMakeLists.txt;*.ino;Makefile;*.ioc;mbed-os.lib,src/;lib/;include/;firmware/;drivers/;hal/;bsp/;components/,N/A,test_*.c;*_test.cpp;*_test.c;tests/**,.env*;config/*;sdkconfig;*.json;settings/,N/A,N/A,main.c;main.cpp;main.ino;app_main.c,lib/**;shared/**;common/**;drivers/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml,N/A,*.h;*.hpp;drivers/**;hal/**;bsp/**;pinout.*;peripheral*;gpio*;*.fzz;schematics/**,*.proto;mqtt*;coap*;modbus*,N/A,true,false

View File

@ -0,0 +1,128 @@
# Document Project Workflow Router
<workflow>
<critical>The workflow execution engine is governed by: {project-root}/bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {project-root}/bmad/bmm/workflows/document-project/workflow.yaml</critical>
<critical>This router determines workflow mode and delegates to specialized sub-workflows</critical>
<step n="0" goal="Check for resumability and determine workflow mode">
<critical>SMART LOADING STRATEGY: Check state file FIRST before loading any CSV files</critical>
<action>Check for existing state file at: {output_folder}/project-scan-report.json</action>
<check if="project-scan-report.json exists">
<action>Read state file and extract: timestamps, mode, scan_level, current_step, completed_steps, project_classification</action>
<action>Extract cached project_type_id(s) from state file if present</action>
<action>Calculate age of state file (current time - last_updated)</action>
<ask>I found an in-progress workflow state from {{last_updated}}.
**Current Progress:**
- Mode: {{mode}}
- Scan Level: {{scan_level}}
- Completed Steps: {{completed_steps_count}}/{{total_steps}}
- Last Step: {{current_step}}
- Project Type(s): {{cached_project_types}}
Would you like to:
1. **Resume from where we left off** - Continue from step {{current_step}}
2. **Start fresh** - Archive old state and begin new scan
3. **Cancel** - Exit without changes
Your choice [1/2/3]:
</ask>
<check if="user selects 1">
<action>Set resume_mode = true</action>
<action>Set workflow_mode = {{mode}}</action>
<action>Load findings summaries from state file</action>
<action>Load cached project_type_id(s) from state file</action>
<critical>CONDITIONAL CSV LOADING FOR RESUME:</critical>
<action>For each cached project_type_id, load ONLY the corresponding row from: {documentation_requirements_csv}</action>
<action>Skip loading project-types.csv and architecture_registry.csv (not needed on resume)</action>
<action>Store loaded doc requirements for use in remaining steps</action>
<action>Display: "Resuming {{workflow_mode}} from {{current_step}} with cached project type(s): {{cached_project_types}}"</action>
<check if="workflow_mode == deep_dive">
<action>Load and execute: {installed_path}/workflows/deep-dive-instructions.md with resume context</action>
</check>
<check if="workflow_mode == initial_scan OR workflow_mode == full_rescan">
<action>Load and execute: {installed_path}/workflows/full-scan-instructions.md with resume context</action>
</check>
</check>
<check if="user selects 2">
<action>Create archive directory: {output_folder}/.archive/</action>
<action>Move old state file to: {output_folder}/.archive/project-scan-report-{{timestamp}}.json</action>
<action>Set resume_mode = false</action>
<action>Continue to Step 0.5</action>
</check>
<check if="user selects 3">
<action>Display: "Exiting workflow without changes."</action>
<action>Exit workflow</action>
</check>
</check>
<check if="state file age >= 24 hours">
<action>Display: "Found old state file (>24 hours). Starting fresh scan."</action>
<action>Archive old state file to: {output_folder}/.archive/project-scan-report-{{timestamp}}.json</action>
<action>Set resume_mode = false</action>
<action>Continue to Step 0.5</action>
</check>
</step>
<step n="0.5" goal="Check for existing documentation and determine workflow mode" if="resume_mode == false">
<action>Check if {output_folder}/index.md exists</action>
<check if="index.md exists">
<action>Read existing index.md to extract metadata (date, project structure, parts count)</action>
<action>Store as {{existing_doc_date}}, {{existing_structure}}</action>
<ask>I found existing documentation generated on {{existing_doc_date}}.
What would you like to do?
1. **Re-scan entire project** - Update all documentation with latest changes
2. **Deep-dive into specific area** - Generate detailed documentation for a particular feature/module/folder
3. **Cancel** - Keep existing documentation as-is
Your choice [1/2/3]:
</ask>
<check if="user selects 1">
<action>Set workflow_mode = "full_rescan"</action>
<action>Display: "Starting full project rescan..."</action>
<action>Load and execute: {installed_path}/workflows/full-scan-instructions.md</action>
</check>
<check if="user selects 2">
<action>Set workflow_mode = "deep_dive"</action>
<action>Set scan_level = "exhaustive"</action>
<action>Display: "Starting deep-dive documentation mode..."</action>
<action>Load and execute: {installed_path}/workflows/deep-dive-instructions.md</action>
</check>
<check if="user selects 3">
<action>Display message: "Keeping existing documentation. Exiting workflow."</action>
<action>Exit workflow</action>
</check>
</check>
<check if="index.md does not exist">
<action>Set workflow_mode = "initial_scan"</action>
<action>Display: "No existing documentation found. Starting initial project scan..."</action>
<action>Load and execute: {installed_path}/workflows/full-scan-instructions.md</action>
</check>
</step>
</workflow>

View File

@ -0,0 +1,38 @@
# Document Project Workflow Templates
This directory contains template files for the `document-project` workflow.
## Template Files
- **index-template.md** - Master index template (adapts for single/multi-part projects)
- **project-overview-template.md** - Executive summary and high-level overview
- **source-tree-template.md** - Annotated directory structure
## Template Usage
The workflow dynamically selects and populates templates based on:
1. **Project structure** (single part vs multi-part)
2. **Project type** (web, backend, mobile, etc.)
3. **Documentation requirements** (from documentation-requirements.csv)
## Variable Naming Convention
Templates use Handlebars-style variables:
- `{{variable_name}}` - Simple substitution
- `{{#if condition}}...{{/if}}` - Conditional blocks
- `{{#each collection}}...{{/each}}` - Iteration
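Because the variables are Handlebars-style, a quick way to see how `{{variable_name}}`, `{{#if}}`, and `{{#each}}` resolve is to render a small fragment with the `handlebars` npm package, as in the sketch below. This is purely illustrative; the workflow's agent populates the templates directly rather than running code like this.
```typescript
import Handlebars from "handlebars";

// A tiny fragment in the same style as index-template.md.
const fragment = `# {{project_name}} Documentation Index
{{#if is_multi_part}}This project consists of {{parts_count}} parts:
{{/if}}{{#each project_parts}}- {{part_name}} ({{part_id}})
{{/each}}`;

const render = Handlebars.compile(fragment);
console.log(
  render({
    project_name: "my-app",
    is_multi_part: true,
    parts_count: 2,
    project_parts: [
      { part_name: "client", part_id: "client" },
      { part_name: "server", part_id: "server" },
    ],
  }),
);
```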
## Additional Templates
Architecture-specific templates are dynamically loaded from `/bmad/bmm/workflows/3-solutioning/templates/`, based on the architecture type matched from the registry.
## Notes
- Templates support both simple and complex project structures
- Multi-part projects get part-specific file naming (e.g., `architecture-{part_id}.md`)
- Single-part projects use simplified naming (e.g., `architecture.md`)

View File

@ -0,0 +1,345 @@
# {{target_name}} - Deep Dive Documentation
**Generated:** {{date}}
**Scope:** {{target_path}}
**Files Analyzed:** {{file_count}}
**Lines of Code:** {{total_loc}}
**Workflow Mode:** Exhaustive Deep-Dive
## Overview
{{target_description}}
**Purpose:** {{target_purpose}}
**Key Responsibilities:** {{responsibilities}}
**Integration Points:** {{integration_summary}}
## Complete File Inventory
{{#each files_in_inventory}}
### {{file_path}}
**Purpose:** {{purpose}}
**Lines of Code:** {{loc}}
**File Type:** {{file_type}}
**What Future Contributors Must Know:** {{contributor_note}}
**Exports:**
{{#each exports}}
- `{{signature}}` - {{description}}
{{/each}}
**Dependencies:**
{{#each imports}}
- `{{import_path}}` - {{reason}}
{{/each}}
**Used By:**
{{#each dependents}}
- `{{dependent_path}}`
{{/each}}
**Key Implementation Details:**
```{{language}}
{{key_code_snippet}}
```
{{implementation_notes}}
**Patterns Used:**
{{#each patterns}}
- {{pattern_name}}: {{pattern_description}}
{{/each}}
**State Management:** {{state_approach}}
**Side Effects:**
{{#each side_effects}}
- {{effect_type}}: {{effect_description}}
{{/each}}
**Error Handling:** {{error_handling_approach}}
**Testing:**
- Test File: {{test_file_path}}
- Coverage: {{coverage_percentage}}%
- Test Approach: {{test_approach}}
**Comments/TODOs:**
{{#each todos}}
- Line {{line_number}}: {{todo_text}}
{{/each}}
---
{{/each}}
## Contributor Checklist
- **Risks & Gotchas:** {{risks_notes}}
- **Pre-change Verification Steps:** {{verification_steps}}
- **Suggested Tests Before PR:** {{suggested_tests}}
## Architecture & Design Patterns
### Code Organization
{{organization_approach}}
### Design Patterns
{{#each design_patterns}}
- **{{pattern_name}}**: {{usage_description}}
{{/each}}
### State Management Strategy
{{state_management_details}}
### Error Handling Philosophy
{{error_handling_philosophy}}
### Testing Strategy
{{testing_strategy}}
## Data Flow
{{data_flow_diagram}}
### Data Entry Points
{{#each entry_points}}
- **{{entry_name}}**: {{entry_description}}
{{/each}}
### Data Transformations
{{#each transformations}}
- **{{transformation_name}}**: {{transformation_description}}
{{/each}}
### Data Exit Points
{{#each exit_points}}
- **{{exit_name}}**: {{exit_description}}
{{/each}}
## Integration Points
### APIs Consumed
{{#each apis_consumed}}
- **{{api_endpoint}}**: {{api_description}}
- Method: {{method}}
- Authentication: {{auth_requirement}}
- Response: {{response_schema}}
{{/each}}
### APIs Exposed
{{#each apis_exposed}}
- **{{api_endpoint}}**: {{api_description}}
- Method: {{method}}
- Request: {{request_schema}}
- Response: {{response_schema}}
{{/each}}
### Shared State
{{#each shared_state}}
- **{{state_name}}**: {{state_description}}
- Type: {{state_type}}
- Accessed By: {{accessors}}
{{/each}}
### Events
{{#each events}}
- **{{event_name}}**: {{event_description}}
- Type: {{publish_or_subscribe}}
- Payload: {{payload_schema}}
{{/each}}
### Database Access
{{#each database_operations}}
- **{{table_name}}**: {{operation_type}}
- Queries: {{query_patterns}}
- Indexes Used: {{indexes}}
{{/each}}
## Dependency Graph
{{dependency_graph_visualization}}
### Entry Points (Not Imported by Others in Scope)
{{#each entry_point_files}}
- {{file_path}}
{{/each}}
### Leaf Nodes (Don't Import Others in Scope)
{{#each leaf_files}}
- {{file_path}}
{{/each}}
### Circular Dependencies
{{#if has_circular_dependencies}}
⚠️ Circular dependencies detected:
{{#each circular_deps}}
- {{cycle_description}}
{{/each}}
{{else}}
✓ No circular dependencies detected
{{/if}}
## Testing Analysis
### Test Coverage Summary
- **Statements:** {{statements_coverage}}%
- **Branches:** {{branches_coverage}}%
- **Functions:** {{functions_coverage}}%
- **Lines:** {{lines_coverage}}%
### Test Files
{{#each test_files}}
- **{{test_file_path}}**
- Tests: {{test_count}}
- Approach: {{test_approach}}
- Mocking Strategy: {{mocking_strategy}}
{{/each}}
### Test Utilities Available
{{#each test_utilities}}
- `{{utility_name}}`: {{utility_description}}
{{/each}}
### Testing Gaps
{{#each testing_gaps}}
- {{gap_description}}
{{/each}}
## Related Code & Reuse Opportunities
### Similar Features Elsewhere
{{#each similar_features}}
- **{{feature_name}}** (`{{feature_path}}`)
- Similarity: {{similarity_description}}
- Can Reference For: {{reference_use_case}}
{{/each}}
### Reusable Utilities Available
{{#each reusable_utilities}}
- **{{utility_name}}** (`{{utility_path}}`)
- Purpose: {{utility_purpose}}
- How to Use: {{usage_example}}
{{/each}}
### Patterns to Follow
{{#each patterns_to_follow}}
- **{{pattern_name}}**: Reference `{{reference_file}}` for implementation
{{/each}}
## Implementation Notes
### Code Quality Observations
{{#each quality_observations}}
- {{observation}}
{{/each}}
### TODOs and Future Work
{{#each all_todos}}
- **{{file_path}}:{{line_number}}**: {{todo_text}}
{{/each}}
### Known Issues
{{#each known_issues}}
- {{issue_description}}
{{/each}}
### Optimization Opportunities
{{#each optimizations}}
- {{optimization_suggestion}}
{{/each}}
### Technical Debt
{{#each tech_debt_items}}
- {{debt_description}}
{{/each}}
## Modification Guidance
### To Add New Functionality
{{modification_guidance_add}}
### To Modify Existing Functionality
{{modification_guidance_modify}}
### To Remove/Deprecate
{{modification_guidance_remove}}
### Testing Checklist for Changes
{{#each testing_checklist_items}}
- [ ] {{checklist_item}}
{{/each}}
---
_Generated by `document-project` workflow (deep-dive mode)_
_Base Documentation: docs/index.md_
_Scan Date: {{date}}_
_Analysis Mode: Exhaustive_

View File

@ -0,0 +1,169 @@
# {{project_name}} Documentation Index
**Type:** {{repository_type}}{{#if is_multi_part}} with {{parts_count}} parts{{/if}}
**Primary Language:** {{primary_language}}
**Architecture:** {{architecture_type}}
**Last Updated:** {{date}}
## Project Overview
{{project_description}}
{{#if is_multi_part}}
## Project Structure
This project consists of {{parts_count}} parts:
{{#each project_parts}}
### {{part_name}} ({{part_id}})
- **Type:** {{project_type}}
- **Location:** `{{root_path}}`
- **Tech Stack:** {{tech_stack_summary}}
- **Entry Point:** {{entry_point}}
{{/each}}
## Cross-Part Integration
{{integration_summary}}
{{/if}}
## Quick Reference
{{#if is_single_part}}
- **Tech Stack:** {{tech_stack_summary}}
- **Entry Point:** {{entry_point}}
- **Architecture Pattern:** {{architecture_pattern}}
- **Database:** {{database}}
- **Deployment:** {{deployment_platform}}
{{else}}
{{#each project_parts}}
### {{part_name}} Quick Ref
- **Stack:** {{tech_stack_summary}}
- **Entry:** {{entry_point}}
- **Pattern:** {{architecture_pattern}}
{{/each}}
{{/if}}
## Generated Documentation
### Core Documentation
- [Project Overview](./project-overview.md) - Executive summary and high-level architecture
- [Source Tree Analysis](./source-tree-analysis.md) - Annotated directory structure
{{#if is_single_part}}
- [Architecture](./architecture.md) - Detailed technical architecture
- [Component Inventory](./component-inventory.md) - Catalog of major components{{#if has_ui_components}} and UI elements{{/if}}
- [Development Guide](./development-guide.md) - Local setup and development workflow
{{#if has_api_docs}}- [API Contracts](./api-contracts.md) - API endpoints and schemas{{/if}}
{{#if has_data_models}}- [Data Models](./data-models.md) - Database schema and models{{/if}}
{{else}}
### Part-Specific Documentation
{{#each project_parts}}
#### {{part_name}} ({{part_id}})
- [Architecture](./architecture-{{part_id}}.md) - Technical architecture for {{part_name}}
{{#if has_components}}- [Components](./component-inventory-{{part_id}}.md) - Component catalog{{/if}}
- [Development Guide](./development-guide-{{part_id}}.md) - Setup and dev workflow
{{#if has_api}}- [API Contracts](./api-contracts-{{part_id}}.md) - API documentation{{/if}}
{{#if has_data}}- [Data Models](./data-models-{{part_id}}.md) - Data architecture{{/if}}
{{/each}}
### Integration
- [Integration Architecture](./integration-architecture.md) - How parts communicate
- [Project Parts Metadata](./project-parts.json) - Machine-readable structure
{{/if}}
### Optional Documentation
{{#if has_deployment_guide}}- [Deployment Guide](./deployment-guide.md) - Deployment process and infrastructure{{/if}}
{{#if has_contribution_guide}}- [Contribution Guide](./contribution-guide.md) - Contributing guidelines and standards{{/if}}
## Existing Documentation
{{#if has_existing_docs}}
{{#each existing_docs}}
- [{{title}}]({{path}}) - {{description}}
{{/each}}
{{else}}
No existing documentation files were found in the project.
{{/if}}
## Getting Started
{{#if is_single_part}}
### Prerequisites
{{prerequisites}}
### Setup
```bash
{{setup_commands}}
```
### Run Locally
```bash
{{run_commands}}
```
### Run Tests
```bash
{{test_commands}}
```
{{else}}
{{#each project_parts}}
### {{part_name}} Setup
**Prerequisites:** {{prerequisites}}
**Install & Run:**
```bash
cd {{root_path}}
{{setup_command}}
{{run_command}}
```
{{/each}}
{{/if}}
## For AI-Assisted Development
This documentation was generated specifically to enable AI agents to understand and extend this codebase.
### When Planning New Features:
**UI-only features:**
{{#if is_multi_part}}→ Reference: `architecture-{{ui_part_id}}.md`, `component-inventory-{{ui_part_id}}.md`{{else}}→ Reference: `architecture.md`, `component-inventory.md`{{/if}}
**API/Backend features:**
{{#if is_multi_part}}→ Reference: `architecture-{{api_part_id}}.md`, `api-contracts-{{api_part_id}}.md`, `data-models-{{api_part_id}}.md`{{else}}→ Reference: `architecture.md`{{#if has_api_docs}}, `api-contracts.md`{{/if}}{{#if has_data_models}}, `data-models.md`{{/if}}{{/if}}
**Full-stack features:**
→ Reference: All architecture docs{{#if is_multi_part}} + `integration-architecture.md`{{/if}}
**Deployment changes:**
{{#if has_deployment_guide}}→ Reference: `deployment-guide.md`{{else}}→ Review CI/CD configs in project{{/if}}
---
_Documentation generated by BMAD Method `document-project` workflow_

View File

@ -0,0 +1,103 @@
# {{project_name}} - Project Overview
**Date:** {{date}}
**Type:** {{project_type}}
**Architecture:** {{architecture_type}}
## Executive Summary
{{executive_summary}}
## Project Classification
- **Repository Type:** {{repository_type}}
- **Project Type(s):** {{project_types_list}}
- **Primary Language(s):** {{primary_languages}}
- **Architecture Pattern:** {{architecture_pattern}}
{{#if is_multi_part}}
## Multi-Part Structure
This project consists of {{parts_count}} distinct parts:
{{#each project_parts}}
### {{part_name}}
- **Type:** {{project_type}}
- **Location:** `{{root_path}}`
- **Purpose:** {{purpose}}
- **Tech Stack:** {{tech_stack}}
{{/each}}
### How Parts Integrate
{{integration_description}}
{{/if}}
## Technology Stack Summary
{{#if is_single_part}}
{{technology_table}}
{{else}}
{{#each project_parts}}
### {{part_name}} Stack
{{technology_table}}
{{/each}}
{{/if}}
## Key Features
{{key_features}}
## Architecture Highlights
{{architecture_highlights}}
## Development Overview
### Prerequisites
{{prerequisites}}
### Getting Started
{{getting_started_summary}}
### Key Commands
{{#if is_single_part}}
- **Install:** `{{install_command}}`
- **Dev:** `{{dev_command}}`
- **Build:** `{{build_command}}`
- **Test:** `{{test_command}}`
{{else}}
{{#each project_parts}}
#### {{part_name}}
- **Install:** `{{install_command}}`
- **Dev:** `{{dev_command}}`
{{/each}}
{{/if}}
## Repository Structure
{{repository_structure_summary}}
## Documentation Map
For detailed information, see:
- [index.md](./index.md) - Master documentation index
- [architecture.md](./architecture{{#if is_multi_part}}-{part_id}{{/if}}.md) - Detailed architecture
- [source-tree-analysis.md](./source-tree-analysis.md) - Directory structure
- [development-guide.md](./development-guide{{#if is_multi_part}}-{part_id}{{/if}}.md) - Development workflow
---
_Generated using BMAD Method `document-project` workflow_

View File

@ -0,0 +1,160 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Project Scan Report Schema",
"description": "State tracking file for document-project workflow resumability",
"type": "object",
"required": ["workflow_version", "timestamps", "mode", "scan_level", "completed_steps", "current_step"],
"properties": {
"workflow_version": {
"type": "string",
"description": "Version of document-project workflow",
"example": "1.2.0"
},
"timestamps": {
"type": "object",
"required": ["started", "last_updated"],
"properties": {
"started": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp when workflow started"
},
"last_updated": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp of last state update"
},
"completed": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp when workflow completed (if finished)"
}
}
},
"mode": {
"type": "string",
"enum": ["initial_scan", "full_rescan", "deep_dive"],
"description": "Workflow execution mode"
},
"scan_level": {
"type": "string",
"enum": ["quick", "deep", "exhaustive"],
"description": "Scan depth level (deep_dive mode always uses exhaustive)"
},
"project_root": {
"type": "string",
"description": "Absolute path to project root directory"
},
"output_folder": {
"type": "string",
"description": "Absolute path to output folder"
},
"completed_steps": {
"type": "array",
"items": {
"type": "object",
"required": ["step", "status"],
"properties": {
"step": {
"type": "string",
"description": "Step identifier (e.g., 'step_1', 'step_2')"
},
"status": {
"type": "string",
"enum": ["completed", "partial", "failed"]
},
"timestamp": {
"type": "string",
"format": "date-time"
},
"outputs": {
"type": "array",
"items": { "type": "string" },
"description": "Files written during this step"
},
"summary": {
"type": "string",
"description": "1-2 sentence summary of step outcome"
}
}
}
},
"current_step": {
"type": "string",
"description": "Current step identifier for resumption"
},
"findings": {
"type": "object",
"description": "High-level summaries only (detailed findings purged after writing)",
"properties": {
"project_classification": {
"type": "object",
"properties": {
"repository_type": { "type": "string" },
"parts_count": { "type": "integer" },
"primary_language": { "type": "string" },
"architecture_type": { "type": "string" }
}
},
"technology_stack": {
"type": "array",
"items": {
"type": "object",
"properties": {
"part_id": { "type": "string" },
"tech_summary": { "type": "string" }
}
}
},
"batches_completed": {
"type": "array",
"description": "For deep/exhaustive scans: subfolders processed",
"items": {
"type": "object",
"properties": {
"path": { "type": "string" },
"files_scanned": { "type": "integer" },
"summary": { "type": "string" }
}
}
}
}
},
"outputs_generated": {
"type": "array",
"items": { "type": "string" },
"description": "List of all output files generated"
},
"resume_instructions": {
"type": "string",
"description": "Instructions for resuming from current_step"
},
"validation_status": {
"type": "object",
"properties": {
"last_validated": {
"type": "string",
"format": "date-time"
},
"validation_errors": {
"type": "array",
"items": { "type": "string" }
}
}
},
"deep_dive_targets": {
"type": "array",
"description": "Track deep-dive areas analyzed (for deep_dive mode)",
"items": {
"type": "object",
"properties": {
"target_name": { "type": "string" },
"target_path": { "type": "string" },
"files_analyzed": { "type": "integer" },
"output_file": { "type": "string" },
"timestamp": { "type": "string", "format": "date-time" }
}
}
}
}
}

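For reference, a minimal state file conforming to this schema might look like the following sketch (expressed as a TypeScript object for brevity; every value is an illustrative assumption rather than output from a real scan):

```typescript
// Hypothetical project-scan-report.json content, shown as a typed object literal.
// Only the required properties plus a few optional ones are populated.
const exampleScanReport = {
  workflow_version: "1.2.0",
  timestamps: {
    started: "2025-10-13T02:00:00Z",
    last_updated: "2025-10-13T02:15:00Z",
  },
  mode: "initial_scan",
  scan_level: "quick",
  project_root: "/path/to/project",
  output_folder: "/path/to/project/docs",
  completed_steps: [
    {
      step: "step_1",
      status: "completed",
      timestamp: "2025-10-13T02:05:00Z",
      outputs: ["docs/index.md"],
      summary: "Classified repository and detected primary language.",
    },
  ],
  current_step: "step_2",
  outputs_generated: ["docs/index.md"],
  resume_instructions: "Resume from step_2 (technology stack detection).",
};

// A runner could persist it as JSON, e.g.:
// fs.writeFileSync("project-scan-report.json", JSON.stringify(exampleScanReport, null, 2));
```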
View File

@ -0,0 +1,135 @@
# {{project_name}} - Source Tree Analysis
**Date:** {{date}}
## Overview
{{source_tree_overview}}
{{#if is_multi_part}}
## Multi-Part Structure
This project is organized into {{parts_count}} distinct parts:
{{#each project_parts}}
- **{{part_name}}** (`{{root_path}}`): {{purpose}}
{{/each}}
{{/if}}
## Complete Directory Structure
```
{{complete_source_tree}}
```
## Critical Directories
{{#each critical_folders}}
### `{{folder_path}}`
{{description}}
**Purpose:** {{purpose}}
**Contains:** {{contents_summary}}
{{#if entry_points}}**Entry Points:** {{entry_points}}{{/if}}
{{#if integration_note}}**Integration:** {{integration_note}}{{/if}}
{{/each}}
{{#if is_multi_part}}
## Part-Specific Trees
{{#each project_parts}}
### {{part_name}} Structure
```
{{source_tree}}
```
**Key Directories:**
{{#each critical_directories}}
- **`{{path}}`**: {{description}}
{{/each}}
{{/each}}
## Integration Points
{{#each integration_points}}
### {{from_part}} → {{to_part}}
- **Location:** `{{integration_path}}`
- **Type:** {{integration_type}}
- **Details:** {{details}}
{{/each}}
{{/if}}
## Entry Points
{{#if is_single_part}}
- **Main Entry:** `{{main_entry_point}}`
{{#if additional_entry_points}}
- **Additional:**
{{#each additional_entry_points}}
- `{{path}}`: {{description}}
{{/each}}
{{/if}}
{{else}}
{{#each project_parts}}
### {{part_name}}
- **Entry Point:** `{{entry_point}}`
- **Bootstrap:** {{bootstrap_description}}
{{/each}}
{{/if}}
## File Organization Patterns
{{file_organization_patterns}}
## Key File Types
{{#each file_type_patterns}}
### {{file_type}}
- **Pattern:** `{{pattern}}`
- **Purpose:** {{purpose}}
- **Examples:** {{examples}}
{{/each}}
## Asset Locations
{{#if has_assets}}
{{#each asset_locations}}
- **{{asset_type}}**: `{{location}}` ({{file_count}} files, {{total_size}})
{{/each}}
{{else}}
No significant assets detected.
{{/if}}
## Configuration Files
{{#each config_files}}
- **`{{path}}`**: {{description}}
{{/each}}
## Notes for Development
{{development_notes}}
---
_Generated using BMAD Method `document-project` workflow_

View File

@ -0,0 +1,98 @@
# Document Project Workflow Configuration
name: "document-project"
version: "1.2.0"
description: "Analyzes and documents brownfield projects by scanning codebase, architecture, and patterns to create comprehensive reference documentation for AI-assisted development"
author: "BMad"
# Critical variables
config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
date: system-generated
# Module path and component files
installed_path: "{project-root}/bmad/bmm/workflows/document-project"
template: false # This is an action workflow with multiple output files
instructions: "{installed_path}/instructions.md"
validation: "{installed_path}/checklist.md"
# Required data files - CRITICAL for project type detection and documentation requirements
project_types_csv: "{project-root}/bmad/bmm/workflows/3-solutioning/project-types/project-types.csv"
architecture_registry_csv: "{project-root}/bmad/bmm/workflows/3-solutioning/templates/registry.csv"
documentation_requirements_csv: "{installed_path}/documentation-requirements.csv"
# Architecture template references
architecture_templates_path: "{project-root}/bmad/bmm/workflows/3-solutioning/templates"
# Optional input - project root to scan (defaults to current working directory)
recommended_inputs:
- project_root: "User will specify or use current directory"
- existing_readme: "README.md at project root (if exists)"
- project_config: "package.json, go.mod, requirements.txt, etc. (auto-detected)"
# Output configuration - Multiple files generated in output folder
# File naming depends on project structure (simple vs multi-part)
# Simple projects: index.md, architecture.md, etc.
# Multi-part projects: index.md, architecture-{part_id}.md, etc.
default_output_files:
- index: "{output_folder}/index.md"
- project_overview: "{output_folder}/project-overview.md"
- architecture: "{output_folder}/architecture.md" # or architecture-{part_id}.md for multi-part
- source_tree: "{output_folder}/source-tree-analysis.md"
- component_inventory: "{output_folder}/component-inventory.md" # or component-inventory-{part_id}.md
- development_guide: "{output_folder}/development-guide.md" # or development-guide-{part_id}.md
- deployment_guide: "{output_folder}/deployment-guide.md" # optional, if deployment config found
- contribution_guide: "{output_folder}/contribution-guide.md" # optional, if CONTRIBUTING.md found
- api_contracts: "{output_folder}/api-contracts.md" # optional, per part if needed
- data_models: "{output_folder}/data-models.md" # optional, per part if needed
- integration_architecture: "{output_folder}/integration-architecture.md" # only for multi-part
- project_parts: "{output_folder}/project-parts.json" # metadata for multi-part projects
- deep_dive: "{output_folder}/deep-dive-{sanitized_target_name}.md" # deep-dive mode output
- project_scan_report: "{output_folder}/project-scan-report.json" # state tracking for resumability
# Runtime variables (generated during workflow execution)
runtime_variables:
- workflow_mode: "initial_scan | full_rescan | deep_dive"
- scan_level: "quick | deep | exhaustive (default: quick)"
- project_type: "Detected project type (web, backend, cli, etc.)"
- project_parts: "Array of project parts for multi-part projects"
- architecture_match: "Matched architecture from registry"
- doc_requirements: "Documentation requirements for project type"
- tech_stack: "Detected technology stack"
- existing_docs: "Discovered existing documentation"
- deep_dive_target: "Target area for deep-dive analysis (if deep-dive mode)"
- deep_dive_count: "Number of deep-dive docs generated"
- resume_point: "Step to resume from (if resuming interrupted workflow)"
- state_file: "Path to project-scan-report.json for state tracking"
# Scan Level Definitions
scan_levels:
quick:
description: "Pattern-based scanning without reading source files"
duration: "2-5 minutes"
reads: "Config files, package manifests, directory structure only"
use_case: "Quick project overview, initial understanding"
default: true
deep:
description: "Reads files in critical directories per project type"
duration: "10-30 minutes"
reads: "Critical files based on documentation_requirements.csv patterns"
use_case: "Comprehensive documentation for brownfield PRD"
default: false
exhaustive:
description: "Reads ALL source files in project"
duration: "30-120 minutes"
reads: "Every source file (excluding node_modules, dist, build)"
use_case: "Complete analysis, migration planning, detailed audit"
default: false
# Resumability Settings
resumability:
enabled: true
state_file_location: "{output_folder}/project-scan-report.json"
state_file_max_age: "24 hours"
auto_prompt_resume: true
archive_old_state: true
archive_location: "{output_folder}/.archive/"

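As a rough illustration of how a runner could honor these resumability settings, the sketch below checks the state file's age against `state_file_max_age` and archives stale state before a fresh run (the 24-hour constant and archive folder mirror the settings above; the function name and return values are assumptions for illustration):

```typescript
import * as fs from "fs";
import * as path from "path";

// state_file_max_age: "24 hours"
const MAX_AGE_MS = 24 * 60 * 60 * 1000;

function prepareStateFile(outputFolder: string): "resume" | "fresh" {
  const stateFile = path.join(outputFolder, "project-scan-report.json");
  if (!fs.existsSync(stateFile)) return "fresh";

  const ageMs = Date.now() - fs.statSync(stateFile).mtimeMs;
  if (ageMs <= MAX_AGE_MS) {
    // auto_prompt_resume: the real workflow would ask the user before resuming.
    return "resume";
  }

  // archive_old_state: move stale state into the archive folder instead of deleting it.
  const archiveDir = path.join(outputFolder, ".archive");
  fs.mkdirSync(archiveDir, { recursive: true });
  fs.renameSync(stateFile, path.join(archiveDir, `project-scan-report-${Date.now()}.json`));
  return "fresh";
}
```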
View File

@ -0,0 +1,298 @@
# Deep-Dive Documentation Instructions
<workflow>
<critical>This workflow performs exhaustive deep-dive documentation of specific areas</critical>
<critical>Called by: ../document-project/instructions.md router</critical>
<critical>Handles: deep_dive mode only</critical>
<step n="13" goal="Deep-dive documentation of specific area" if="workflow_mode == deep_dive">
<critical>Deep-dive mode requires literal full-file review. Sampling, guessing, or relying solely on tooling output is FORBIDDEN.</critical>
<action>Load existing project structure from index.md and project-parts.json (if exists)</action>
<action>Load source tree analysis to understand available areas</action>
<step n="13a" goal="Identify area for deep-dive">
<action>Analyze existing documentation to suggest deep-dive options</action>
<ask>What area would you like to deep-dive into?
**Suggested Areas Based on Project Structure:**
{{#if has_api_routes}}
### API Routes ({{api_route_count}} endpoints found)
{{#each api_route_groups}}
{{group_index}}. {{group_name}} - {{endpoint_count}} endpoints in `{{path}}`
{{/each}}
{{/if}}
{{#if has_feature_modules}}
### Feature Modules ({{feature_count}} features)
{{#each feature_modules}}
{{module_index}}. {{module_name}} - {{file_count}} files in `{{path}}`
{{/each}}
{{/if}}
{{#if has_ui_components}}
### UI Component Areas
{{#each component_groups}}
{{group_index}}. {{group_name}} - {{component_count}} components in `{{path}}`
{{/each}}
{{/if}}
{{#if has_services}}
### Services/Business Logic
{{#each service_groups}}
{{service_index}}. {{service_name}} - `{{path}}`
{{/each}}
{{/if}}
**Or specify custom:**
- Folder path (e.g., "client/src/features/dashboard")
- File path (e.g., "server/src/api/users.ts")
- Feature name (e.g., "authentication system")
Enter your choice (number or custom path):
</ask>
<action>Parse user input to determine:
- target_type: "folder" | "file" | "feature" | "api_group" | "component_group"
- target_path: Absolute path to scan
- target_name: Human-readable name for documentation
- target_scope: List of all files to analyze
</action>
<action>Store as {{deep_dive_target}}</action>
<action>Display confirmation:
Target: {{target_name}}
Type: {{target_type}}
Path: {{target_path}}
Estimated files to analyze: {{estimated_file_count}}
This will read EVERY file in this area. Proceed? [y/n]
</action>
<action if="user confirms 'n'">Return to Step 13a (select different area)</action>
</step>
<step n="13b" goal="Comprehensive exhaustive scan of target area">
<action>Set scan_mode = "exhaustive"</action>
<action>Initialize file_inventory = []</action>
<critical>You must read every line of every file in scope and capture a plain-language explanation (what the file does, side effects, why it matters) that future developer agents can act on. No shortcuts.</critical>
<check if="target_type == folder">
<action>Get complete recursive file list from {{target_path}}</action>
<action>Filter out: node_modules/, .git/, dist/, build/, coverage/, *.min.js, *.map</action>
<action>For EVERY remaining file in folder:
- Read complete file contents (all lines)
- Extract all exports (functions, classes, types, interfaces, constants)
- Extract all imports (dependencies)
- Identify purpose from comments and code structure
- Write 1-2 sentences (minimum) in natural language describing behaviour, side effects, assumptions, and anything a developer must know before modifying the file
- Extract function signatures with parameter types and return types
- Note any TODOs, FIXMEs, or comments
- Identify patterns (hooks, components, services, controllers, etc.)
- Capture per-file contributor guidance: `contributor_note`, `risks`, `verification_steps`, `suggested_tests`
- Store in file_inventory
</action>
</check>
<check if="target_type == file">
<action>Read complete file at {{target_path}}</action>
<action>Extract all information as above</action>
<action>Read all files it imports (follow import chain 1 level deep)</action>
<action>Find all files that import this file (dependents via grep)</action>
<action>Store all in file_inventory</action>
</check>
<check if="target_type == api_group">
<action>Identify all route/controller files in API group</action>
<action>Read all route handlers completely</action>
<action>Read associated middleware, controllers, services</action>
<action>Read data models and schemas used</action>
<action>Extract complete request/response schemas</action>
<action>Document authentication and authorization requirements</action>
<action>Store all in file_inventory</action>
</check>
<check if="target_type == feature">
<action>Search codebase for all files related to feature name</action>
<action>Include: UI components, API endpoints, models, services, tests</action>
<action>Read each file completely</action>
<action>Store all in file_inventory</action>
</check>
<check if="target_type == component_group">
<action>Get all component files in group</action>
<action>Read each component completely</action>
<action>Extract: Props interfaces, hooks used, child components, state management</action>
<action>Store all in file_inventory</action>
</check>
<action>For each file in file_inventory, document:
- **File Path:** Full path
- **Purpose:** What this file does (1-2 sentences)
- **Lines of Code:** Total LOC
- **Exports:** Complete list with signatures
  - Functions: `functionName(param: Type): ReturnType` - Description
  - Classes: `ClassName` - Description with key methods
  - Types/Interfaces: `TypeName` - Description
  - Constants: `CONSTANT_NAME: Type` - Description
- **Imports/Dependencies:** What it uses and why
- **Used By:** Files that import this (dependents)
- **Key Implementation Details:** Important logic, algorithms, patterns
- **State Management:** If applicable (Redux, Context, local state)
- **Side Effects:** API calls, database queries, file I/O, external services
- **Error Handling:** Try/catch blocks, error boundaries, validation
- **Testing:** Associated test files and coverage
- **Comments/TODOs:** Any inline documentation or planned work
</action>
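For illustration only, a single entry in the resulting inventory might be shaped roughly like the sketch below (written as a TypeScript object; the file path, field names, and values are hypothetical assumptions, not prescribed by this workflow):

```typescript
// Hypothetical inventory entry for one scanned file.
const exampleInventoryEntry = {
  filePath: "client/src/features/dashboard/useDashboardData.ts",
  purpose: "React hook that loads and caches dashboard metrics from the reporting API.",
  linesOfCode: 142,
  exports: [
    {
      kind: "function",
      signature: "useDashboardData(range: DateRange): DashboardState",
      description: "Primary data-loading hook",
    },
  ],
  imports: ["@tanstack/react-query", "../api/reportingClient"],
  usedBy: ["client/src/features/dashboard/DashboardPage.tsx"],
  keyImplementationDetails: "Normalizes the date range before building the query key.",
  sideEffects: ["GET /api/reports/summary"],
  errorHandling: "Falls back to cached data and surfaces a toast on request failure.",
  testing: "useDashboardData.test.ts (happy path + network error)",
  todos: ["TODO: memoize range normalization"],
  contributor_note: "Changing the response shape requires updating DashboardState.",
  risks: ["Silent cache staleness if the query key omits the date range"],
  verification_steps: ["Run the dashboard locally against the mock API"],
  suggested_tests: ["npm test -- useDashboardData"],
};
```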
<template-output>comprehensive_file_inventory</template-output>
</step>
<step n="13c" goal="Analyze relationships and data flow">
<action>Build dependency graph for scanned area:
- Create graph with files as nodes
- Add edges for import relationships
- Identify circular dependencies if any
- Find entry points (files not imported by others in scope)
- Find leaf nodes (files that don't import others in scope)
</action>
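A minimal sketch of the graph construction described above, assuming imports have already been resolved to in-scope file paths during the scan (the data shape is illustrative, not mandated by this workflow):

```typescript
type FileNode = { filePath: string; imports: string[] };

// Build adjacency list: an edge A -> B means "A imports B".
function buildDependencyGraph(files: FileNode[]) {
  const inScope = new Set(files.map((f) => f.filePath));
  const edges = new Map<string, string[]>();
  for (const f of files) {
    edges.set(f.filePath, f.imports.filter((dep) => inScope.has(dep)));
  }

  // Entry points: imported by no one in scope. Leaf nodes: import nothing in scope.
  const imported = new Set([...edges.values()].flat());
  const entryPoints = files.filter((f) => !imported.has(f.filePath)).map((f) => f.filePath);
  const leafNodes = files.filter((f) => edges.get(f.filePath)!.length === 0).map((f) => f.filePath);

  // Detect circular dependencies with a DFS that tracks the current path.
  const cycles: string[][] = [];
  const visiting = new Set<string>();
  const visited = new Set<string>();
  const dfs = (node: string, path: string[]) => {
    if (visiting.has(node)) {
      cycles.push([...path.slice(path.indexOf(node)), node]);
      return;
    }
    if (visited.has(node)) return;
    visiting.add(node);
    for (const dep of edges.get(node) ?? []) dfs(dep, [...path, node]);
    visiting.delete(node);
    visited.add(node);
  };
  for (const f of files) dfs(f.filePath, []);

  return { edges, entryPoints, leafNodes, cycles };
}
```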
<action>Trace data flow through the system:
- Follow function calls and data transformations
- Track API calls and their responses
- Document state updates and propagation
- Map database queries and mutations
</action>
<action>Identify integration points:
- External APIs consumed
- Internal APIs/services called
- Shared state accessed
- Events published/subscribed
- Database tables accessed
</action>
<template-output>dependency_graph</template-output>
<template-output>data_flow_analysis</template-output>
<template-output>integration_points</template-output>
</step>
<step n="13d" goal="Find related code and similar patterns">
<action>Search codebase OUTSIDE scanned area for:
- Similar file/folder naming patterns
- Similar function signatures
- Similar component structures
- Similar API patterns
- Reusable utilities that could be used
</action>
<action>Identify code reuse opportunities:
- Shared utilities available
- Design patterns used elsewhere
- Component libraries available
- Helper functions that could apply
</action>
<action>Find reference implementations:
- Similar features in other parts of codebase
- Established patterns to follow
- Testing approaches used elsewhere
</action>
<template-output>related_code_references</template-output>
<template-output>reuse_opportunities</template-output>
</step>
<step n="13e" goal="Generate comprehensive deep-dive documentation">
<action>Create documentation filename: deep-dive-{{sanitized_target_name}}.md</action>
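One possible sanitization rule for deriving {{sanitized_target_name}} from the selected target (the exact rule is not specified here, so treat this sketch as an assumption):

```typescript
// Assumed slug rule: lowercase, with runs of non-alphanumeric characters collapsed to single hyphens.
function sanitizeTargetName(targetName: string): string {
  return targetName
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

// sanitizeTargetName("client/src/features/dashboard") -> "client-src-features-dashboard"
// sanitizeTargetName("Authentication System")         -> "authentication-system"
```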
<action>Aggregate contributor insights across files:
- Combine unique risk/gotcha notes into {{risks_notes}}
- Combine verification steps developers should run before changes into {{verification_steps}}
- Combine recommended test commands into {{suggested_tests}}
</action>
<action>Load complete deep-dive template from: {installed_path}/templates/deep-dive-template.md</action>
<action>Fill template with all collected data from steps 13b-13d</action>
<action>Write filled template to: {output_folder}/deep-dive-{{sanitized_target_name}}.md</action>
<action>Validate deep-dive document completeness</action>
<template-output>deep_dive_documentation</template-output>
<action>Update state file:
- Add to deep_dive_targets array: {"target_name": "{{target_name}}", "target_path": "{{target_path}}", "files_analyzed": {{file_count}}, "output_file": "deep-dive-{{sanitized_target_name}}.md", "timestamp": "{{now}}"}
- Add output to outputs_generated
- Update last_updated timestamp
</action>
</step>
<step n="13f" goal="Update master index with deep-dive link">
<action>Read existing index.md</action>
<action>Check if "Deep-Dive Documentation" section exists</action>
<check if="section does not exist">
<action>Add new section after "Generated Documentation":
## Deep-Dive Documentation
Detailed exhaustive analysis of specific areas:
</action>
</check>
<action>Add link to new deep-dive doc:
- [{{target_name}} Deep-Dive](./deep-dive-{{sanitized_target_name}}.md) - Comprehensive analysis of {{target_description}} ({{file_count}} files, {{total_loc}} LOC) - Generated {{date}}
</action>
<action>Update index metadata:
Last Updated: {{date}}
Deep-Dives: {{deep_dive_count}}
</action>
<action>Save updated index.md</action>
<template-output>updated_index</template-output>
</step>
<step n="13g" goal="Offer to continue or complete">
<action>Display summary:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
## Deep-Dive Documentation Complete! ✓
**Generated:** {output_folder}/deep-dive-{{sanitized_target_name}}.md
**Files Analyzed:** {{file_count}}
**Lines of Code Scanned:** {{total_loc}}
**Time Taken:** ~{{duration}}
**Documentation Includes:**
- Complete file inventory with all exports
- Dependency graph and data flow
- Integration points and API contracts
- Testing analysis and coverage
- Related code and reuse opportunities
- Implementation guidance
**Index Updated:** {output_folder}/index.md now includes link to this deep-dive
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</action>
<ask>Would you like to:
1. **Deep-dive another area** - Analyze another feature/module/folder
2. **Finish** - Complete workflow
Your choice [1/2]:
</ask>
<action if="user selects 1">
<action>Clear current deep_dive_target</action>
<action>Go to Step 13a (select new area)</action>
</action>
<action if="user selects 2">
<action>Display final message:
All deep-dive documentation complete!
**Master Index:** {output_folder}/index.md
**Deep-Dives Generated:** {{deep_dive_count}}
These comprehensive docs are now ready for:
- Architecture review
- Implementation planning
- Code understanding
- Brownfield PRD creation
Thank you for using the document-project workflow!
</action>
<action>Exit workflow</action>
</action>
</step>
</step>
</workflow>

View File

@ -0,0 +1,31 @@
# Deep-Dive Documentation Workflow Configuration
name: "document-project-deep-dive"
description: "Exhaustive deep-dive documentation of specific project areas"
author: "BMad"
# This is a sub-workflow called by document-project/workflow.yaml
parent_workflow: "{project-root}/src/modules/bmm/workflows/1-analysis/document-project/workflow.yaml"
# Critical variables inherited from parent
config_source: "{project-root}/bmad/bmb/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
date: system-generated
# Module path and component files
installed_path: "{project-root}/src/modules/bmm/workflows/1-analysis/document-project/workflows"
template: false # Action workflow
instructions: "{installed_path}/deep-dive-instructions.md"
validation: "{project-root}/src/modules/bmm/workflows/1-analysis/document-project/checklist.md"
# Templates
deep_dive_template: "{project-root}/src/modules/bmm/workflows/1-analysis/document-project/templates/deep-dive-template.md"
# Runtime inputs (passed from parent workflow)
workflow_mode: "deep_dive"
scan_level: "exhaustive" # Deep-dive always uses exhaustive scan
project_root_path: ""
existing_index_path: "" # Path to existing index.md
# Configuration
autonomous: false # Requires user input to select target area

View File

@ -0,0 +1,33 @@
# Full Project Scan Workflow Configuration
name: "document-project-full-scan"
description: "Complete project documentation workflow (initial scan or full rescan)"
author: "BMad"
# This is a sub-workflow called by document-project/workflow.yaml
parent_workflow: "{project-root}/src/modules/bmm/workflows/1-analysis/document-project/workflow.yaml"
# Critical variables inherited from parent
config_source: "{project-root}/bmad/bmb/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
date: system-generated
# Data files
project_types_csv: "{project-root}/src/modules/bmm/workflows/1-analysis/document-project/data/project-types.csv"
documentation_requirements_csv: "{project-root}/src/modules/bmm/workflows/1-analysis/document-project/data/documentation-requirements.csv"
architecture_registry_csv: "{project-root}/src/modules/bmm/workflows/1-analysis/document-project/data/architecture-registry.csv"
# Module path and component files
installed_path: "{project-root}/src/modules/bmm/workflows/1-analysis/document-project/workflows"
template: false # Action workflow
instructions: "{installed_path}/full-scan-instructions.md"
validation: "{project-root}/src/modules/bmm/workflows/1-analysis/document-project/checklist.md"
# Runtime inputs (passed from parent workflow)
workflow_mode: "" # "initial_scan" or "full_rescan"
scan_level: "" # "quick", "deep", or "exhaustive"
resume_mode: false
project_root_path: ""
# Configuration
autonomous: false # Requires user input at key decision points

View File

@ -1,6 +1,6 @@
<step n="1">Load persona from this current agent file (already in context)</step>
<step n="2">🚨 IMMEDIATE ACTION REQUIRED - BEFORE ANY OUTPUT:
- Use Read tool to load {project-root}/bmad/{{module}}/config.yaml NOW
- Load and read {project-root}/bmad/{{module}}/config.yaml NOW
- Store ALL fields as session variables: {user_name}, {communication_language}, {output_folder}
- VERIFY: If config not loaded, STOP and report error to user
- DO NOT PROCEED to step 3 until config is successfully loaded and variables stored</step>

View File

@ -51,13 +51,15 @@ class ActivationBuilder {
// 2. Build menu handlers section with dynamic handlers
const menuHandlers = await this.loadFragment('menu-handlers.xml');
// Build extract list (comma-separated list of used attributes)
const extractList = profile.usedAttributes.join(', ');
// Build handlers (load only needed handlers)
const handlers = await this.buildHandlers(profile);
const processedHandlers = menuHandlers.replace('{DYNAMIC_EXTRACT_LIST}', extractList).replace('{DYNAMIC_HANDLERS}', handlers);
// Remove the extract line from the final output - it's just build metadata
// The extract list tells us which attributes to look for during processing
// but shouldn't appear in the final agent file
const processedHandlers = menuHandlers
.replace('<extract>{DYNAMIC_EXTRACT_LIST}</extract>\n', '') // Remove the entire extract line
.replace('{DYNAMIC_HANDLERS}', handlers);
activation += '\n' + this.indent(processedHandlers, 2) + '\n';

View File

@ -4,12 +4,29 @@
Aside from stability and bug fixes found during the alpha period - the main focus will be on the following:
- In Progress - Brownfield v6 integrated into the workflow.
- In Progress - Full workflow single file tracking.
- In Progress - Codex improvements.
- Advanced Elicitation is not working well with codex
- Brainstorming is somewhat ok with codex, but could be improved
- Validate Gemini CLI - is it able to function at all for any workflows?
- BoMB Tooling included with module install
- Test Architect better integration into workflows
- Document new agent workflows.
- need to segregate game dev workflows and potentially add as an installation choice
- the workflow runner needs to become a series of targeted workflow injections at install time so workflows can be run directly without the bloated intermediary.
- All project levels (0 through 4): manual flows validated through workflow phases 1-4 for greenfield and brownfield
- NPX installer
- github pipelines, branch protection, vulnerability scanners
- subagent injections reenabled
- bmm existing project scanning and integration with workflow phase 0-4 improvements
- Additional custom sections for architecture project types
- DONE: Single Agent web bundler finalized - run `npm run bundle`
- DONE: 4->v6 upgrade installer fixed.
- DONE: v6->v6 updates will no longer remove custom content, so if you have a new agent you created, for example, anywhere under the bmad folder, updates will no longer remove it.
- DONE: if you modify an installed file and upgrade, the file will be saved as a .bak file and the installer will inform you.
- DONE: Game Agents comms style WAY too over the top - reduced a bit.
- need to nest subagents for better organization.
- DONE: need to nest subagents for better organization.
- DONE: Quick note on BMM v6 Flow
- DONE: CC SubAgents installed to sub-folders now.
- DONE: Qwen TOML update.
@ -19,24 +36,13 @@ Aside from stability and bug fixes found during the alpha period - the main focu
- DONE: Agent improvement to loading instruction insertion and customization system overhaul
- DONE: Standalone agents now install to bmad/agents and can also be compiled by the installer
- bmm `testarch` integrated into the BMM workflows after being aligned with the rest of the BMAD Method flow.
- Document new agent workflows.
- need to segregate game dev workflows and potentially add as an installation choice
- the workflow runner needs to become a series of targeted workflow injections at install time so workflows can be run directly without the bloated intermediary.
- All project levels (0 through 4) manual flows validated through workflow phase 1-4
- level 0 (simple addition or update to existing project) workflow is super streamlined from explanation of issue through code implementation
- simple spec file -> context -> implementation
- level 1 (simple update to existing, or a very simple oneshot tool or project)
- NPX installer
- github pipelines, branch protection, vulnerability scanners
- improved subagent injections
- bmm existing project scanning and integration with workflow phase 0-4 improvements
- BTA Module coming soon!
## Needed before Beta → v0 release
Once the alpha is stabilized and we switch to beta, work on v4.x will freeze and the beta will merge to main. The NPX installer will still install v4 by default, but people will be able to npm install the beta version also.
- Orchestration tracking works consistently across all workflow phases on BMM module
- Single Reference Architecture
- Module repository and submission process defined
- Final polished documentation and user guide for each module
- Final polished documentation for overall project architecture
@ -49,5 +55,6 @@ Once the alpha is stabilized and we switch to beta, work on v4.x will freeze and
- Installer offers installation of vetted community modules
- DevOps Module
- Security Module
- BoMB improvements
- Further BoMB improvements
- 2-3 functional Reference Architecture Project Scaffolds and community contribution process defined