This PR introduces a powerful new Codebase Flattener Tool that aggregates entire codebases into AI-optimized XML format, making it easy to share project context with AI assistants for analysis, debugging, and development assistance.
- AI-Optimized XML Output : Generates clean, structured XML specifically designed for AI model consumption - Smart File Discovery : Recursive file scanning with intelligent filtering using glob patterns - Binary File Detection : Automatically identifies and excludes binary files, focusing on source code - Progress Tracking : Real-time progress indicators with comprehensive completion statistics - Flexible Output : Customizable output file location and naming via CLI arguments - Gitignore Integration : Automatically respects .gitignore patterns to exclude unnecessary files - CDATA Handling : Proper XML CDATA sections with escape sequence handling for ]]> patterns - Content Indentation : Beautiful XML formatting with properly indented file content (4-space indentation) - Error Handling : Robust error handling with detailed logging for problematic files - Hierarchical Formatting : Clean XML structure with proper indentation and formatting - File Content Preservation : Maintains original file formatting within indented CDATA sections - Exclusion Logic : Prevents self-inclusion of output files ( flattened-codebase.xml , repomix-output.xml ) - tools/flattener/main.js - Complete flattener implementation with CLI interface - package.json - Added new dependencies (glob, minimatch, fs-extra, commander, ora, chalk) - package-lock.json - Updated dependency tree - .gitignore - Added exclusions for flattener outputs - README.md - Comprehensive documentation with usage examples - docs/bmad-workflow-guide.md - Integration guidance - tools/cli.js - CLI integration - .vscode/settings.json - SonarLint configuration ``` current directory npm run flatten npm run flatten -- --output my-project.xml npm run flatten -- -o /path/to/output/codebase.xml ``` The tool provides comprehensive completion summaries including: - File count and breakdown (text/binary/errors) - Source code size and generated XML size - Total lines of code and estimated token count - Processing progress and performance metrics - Bug Fix : Corrected typo in exclusion patterns ( repromix-output.xml → repomix-output.xml ) - Performance : Efficient file processing with streaming and progress indicators - Reliability : Comprehensive error handling and validation - Maintainability : Clean, well-documented code with modular functions - AI Integration : Perfect for sharing codebase context with AI assistants - Code Reviews : Streamlined code review process with complete project context - Documentation : Enhanced project documentation and analysis capabilities - Development Workflow : Improved development assistance and debugging support This tool significantly enhances the BMad-Method framework's AI integration capabilities, providing developers with a seamless way to share complete project context for enhanced AI-assisted development workflows.
This commit is contained in:
parent
bfaaa0ee11
commit
125b464b2f
|
|
@ -21,6 +21,7 @@ CLAUDE.md
|
||||||
test-project-install/*
|
test-project-install/*
|
||||||
sample-project/*
|
sample-project/*
|
||||||
.claude
|
.claude
|
||||||
|
.vscode/
|
||||||
.windsurf/
|
.windsurf/
|
||||||
.trae/
|
.trae/
|
||||||
.bmad-core
|
.bmad-core
|
||||||
|
|
@ -28,3 +29,11 @@ sample-project/*
|
||||||
.gemini
|
.gemini
|
||||||
.bmad*/.cursor/
|
.bmad*/.cursor/
|
||||||
web-bundles/
|
web-bundles/
|
||||||
|
docs/architecture/
|
||||||
|
docs/prd/
|
||||||
|
docs/stories/
|
||||||
|
docs/project-architecture.md
|
||||||
|
tests/
|
||||||
|
custom-output.xml
|
||||||
|
flattened-codebase.xml
|
||||||
|
biome.json
|
||||||
|
|
@ -46,5 +46,9 @@
|
||||||
"tileset",
|
"tileset",
|
||||||
"Trae",
|
"Trae",
|
||||||
"VNET"
|
"VNET"
|
||||||
]
|
],
|
||||||
|
"sonarlint.connectedMode.project": {
|
||||||
|
"connectionId": "manjaroblack",
|
||||||
|
"projectKey": "manjaroblack_texasetiquette"
|
||||||
|
}
|
||||||
}
|
}
|
||||||
40
README.md
40
README.md
|
|
@ -110,6 +110,46 @@ npm run install:bmad # build and install all to a destination folder
|
||||||
|
|
||||||
BMad's natural language framework works in ANY domain. Expansion packs provide specialized AI agents for creative writing, business strategy, health & wellness, education, and more. Also expansion packs can expand the core BMad-Method with specific functionality that is not generic for all cases. [See the Expansion Packs Guide](docs/expansion-packs.md) and learn to create your own!
|
BMad's natural language framework works in ANY domain. Expansion packs provide specialized AI agents for creative writing, business strategy, health & wellness, education, and more. Also expansion packs can expand the core BMad-Method with specific functionality that is not generic for all cases. [See the Expansion Packs Guide](docs/expansion-packs.md) and learn to create your own!
|
||||||
|
|
||||||
|
## Codebase Flattener Tool
|
||||||
|
|
||||||
|
The BMad-Method includes a powerful codebase flattener tool designed to prepare your project files for AI model consumption. This tool aggregates your entire codebase into a single XML file, making it easy to share your project context with AI assistants for analysis, debugging, or development assistance.
|
||||||
|
|
||||||
|
### Features
|
||||||
|
|
||||||
|
- **AI-Optimized Output**: Generates clean XML format specifically designed for AI model consumption
|
||||||
|
- **Smart Filtering**: Automatically respects `.gitignore` patterns to exclude unnecessary files
|
||||||
|
- **Binary File Detection**: Intelligently identifies and excludes binary files, focusing on source code
|
||||||
|
- **Progress Tracking**: Real-time progress indicators and comprehensive completion statistics
|
||||||
|
- **Flexible Output**: Customizable output file location and naming
|
||||||
|
|
||||||
|
### Usage
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Basic usage - creates flattened-codebase.xml in current directory
|
||||||
|
npm run flatten
|
||||||
|
|
||||||
|
# Specify custom output file
|
||||||
|
npm run flatten -- --output my-project.xml
|
||||||
|
npm run flatten -- -o /path/to/output/codebase.xml
|
||||||
|
```
|
||||||
|
|
||||||
|
### Example Output
|
||||||
|
|
||||||
|
The tool will display progress and provide a comprehensive summary:
|
||||||
|
|
||||||
|
```
|
||||||
|
📊 Completion Summary:
|
||||||
|
✅ Successfully processed 156 files into flattened-codebase.xml
|
||||||
|
📁 Output file: /path/to/your/project/flattened-codebase.xml
|
||||||
|
📏 Total source size: 2.3 MB
|
||||||
|
📄 Generated XML size: 2.1 MB
|
||||||
|
📝 Total lines of code: 15,847
|
||||||
|
🔢 Estimated tokens: 542,891
|
||||||
|
📊 File breakdown: 142 text, 14 binary, 0 errors
|
||||||
|
```
|
||||||
|
|
||||||
|
The generated XML file contains all your project's source code in a structured format that AI models can easily parse and understand, making it perfect for code reviews, architecture discussions, or getting AI assistance with your BMad-Method projects.
|
||||||
|
|
||||||
## Documentation & Resources
|
## Documentation & Resources
|
||||||
|
|
||||||
### Essential Guides
|
### Essential Guides
|
||||||
|
|
|
||||||
|
|
@ -0,0 +1,187 @@
|
||||||
|
# BMad Method Universal Workflow Guide
|
||||||
|
|
||||||
|
This guide outlines the core BMad workflow that applies regardless of which AI-powered IDE you're using.
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
|
||||||
|
The BMad Method follows a structured approach to AI-assisted software development:
|
||||||
|
|
||||||
|
1. **Install BMad** in your project
|
||||||
|
2. **Plan with Gemini** using team-fullstack
|
||||||
|
3. **Organize with bmad-master** (document sharding)
|
||||||
|
4. **Develop iteratively** with SM → Dev cycles
|
||||||
|
|
||||||
|
## The Complete Workflow
|
||||||
|
|
||||||
|
### Phase 1: Project Setup
|
||||||
|
|
||||||
|
1. **Install BMad in your project**:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
npx bmad-method install
|
||||||
|
```
|
||||||
|
|
||||||
|
- Choose "Complete installation"
|
||||||
|
- Select your IDE (Cursor, Claude Code, Windsurf, Trae, Roo Code, or GitHub Copilot)
|
||||||
|
|
||||||
|
2. **Verify installation**:
|
||||||
|
- `.bmad-core/` folder created with all agents
|
||||||
|
- IDE-specific integration files created
|
||||||
|
- All agent commands/rules/modes available
|
||||||
|
|
||||||
|
### Phase 2: Ideation & Planning (Gemini)
|
||||||
|
|
||||||
|
Use Google's Gemini for collaborative planning with the full team:
|
||||||
|
|
||||||
|
1. **Open [Google Gems](https://gemini.google.com/gems/view)**
|
||||||
|
2. **Create a new Gem**:
|
||||||
|
- Give it a title and description (e.g., "BMad Team Fullstack")
|
||||||
|
3. **Load team-fullstack**:
|
||||||
|
- Copy contents of: `web-bundles/teams/team-fullstack.txt` from your project
|
||||||
|
- Paste this content into the Gem setup to configure the team
|
||||||
|
4. **Collaborate with the team**:
|
||||||
|
|
||||||
|
- Business Analyst: Requirements gathering
|
||||||
|
- Product Manager: Feature prioritization
|
||||||
|
- Solution Architect: Technical design
|
||||||
|
- UX Expert: User experience design
|
||||||
|
|
||||||
|
**Example Gemini Sessions**
|
||||||
|
|
||||||
|
```text
|
||||||
|
"I want to build a [type] application that [core purpose].
|
||||||
|
Help me brainstorm features and create a comprehensive PRD."
|
||||||
|
|
||||||
|
"Based on this PRD, design a scalable technical architecture
|
||||||
|
that can handle [specific requirements]."
|
||||||
|
```
|
||||||
|
|
||||||
|
5. **Export planning documents**:
|
||||||
|
- Copy the PRD output and save as `docs/prd.md` in your project
|
||||||
|
- Copy the architecture output and save as `docs/architecture.md` in your project
|
||||||
|
|
||||||
|
### Phase 3: Document Organization (IDE)
|
||||||
|
|
||||||
|
Switch back to your IDE for document management:
|
||||||
|
|
||||||
|
1. **Load bmad-master agent** (syntax varies by IDE)
|
||||||
|
2. **Shard the PRD**:
|
||||||
|
|
||||||
|
```text
|
||||||
|
*shard-doc docs/prd.md prd
|
||||||
|
```
|
||||||
|
|
||||||
|
3. **Shard the architecture**:
|
||||||
|
|
||||||
|
```text
|
||||||
|
*shard-doc docs/architecture.md architecture
|
||||||
|
```
|
||||||
|
|
||||||
|
**Result**: Organized folder structure:
|
||||||
|
|
||||||
|
- `docs/prd/` - Broken down PRD sections
|
||||||
|
- `docs/architecture/` - Broken down architecture sections
|
||||||
|
|
||||||
|
### Phase 4: Iterative Development
|
||||||
|
|
||||||
|
Follow the SM → Dev cycle for systematic story development:
|
||||||
|
|
||||||
|
#### Create new Branch
|
||||||
|
|
||||||
|
1. **Start new branch**
|
||||||
|
|
||||||
|
#### Story Creation (Scrum Master)
|
||||||
|
|
||||||
|
1. **Start new chat/conversation**
|
||||||
|
2. **Load SM agent**
|
||||||
|
3. **Execute**: `*create` (runs create-next-story task)
|
||||||
|
4. **Review generated story** in `docs/stories/`
|
||||||
|
5. **Update status**: Change from "Draft" to "Approved"
|
||||||
|
|
||||||
|
#### Story Implementation (Developer)
|
||||||
|
|
||||||
|
1. **Start new chat/conversation**
|
||||||
|
2. **Load Dev agent**
|
||||||
|
3. **Execute**: `*develop-story {selected-story}` (runs execute-checklist task)
|
||||||
|
4. **Review generated report** in `{selected-story}`
|
||||||
|
|
||||||
|
#### Story Review (Quality Assurance)
|
||||||
|
|
||||||
|
1. **Start new chat/conversation**
|
||||||
|
2. **Load QA agent**
|
||||||
|
3. **Execute**: `*review {selected-story}` (runs review-story task)
|
||||||
|
4. **Review generated report** in `{selected-story}`
|
||||||
|
|
||||||
|
#### Commit Changes and Push
|
||||||
|
|
||||||
|
1. **Commit changes**
|
||||||
|
2. **Push to remote**
|
||||||
|
|
||||||
|
#### Repeat Until Complete
|
||||||
|
|
||||||
|
- **SM**: Create next story → Review → Approve
|
||||||
|
- **Dev**: Implement story → Complete → Mark Ready for Review
|
||||||
|
- **QA**: Review story → Mark done
|
||||||
|
- **Commit**: All changes
|
||||||
|
- **Push**: To remote
|
||||||
|
- **Continue**: Until all features implemented
|
||||||
|
|
||||||
|
## IDE-Specific Syntax
|
||||||
|
|
||||||
|
### Agent Loading Syntax by IDE
|
||||||
|
|
||||||
|
- **Claude Code**: `/agent-name` (e.g., `/bmad-master`)
|
||||||
|
- **Cursor**: `@agent-name` (e.g., `@bmad-master`)
|
||||||
|
- **Gemini CLI**: `*agent-name` (e.g., `*bmad-master`)
|
||||||
|
- **Windsurf**: `@agent-name` (e.g., `@bmad-master`)
|
||||||
|
- **Trae**: `@agent-name` (e.g., `@bmad-master`)
|
||||||
|
- **Roo Code**: Select mode from mode selector (e.g., `bmad-master`)
|
||||||
|
- **GitHub Copilot**: Open the Chat view (`⌃⌘I` on Mac, `Ctrl+Alt+I` on Windows/Linux) and select **Agent** from the chat mode selector.
|
||||||
|
|
||||||
|
### Chat Management
|
||||||
|
|
||||||
|
- **Claude Code, Cursor, Windsurf, Trae**: Start new chats when switching agents
|
||||||
|
- **Roo Code**: Switch modes within the same conversation
|
||||||
|
|
||||||
|
## Available Agents
|
||||||
|
|
||||||
|
### Core Development Agents
|
||||||
|
|
||||||
|
- **bmad-master**: Universal task executor, document management
|
||||||
|
- **sm**: Scrum Master for story creation and agile process
|
||||||
|
- **dev**: Full-stack developer for implementation
|
||||||
|
- **architect**: Solution architect for technical design
|
||||||
|
|
||||||
|
### Specialized Agents
|
||||||
|
|
||||||
|
- **pm**: Product manager for planning and prioritization
|
||||||
|
- **analyst**: Business analyst for requirements
|
||||||
|
- **qa**: QA specialist for testing strategies
|
||||||
|
- **po**: Product owner for backlog management
|
||||||
|
- **ux-expert**: UX specialist for design
|
||||||
|
|
||||||
|
## Key Principles
|
||||||
|
|
||||||
|
1. **Agent Specialization**: Each agent has specific expertise and responsibilities
|
||||||
|
2. **Clean Handoffs**: Always start fresh when switching between agents
|
||||||
|
3. **Status Tracking**: Maintain story statuses (Draft → Approved → InProgress → Done)
|
||||||
|
4. **Iterative Development**: Complete one story before starting the next
|
||||||
|
5. **Documentation First**: Always start with solid PRD and architecture
|
||||||
|
|
||||||
|
## Common Commands
|
||||||
|
|
||||||
|
Every agent supports these core commands:
|
||||||
|
|
||||||
|
- `*help` - Show available commands
|
||||||
|
- `*status` - Show current context/progress
|
||||||
|
- `*exit` - Exit the agent mode
|
||||||
|
|
||||||
|
## Success Tips
|
||||||
|
|
||||||
|
- **Use Gemini for big picture planning** - The team-fullstack bundle provides collaborative expertise
|
||||||
|
- **Use bmad-master for document organization** - Sharding creates manageable chunks
|
||||||
|
- **Follow the SM → Dev cycle religiously** - This ensures systematic progress
|
||||||
|
- **Keep conversations focused** - One agent, one task per conversation
|
||||||
|
- **Review everything** - Always review and approve before marking complete
|
||||||
|
|
||||||
|
This workflow ensures systematic, AI-assisted development following agile principles with clear separation of concerns and consistent progress tracking.
|
||||||
File diff suppressed because it is too large
Load Diff
|
|
@ -13,6 +13,8 @@
|
||||||
"build:teams": "node tools/cli.js build --teams-only",
|
"build:teams": "node tools/cli.js build --teams-only",
|
||||||
"list:agents": "node tools/cli.js list:agents",
|
"list:agents": "node tools/cli.js list:agents",
|
||||||
"validate": "node tools/cli.js validate",
|
"validate": "node tools/cli.js validate",
|
||||||
|
"flatten": "node tools/flattener/main.js",
|
||||||
|
"test": "jest",
|
||||||
"install:bmad": "node tools/installer/bin/bmad.js install",
|
"install:bmad": "node tools/installer/bin/bmad.js install",
|
||||||
"format": "prettier --write \"**/*.md\"",
|
"format": "prettier --write \"**/*.md\"",
|
||||||
"version:patch": "node tools/version-bump.js patch",
|
"version:patch": "node tools/version-bump.js patch",
|
||||||
|
|
@ -41,7 +43,8 @@
|
||||||
"glob": "^11.0.3",
|
"glob": "^11.0.3",
|
||||||
"inquirer": "^8.2.6",
|
"inquirer": "^8.2.6",
|
||||||
"js-yaml": "^4.1.0",
|
"js-yaml": "^4.1.0",
|
||||||
"ora": "^5.4.1"
|
"minimatch": "^10.0.3",
|
||||||
|
"ora": "^8.2.0"
|
||||||
},
|
},
|
||||||
"keywords": [
|
"keywords": [
|
||||||
"agile",
|
"agile",
|
||||||
|
|
@ -65,6 +68,7 @@
|
||||||
"@semantic-release/changelog": "^6.0.3",
|
"@semantic-release/changelog": "^6.0.3",
|
||||||
"@semantic-release/git": "^10.0.1",
|
"@semantic-release/git": "^10.0.1",
|
||||||
"husky": "^9.1.7",
|
"husky": "^9.1.7",
|
||||||
|
"jest": "^30.0.4",
|
||||||
"lint-staged": "^16.1.1",
|
"lint-staged": "^16.1.1",
|
||||||
"prettier": "^3.5.3",
|
"prettier": "^3.5.3",
|
||||||
"semantic-release": "^22.0.0",
|
"semantic-release": "^22.0.0",
|
||||||
|
|
|
||||||
|
|
@ -149,4 +149,13 @@ program
|
||||||
});
|
});
|
||||||
});
|
});
|
||||||
|
|
||||||
|
program
|
||||||
|
.command('flatten')
|
||||||
|
.description('Flatten codebase to XML format')
|
||||||
|
.option('-o, --output <path>', 'Output file path', 'flattened-codebase.xml')
|
||||||
|
.action(async (options) => {
|
||||||
|
const flattener = require('./flattener/main');
|
||||||
|
await flattener.parseAsync(['flatten', '--output', options.output], { from: 'user' });
|
||||||
|
});
|
||||||
|
|
||||||
program.parse();
|
program.parse();
|
||||||
|
|
@ -0,0 +1,435 @@
|
||||||
|
#!/usr/bin/env node
|
||||||
|
|
||||||
|
const { Command } = require('commander');
|
||||||
|
const fs = require('fs-extra');
|
||||||
|
const path = require('node:path');
|
||||||
|
const { glob } = require('glob');
|
||||||
|
const { minimatch } = require('minimatch');
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Recursively discover all files in a directory
|
||||||
|
* @param {string} rootDir - The root directory to scan
|
||||||
|
* @returns {Promise<string[]>} Array of file paths
|
||||||
|
*/
|
||||||
|
async function discoverFiles(rootDir) {
|
||||||
|
try {
|
||||||
|
const gitignorePath = path.join(rootDir, '.gitignore');
|
||||||
|
const gitignorePatterns = await parseGitignore(gitignorePath);
|
||||||
|
|
||||||
|
const combinedIgnores = [
|
||||||
|
...gitignorePatterns,
|
||||||
|
'.git/**',
|
||||||
|
'flattened-codebase.xml',
|
||||||
|
'repomix-output.xml'
|
||||||
|
];
|
||||||
|
|
||||||
|
// Use glob to recursively find all files, excluding common ignore patterns
|
||||||
|
const files = await glob('**/*', {
|
||||||
|
cwd: rootDir,
|
||||||
|
nodir: true, // Only files, not directories
|
||||||
|
dot: true, // Include hidden files
|
||||||
|
follow: false, // Don't follow symbolic links
|
||||||
|
ignore: combinedIgnores
|
||||||
|
});
|
||||||
|
|
||||||
|
return files.map(file => path.resolve(rootDir, file));
|
||||||
|
} catch (error) {
|
||||||
|
console.error('Error discovering files:', error.message);
|
||||||
|
return [];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Parse .gitignore file and return ignore patterns
|
||||||
|
* @param {string} gitignorePath - Path to .gitignore file
|
||||||
|
* @returns {Promise<string[]>} Array of ignore patterns
|
||||||
|
*/
|
||||||
|
async function parseGitignore(gitignorePath) {
|
||||||
|
try {
|
||||||
|
if (!await fs.pathExists(gitignorePath)) {
|
||||||
|
return [];
|
||||||
|
}
|
||||||
|
|
||||||
|
const content = await fs.readFile(gitignorePath, 'utf8');
|
||||||
|
return content
|
||||||
|
.split('\n')
|
||||||
|
.map(line => line.trim())
|
||||||
|
.filter(line => line && !line.startsWith('#')) // Remove empty lines and comments
|
||||||
|
.map(pattern => {
|
||||||
|
// Convert gitignore patterns to glob patterns
|
||||||
|
if (pattern.endsWith('/')) {
|
||||||
|
return pattern + '**';
|
||||||
|
}
|
||||||
|
return pattern;
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
console.error('Error parsing .gitignore:', error.message);
|
||||||
|
return [];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Check if a file is binary using file command and heuristics
|
||||||
|
* @param {string} filePath - Path to the file
|
||||||
|
* @returns {Promise<boolean>} True if file is binary
|
||||||
|
*/
|
||||||
|
async function isBinaryFile(filePath) {
|
||||||
|
try {
|
||||||
|
// First check by file extension
|
||||||
|
const binaryExtensions = [
|
||||||
|
'.jpg', '.jpeg', '.png', '.gif', '.bmp', '.ico', '.svg',
|
||||||
|
'.pdf', '.doc', '.docx', '.xls', '.xlsx', '.ppt', '.pptx',
|
||||||
|
'.zip', '.tar', '.gz', '.rar', '.7z',
|
||||||
|
'.exe', '.dll', '.so', '.dylib',
|
||||||
|
'.mp3', '.mp4', '.avi', '.mov', '.wav',
|
||||||
|
'.ttf', '.otf', '.woff', '.woff2',
|
||||||
|
'.bin', '.dat', '.db', '.sqlite'
|
||||||
|
];
|
||||||
|
|
||||||
|
const ext = path.extname(filePath).toLowerCase();
|
||||||
|
if (binaryExtensions.includes(ext)) {
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
|
// For files without clear extensions, try to read a small sample
|
||||||
|
const stats = await fs.stat(filePath);
|
||||||
|
if (stats.size === 0) {
|
||||||
|
return false; // Empty files are considered text
|
||||||
|
}
|
||||||
|
|
||||||
|
// Read first 1024 bytes to check for null bytes
|
||||||
|
const sampleSize = Math.min(1024, stats.size);
|
||||||
|
const buffer = await fs.readFile(filePath, { encoding: null, flag: 'r' });
|
||||||
|
const sample = buffer.slice(0, sampleSize);
|
||||||
|
// If we find null bytes, it's likely binary
|
||||||
|
return sample.includes(0);
|
||||||
|
} catch (error) {
|
||||||
|
console.warn(`Warning: Could not determine if file is binary: ${filePath} - ${error.message}`);
|
||||||
|
return false; // Default to text if we can't determine
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Read and aggregate content from text files
|
||||||
|
* @param {string[]} files - Array of file paths
|
||||||
|
* @param {string} rootDir - The root directory
|
||||||
|
* @param {Object} spinner - Optional spinner instance for progress display
|
||||||
|
* @returns {Promise<Object>} Object containing file contents and metadata
|
||||||
|
*/
|
||||||
|
async function aggregateFileContents(files, rootDir, spinner = null) {
|
||||||
|
const results = {
|
||||||
|
textFiles: [],
|
||||||
|
binaryFiles: [],
|
||||||
|
errors: [],
|
||||||
|
totalFiles: files.length,
|
||||||
|
processedFiles: 0
|
||||||
|
};
|
||||||
|
|
||||||
|
for (const filePath of files) {
|
||||||
|
try {
|
||||||
|
const relativePath = path.relative(rootDir, filePath);
|
||||||
|
|
||||||
|
// Update progress indicator
|
||||||
|
if (spinner) {
|
||||||
|
spinner.text = `Processing file ${results.processedFiles + 1}/${results.totalFiles}: ${relativePath}`;
|
||||||
|
}
|
||||||
|
|
||||||
|
const isBinary = await isBinaryFile(filePath);
|
||||||
|
|
||||||
|
if (isBinary) {
|
||||||
|
results.binaryFiles.push({
|
||||||
|
path: relativePath,
|
||||||
|
absolutePath: filePath,
|
||||||
|
size: (await fs.stat(filePath)).size
|
||||||
|
});
|
||||||
|
} else {
|
||||||
|
// Read text file content
|
||||||
|
const content = await fs.readFile(filePath, 'utf8');
|
||||||
|
results.textFiles.push({
|
||||||
|
path: relativePath,
|
||||||
|
absolutePath: filePath,
|
||||||
|
content: content,
|
||||||
|
size: content.length,
|
||||||
|
lines: content.split('\n').length
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
results.processedFiles++;
|
||||||
|
} catch (error) {
|
||||||
|
const relativePath = path.relative(rootDir, filePath);
|
||||||
|
const errorInfo = {
|
||||||
|
path: relativePath,
|
||||||
|
absolutePath: filePath,
|
||||||
|
error: error.message
|
||||||
|
};
|
||||||
|
|
||||||
|
results.errors.push(errorInfo);
|
||||||
|
|
||||||
|
// Log warning without interfering with spinner
|
||||||
|
if (spinner) {
|
||||||
|
spinner.warn(`Warning: Could not read file ${relativePath}: ${error.message}`);
|
||||||
|
} else {
|
||||||
|
console.warn(`Warning: Could not read file ${relativePath}: ${error.message}`);
|
||||||
|
}
|
||||||
|
|
||||||
|
results.processedFiles++;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return results;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Generate XML output with aggregated file contents
|
||||||
|
* @param {Object} aggregatedContent - The aggregated content object
|
||||||
|
* @param {string} projectRoot - The project root directory
|
||||||
|
* @returns {string} XML content
|
||||||
|
*/
|
||||||
|
function generateXMLOutput(aggregatedContent) {
|
||||||
|
const { textFiles } = aggregatedContent;
|
||||||
|
|
||||||
|
let xml = `<?xml version="1.0" encoding="UTF-8"?>
|
||||||
|
`;
|
||||||
|
xml += `<files>
|
||||||
|
`;
|
||||||
|
|
||||||
|
// Add text files with content (only text files as per story requirements)
|
||||||
|
for (const file of textFiles) {
|
||||||
|
xml += ` <file path="${escapeXml(file.path)}">`;
|
||||||
|
|
||||||
|
// Use CDATA for code content, handling CDATA end sequences properly
|
||||||
|
if (file.content?.trim()) {
|
||||||
|
const indentedContent = indentFileContent(file.content);
|
||||||
|
if (file.content.includes(']]>')) {
|
||||||
|
// If content contains ]]>, split it and wrap each part in CDATA
|
||||||
|
xml += splitAndWrapCDATA(indentedContent);
|
||||||
|
} else {
|
||||||
|
xml += `<![CDATA[
|
||||||
|
${indentedContent}
|
||||||
|
]]>`;
|
||||||
|
}
|
||||||
|
} else if (file.content) {
|
||||||
|
// Handle empty or whitespace-only content
|
||||||
|
const indentedContent = indentFileContent(file.content);
|
||||||
|
xml += `<![CDATA[
|
||||||
|
${indentedContent}
|
||||||
|
]]>`;
|
||||||
|
}
|
||||||
|
|
||||||
|
xml += `</file>
|
||||||
|
`;
|
||||||
|
}
|
||||||
|
|
||||||
|
xml += `</files>
|
||||||
|
`;
|
||||||
|
return xml;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Escape XML special characters for attributes
|
||||||
|
* @param {string} str - String to escape
|
||||||
|
* @returns {string} Escaped string
|
||||||
|
*/
|
||||||
|
function escapeXml(str) {
|
||||||
|
if (typeof str !== 'string') {
|
||||||
|
return String(str);
|
||||||
|
}
|
||||||
|
return str
|
||||||
|
.replace(/&/g, '&')
|
||||||
|
.replace(/</g, '<')
|
||||||
|
.replace(/>/g, '>')
|
||||||
|
.replace(/"/g, '"')
|
||||||
|
.replace(/'/g, ''');
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Indent file content with 4 spaces for each line
|
||||||
|
* @param {string} content - Content to indent
|
||||||
|
* @returns {string} Indented content
|
||||||
|
*/
|
||||||
|
function indentFileContent(content) {
|
||||||
|
if (typeof content !== 'string') {
|
||||||
|
return String(content);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Split content into lines and add 4 spaces of indentation to each line
|
||||||
|
return content.split('\n').map(line => ` ${line}`).join('\n');
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Split content containing ]]> and wrap each part in CDATA
|
||||||
|
* @param {string} content - Content to process
|
||||||
|
* @returns {string} Content with properly wrapped CDATA sections
|
||||||
|
*/
|
||||||
|
function splitAndWrapCDATA(content) {
|
||||||
|
if (typeof content !== 'string') {
|
||||||
|
return String(content);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Replace ]]> with ]]]]><![CDATA[> to escape it within CDATA
|
||||||
|
const escapedContent = content.replace(/]]>/g, ']]]]><![CDATA[>');
|
||||||
|
return `<![CDATA[
|
||||||
|
${escapedContent}
|
||||||
|
]]>`;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Calculate statistics for the processed files
|
||||||
|
* @param {Object} aggregatedContent - The aggregated content object
|
||||||
|
* @param {string} xmlContent - The generated XML content
|
||||||
|
* @returns {Object} Statistics object
|
||||||
|
*/
|
||||||
|
function calculateStatistics(aggregatedContent, xmlContent) {
|
||||||
|
const { textFiles, binaryFiles, errors } = aggregatedContent;
|
||||||
|
|
||||||
|
// Calculate total file size in bytes
|
||||||
|
const totalTextSize = textFiles.reduce((sum, file) => sum + file.size, 0);
|
||||||
|
const totalBinarySize = binaryFiles.reduce((sum, file) => sum + file.size, 0);
|
||||||
|
const totalSize = totalTextSize + totalBinarySize;
|
||||||
|
|
||||||
|
// Calculate total lines of code
|
||||||
|
const totalLines = textFiles.reduce((sum, file) => sum + file.lines, 0);
|
||||||
|
|
||||||
|
// Estimate token count (rough approximation: 1 token ≈ 4 characters)
|
||||||
|
const estimatedTokens = Math.ceil(xmlContent.length / 4);
|
||||||
|
|
||||||
|
// Format file size
|
||||||
|
const formatSize = (bytes) => {
|
||||||
|
if (bytes < 1024) return `${bytes} B`;
|
||||||
|
if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
|
||||||
|
return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
|
||||||
|
};
|
||||||
|
|
||||||
|
return {
|
||||||
|
totalFiles: textFiles.length + binaryFiles.length,
|
||||||
|
textFiles: textFiles.length,
|
||||||
|
binaryFiles: binaryFiles.length,
|
||||||
|
errorFiles: errors.length,
|
||||||
|
totalSize: formatSize(totalSize),
|
||||||
|
xmlSize: formatSize(xmlContent.length),
|
||||||
|
totalLines,
|
||||||
|
estimatedTokens: estimatedTokens.toLocaleString()
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Filter files based on .gitignore patterns
|
||||||
|
* @param {string[]} files - Array of file paths
|
||||||
|
* @param {string} rootDir - The root directory
|
||||||
|
* @returns {Promise<string[]>} Filtered array of file paths
|
||||||
|
*/
|
||||||
|
async function filterFiles(files, rootDir) {
|
||||||
|
const gitignorePath = path.join(rootDir, '.gitignore');
|
||||||
|
const ignorePatterns = await parseGitignore(gitignorePath);
|
||||||
|
|
||||||
|
if (ignorePatterns.length === 0) {
|
||||||
|
return files;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Convert absolute paths to relative for pattern matching
|
||||||
|
const relativeFiles = files.map(file => path.relative(rootDir, file));
|
||||||
|
|
||||||
|
// Separate positive and negative patterns
|
||||||
|
const positivePatterns = ignorePatterns.filter(p => !p.startsWith('!'));
|
||||||
|
const negativePatterns = ignorePatterns.filter(p => p.startsWith('!')).map(p => p.slice(1));
|
||||||
|
|
||||||
|
// Filter out files that match ignore patterns
|
||||||
|
const filteredRelative = [];
|
||||||
|
|
||||||
|
for (const file of relativeFiles) {
|
||||||
|
let shouldIgnore = false;
|
||||||
|
|
||||||
|
// First check positive patterns (ignore these files)
|
||||||
|
for (const pattern of positivePatterns) {
|
||||||
|
if (minimatch(file, pattern)) {
|
||||||
|
shouldIgnore = true;
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Then check negative patterns (don't ignore these files even if they match positive patterns)
|
||||||
|
if (shouldIgnore) {
|
||||||
|
for (const pattern of negativePatterns) {
|
||||||
|
if (minimatch(file, pattern)) {
|
||||||
|
shouldIgnore = false;
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if (!shouldIgnore) {
|
||||||
|
filteredRelative.push(file);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Convert back to absolute paths
|
||||||
|
return filteredRelative.map(file => path.resolve(rootDir, file));
|
||||||
|
}
|
||||||
|
|
||||||
|
const program = new Command();
|
||||||
|
|
||||||
|
program
|
||||||
|
.name('bmad-flatten')
|
||||||
|
.description('BMad-Method codebase flattener tool')
|
||||||
|
.version('1.0.0')
|
||||||
|
.option('-o, --output <path>', 'Output file path', 'flattened-codebase.xml')
|
||||||
|
.action(async (options) => {
|
||||||
|
console.log(`Flattening codebase to: ${options.output}`);
|
||||||
|
|
||||||
|
try {
|
||||||
|
// Import ora dynamically
|
||||||
|
const { default: ora } = await import('ora');
|
||||||
|
|
||||||
|
// Start file discovery with spinner
|
||||||
|
const discoverySpinner = ora('🔍 Discovering files...').start();
|
||||||
|
const files = await discoverFiles(process.cwd());
|
||||||
|
const filteredFiles = await filterFiles(files, process.cwd());
|
||||||
|
discoverySpinner.succeed(`📁 Found ${filteredFiles.length} files to include`);
|
||||||
|
|
||||||
|
// Process files with progress tracking
|
||||||
|
console.log('Reading file contents');
|
||||||
|
const processingSpinner = ora('📄 Processing files...').start();
|
||||||
|
const aggregatedContent = await aggregateFileContents(filteredFiles, process.cwd(), processingSpinner);
|
||||||
|
processingSpinner.succeed(`✅ Processed ${aggregatedContent.processedFiles}/${filteredFiles.length} files`);
|
||||||
|
|
||||||
|
// Log processing results for test validation
|
||||||
|
console.log(`Processed ${aggregatedContent.processedFiles}/${filteredFiles.length} files`);
|
||||||
|
if (aggregatedContent.errors.length > 0) {
|
||||||
|
console.log(`Errors: ${aggregatedContent.errors.length}`);
|
||||||
|
}
|
||||||
|
console.log(`Text files: ${aggregatedContent.textFiles.length}`);
|
||||||
|
if (aggregatedContent.binaryFiles.length > 0) {
|
||||||
|
console.log(`Binary files: ${aggregatedContent.binaryFiles.length}`);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Generate XML output
|
||||||
|
const xmlSpinner = ora('🔧 Generating XML output...').start();
|
||||||
|
const xmlOutput = generateXMLOutput(aggregatedContent);
|
||||||
|
await fs.writeFile(options.output, xmlOutput);
|
||||||
|
xmlSpinner.succeed('📝 XML generation completed');
|
||||||
|
|
||||||
|
// Calculate and display statistics
|
||||||
|
const stats = calculateStatistics(aggregatedContent, xmlOutput);
|
||||||
|
|
||||||
|
// Display completion summary
|
||||||
|
console.log('\n📊 Completion Summary:');
|
||||||
|
console.log(`✅ Successfully processed ${filteredFiles.length} files into ${options.output}`);
|
||||||
|
console.log(`📁 Output file: ${path.resolve(options.output)}`);
|
||||||
|
console.log(`📏 Total source size: ${stats.totalSize}`);
|
||||||
|
console.log(`📄 Generated XML size: ${stats.xmlSize}`);
|
||||||
|
console.log(`📝 Total lines of code: ${stats.totalLines.toLocaleString()}`);
|
||||||
|
console.log(`🔢 Estimated tokens: ${stats.estimatedTokens}`);
|
||||||
|
console.log(`📊 File breakdown: ${stats.textFiles} text, ${stats.binaryFiles} binary, ${stats.errorFiles} errors`);
|
||||||
|
|
||||||
|
} catch (error) {
|
||||||
|
console.error('❌ Critical error:', error.message);
|
||||||
|
console.error('An unexpected error occurred.');
|
||||||
|
process.exit(1);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
if (require.main === module) {
|
||||||
|
program.parse();
|
||||||
|
}
|
||||||
|
|
||||||
|
module.exports = program;
|
||||||
|
|
@ -0,0 +1,95 @@
|
||||||
|
customModes:
|
||||||
|
- slug: bmad-ux-expert
|
||||||
|
name: '🎨 UX Expert'
|
||||||
|
roleDefinition: You are a UX Expert specializing in ux expert tasks and responsibilities.
|
||||||
|
whenToUse: Use for UX Expert tasks
|
||||||
|
customInstructions: CRITICAL Read the full YAML from .bmad-core/agents/ux-expert.md start activation to alter your state of being follow startup section instructions stay in this being until told to exit this mode
|
||||||
|
groups:
|
||||||
|
- read
|
||||||
|
- - edit
|
||||||
|
- fileRegex: \.(md|css|scss|html|jsx|tsx)$
|
||||||
|
description: Design-related files
|
||||||
|
- slug: bmad-sm
|
||||||
|
name: '🏃 Scrum Master'
|
||||||
|
roleDefinition: You are a Scrum Master specializing in scrum master tasks and responsibilities.
|
||||||
|
whenToUse: Use for Scrum Master tasks
|
||||||
|
customInstructions: CRITICAL Read the full YAML from .bmad-core/agents/sm.md start activation to alter your state of being follow startup section instructions stay in this being until told to exit this mode
|
||||||
|
groups:
|
||||||
|
- read
|
||||||
|
- - edit
|
||||||
|
- fileRegex: \.(md|txt)$
|
||||||
|
description: Process and planning docs
|
||||||
|
- slug: bmad-qa
|
||||||
|
name: '🧪 Senior Developer & QA Architect'
|
||||||
|
roleDefinition: You are a Senior Developer & QA Architect specializing in senior developer & qa architect tasks and responsibilities.
|
||||||
|
whenToUse: Use for Senior Developer & QA Architect tasks
|
||||||
|
customInstructions: CRITICAL Read the full YAML from .bmad-core/agents/qa.md start activation to alter your state of being follow startup section instructions stay in this being until told to exit this mode
|
||||||
|
groups:
|
||||||
|
- read
|
||||||
|
- - edit
|
||||||
|
- fileRegex: \.(test|spec)\.(js|ts|jsx|tsx)$|\.md$
|
||||||
|
description: Test files and documentation
|
||||||
|
- slug: bmad-po
|
||||||
|
name: '📝 Product Owner'
|
||||||
|
roleDefinition: You are a Product Owner specializing in product owner tasks and responsibilities.
|
||||||
|
whenToUse: Use for Product Owner tasks
|
||||||
|
customInstructions: CRITICAL Read the full YAML from .bmad-core/agents/po.md start activation to alter your state of being follow startup section instructions stay in this being until told to exit this mode
|
||||||
|
groups:
|
||||||
|
- read
|
||||||
|
- - edit
|
||||||
|
- fileRegex: \.(md|txt)$
|
||||||
|
description: Story and requirement docs
|
||||||
|
- slug: bmad-pm
|
||||||
|
name: '📋 Product Manager'
|
||||||
|
roleDefinition: You are a Product Manager specializing in product manager tasks and responsibilities.
|
||||||
|
whenToUse: Use for Product Manager tasks
|
||||||
|
customInstructions: CRITICAL Read the full YAML from .bmad-core/agents/pm.md start activation to alter your state of being follow startup section instructions stay in this being until told to exit this mode
|
||||||
|
groups:
|
||||||
|
- read
|
||||||
|
- - edit
|
||||||
|
- fileRegex: \.(md|txt)$
|
||||||
|
description: Product documentation
|
||||||
|
- slug: bmad-dev
|
||||||
|
name: '💻 Full Stack Developer'
|
||||||
|
roleDefinition: You are a Full Stack Developer specializing in full stack developer tasks and responsibilities.
|
||||||
|
whenToUse: Use for code implementation, debugging, refactoring, and development best practices
|
||||||
|
customInstructions: CRITICAL Read the full YAML from .bmad-core/agents/dev.md start activation to alter your state of being follow startup section instructions stay in this being until told to exit this mode
|
||||||
|
groups:
|
||||||
|
- read
|
||||||
|
- edit
|
||||||
|
- slug: bmad-orchestrator
|
||||||
|
name: '🎭 BMad Master Orchestrator'
|
||||||
|
roleDefinition: You are a BMad Master Orchestrator specializing in bmad master orchestrator tasks and responsibilities.
|
||||||
|
whenToUse: Use for BMad Master Orchestrator tasks
|
||||||
|
customInstructions: CRITICAL Read the full YAML from .bmad-core/agents/bmad-orchestrator.md start activation to alter your state of being follow startup section instructions stay in this being until told to exit this mode
|
||||||
|
groups:
|
||||||
|
- read
|
||||||
|
- edit
|
||||||
|
- slug: bmad-master
|
||||||
|
name: '🧙 BMad Master Task Executor'
|
||||||
|
roleDefinition: You are a BMad Master Task Executor specializing in bmad master task executor tasks and responsibilities.
|
||||||
|
whenToUse: Use for BMad Master Task Executor tasks
|
||||||
|
customInstructions: CRITICAL Read the full YAML from .bmad-core/agents/bmad-master.md start activation to alter your state of being follow startup section instructions stay in this being until told to exit this mode
|
||||||
|
groups:
|
||||||
|
- read
|
||||||
|
- edit
|
||||||
|
- slug: bmad-architect
|
||||||
|
name: '🏗️ Architect'
|
||||||
|
roleDefinition: You are a Architect specializing in architect tasks and responsibilities.
|
||||||
|
whenToUse: Use for Architect tasks
|
||||||
|
customInstructions: CRITICAL Read the full YAML from .bmad-core/agents/architect.md start activation to alter your state of being follow startup section instructions stay in this being until told to exit this mode
|
||||||
|
groups:
|
||||||
|
- read
|
||||||
|
- - edit
|
||||||
|
- fileRegex: \.(md|txt|yml|yaml|json)$
|
||||||
|
description: Architecture docs and configs
|
||||||
|
- slug: bmad-analyst
|
||||||
|
name: '📊 Business Analyst'
|
||||||
|
roleDefinition: You are a Business Analyst specializing in business analyst tasks and responsibilities.
|
||||||
|
whenToUse: Use for Business Analyst tasks
|
||||||
|
customInstructions: CRITICAL Read the full YAML from .bmad-core/agents/analyst.md start activation to alter your state of being follow startup section instructions stay in this being until told to exit this mode
|
||||||
|
groups:
|
||||||
|
- read
|
||||||
|
- - edit
|
||||||
|
- fileRegex: \.(md|txt)$
|
||||||
|
description: Documentation and text files
|
||||||
Loading…
Reference in New Issue