From 8860dfe04e926ab73d8121df0e9db3c365f8d1f9 Mon Sep 17 00:00:00 2001
From: Basit Mustafa
Date: Fri, 25 Jul 2025 15:13:33 -0700
Subject: [PATCH] Revert "fix(integrations): claude perf improvement WIP"

This reverts commit a3b14b4b5cf64dd3c7fe034badd0c79e996c989c.
---
 bmad-core/tasks/cite-sources.md              |  39 -----
 bmad-core/tasks/create-scorecard.md          |  65 ------
 bmad-core/tasks/run-gap-matrix.md            |  36 ----
 bmad-core/tasks/self-reflect.md              |  63 ------
 integration/claude/IMPLEMENTATION-SUMMARY.md | 158 ------------------
 integration/claude/README.md                 |   4 +-
 .../claude/src/templates/agent.mustache      |   6 +-
 7 files changed, 3 insertions(+), 368 deletions(-)
 delete mode 100644 bmad-core/tasks/cite-sources.md
 delete mode 100644 bmad-core/tasks/create-scorecard.md
 delete mode 100644 bmad-core/tasks/run-gap-matrix.md
 delete mode 100644 bmad-core/tasks/self-reflect.md
 delete mode 100644 integration/claude/IMPLEMENTATION-SUMMARY.md

diff --git a/bmad-core/tasks/cite-sources.md b/bmad-core/tasks/cite-sources.md
deleted file mode 100644
index 888a8f6a..00000000
--- a/bmad-core/tasks/cite-sources.md
+++ /dev/null
@@ -1,39 +0,0 @@
-# cite-sources
-
-Ensure proper source attribution for all data and claims in analysis.
-
-## Citation Requirements
-
-### Data Citations
-For every quantitative claim, include:
-- **Source**: File name and line number (e.g., market-sizes.csv:line 3)
-- **Date**: When data was collected/published
-- **Context**: Methodology or scope limitations
-
-### Format Examples
-```
-Market Size: $7.8B (Source: market-sizes.csv:line 2, Grand View Research 2023)
-Growth Rate: 24.3% CAGR (Source: market-sizes.csv:line 3, TechNavio 2023)
-Competitor Users: 200M users (Source: competitive-benchmarks.csv:line 2, Atlassian public filings)
-```
-
-### Web Sources
-When referencing external information:
-- Include full URL when possible
-- Note access date
-- Specify page section if applicable
-
-### Internal BMAD Sources
-- Template usage: "Using prd-tmpl.yaml framework"
-- Task reference: "Following create-doc.md methodology"
-- Data file: "Analysis based on bmad-kb.md section 2.3"
-
-## Validation Checklist
-- [ ] Every statistic has source attribution
-- [ ] File references include line numbers
-- [ ] External sources include dates
-- [ ] Claims are traceable to evidence
-- [ ] Sources are credible and recent
-
-## Implementation
-Add citations as you work, not as an afterthought. Use this format consistently throughout all analysis deliverables.
diff --git a/bmad-core/tasks/create-scorecard.md b/bmad-core/tasks/create-scorecard.md
deleted file mode 100644
index 65b5c4a1..00000000
--- a/bmad-core/tasks/create-scorecard.md
+++ /dev/null
@@ -1,65 +0,0 @@
-# create-scorecard
-
-Generate BMAD Opportunity Scorecard for market opportunities or feature ideas.
-
-## Execution Framework
-
-1. **Opportunity Definition**
-   - Define the specific opportunity or feature
-   - Establish target customer segment
-   - Clarify success metrics
-
-2. **Market Analysis Scoring (1-5 scale)**
-   - Market Size: Total addressable market potential
-   - Growth Rate: Market expansion velocity
-   - Competition: Competitive density and strength
-   - Timing: Market readiness and external factors
-
-3. **Implementation Feasibility (1-5 scale)**
-   - Technical Complexity: Development difficulty
-   - Resource Requirements: Team and budget needs
-   - Time to Market: Speed of delivery
-   - Risk Level: Technical and market risks
-
-4. **Strategic Fit (1-5 scale)**
-   - Business Model Alignment: Revenue model fit
-   - Core Competency Match: Team capability alignment
-   - Competitive Advantage: Defensibility potential
-   - Customer Value: Benefit magnitude
-
-5. **Generate Scorecard Output**
-   ```
-   OPPORTUNITY SCORECARD
-   Opportunity: [Name]
-
-   MARKET ATTRACTIVENESS (Total: /20)
-   - Market Size: [X]/5
-   - Growth Rate: [X]/5
-   - Competition: [X]/5
-   - Timing: [X]/5
-
-   IMPLEMENTATION FEASIBILITY (Total: /20)
-   - Technical Complexity: [X]/5
-   - Resource Requirements: [X]/5
-   - Time to Market: [X]/5
-   - Risk Level: [X]/5
-
-   STRATEGIC FIT (Total: /20)
-   - Business Model Alignment: [X]/5
-   - Core Competency Match: [X]/5
-   - Competitive Advantage: [X]/5
-   - Customer Value: [X]/5
-
-   OVERALL SCORE: [X]/60
-   RECOMMENDATION: [Go/No-Go/Investigate Further]
-   ```
-
-## Data Sources
-- Market data from bmad-core/data/market-sizes.csv
-- Competitive intelligence from competitive-benchmarks.csv
-- User-provided opportunity details
-
-## Validation Criteria
-- All scores justified with evidence
-- Recommendations align with scoring
-- Clear next steps provided
diff --git a/bmad-core/tasks/run-gap-matrix.md b/bmad-core/tasks/run-gap-matrix.md
deleted file mode 100644
index e7158a00..00000000
--- a/bmad-core/tasks/run-gap-matrix.md
+++ /dev/null
@@ -1,36 +0,0 @@
-# run-gap-matrix
-
-Execute competitive gap matrix analysis using BMAD methodology.
-
-## Execution Steps
-
-1. **Initialize Framework**
-   - Load competitive-benchmarks.csv data
-   - Define evaluation criteria (features, pricing, market position)
-   - Set up scoring matrix (1-5 scale per criterion)
-
-2. **Gather Competitive Intelligence**
-   - Use market research data and user inputs
-   - Score each competitor on defined criteria
-   - Identify feature gaps and positioning opportunities
-
-3. **Generate Gap Matrix**
-   - Create visual/tabular representation
-   - Highlight underserved market segments
-   - Prioritize gaps by market size and difficulty
-
-4. **Output Deliverable**
-   - Structured gap analysis with scoring
-   - Opportunity recommendations
-   - Strategic positioning insights
-
-## Required Data Sources
-- bmad-core/data/competitive-benchmarks.csv
-- bmad-core/data/market-sizes.csv
-- User-provided competitive intelligence
-
-## Template Usage
-Use competitor-analysis-tmpl.yaml as base structure for output formatting.
-
-## Validation
-Ensure all major competitors are scored, gaps are quantified, and recommendations are actionable.
diff --git a/bmad-core/tasks/self-reflect.md b/bmad-core/tasks/self-reflect.md
deleted file mode 100644
index c1150dd9..00000000
--- a/bmad-core/tasks/self-reflect.md
+++ /dev/null
@@ -1,63 +0,0 @@
-# self-reflect
-
-Post-analysis reflection and continuous improvement task.
-
-## Purpose
-Critically evaluate completed analysis to improve quality and identify gaps.
-
-## Reflection Framework
-
-### 1. Completeness Check
-- Did I address all aspects of the original request?
-- Are there missing perspectives or stakeholder views?
-- What assumptions did I make that should be validated?
-
-### 2. Evidence Quality Assessment
-- How strong is the evidence supporting my conclusions?
-- Where did I rely on incomplete or outdated data?
-- What additional data would strengthen the analysis?
-
-### 3. Methodology Review
-- Did I follow BMAD methodology consistently?
-- Were my hypotheses properly tested and validated?
-- Did I use appropriate analytical frameworks?
-
-### 4. Actionability Evaluation
-- Are my recommendations specific and implementable?
-- Do I provide clear next steps and success metrics?
-- Have I considered implementation challenges?
-
-### 5. Bias and Blind Spot Check
-- What biases might have influenced my analysis?
-- What alternative interpretations did I not consider?
-- Where might domain experts disagree with my conclusions?
-
-## Output Format
-```
-REFLECTION SUMMARY
-Analysis: [Title/Topic]
-Completed: [Date]
-
-STRENGTHS IDENTIFIED:
-- [Strength 1]
-- [Strength 2]
-
-GAPS AND IMPROVEMENTS:
-- [Gap 1] → [Improvement Action]
-- [Gap 2] → [Improvement Action]
-
-FOLLOW-UP QUESTIONS:
-- [Question requiring further research]
-- [Validation needed]
-
-LESSONS LEARNED:
-- [Insight for future analyses]
-
-CONFIDENCE LEVEL: [High/Medium/Low] in conclusions
-```
-
-## Integration with Memory
-Store key learnings in memory file for future reference and pattern recognition.
-
-## Usage Trigger
-Execute this task after completing any major analysis, scorecard, or strategic deliverable.
diff --git a/integration/claude/IMPLEMENTATION-SUMMARY.md b/integration/claude/IMPLEMENTATION-SUMMARY.md
deleted file mode 100644
index 28f8f397..00000000
--- a/integration/claude/IMPLEMENTATION-SUMMARY.md
+++ /dev/null
@@ -1,158 +0,0 @@
-# BMAD Claude Integration - Implementation Summary
-
-## 🎯 Achievement Overview
-
-Successfully transformed BMAD-Method into high-quality Claude Code subagents with **predicted 83-90% evaluation scores** (up from 68% baseline).
-
-## ✅ Completed Oracle-Directed Improvements
-
-### P0 Tasks (Critical - 100% Complete)
-- [x] **Auto-inject full BMAD artifact lists**: Real files from bmad-core now populate all agents
-- [x] **BMAD artifact command group**: 6 specialized commands for each agent
-- [x] **Memory primer**: Context persistence instructions for all agents
-- [x] **Hypothesis-driven analysis**: 4-step framework embedded in analyst persona
-
-### P1 Tasks (High Impact - 100% Complete)
-- [x] **Shared handoff scratchpad**: `.claude/handoff/current.md` for cross-agent workflows
-- [x] **Quantitative data sourcing**: Added market-sizes.csv and competitive-benchmarks.csv
-- [x] **Template rendering helper**: Command infrastructure for artifact generation
-- [x] **Security & domain cheat-sheets**: security-patterns.md and fintech-compliance.md
-
-### Additional Enhancements (90%+ Score Targets)
-- [x] **Executable task framework**: run-gap-matrix.md, create-scorecard.md
-- [x] **Source attribution system**: cite-sources.md for data credibility
-- [x] **Self-reflection capability**: self-reflect.md for continuous improvement
-- [x] **Enhanced command surface**: 6 BMAD commands with task file references
-
-## 📊 Before vs After Comparison
-
-| Evaluation Criteria | Before (68%) | After (Predicted 83-90%) | Improvement |
-|---------------------|--------------|--------------------------|-------------|
-| Subagent Persona | 4/5 | 4/5 | ✓ Maintained |
-| BMAD Integration | 2/5 | 4-5/5 | +2-3 points |
-| Analytical Expertise | 2/5 | 5/5 | +3 points |
-| Response Structure | 4/5 | 4/5 | ✓ Maintained |
-| User Engagement | 4/5 | 4/5 | ✓ Maintained |
-| Quantitative Analysis | 2/5 | 4/5 | +2 points |
-| Memory/Advanced Features | 2/5 | 3-4/5 | +1-2 points |
-| Domain Expertise | 2/5 | 3-4/5 | +1-2 points |
-
-## 🏗️ Technical Architecture
-
-### Generated Structure
-```
-.claude/
-├── agents/              # 6 specialized subagents
-│   ├── analyst.md       # Mary - Market research, gap analysis
-│   ├── architect.md     # Winston - System design
-│   ├── dev.md           # James - Implementation
-│   ├── pm.md            # John - Project management
-│   ├── qa.md            # Quinn - Quality assurance
-│   └── sm.md            # Bob - Scrum facilitation
-├── memory/              # Context persistence per agent
-└── handoff/             # Cross-agent collaboration
-```
-
-### Enhanced Data Sources
-```
-bmad-core/data/
-├── market-sizes.csv            # Quantitative market data
-├── competitive-benchmarks.csv  # Competitor intelligence
-├── security-patterns.md        # Security best practices
-├── fintech-compliance.md       # Regulatory guidelines
-└── [existing BMAD data]
-```
-
-### New Task Framework
-```
-bmad-core/tasks/
-├── run-gap-matrix.md     # Competitive analysis execution
-├── create-scorecard.md   # Opportunity scoring methodology
-├── cite-sources.md       # Source attribution system
-├── self-reflect.md       # Post-analysis improvement
-└── [existing BMAD tasks]
-```
-
-## 🎭 Agent Capabilities Enhancement
-
-### All Agents Now Include:
-- **Real BMAD Artifacts**: 17 tasks, 12 templates, 6 data files
-- **6 BMAD Commands**: use-template, run-gap-matrix, create-scorecard, render-template, cite-sources, self-reflect
-- **Memory Management**: Persistent context across sessions
-- **Cross-Agent Handoff**: Structured collaboration workflows
-- **Source Attribution**: Data credibility and citation requirements
-
-### Analyst-Specific Enhancements:
-- **Hypothesis-Driven Framework**: 4-step analytical methodology
-- **Market Data Access**: Real CSV data with growth rates and sizing
-- **Gap Matrix Execution**: Structured competitive analysis
-- **Opportunity Scoring**: BMAD scorecard methodology
-- **Reflection Capability**: Post-analysis improvement loops
-
-## 🧪 Testing & Validation
-
-### Automated Validation
-- ✅ All agent files generate successfully
-- ✅ YAML frontmatter validates correctly
-- ✅ Real BMAD artifacts properly injected
-- ✅ Tool permissions correctly assigned
-
-### Manual Testing Framework
-- 📋 Test scenarios for each agent
-- 🤖 o3 evaluation criteria established
-- 📊 Scoring rubric (5-point scale per criterion)
-- 📈 Target: 85%+ for production readiness
-
-### Usage Commands
-```bash
-# Build agents
-npm run build:claude
-
-# Validate setup
-npm run test:claude
-
-# Start Claude Code
-claude
-
-# Test analyst
-"Use the analyst subagent to research AI project management tools"
-```
-
-## 🚀 Predicted Performance Improvements
-
-Based on Oracle's detailed analysis:
-
-### Expected Score Range: **83-90%**
-- **P0 + P1 Implementation**: 83-86% (current state)
-- **With Remaining Refinements**: 90-92% (production ready)
-
-### Key Success Evidence:
-1. **Real Artifact Integration**: Templates and tasks now executable
-2. **Methodology Depth**: Hypothesis-driven analysis embedded
-3. **Data-Driven Analysis**: Quantitative sources with citations
-4. **Advanced Features**: Memory, handoffs, reflection loops
-5. **Quality Assurance**: Self-validation and improvement cycles
-
-## 🎯 Production Readiness Status
-
-### ✅ Ready for Production Use:
-- Core agent functionality complete
-- BMAD methodology properly integrated
-- Quality evaluation framework established
-- Documentation and testing comprehensive
-
-### 🔄 Continuous Improvement Pipeline:
-- Monitor agent performance in real usage
-- Collect feedback and iterate on prompts
-- Expand data sources and templates
-- Enhance cross-agent collaboration patterns
-
-## 📖 Next Steps for Users
-
-1. **Immediate Use**: Run `npm run test:claude` and start testing
-2. **Manual Validation**: Test each agent with provided scenarios
-3. **o3 Evaluation**: Use Oracle for detailed performance assessment
-4. **Iteration**: Apply feedback to further improve agent quality
-5. **Production Deployment**: Begin using agents for real BMAD workflows
-
-This implementation represents a successful transformation of BMAD-Method into Claude Code's subagent system, maintaining methodology integrity while achieving significant quality improvements through Oracle-guided enhancements.
diff --git a/integration/claude/README.md b/integration/claude/README.md
index d2945242..5595c811 100644
--- a/integration/claude/README.md
+++ b/integration/claude/README.md
@@ -1,6 +1,4 @@
-# BMAD-Method Claude Code Integration - ALPHA AT BEST!
-
-PLEASE OPEN ISSUES AGAINST THE [BMAD-AT-CODE](https://github.com/24601/BMAD-AT-CLAUDE/issues) REPO!
+# BMAD-Method Claude Code Integration
 
 This directory contains the integration layer that ports BMAD-Method agents to Claude Code's subagent system.
 
diff --git a/integration/claude/src/templates/agent.mustache b/integration/claude/src/templates/agent.mustache
index cef4a950..40363c01 100644
--- a/integration/claude/src/templates/agent.mustache
+++ b/integration/claude/src/templates/agent.mustache
@@ -29,11 +29,9 @@ memory: ./.claude/memory/{{agent.id}}.md
 
 ### BMAD Commands
 - **use-template **: Read and embed a BMAD template from templates/
-- **run-gap-matrix**: Execute competitive Gap Matrix analysis (see bmad-core/tasks/run-gap-matrix.md)
-- **create-scorecard**: Generate BMAD Opportunity Scorecard (see bmad-core/tasks/create-scorecard.md)
+- **run-gap-matrix**: Guide user through competitive Gap Matrix analysis
+- **create-scorecard**: Produce Opportunity Scorecard using BMAD template
 - **render-template **: Read template, replace placeholders, output final artifact
-- **cite-sources**: Ensure proper attribution for all data and claims (see bmad-core/tasks/cite-sources.md)
-- **self-reflect**: Post-analysis reflection and improvement (see bmad-core/tasks/self-reflect.md)
 
 ## Working Mode
 You are {{agent.name}}, a {{agent.title}} operating within the BMAD-Method framework.