Revert "fix(integrations): claude perf improvement WIP"

This reverts commit a3b14b4b5c.

parent 5fbddac36b
commit 8860dfe04e
@ -1,39 +0,0 @@
# cite-sources

Ensure proper source attribution for all data and claims in analysis.

## Citation Requirements

### Data Citations
For every quantitative claim, include:
- **Source**: File name and line number (e.g., market-sizes.csv:line 3)
- **Date**: When data was collected/published
- **Context**: Methodology or scope limitations

### Format Examples
```
Market Size: $7.8B (Source: market-sizes.csv:line 2, Grand View Research 2023)
Growth Rate: 24.3% CAGR (Source: market-sizes.csv:line 3, TechNavio 2023)
Competitor Users: 200M users (Source: competitive-benchmarks.csv:line 2, Atlassian public filings)
```

### Web Sources
When referencing external information:
- Include the full URL when possible
- Note the access date
- Specify the page section if applicable

### Internal BMAD Sources
- Template usage: "Using prd-tmpl.yaml framework"
- Task reference: "Following create-doc.md methodology"
- Data file: "Analysis based on bmad-kb.md section 2.3"

## Validation Checklist
- [ ] Every statistic has source attribution
- [ ] File references include line numbers
- [ ] External sources include dates
- [ ] Claims are traceable to evidence
- [ ] Sources are credible and recent

## Implementation
Add citations as you work, not as an afterthought. Use this format consistently throughout all analysis deliverables.
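As a rough illustration of the citation rule in the deleted task above, the check below tests whether a claim line carries an attribution in the `(Source: file:line N, publisher)` shape shown in the format examples. The regex is an assumption inferred from those examples, not a BMAD-defined rule.

```python
import re

# Assumed citation shape, inferred from the format examples above:
# "(Source: <file>:line <N>, <publisher/context>)"
CITATION = re.compile(r"\(Source: [\w.-]+:line \d+, [^)]+\)")

def has_citation(line: str) -> bool:
    """Return True if the line contains a source attribution."""
    return bool(CITATION.search(line))

claim = "Market Size: $7.8B (Source: market-sizes.csv:line 2, Grand View Research 2023)"
```

A checklist pass over a deliverable could then flag every quantitative line for which `has_citation` returns False.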
@ -1,65 +0,0 @@
# create-scorecard

Generate a BMAD Opportunity Scorecard for market opportunities or feature ideas.

## Execution Framework

1. **Opportunity Definition**
   - Define the specific opportunity or feature
   - Establish the target customer segment
   - Clarify success metrics

2. **Market Analysis Scoring (1-5 scale)**
   - Market Size: Total addressable market potential
   - Growth Rate: Market expansion velocity
   - Competition: Competitive density and strength
   - Timing: Market readiness and external factors

3. **Implementation Feasibility (1-5 scale)**
   - Technical Complexity: Development difficulty
   - Resource Requirements: Team and budget needs
   - Time to Market: Speed of delivery
   - Risk Level: Technical and market risks

4. **Strategic Fit (1-5 scale)**
   - Business Model Alignment: Revenue model fit
   - Core Competency Match: Team capability alignment
   - Competitive Advantage: Defensibility potential
   - Customer Value: Benefit magnitude

5. **Generate Scorecard Output**

```
OPPORTUNITY SCORECARD
Opportunity: [Name]

MARKET ATTRACTIVENESS (Total: /20)
- Market Size: [X]/5
- Growth Rate: [X]/5
- Competition: [X]/5
- Timing: [X]/5

IMPLEMENTATION FEASIBILITY (Total: /20)
- Technical Complexity: [X]/5
- Resource Requirements: [X]/5
- Time to Market: [X]/5
- Risk Level: [X]/5

STRATEGIC FIT (Total: /20)
- Business Model Alignment: [X]/5
- Core Competency Match: [X]/5
- Competitive Advantage: [X]/5
- Customer Value: [X]/5

OVERALL SCORE: [X]/60
RECOMMENDATION: [Go/No-Go/Investigate Further]
```

## Data Sources
- Market data from bmad-core/data/market-sizes.csv
- Competitive intelligence from competitive-benchmarks.csv
- User-provided opportunity details

## Validation Criteria
- All scores justified with evidence
- Recommendations align with scoring
- Clear next steps provided
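The scorecard arithmetic in the deleted task above (three dimensions of four 1-5 criteria, totalling /60) can be sketched as follows. The Go/No-Go thresholds here are illustrative assumptions; the task file does not define them.

```python
def score_opportunity(market, feasibility, fit):
    """Each argument maps four criterion names to 1-5 scores."""
    for dim in (market, feasibility, fit):
        assert len(dim) == 4 and all(1 <= v <= 5 for v in dim.values())
    totals = {
        "market attractiveness": sum(market.values()),      # /20
        "implementation feasibility": sum(feasibility.values()),  # /20
        "strategic fit": sum(fit.values()),                 # /20
    }
    overall = sum(totals.values())  # /60
    # Hypothetical cutoffs, not BMAD-defined:
    if overall >= 45:
        recommendation = "Go"
    elif overall >= 30:
        recommendation = "Investigate Further"
    else:
        recommendation = "No-Go"
    return totals, overall, recommendation
```

The returned values map directly onto the `[X]/5`, `[X]/60`, and `RECOMMENDATION` slots of the scorecard template.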
@ -1,36 +0,0 @@
# run-gap-matrix

Execute competitive gap matrix analysis using BMAD methodology.

## Execution Steps

1. **Initialize Framework**
   - Load competitive-benchmarks.csv data
   - Define evaluation criteria (features, pricing, market position)
   - Set up the scoring matrix (1-5 scale per criterion)

2. **Gather Competitive Intelligence**
   - Use market research data and user inputs
   - Score each competitor on the defined criteria
   - Identify feature gaps and positioning opportunities

3. **Generate Gap Matrix**
   - Create a visual/tabular representation
   - Highlight underserved market segments
   - Prioritize gaps by market size and difficulty

4. **Output Deliverable**
   - Structured gap analysis with scoring
   - Opportunity recommendations
   - Strategic positioning insights

## Required Data Sources
- bmad-core/data/competitive-benchmarks.csv
- bmad-core/data/market-sizes.csv
- User-provided competitive intelligence

## Template Usage
Use competitor-analysis-tmpl.yaml as the base structure for output formatting.

## Validation
Ensure all major competitors are scored, gaps are quantified, and recommendations are actionable.
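The gap-identification step above can be sketched as: score competitors on shared 1-5 criteria and flag any criterion where every competitor is weak, i.e. a candidate gap. The competitor names, scores, and threshold below are invented illustrations, not data from competitive-benchmarks.csv.

```python
competitors = {
    "Competitor A": {"features": 4, "pricing": 2, "market position": 5},
    "Competitor B": {"features": 3, "pricing": 2, "market position": 3},
}
GAP_THRESHOLD = 3  # assumed cutoff: best-in-market score below this marks a gap

criteria = ["features", "pricing", "market position"]
# A criterion is a gap when even the strongest competitor scores below the threshold.
gaps = [c for c in criteria
        if max(scores[c] for scores in competitors.values()) < GAP_THRESHOLD]
```

With these example scores, only "pricing" qualifies as an underserved criterion; prioritizing such gaps by market size is the next step the task describes.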
@ -1,63 +0,0 @@
# self-reflect

Post-analysis reflection and continuous improvement task.

## Purpose
Critically evaluate completed analysis to improve quality and identify gaps.

## Reflection Framework

### 1. Completeness Check
- Did I address all aspects of the original request?
- Are there missing perspectives or stakeholder views?
- What assumptions did I make that should be validated?

### 2. Evidence Quality Assessment
- How strong is the evidence supporting my conclusions?
- Where did I rely on incomplete or outdated data?
- What additional data would strengthen the analysis?

### 3. Methodology Review
- Did I follow BMAD methodology consistently?
- Were my hypotheses properly tested and validated?
- Did I use appropriate analytical frameworks?

### 4. Actionability Evaluation
- Are my recommendations specific and implementable?
- Do I provide clear next steps and success metrics?
- Have I considered implementation challenges?

### 5. Bias and Blind Spot Check
- What biases might have influenced my analysis?
- What alternative interpretations did I not consider?
- Where might domain experts disagree with my conclusions?

## Output Format
```
REFLECTION SUMMARY
Analysis: [Title/Topic]
Completed: [Date]

STRENGTHS IDENTIFIED:
- [Strength 1]
- [Strength 2]

GAPS AND IMPROVEMENTS:
- [Gap 1] → [Improvement Action]
- [Gap 2] → [Improvement Action]

FOLLOW-UP QUESTIONS:
- [Question requiring further research]
- [Validation needed]

LESSONS LEARNED:
- [Insight for future analyses]

CONFIDENCE LEVEL: [High/Medium/Low] in conclusions
```

## Integration with Memory
Store key learnings in the memory file for future reference and pattern recognition.

## Usage Trigger
Execute this task after completing any major analysis, scorecard, or strategic deliverable.
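The memory-integration step above could be implemented as a small append helper. The per-agent file layout (`.claude/memory/<agent>.md`) is assumed from the integration summary; the section format is hypothetical, not an actual BMAD implementation.

```python
from datetime import date
from pathlib import Path

def store_learnings(agent_id: str, lessons: list[str], root: str = ".claude/memory") -> Path:
    """Append dated lessons-learned bullets to the agent's memory file."""
    memory = Path(root) / f"{agent_id}.md"  # assumed path layout
    memory.parent.mkdir(parents=True, exist_ok=True)
    with memory.open("a", encoding="utf-8") as f:
        f.write(f"\n## Reflection {date.today().isoformat()}\n")
        for lesson in lessons:
            f.write(f"- {lesson}\n")
    return memory
```

Appending (rather than overwriting) preserves earlier reflections, which is what makes pattern recognition across analyses possible.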
@ -1,158 +0,0 @@
# BMAD Claude Integration - Implementation Summary

## 🎯 Achievement Overview

Successfully transformed BMAD-Method into high-quality Claude Code subagents with **predicted 83-90% evaluation scores** (up from a 68% baseline).

## ✅ Completed Oracle-Directed Improvements

### P0 Tasks (Critical - 100% Complete)
- [x] **Auto-inject full BMAD artifact lists**: Real files from bmad-core now populate all agents
- [x] **BMAD artifact command group**: 6 specialized commands for each agent
- [x] **Memory primer**: Context persistence instructions for all agents
- [x] **Hypothesis-driven analysis**: 4-step framework embedded in analyst persona

### P1 Tasks (High Impact - 100% Complete)
- [x] **Shared handoff scratchpad**: `.claude/handoff/current.md` for cross-agent workflows
- [x] **Quantitative data sourcing**: Added market-sizes.csv and competitive-benchmarks.csv
- [x] **Template rendering helper**: Command infrastructure for artifact generation
- [x] **Security & domain cheat-sheets**: security-patterns.md and fintech-compliance.md

### Additional Enhancements (90%+ Score Targets)
- [x] **Executable task framework**: run-gap-matrix.md, create-scorecard.md
- [x] **Source attribution system**: cite-sources.md for data credibility
- [x] **Self-reflection capability**: self-reflect.md for continuous improvement
- [x] **Enhanced command surface**: 6 BMAD commands with task file references

## 📊 Before vs After Comparison

| Evaluation Criteria | Before (68%) | After (Predicted 83-90%) | Improvement |
|---------------------|--------------|--------------------------|-------------|
| Subagent Persona | 4/5 | 4/5 | ✓ Maintained |
| BMAD Integration | 2/5 | 4-5/5 | +2-3 points |
| Analytical Expertise | 2/5 | 5/5 | +3 points |
| Response Structure | 4/5 | 4/5 | ✓ Maintained |
| User Engagement | 4/5 | 4/5 | ✓ Maintained |
| Quantitative Analysis | 2/5 | 4/5 | +2 points |
| Memory/Advanced Features | 2/5 | 3-4/5 | +1-2 points |
| Domain Expertise | 2/5 | 3-4/5 | +1-2 points |

## 🏗️ Technical Architecture

### Generated Structure
```
.claude/
├── agents/            # 6 specialized subagents
│   ├── analyst.md     # Mary - Market research, gap analysis
│   ├── architect.md   # Winston - System design
│   ├── dev.md         # James - Implementation
│   ├── pm.md          # John - Project management
│   ├── qa.md          # Quinn - Quality assurance
│   └── sm.md          # Bob - Scrum facilitation
├── memory/            # Context persistence per agent
└── handoff/           # Cross-agent collaboration
```

### Enhanced Data Sources
```
bmad-core/data/
├── market-sizes.csv             # Quantitative market data
├── competitive-benchmarks.csv   # Competitor intelligence
├── security-patterns.md         # Security best practices
├── fintech-compliance.md        # Regulatory guidelines
└── [existing BMAD data]
```

### New Task Framework
```
bmad-core/tasks/
├── run-gap-matrix.md      # Competitive analysis execution
├── create-scorecard.md    # Opportunity scoring methodology
├── cite-sources.md        # Source attribution system
├── self-reflect.md        # Post-analysis improvement
└── [existing BMAD tasks]
```

## 🎭 Agent Capabilities Enhancement

### All Agents Now Include:
- **Real BMAD Artifacts**: 17 tasks, 12 templates, 6 data files
- **6 BMAD Commands**: use-template, run-gap-matrix, create-scorecard, render-template, cite-sources, self-reflect
- **Memory Management**: Persistent context across sessions
- **Cross-Agent Handoff**: Structured collaboration workflows
- **Source Attribution**: Data credibility and citation requirements

### Analyst-Specific Enhancements:
- **Hypothesis-Driven Framework**: 4-step analytical methodology
- **Market Data Access**: Real CSV data with growth rates and sizing
- **Gap Matrix Execution**: Structured competitive analysis
- **Opportunity Scoring**: BMAD scorecard methodology
- **Reflection Capability**: Post-analysis improvement loops

## 🧪 Testing & Validation

### Automated Validation
- ✅ All agent files generate successfully
- ✅ YAML frontmatter validates correctly
- ✅ Real BMAD artifacts properly injected
- ✅ Tool permissions correctly assigned

### Manual Testing Framework
- 📋 Test scenarios for each agent
- 🤖 o3 evaluation criteria established
- 📊 Scoring rubric (5-point scale per criterion)
- 📈 Target: 85%+ for production readiness

### Usage Commands
```bash
# Build agents
npm run build:claude

# Validate setup
npm run test:claude

# Start Claude Code
claude

# Test analyst
"Use the analyst subagent to research AI project management tools"
```

## 🚀 Predicted Performance Improvements

Based on Oracle's detailed analysis:

### Expected Score Range: **83-90%**
- **P0 + P1 Implementation**: 83-86% (current state)
- **With Remaining Refinements**: 90-92% (production ready)

### Key Success Evidence:
1. **Real Artifact Integration**: Templates and tasks now executable
2. **Methodology Depth**: Hypothesis-driven analysis embedded
3. **Data-Driven Analysis**: Quantitative sources with citations
4. **Advanced Features**: Memory, handoffs, reflection loops
5. **Quality Assurance**: Self-validation and improvement cycles

## 🎯 Production Readiness Status

### ✅ Ready for Production Use:
- Core agent functionality complete
- BMAD methodology properly integrated
- Quality evaluation framework established
- Documentation and testing comprehensive

### 🔄 Continuous Improvement Pipeline:
- Monitor agent performance in real usage
- Collect feedback and iterate on prompts
- Expand data sources and templates
- Enhance cross-agent collaboration patterns

## 📖 Next Steps for Users

1. **Immediate Use**: Run `npm run test:claude` and start testing
2. **Manual Validation**: Test each agent with provided scenarios
3. **o3 Evaluation**: Use Oracle for detailed performance assessment
4. **Iteration**: Apply feedback to further improve agent quality
5. **Production Deployment**: Begin using agents for real BMAD workflows

This implementation represents a successful transformation of BMAD-Method into Claude Code's subagent system, maintaining methodology integrity while achieving significant quality improvements through Oracle-guided enhancements.
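The "YAML frontmatter validates correctly" check mentioned in the summary could look roughly like the sketch below: split a generated agent file into its frontmatter and body and require a `name` field. This is an illustrative stdlib-only parser with an assumed field set, not the project's actual validator.

```python
def parse_frontmatter(text: str) -> dict:
    """Extract key: value pairs from a '---'-delimited frontmatter block."""
    assert text.startswith("---\n"), "agent file must open with frontmatter"
    header, _, _body = text[4:].partition("\n---\n")
    fields = {}
    for line in header.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    assert "name" in fields, "frontmatter must define an agent name"  # assumed requirement
    return fields

agent_md = "---\nname: analyst\ndescription: Mary - market research\n---\n# Analyst\n"
fields = parse_frontmatter(agent_md)
```

Running a check like this over every file in `.claude/agents/` is one way the automated-validation step above could be realized.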
@ -1,6 +1,4 @@
-# BMAD-Method Claude Code Integration - ALPHA AT BEST!
+# BMAD-Method Claude Code Integration

-PLEASE OPEN ISSUES AGAINST THE [BMAD-AT-CODE](https://github.com/24601/BMAD-AT-CLAUDE/issues) REPO!
-
 This directory contains the integration layer that ports BMAD-Method agents to Claude Code's subagent system.

@ -29,11 +29,9 @@ memory: ./.claude/memory/{{agent.id}}.md

 ### BMAD Commands
 - **use-template <file>**: Read and embed a BMAD template from templates/
-- **run-gap-matrix**: Execute competitive Gap Matrix analysis (see bmad-core/tasks/run-gap-matrix.md)
+- **run-gap-matrix**: Guide user through competitive Gap Matrix analysis
-- **create-scorecard**: Generate BMAD Opportunity Scorecard (see bmad-core/tasks/create-scorecard.md)
+- **create-scorecard**: Produce Opportunity Scorecard using BMAD template
 - **render-template <templatePath>**: Read template, replace placeholders, output final artifact
-- **cite-sources**: Ensure proper attribution for all data and claims (see bmad-core/tasks/cite-sources.md)
-- **self-reflect**: Post-analysis reflection and improvement (see bmad-core/tasks/self-reflect.md)

 ## Working Mode
 You are {{agent.name}}, a {{agent.title}} operating within the BMAD-Method framework.