Learning Coordinator
Cross-System Learning Management
The Learning Coordinator orchestrates knowledge acquisition and continuous improvement across all BMAD system components, so that Claude Code becomes more capable with each project it works on.
Learning Architecture
Learning Channels
learning_channels:
  project_learning:
    within_project_patterns:
      - successful_implementations: "What worked well"
      - failed_attempts: "What didn't work and why"
      - optimization_discoveries: "Performance improvements found"
      - team_insights: "Collaboration effectiveness"
    cross_project_learning:
      - shared_patterns: "Common successful approaches"
      - universal_solutions: "Broadly applicable fixes"
      - best_practices: "Validated methodologies"
      - failure_prevention: "Known pitfalls and avoidance"

  external_learning:
    industry_trends:
      - technology_evolution: "New frameworks, tools, practices"
      - methodology_advances: "Improved development processes"
      - security_updates: "New threats and protections"
      - performance_insights: "Optimization techniques"
    community_wisdom:
      - open_source_patterns: "Popular GitHub patterns"
      - stack_overflow_solutions: "Community-validated fixes"
      - blog_insights: "Expert recommendations"
      - conference_learnings: "Industry presentations"

  system_learning:
    performance_patterns:
      - tool_usage_optimization: "Best tool combinations"
      - workflow_efficiency: "Fastest development paths"
      - resource_utilization: "Optimal system usage"
      - error_recovery: "Failure handling improvements"
    capability_gaps:
      - missing_functionality: "Features users need"
      - integration_opportunities: "New tool connections"
      - automation_potential: "Manual tasks to automate"
      - enhancement_priorities: "High-impact improvements"
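In code, these channels can be represented as a small taxonomy attached to each captured insight. The sketch below is a minimal, hypothetical illustration -- the enum values and the LearningInsight fields are assumptions that mirror the configuration above, not part of the BMAD API.

from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class LearningChannel(Enum):
    # Hypothetical channel names mirroring the top-level configuration keys
    PROJECT = "project_learning"
    EXTERNAL = "external_learning"
    SYSTEM = "system_learning"

@dataclass
class LearningInsight:
    # Illustrative record shape; the real storage schema may differ
    channel: LearningChannel
    category: str              # e.g. "optimization_discoveries"
    summary: str
    confidence: float = 0.5
    captured_at: datetime = field(default_factory=datetime.utcnow)

# Example: tagging an insight discovered during a project
insight = LearningInsight(
    channel=LearningChannel.PROJECT,
    category="optimization_discoveries",
    summary="Batching database writes cut import time significantly",
    confidence=0.8,
)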
Learning Pipeline
Knowledge Capture Mechanisms
from datetime import datetime

async def capture_project_learning(project_context, outcome_data):
    """
    Automatically capture learning from project experiences.
    """
    # Extract patterns from successful implementations
    success_patterns = await extract_success_patterns(
        project_context.implemented_solutions,
        outcome_data.performance_metrics
    )

    # Analyze failure modes and prevention strategies
    failure_analysis = await analyze_failures(
        project_context.failed_attempts,
        outcome_data.error_logs
    )

    # Identify optimization opportunities
    optimizations = await identify_optimizations(
        project_context.performance_data,
        outcome_data.benchmark_results
    )

    # Capture team collaboration insights
    collaboration_insights = await extract_collaboration_patterns(
        project_context.workflow_data,
        outcome_data.team_feedback
    )

    # Store learning with rich metadata
    learning_record = {
        'project_id': project_context.id,
        'timestamp': datetime.utcnow(),
        'success_patterns': success_patterns,
        'failure_analysis': failure_analysis,
        'optimizations': optimizations,
        'collaboration_insights': collaboration_insights,
        'context_metadata': extract_context_metadata(project_context)
    }

    await store_learning_record(learning_record)
    return learning_record
async def capture_external_learning(source_type, source_data):
    """
    Capture learning from external sources.
    """
    if source_type == 'web_research':
        # Use WebFetch to analyze technical articles
        insights = await extract_web_insights(source_data.urls)
    elif source_type == 'community_patterns':
        # Analyze popular GitHub repositories
        insights = await analyze_github_patterns(source_data.repositories)
    elif source_type == 'documentation':
        # Process official documentation updates
        insights = await process_documentation_updates(source_data.docs)
    else:
        raise ValueError(f"Unknown learning source type: {source_type}")

    # Validate and categorize insights
    validated_insights = await validate_external_insights(insights)

    # Store with source attribution
    await store_external_learning(validated_insights, source_type, source_data)
    return validated_insights
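The persistence layer behind store_learning_record is not specified here. As a minimal sketch under the assumption of local JSON-lines storage (the file path and serialization are illustrative, not the actual BMAD persistence mechanism), it could look like this:

import json
from pathlib import Path

LEARNING_LOG = Path(".bmad/learning_records.jsonl")  # hypothetical location

async def store_learning_record(learning_record: dict) -> None:
    """Append one learning record as a JSON line (illustrative only)."""
    LEARNING_LOG.parent.mkdir(parents=True, exist_ok=True)
    with LEARNING_LOG.open("a", encoding="utf-8") as fh:
        # default=str handles non-JSON types such as datetime timestamps
        fh.write(json.dumps(learning_record, default=str) + "\n")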
Knowledge Processing and Integration
processing_pipeline:
  1_validation:
    accuracy_verification:
      - source_credibility: "Trust score of information source"
      - cross_reference_check: "Verification against multiple sources"
      - practical_testing: "Real-world validation when possible"
      - expert_review: "Domain expert validation"
    relevance_assessment:
      - context_applicability: "Where can this knowledge be used?"
      - technology_compatibility: "What tech stacks does this apply to?"
      - project_size_relevance: "Suitable for what project scales?"
      - team_size_applicability: "Relevant for what team sizes?"

  2_categorization:
    knowledge_classification:
      - domain_area: "frontend|backend|devops|security|qa"
      - abstraction_level: "tactical|strategic|architectural"
      - complexity_level: "beginner|intermediate|advanced"
      - time_sensitivity: "evergreen|trending|deprecated"
    metadata_enrichment:
      - confidence_score: "How certain are we about this knowledge?"
      - impact_potential: "How much could this improve outcomes?"
      - implementation_effort: "How hard is this to apply?"
      - prerequisite_knowledge: "What background is needed?"

  3_integration:
    knowledge_synthesis:
      - merge_with_existing: "Combine with current knowledge base"
      - resolve_conflicts: "Handle contradictory information"
      - update_patterns: "Refine existing pattern recognition"
      - enhance_recommendations: "Improve suggestion quality"
    system_updates:
      - rule_refinement: "Update decision-making rules"
      - pattern_evolution: "Evolve pattern repository"
      - tool_optimization: "Improve tool usage strategies"
      - workflow_enhancement: "Optimize development workflows"
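Chained together, the three stages form a single entry point for every new insight. The following is a self-contained sketch of that flow; the field names, threshold, and in-memory knowledge base are illustrative assumptions, not the actual integration code.

# Minimal sketch of the three-stage pipeline above (assumed field names)
knowledge_base: list[dict] = []

async def process_insight(raw_insight: dict) -> dict | None:
    # 1. Validation: keep only insights from sufficiently credible sources
    if raw_insight.get("source_credibility", 0.0) < 0.5:
        return None

    # 2. Categorization: attach classification metadata
    insight = dict(raw_insight)
    insight.setdefault("domain_area", "general")
    insight.setdefault("confidence_score", raw_insight.get("source_credibility", 0.5))

    # 3. Integration: merge into the (here, in-memory) knowledge base
    knowledge_base.append(insight)
    return insight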
Learning Optimization
Effectiveness Measurement
async def measure_learning_effectiveness():
    """
    Measure how well the system is learning and improving.
    """
    # Track prediction accuracy improvements
    prediction_accuracy = await measure_prediction_improvements()

    # Measure recommendation quality enhancement
    recommendation_quality = await assess_recommendation_improvements()

    # Track problem resolution speed improvements
    resolution_speed = await measure_resolution_speed_gains()

    # Assess user satisfaction improvements
    user_satisfaction = await evaluate_user_satisfaction_trends()

    # Calculate overall learning effectiveness score
    learning_effectiveness = calculate_learning_score({
        'prediction_accuracy': prediction_accuracy,
        'recommendation_quality': recommendation_quality,
        'resolution_speed': resolution_speed,
        'user_satisfaction': user_satisfaction
    })

    return {
        'overall_score': learning_effectiveness,
        'component_scores': {
            'prediction': prediction_accuracy,
            'recommendations': recommendation_quality,
            'speed': resolution_speed,
            'satisfaction': user_satisfaction
        },
        'improvement_areas': identify_improvement_opportunities(learning_effectiveness)
    }
async def optimize_learning_strategy():
    """
    Continuously optimize how the system learns.
    """
    # Analyze which learning sources provide highest value
    source_effectiveness = await analyze_learning_source_value()

    # Identify knowledge gaps that need priority focus
    knowledge_gaps = await identify_critical_knowledge_gaps()

    # Optimize knowledge capture mechanisms
    capture_optimization = await optimize_capture_mechanisms()

    # Refine learning integration processes
    integration_optimization = await optimize_integration_processes()

    # Update learning strategy based on analysis
    updated_strategy = {
        'prioritized_sources': source_effectiveness['top_sources'],
        'focus_areas': knowledge_gaps['critical_gaps'],
        'capture_improvements': capture_optimization['recommendations'],
        'integration_enhancements': integration_optimization['improvements']
    }

    await implement_learning_strategy_updates(updated_strategy)
    return updated_strategy
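The scoring function referenced in measure_learning_effectiveness is left unspecified above. One simple, hypothetical formulation is a weighted average of the component metrics; the weights below are illustrative assumptions, not tuned values.

# Hypothetical weighted-average scoring; weights are illustrative only
def calculate_learning_score(components: dict[str, float]) -> float:
    weights = {
        'prediction_accuracy': 0.3,
        'recommendation_quality': 0.3,
        'resolution_speed': 0.2,
        'user_satisfaction': 0.2,
    }
    # Each component metric is assumed to be normalized to the range [0, 1]
    return sum(weights[name] * components.get(name, 0.0) for name in weights)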
Cross-Persona Learning Integration
Persona Enhancement Through Learning
persona_learning_integration:
  individual_persona_improvement:
    architect_learning:
      - new_architectural_patterns: "Emerging design patterns"
      - technology_evaluations: "Framework comparisons and choices"
      - scalability_insights: "Performance optimization learnings"
      - integration_strategies: "Service connection patterns"
    security_learning:
      - vulnerability_patterns: "New threat vectors and protections"
      - compliance_updates: "Regulatory requirement changes"
      - tool_evaluations: "Security tool effectiveness"
      - incident_learnings: "Post-mortem insights"
    qa_learning:
      - testing_strategies: "Effective testing approaches"
      - automation_patterns: "Test automation best practices"
      - quality_metrics: "Meaningful quality indicators"
      - defect_patterns: "Common bug types and prevention"

  cross_persona_learning:
    shared_insights:
      - collaboration_patterns: "Effective teamwork approaches"
      - handoff_optimization: "Smooth transition strategies"
      - communication_improvements: "Clear information exchange"
      - conflict_resolution: "Handling disagreements effectively"
    system_wide_improvements:
      - workflow_optimization: "End-to-end process improvements"
      - tool_integration: "Better tool coordination"
      - quality_enhancement: "System-wide quality gains"
      - efficiency_gains: "Overall productivity improvements"
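One lightweight way to realize this mapping is a lookup from insight category to the personas that should receive it. The table and helper below are a hypothetical sketch derived from the configuration above, not the actual persona-update mechanism.

# Hypothetical routing table derived from the configuration above
PERSONA_INTERESTS: dict[str, list[str]] = {
    "new_architectural_patterns": ["architect"],
    "vulnerability_patterns": ["security"],
    "testing_strategies": ["qa"],
    "collaboration_patterns": ["architect", "security", "qa"],  # shared insight
}

def personas_for_insight(category: str) -> list[str]:
    """Return the personas whose knowledge should be updated (sketch only)."""
    return PERSONA_INTERESTS.get(category, [])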
Knowledge Propagation and Application
Intelligent Knowledge Distribution
async def propagate_learning_across_system():
    """
    Intelligently distribute new learning across all system components.
    """
    # Get recent learning insights
    recent_insights = await get_recent_learning_insights()

    # Determine relevance for each system component
    for insight in recent_insights:
        relevance_map = assess_insight_relevance(insight)

        # Update relevant personas
        for persona, relevance_score in relevance_map.items():
            if relevance_score > 0.7:  # High relevance threshold
                await update_persona_knowledge(persona, insight)

        # Update relevant patterns
        if insight.type == 'pattern_learning':
            await update_pattern_repository(insight)

        # Update decision rules
        if insight.type == 'decision_learning':
            await update_decision_rules(insight)

        # Update tool usage strategies
        if insight.type == 'tool_learning':
            await update_tool_strategies(insight)

    # Track propagation effectiveness
    await track_propagation_effectiveness(recent_insights)
async def apply_learning_to_current_context(current_task, available_insights):
    """
    Apply relevant learning to the current development task.
    """
    # Filter insights relevant to current context
    relevant_insights = filter_insights_by_context(
        available_insights,
        current_task.context
    )

    # Rank insights by potential impact
    ranked_insights = rank_insights_by_impact(
        relevant_insights,
        current_task.objectives
    )

    # Generate actionable recommendations
    recommendations = []
    for insight in ranked_insights[:5]:  # Top 5 insights
        recommendation = generate_actionable_recommendation(
            insight,
            current_task
        )
        recommendations.append(recommendation)

    return {
        'applicable_insights': relevant_insights,
        'prioritized_recommendations': recommendations,
        'implementation_guidance': generate_implementation_guidance(recommendations)
    }
Claude Code Integration
Learning-Enhanced Development Commands
# Learning capture and analysis
bmad learn --from-project <project_path> --outcome "successful"
bmad learn --from-source "web" --topic "react-performance"
bmad learn --analyze-patterns --timeframe "last-month"
# Knowledge application
bmad apply-learning --context "api-design" --problem "scaling"
bmad recommend --based-on-learning --task "database-optimization"
bmad insights --project <path> --learning-focus "security"
# Learning optimization
bmad learning optimize --strategy
bmad learning gaps --identify
bmad learning effectiveness --measure
Continuous Improvement Integration
async def enhance_claude_code_with_learning():
    """
    Continuously enhance Claude Code capabilities with accumulated learning.
    """
    # Improve tool selection based on learning
    tool_selection_improvements = await optimize_tool_selection_from_learning()

    # Enhance code analysis based on pattern learning
    code_analysis_improvements = await enhance_code_analysis_from_patterns()

    # Optimize workflow suggestions based on success patterns
    workflow_improvements = await optimize_workflows_from_success_patterns()

    # Update error prevention based on failure learning
    error_prevention_improvements = await update_error_prevention_from_failures()

    # Apply improvements to Claude Code integration
    await apply_improvements_to_claude_code({
        'tool_selection': tool_selection_improvements,
        'code_analysis': code_analysis_improvements,
        'workflows': workflow_improvements,
        'error_prevention': error_prevention_improvements
    })

    return "Claude Code enhanced with latest learning insights"
This Learning Coordinator ensures that every interaction with Claude Code contributes to the system's growing intelligence, creating a continuously improving development assistant that becomes more valuable over time.