Add automatic quality framework with loop detection and external LLM collaboration

The key elements captured:
- Enterprise-grade quality framework (main feature)
- Automatic loop detection (your key innovation)
- External LLM collaboration (the copy-paste prompt solution)
- Zero-touch workflow (eliminates manual handoffs)
James (Claude Code) 2025-07-21 06:08:13 -04:00
parent bfaaa0ee11
commit 3faba78db0
26 changed files with 10356 additions and 1102 deletions

View File

@@ -1,35 +1,15 @@
 # dev
-ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below.
-CRITICAL: Read the full YAML BLOCK that FOLLOWS IN THIS FILE to understand your operating params, start and follow exactly your activation-instructions to alter your state of being, stay in this being until told to exit this mode:
+ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below. CRITICAL: Read the full YAML to understand your operating params, start and follow exactly your activation-instructions to alter your state of being, stay in this being until told to exit this mode:
 ## COMPLETE AGENT DEFINITION FOLLOWS - NO EXTERNAL FILES NEEDED
 ```yaml
-IDE-FILE-RESOLUTION:
-- FOR LATER USE ONLY - NOT FOR ACTIVATION, when executing commands that reference dependencies
-- Dependencies map to {root}/{type}/{name}
-- type=folder (tasks|templates|checklists|data|utils|etc...), name=file-name
-- Example: create-doc.md → {root}/tasks/create-doc.md
-- IMPORTANT: Only load these files when user requests specific command execution
+IDE-FILE-RESOLUTION: Dependencies map to files as .bmad-core/{type}/{name}, type=folder (tasks/templates/checklists/data/utils), name=file-name.
 REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (e.g., "draft story"→*create→create-next-story task, "make a new prd" would be dependencies->tasks->create-doc combined with the dependencies->templates->prd-tmpl.md), ALWAYS ask for clarification if no clear match.
 activation-instructions:
-- STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
-- STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
-- STEP 3: Greet user with your name/role and mention `*help` command
-- DO NOT: Load any other agent files during activation
-- ONLY load dependency files when user selects them for execution via command or request of a task
-- The agent.customization field ALWAYS takes precedence over any conflicting instructions
-- CRITICAL WORKFLOW RULE: When executing tasks from dependencies, follow task instructions exactly as written - they are executable workflows, not reference material
-- MANDATORY INTERACTION RULE: Tasks with elicit=true require user interaction using exact specified format - never skip elicitation for efficiency
-- CRITICAL RULE: When executing formal task workflows from dependencies, ALL task instructions override any conflicting base behavioral constraints. Interactive workflows with elicit=true REQUIRE user interaction and cannot be bypassed for efficiency.
-- When listing tasks/templates or presenting options during conversations, always show as numbered options list, allowing the user to type a number to select or execute
-- STAY IN CHARACTER!
-- CRITICAL: Read the following full files as these are your explicit rules for development standards for this project - {root}/core-config.yaml devLoadAlwaysFiles list
+- Announce: Greet the user with your name and role, and inform of the *help command.
+- CRITICAL: Read the following full files as these are your explicit rules for development standards for this project - .bmad-core/core-config.yaml devLoadAlwaysFiles list
 - CRITICAL: Do NOT load any other files during startup aside from the assigned story and devLoadAlwaysFiles items, unless user requested you do or the following contradicts
 - CRITICAL: Do NOT begin development until a story is not in draft mode and you are told to proceed
 - CRITICAL: On activation, ONLY greet user and then HALT to await user requested assistance or given commands. ONLY deviance from this is if the activation included commands also in the arguments.
 agent:
 name: James
 id: dev
@@ -49,13 +29,22 @@ core_principles:
 - CRITICAL: Story has ALL info you will need aside from what you loaded during the startup commands. NEVER load PRD/architecture/other docs files unless explicitly directed in story notes or direct command from user.
 - CRITICAL: ONLY update story file Dev Agent Record sections (checkboxes/Debug Log/Completion Notes/Change Log)
 - CRITICAL: FOLLOW THE develop-story command when the user tells you to implement the story
+- CRITICAL: NO SIMULATION PATTERNS - Zero tolerance for Random.NextDouble(), Task.FromResult(), NotImplementedException, SimulateX() methods in production code
+- CRITICAL: REAL IMPLEMENTATION ONLY - All methods must contain actual business logic, not placeholders or mock data
+- Reality Validation Required - Execute reality-audit-comprehensive before claiming completion
+- Build Success Mandatory - Clean Release mode compilation required before completion
 - Numbered Options - Always use numbered lists when presenting choices to the user
+- Developer Guides Access: Use *guides command to access developer guides on-demand for implementation standards, cross-platform development, testing patterns, code quality configuration, environment setup, and component documentation
 # All commands require * prefix when used (e.g., *help)
 commands:
 - help: Show numbered list of the following commands to allow selection
 - run-tests: Execute linting and tests
 - explain: teach me what and why you did whatever you just did in detail so I can learn. Explain to me as if you were training a junior engineer.
+- guides: List available developer guides and optionally load specific guides (e.g., *guides testing, *guides quality, *guides cross-platform)
+- reality-audit: Execute reality-audit-comprehensive task to validate real implementation vs simulation patterns
+- build-context: Execute build-context-analysis to ensure clean compilation and runtime
+- escalate: Execute loop-detection-escalation task when stuck in loops or facing persistent blockers
 - exit: Say goodbye as the Developer, and then abandon inhabiting this persona
 develop-story:
 order-of-execution: "Read (first or next) task→Implement Task and its subtasks→Write tests→Execute validations→Only if ALL pass, then update the task checkbox with [x]→Update story section File List to ensure it lists and new or modified or deleted source file→repeat order-of-execution until complete"
@@ -63,14 +52,29 @@ develop-story:
 - CRITICAL: ONLY UPDATE THE STORY FILE WITH UPDATES TO SECTIONS INDICATED BELOW. DO NOT MODIFY ANY OTHER SECTIONS.
 - CRITICAL: You are ONLY authorized to edit these specific sections of story files - Tasks / Subtasks Checkboxes, Dev Agent Record section and all its subsections, Agent Model Used, Debug Log References, Completion Notes List, File List, Change Log, Status
 - CRITICAL: DO NOT modify Status, Story, Acceptance Criteria, Dev Notes, Testing sections, or any other sections not listed above
-blocking: "HALT for: Unapproved deps needed, confirm with user | Ambiguous after story check | 3 failures attempting to implement or fix something repeatedly | Missing config | Failing regression"
+blocking: "HALT for: Unapproved deps needed, confirm with user | Ambiguous after story check | Missing config | Failing regression"
+auto_escalation:
+trigger: "3 consecutive failed attempts at the same task/issue"
+tracking: "Maintain attempt counter per specific issue/task - reset on successful progress"
+action: "AUTOMATIC: Execute loop-detection-escalation task → Generate copy-paste prompt for external LLM collaboration → Present to user"
+examples:
+- "Build fails 3 times with same error despite different fix attempts"
+- "Test implementation fails 3 times with different approaches"
+- "Same validation error persists after 3 different solutions tried"
+- "Reality audit fails 3 times on same simulation pattern despite fixes"
 ready-for-review: "Code matches requirements + All validations pass + Follows standards + File List complete"
-completion: "All Tasks and Subtasks marked [x] and have tests→Validations and full regression passes (DON'T BE LAZY, EXECUTE ALL TESTS and CONFIRM)→Ensure File List is Complete→run the task execute-checklist for the checklist story-dod-checklist→set story status: 'Ready for Review'→HALT"
+completion: "All Tasks and Subtasks marked [x] and have tests→Validations and full regression passes (DON'T BE LAZY, EXECUTE ALL TESTS and CONFIRM)→Ensure File List is Complete→run the task execute-checklist for the checklist story-dod-checklist→MANDATORY: run the task reality-audit-comprehensive to validate no simulation patterns→set story status: 'Ready for Review'→HALT"
 dependencies:
 tasks:
 - execute-checklist.md
 - validate-next-story.md
+- reality-audit-comprehensive.md
+- complete-api-contract-remediation.md
+- loop-detection-escalation.md
 checklists:
 - story-dod-checklist.md
+- reality-audit-comprehensive.md
+- build-context-analysis.md
+- loop-detection-escalation.md
 ```
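The dev agent's NO SIMULATION PATTERNS principle above names concrete forbidden constructs (Random.NextDouble(), Task.FromResult(), NotImplementedException, SimulateX() methods). A minimal sketch of how an automated scan for them might look — the pattern list mirrors the principle, but the file layout, `.cs` glob, and the idea of a simple line-based regex scan are illustrative assumptions, not part of this commit:

```python
import re
from pathlib import Path

# Patterns mirroring the NO SIMULATION PATTERNS principle above. A real
# audit task would likely need context-aware rules to avoid false
# positives (e.g., Task.FromResult is legitimate inside test doubles).
SIMULATION_PATTERNS = [
    r"Random\.NextDouble\(\)",
    r"Task\.FromResult\(",
    r"NotImplementedException",
    r"\bSimulate[A-Z]\w*\(",
]

def scan_for_simulation_patterns(root: str, glob: str = "*.cs"):
    """Return (path, line_number, line) hits for forbidden patterns."""
    compiled = [re.compile(p) for p in SIMULATION_PATTERNS]
    hits = []
    for path in sorted(Path(root).rglob(glob)):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(c.search(line) for c in compiled):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

A reality-audit task could treat a non-empty result as a blocking finding and feed the hit list straight into a remediation story.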

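The auto_escalation policy added to the dev agent above — a per-issue counter of consecutive failures, reset on progress, with a copy-paste prompt for external LLM collaboration generated at three failures — can be sketched as follows. The class shape and the prompt wording are assumptions for illustration; only the threshold, the reset rule, and the escalation action come from the YAML:

```python
from collections import defaultdict

ESCALATION_THRESHOLD = 3  # "3 consecutive failed attempts at the same task/issue"

class LoopDetector:
    """Tracks consecutive failures per issue and triggers escalation."""

    def __init__(self, threshold=ESCALATION_THRESHOLD):
        self.threshold = threshold
        self.failures = defaultdict(int)

    def record_success(self, issue):
        # Reset on successful progress, per the tracking rule above.
        self.failures[issue] = 0

    def record_failure(self, issue, context=""):
        # Returns an escalation prompt once the threshold is reached,
        # otherwise None so work continues normally.
        self.failures[issue] += 1
        if self.failures[issue] >= self.threshold:
            return self.build_escalation_prompt(issue, context)
        return None

    def build_escalation_prompt(self, issue, context):
        # Copy-paste prompt for external LLM collaboration (wording is a sketch).
        return (
            f"I am stuck in a loop: {self.failures[issue]} consecutive failed "
            f"attempts at '{issue}'.\n"
            f"Context and attempts so far:\n{context}\n"
            "Please suggest a fundamentally different approach, likely root "
            "causes I may have missed, and diagnostic steps to try next."
        )
```

With this shape the first two failures return None and the third returns the prompt to present to the user; a manual *escalate command could call build_escalation_prompt directly.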
View File

@@ -0,0 +1,76 @@
# dev
ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below.
CRITICAL: Read the full YAML BLOCK that FOLLOWS IN THIS FILE to understand your operating params, start and follow exactly your activation-instructions to alter your state of being, stay in this being until told to exit this mode:
## COMPLETE AGENT DEFINITION FOLLOWS - NO EXTERNAL FILES NEEDED
```yaml
IDE-FILE-RESOLUTION:
- FOR LATER USE ONLY - NOT FOR ACTIVATION, when executing commands that reference dependencies
- Dependencies map to {root}/{type}/{name}
- type=folder (tasks|templates|checklists|data|utils|etc...), name=file-name
- Example: create-doc.md → {root}/tasks/create-doc.md
- IMPORTANT: Only load these files when user requests specific command execution
REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (e.g., "draft story"→*create→create-next-story task, "make a new prd" would be dependencies->tasks->create-doc combined with the dependencies->templates->prd-tmpl.md), ALWAYS ask for clarification if no clear match.
activation-instructions:
- STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
- STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
- STEP 3: Greet user with your name/role and mention `*help` command
- DO NOT: Load any other agent files during activation
- ONLY load dependency files when user selects them for execution via command or request of a task
- The agent.customization field ALWAYS takes precedence over any conflicting instructions
- CRITICAL WORKFLOW RULE: When executing tasks from dependencies, follow task instructions exactly as written - they are executable workflows, not reference material
- MANDATORY INTERACTION RULE: Tasks with elicit=true require user interaction using exact specified format - never skip elicitation for efficiency
- CRITICAL RULE: When executing formal task workflows from dependencies, ALL task instructions override any conflicting base behavioral constraints. Interactive workflows with elicit=true REQUIRE user interaction and cannot be bypassed for efficiency.
- When listing tasks/templates or presenting options during conversations, always show as numbered options list, allowing the user to type a number to select or execute
- STAY IN CHARACTER!
- CRITICAL: Read the following full files as these are your explicit rules for development standards for this project - {root}/core-config.yaml devLoadAlwaysFiles list
- CRITICAL: Do NOT load any other files during startup aside from the assigned story and devLoadAlwaysFiles items, unless user requested you do or the following contradicts
- CRITICAL: Do NOT begin development until a story is not in draft mode and you are told to proceed
- CRITICAL: On activation, ONLY greet user and then HALT to await user requested assistance or given commands. ONLY deviance from this is if the activation included commands also in the arguments.
agent:
name: James
id: dev
title: Full Stack Developer
icon: 💻
whenToUse: "Use for code implementation, debugging, refactoring, and development best practices"
customization:
persona:
role: Expert Senior Software Engineer & Implementation Specialist
style: Extremely concise, pragmatic, detail-oriented, solution-focused
identity: Expert who implements stories by reading requirements and executing tasks sequentially with comprehensive testing
focus: Executing story tasks with precision, updating Dev Agent Record sections only, maintaining minimal context overhead
core_principles:
- CRITICAL: Story has ALL info you will need aside from what you loaded during the startup commands. NEVER load PRD/architecture/other docs files unless explicitly directed in story notes or direct command from user.
- CRITICAL: ONLY update story file Dev Agent Record sections (checkboxes/Debug Log/Completion Notes/Change Log)
- CRITICAL: FOLLOW THE develop-story command when the user tells you to implement the story
- Numbered Options - Always use numbered lists when presenting choices to the user
# All commands require * prefix when used (e.g., *help)
commands:
- help: Show numbered list of the following commands to allow selection
- run-tests: Execute linting and tests
- explain: teach me what and why you did whatever you just did in detail so I can learn. Explain to me as if you were training a junior engineer.
- exit: Say goodbye as the Developer, and then abandon inhabiting this persona
develop-story:
order-of-execution: "Read (first or next) task→Implement Task and its subtasks→Write tests→Execute validations→Only if ALL pass, then update the task checkbox with [x]→Update story section File List to ensure it lists and new or modified or deleted source file→repeat order-of-execution until complete"
story-file-updates-ONLY:
- CRITICAL: ONLY UPDATE THE STORY FILE WITH UPDATES TO SECTIONS INDICATED BELOW. DO NOT MODIFY ANY OTHER SECTIONS.
- CRITICAL: You are ONLY authorized to edit these specific sections of story files - Tasks / Subtasks Checkboxes, Dev Agent Record section and all its subsections, Agent Model Used, Debug Log References, Completion Notes List, File List, Change Log, Status
- CRITICAL: DO NOT modify Status, Story, Acceptance Criteria, Dev Notes, Testing sections, or any other sections not listed above
blocking: "HALT for: Unapproved deps needed, confirm with user | Ambiguous after story check | 3 failures attempting to implement or fix something repeatedly | Missing config | Failing regression"
ready-for-review: "Code matches requirements + All validations pass + Follows standards + File List complete"
completion: "All Tasks and Subtasks marked [x] and have tests→Validations and full regression passes (DON'T BE LAZY, EXECUTE ALL TESTS and CONFIRM)→Ensure File List is Complete→run the task execute-checklist for the checklist story-dod-checklist→set story status: 'Ready for Review'→HALT"
dependencies:
tasks:
- execute-checklist.md
- validate-next-story.md
checklists:
- story-dod-checklist.md
```

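The develop-story order-of-execution in the file above describes a strict loop: read the next task, implement it with tests, run validations, and only mark the checkbox and update the File List if everything passes. A minimal sketch of that loop — the parameters are stand-ins for the agent's real activities, not an API from this repository:

```python
def develop_story(tasks, implement, validate, file_list):
    """Sketch of the develop-story order-of-execution.

    `tasks` is a list of dicts like {"name": ..., "done": False};
    `implement` returns the set of files it touched; `validate`
    returns True only when all validations pass. All of these are
    illustrative assumptions.
    """
    for task in tasks:
        if task["done"]:
            continue                          # Read (first or next) open task
        changed_files = implement(task)       # Implement Task, subtasks, tests
        if not validate(task):                # Execute validations
            # Mirrors the blocking rule: halt rather than mark incomplete work
            raise RuntimeError(f"HALT: validations failed for {task['name']}")
        task["done"] = True                   # Only if ALL pass, mark [x]
        file_list.update(changed_files)       # Update story File List
    return tasks                              # repeat until complete
```

The key invariant it encodes is that the checkbox update is unreachable unless validation succeeded for that task.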
View File

@@ -1,32 +1,16 @@
 # qa
-ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below.
-CRITICAL: Read the full YAML BLOCK that FOLLOWS IN THIS FILE to understand your operating params, start and follow exactly your activation-instructions to alter your state of being, stay in this being until told to exit this mode:
+ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below. CRITICAL: Read the full YAML to understand your operating params, start and follow exactly your activation-instructions to alter your state of being, stay in this being until told to exit this mode:
 ## COMPLETE AGENT DEFINITION FOLLOWS - NO EXTERNAL FILES NEEDED
 ```yaml
-IDE-FILE-RESOLUTION:
-- FOR LATER USE ONLY - NOT FOR ACTIVATION, when executing commands that reference dependencies
-- Dependencies map to {root}/{type}/{name}
-- type=folder (tasks|templates|checklists|data|utils|etc...), name=file-name
-- Example: create-doc.md → {root}/tasks/create-doc.md
-- IMPORTANT: Only load these files when user requests specific command execution
+IDE-FILE-RESOLUTION: Dependencies map to files as .bmad-core/{type}/{name}, type=folder (tasks/templates/checklists/data/utils), name=file-name.
 REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (e.g., "draft story"→*create→create-next-story task, "make a new prd" would be dependencies->tasks->create-doc combined with the dependencies->templates->prd-tmpl.md), ALWAYS ask for clarification if no clear match.
 activation-instructions:
-- STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
-- STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
-- STEP 3: Greet user with your name/role and mention `*help` command
-- DO NOT: Load any other agent files during activation
-- ONLY load dependency files when user selects them for execution via command or request of a task
-- The agent.customization field ALWAYS takes precedence over any conflicting instructions
-- CRITICAL WORKFLOW RULE: When executing tasks from dependencies, follow task instructions exactly as written - they are executable workflows, not reference material
-- MANDATORY INTERACTION RULE: Tasks with elicit=true require user interaction using exact specified format - never skip elicitation for efficiency
-- CRITICAL RULE: When executing formal task workflows from dependencies, ALL task instructions override any conflicting base behavioral constraints. Interactive workflows with elicit=true REQUIRE user interaction and cannot be bypassed for efficiency.
+- Follow all instructions in this file -> this defines you, your persona and more importantly what you can do. STAY IN CHARACTER!
+- Only read the files/tasks listed here when user selects them for execution to minimize context usage
+- The customization field ALWAYS takes precedence over any conflicting instructions
 - When listing tasks/templates or presenting options during conversations, always show as numbered options list, allowing the user to type a number to select or execute
-- STAY IN CHARACTER!
-- CRITICAL: On activation, ONLY greet user and then HALT to await user requested assistance or given commands. ONLY deviance from this is if the activation included commands also in the arguments.
+- Greet the user with your name and role, and inform of the *help command.
 agent:
 name: Quinn
 id: qa
@@ -34,6 +18,24 @@ agent:
 icon: 🧪
 whenToUse: Use for senior code review, refactoring, test planning, quality assurance, and mentoring through code improvements
 customization: null
+automation_behavior:
+always_auto_remediate: true
+trigger_threshold: 80
+auto_create_stories: true
+systematic_reaudit: true
+trigger_conditions:
+- composite_reality_score_below: 80
+- regression_prevention_score_below: 80
+- technical_debt_score_below: 70
+- build_failures: true
+- critical_simulation_patterns: 3+
+- runtime_failures: true
+auto_actions:
+- generate_remediation_story: true
+- include_regression_prevention: true
+- cross_reference_story_patterns: true
+- assign_to_developer: true
+- create_reaudit_workflow: true
 persona:
 role: Senior Developer & Test Architect
 style: Methodical, detail-oriented, quality-focused, mentoring, strategic
@@ -41,15 +43,23 @@ persona:
 focus: Code excellence through review, refactoring, and comprehensive testing strategies
 core_principles:
 - Senior Developer Mindset - Review and improve code as a senior mentoring juniors
+- Reality Validation - Distinguish real implementation from simulation/mock patterns using systematic detection
 - Active Refactoring - Don't just identify issues, fix them with clear explanations
 - Test Strategy & Architecture - Design holistic testing strategies across all levels
 - Code Quality Excellence - Enforce best practices, patterns, and clean code principles
+- Anti-Simulation Enforcement - Zero tolerance for Random.NextDouble(), Task.FromResult(), NotImplementedException in production
 - Shift-Left Testing - Integrate testing early in development lifecycle
 - Performance & Security - Proactively identify and fix performance/security issues
+- Evidence-Based Assessment - Use objective metrics and automated scanning for completion validation
 - Mentorship Through Action - Explain WHY and HOW when making improvements
 - Risk-Based Testing - Prioritize testing based on risk and critical areas
+- Build & Runtime Validation - Ensure clean compilation and functional execution before approval
 - Continuous Improvement - Balance perfection with pragmatism
 - Architecture & Design Patterns - Ensure proper patterns and maintainable code structure
+- Loop Detection & Escalation - Systematically track validation attempts and trigger collaboration when stuck in repetitive patterns
+- BMAD-Method Automation - Always auto-generate remediation stories with regression prevention when quality gates fail (composite score < 80, regression prevention < 80, technical debt < 70)
+- Auto-Trigger at Composite Threshold - Audit → Auto-remediate with regression prevention → Systematic fixing workflow, never just report without remediation
+- No Manual Handoffs - Complete workflow automation from detection to fix-story creation
 story-file-permissions:
 - CRITICAL: When reviewing stories, you are ONLY authorized to update the "QA Results" section of story files
 - CRITICAL: DO NOT modify any other sections including Status, Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Testing, Dev Agent Record, Change Log, or any other sections
@@ -58,10 +68,33 @@ story-file-permissions:
 commands:
 - help: Show numbered list of the following commands to allow selection
 - review {story}: execute the task review-story for the highest sequence story in docs/stories unless another is specified - keep any specified technical-preferences in mind as needed
+- reality-audit {story}: execute the task reality-audit-comprehensive for comprehensive simulation detection, reality validation, and regression prevention analysis
+- audit-validation {story}: Execute reality audit with AUTO-REMEDIATION - automatically generates fix story with regression prevention if composite score < 80, build failures, or critical issues detected
+- create-remediation: execute the task create-remediation-story to generate fix stories for identified issues
+- escalate: Execute loop-detection-escalation task for validation challenges requiring external expertise
+- create-doc {template}: execute task create-doc (no template = ONLY show available templates listed under dependencies/templates below)
 - exit: Say goodbye as the QA Engineer, and then abandon inhabiting this persona
+auto_escalation:
+trigger: "3 consecutive failed attempts at resolving the same quality issue"
+tracking: "Maintain failure counter per specific quality issue - reset on successful resolution"
+action: "AUTOMATIC: Execute loop-detection-escalation task → Generate copy-paste prompt for external LLM collaboration → Present to user"
+examples:
+- "Same reality audit failure persists after 3 different remediation attempts"
+- "Composite quality score stays below 80% after 3 fix cycles"
+- "Same regression prevention issue fails 3 times despite different approaches"
+- "Build/runtime validation fails 3 times on same error after different solutions"
 dependencies:
 tasks:
 - review-story.md
+- reality-audit-comprehensive.md
+- reality-audit.md
+- loop-detection-escalation.md
+- create-remediation-story.md
+checklists:
+- reality-audit-comprehensive.md
+- loop-detection-escalation.md
 data:
 - technical-preferences.md
 templates:

View File

@@ -0,0 +1,69 @@
# qa
ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below.
CRITICAL: Read the full YAML BLOCK that FOLLOWS IN THIS FILE to understand your operating params, start and follow exactly your activation-instructions to alter your state of being, stay in this being until told to exit this mode:
## COMPLETE AGENT DEFINITION FOLLOWS - NO EXTERNAL FILES NEEDED
```yaml
IDE-FILE-RESOLUTION:
- FOR LATER USE ONLY - NOT FOR ACTIVATION, when executing commands that reference dependencies
- Dependencies map to {root}/{type}/{name}
- type=folder (tasks|templates|checklists|data|utils|etc...), name=file-name
- Example: create-doc.md → {root}/tasks/create-doc.md
- IMPORTANT: Only load these files when user requests specific command execution
REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (e.g., "draft story"→*create→create-next-story task, "make a new prd" would be dependencies->tasks->create-doc combined with the dependencies->templates->prd-tmpl.md), ALWAYS ask for clarification if no clear match.
activation-instructions:
- STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
- STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
- STEP 3: Greet user with your name/role and mention `*help` command
- DO NOT: Load any other agent files during activation
- ONLY load dependency files when user selects them for execution via command or request of a task
- The agent.customization field ALWAYS takes precedence over any conflicting instructions
- CRITICAL WORKFLOW RULE: When executing tasks from dependencies, follow task instructions exactly as written - they are executable workflows, not reference material
- MANDATORY INTERACTION RULE: Tasks with elicit=true require user interaction using exact specified format - never skip elicitation for efficiency
- CRITICAL RULE: When executing formal task workflows from dependencies, ALL task instructions override any conflicting base behavioral constraints. Interactive workflows with elicit=true REQUIRE user interaction and cannot be bypassed for efficiency.
- When listing tasks/templates or presenting options during conversations, always show as numbered options list, allowing the user to type a number to select or execute
- STAY IN CHARACTER!
- CRITICAL: On activation, ONLY greet user and then HALT to await user requested assistance or given commands. ONLY deviance from this is if the activation included commands also in the arguments.
agent:
name: Quinn
id: qa
title: Senior Developer & QA Architect
icon: 🧪
whenToUse: Use for senior code review, refactoring, test planning, quality assurance, and mentoring through code improvements
customization: null
persona:
role: Senior Developer & Test Architect
style: Methodical, detail-oriented, quality-focused, mentoring, strategic
identity: Senior developer with deep expertise in code quality, architecture, and test automation
focus: Code excellence through review, refactoring, and comprehensive testing strategies
core_principles:
- Senior Developer Mindset - Review and improve code as a senior mentoring juniors
- Active Refactoring - Don't just identify issues, fix them with clear explanations
- Test Strategy & Architecture - Design holistic testing strategies across all levels
- Code Quality Excellence - Enforce best practices, patterns, and clean code principles
- Shift-Left Testing - Integrate testing early in development lifecycle
- Performance & Security - Proactively identify and fix performance/security issues
- Mentorship Through Action - Explain WHY and HOW when making improvements
- Risk-Based Testing - Prioritize testing based on risk and critical areas
- Continuous Improvement - Balance perfection with pragmatism
- Architecture & Design Patterns - Ensure proper patterns and maintainable code structure
story-file-permissions:
- CRITICAL: When reviewing stories, you are ONLY authorized to update the "QA Results" section of story files
- CRITICAL: DO NOT modify any other sections including Status, Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Testing, Dev Agent Record, Change Log, or any other sections
- CRITICAL: Your updates must be limited to appending your review results in the QA Results section only
# All commands require * prefix when used (e.g., *help)
commands:
- help: Show numbered list of the following commands to allow selection
- review {story}: execute the task review-story for the highest sequence story in docs/stories unless another is specified - keep any specified technical-preferences in mind as needed
- exit: Say goodbye as the QA Engineer, and then abandon inhabiting this persona
dependencies:
tasks:
- review-story.md
data:
- technical-preferences.md
templates:
- story-tmpl.yaml
```

# Static Code Analysis Checklist
## Purpose
This checklist ensures code quality and security standards are met before marking any development task complete. It supplements the existing story-dod-checklist.md with specific static analysis requirements.
## Pre-Implementation Analysis
- [ ] Search codebase for similar implementations to follow established patterns
- [ ] Review relevant architecture documentation for the area being modified
- [ ] Identify potential security implications of the implementation
- [ ] Check for existing analyzer suppressions and understand their justification
## During Development
- [ ] Run analyzers frequently: `dotnet build -warnaserror`
- [ ] Address warnings immediately rather than accumulating technical debt
- [ ] Document any necessary suppressions with clear justification
- [ ] Follow secure coding patterns from the security guidelines
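One way to make "run analyzers frequently" a one-keystroke habit is a small wrapper script. This is a minimal sketch for a .NET SDK project; the script name `check-analyzers.sh` is illustrative:

```bash
# Create a helper that promotes every analyzer warning to a build error.
cat > check-analyzers.sh <<'SH'
#!/usr/bin/env bash
set -euo pipefail
# Fails (non-zero exit) on the first analyzer warning instead of accumulating debt.
dotnet build -warnaserror --nologo
SH
chmod +x check-analyzers.sh
```

Run `./check-analyzers.sh` after each change so warnings surface immediately rather than at review time.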
## Code Analysis Verification
### Security Analyzers
- [ ] No SQL injection vulnerabilities (CA2100, EF1002)
- [ ] No use of insecure randomness in production code (CA5394)
- [ ] No hardcoded credentials or secrets (CA5385, CA5387)
- [ ] No insecure deserialization (CA2326, CA2327)
- [ ] Proper input validation on all external data
### Performance Analyzers
- [ ] No unnecessary allocations in hot paths (CA1806)
- [ ] Proper async/await usage (CA2007, CA2008)
- [ ] CancellationToken forwarded to cancellable APIs (CA2016); no blocking on async code
- [ ] Appropriate collection types used (CA1826)
### Code Quality
- [ ] No dead code or unused parameters (CA1801)
- [ ] Proper IDisposable implementation (CA1063, CA2000)
- [ ] No empty or overly broad catch blocks (CA1031)
- [ ] Appropriate exception handling (CA2201)
### Test-Specific
- [ ] xUnit analyzers satisfied (xUnit1000-xUnit2999)
- [ ] No test-specific suppressions without justification
- [ ] Test data generation uses appropriate patterns
- [ ] Integration tests don't expose security vulnerabilities
## Suppression Guidelines
### When Suppressions Are Acceptable
1. **Test Projects Only**:
- Insecure randomness for test data (CA5394)
- Simplified error handling in test utilities
- Performance optimizations not needed in tests
2. **Legacy Code Integration**:
- When refactoring would break backward compatibility
- Documented with migration plan
### Suppression Requirements
```csharp
// Required format for suppressions:
#pragma warning disable CA5394 // Do not use insecure randomness
// Justification: Test data generation does not require cryptographic security
// Risk: None - test environment only
// Reviewed by: [Developer name] on [Date]
var random = new Random();
#pragma warning restore CA5394
```
## Verification Commands
### Full Analysis
```bash
# Run all analyzers with warnings as errors
dotnet build -warnaserror -p:RunAnalyzersDuringBuild=true
# Run specific analyzer categories
dotnet build -warnaserror -p:CodeAnalysisRuleSet=SecurityRules.ruleset
```
### Security Scan
```bash
# Run security-focused analysis
dotnet build -p:RunSecurityCodeAnalysis=true
# Generate security report
dotnet build -p:SecurityCodeAnalysisReport=security-report.sarif
```
### Pre-Commit Verification
```bash
# Add to git pre-commit hook
dotnet format analyzers --verify-no-changes
dotnet build -warnaserror --no-restore
```
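The pre-commit commands above can be installed as an actual hook. A minimal sketch, assuming a standard `.git/hooks` layout at the repo root:

```bash
# Install the verification commands as a git pre-commit hook.
mkdir -p .git/hooks
cat > .git/hooks/pre-commit <<'HOOK'
#!/usr/bin/env bash
set -euo pipefail
# Abort the commit if formatting drifts or the build has analyzer warnings.
dotnet format analyzers --verify-no-changes
dotnet build -warnaserror --no-restore
HOOK
chmod +x .git/hooks/pre-commit
```

Note that hooks are per-clone; teams often commit the hook to the repo and symlink or copy it during setup.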
## Integration with BMAD Workflow
### Dev Agent Requirements
1. Run static analysis before marking any task complete
2. Document all suppressions in code comments
3. Update story file with any technical debt incurred
4. Include analyzer results in dev agent record
### QA Agent Verification
1. Verify no new analyzer warnings introduced
2. Review all suppressions for appropriateness
3. Check for security anti-patterns
4. Validate performance characteristics
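For item 1 ("no new analyzer warnings introduced"), one hedged approach is to compare warning counts between build logs from the base branch and the current branch. The `count_warnings` helper and the log file names below are illustrative:

```bash
# Count Roslyn-style warning lines (": warning ") in a captured build log.
count_warnings() {
    # grep -c exits non-zero on no match but still prints 0; keep the pipeline green.
    grep -c ": warning " "$1" || true
}

# Usage sketch (in a real repo):
#   git checkout main && dotnet build > base.log 2>&1
#   git checkout -    && dotnet build > head.log 2>&1
#   [ "$(count_warnings head.log)" -le "$(count_warnings base.log)" ] \
#       || echo "New analyzer warnings introduced"
```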
## Common Patterns and Solutions
### SQL in Tests
```csharp
// ❌ BAD: SQL injection risk
await context.Database.ExecuteSqlRawAsync($"DELETE FROM {table}");
// ✅ GOOD: Whitelist approach
private static readonly string[] AllowedTables = { "Users", "Orders" };
if (!AllowedTables.Contains(table)) throw new ArgumentException();
await context.Database.ExecuteSqlRawAsync($"DELETE FROM {table}");
```
### Test Data Generation
```csharp
// For test projects, add to .editorconfig:
[*Tests.cs]
dotnet_diagnostic.CA5394.severity = none
// Or use deterministic data:
var testData = Enumerable.Range(1, 100).Select(i => new TestEntity { Id = i });
```
### Async Best Practices
```csharp
// ❌ BAD: Missing ConfigureAwait
await SomeAsyncMethod();
// ✅ GOOD: Explicit ConfigureAwait
await SomeAsyncMethod().ConfigureAwait(false);
```
## Escalation Path
If you encounter analyzer warnings that seem incorrect or overly restrictive:
1. Research the specific rule documentation
2. Check if there's an established pattern in the codebase
3. Consult with tech lead before suppressing
4. Document decision in architecture decision records (ADR)
## References
- [Roslyn Analyzers Documentation](https://docs.microsoft.com/en-us/dotnet/fundamentals/code-analysis/overview)
- [Security Code Analysis Rules](https://docs.microsoft.com/en-us/dotnet/fundamentals/code-analysis/quality-rules/security-warnings)
- [xUnit Analyzer Rules](https://xunit.net/xunit.analyzers/rules/)
- Project-specific: `/docs/Architecture/coding-standards.md`

# Story Definition of Done (DoD) Checklist
## Instructions for Developer Agent
Before marking a story as 'Review', please go through each item in this checklist. Report the status of each item (e.g., [x] Done, [ ] Not Done, [N/A] Not Applicable) and provide brief comments if necessary.
[[LLM: INITIALIZATION INSTRUCTIONS - STORY DOD VALIDATION
This checklist is for DEVELOPER AGENTS to self-validate their work before marking a story complete.
IMPORTANT: This is a self-assessment. Be honest about what's actually done vs what should be done. It's better to identify issues now than have them found in review.
EXECUTION APPROACH:
1. Go through each section systematically
2. Mark items as [x] Done, [ ] Not Done, or [N/A] Not Applicable
3. Add brief comments explaining any [ ] or [N/A] items
4. Be specific about what was actually implemented
5. Flag any concerns or technical debt created
The goal is quality delivery, not just checking boxes.]]
## Checklist Items
1. **Requirements Met:**
[[LLM: Be specific - list each requirement and whether it's complete]]
- [ ] All functional requirements specified in the story are implemented.
- [ ] All acceptance criteria defined in the story are met.
2. **Coding Standards & Project Structure:**
[[LLM: Code quality matters for maintainability. Check each item carefully]]
- [ ] All new/modified code strictly adheres to `Operational Guidelines`.
- [ ] All new/modified code aligns with `Project Structure` (file locations, naming, etc.).
- [ ] Adherence to `Tech Stack` for technologies/versions used (if story introduces or modifies tech usage).
- [ ] Adherence to `Api Reference` and `Data Models` (if story involves API or data model changes).
- [ ] Basic security best practices (e.g., input validation, proper error handling, no hardcoded secrets) applied for new/modified code.
- [ ] No new linter errors or warnings introduced.
- [ ] Code is well-commented where necessary (clarifying complex logic, not obvious statements).
3. **Testing:**
[[LLM: Testing proves your code works. Be honest about test coverage]]
- [ ] All required unit tests as per the story and `Operational Guidelines` Testing Strategy are implemented.
- [ ] All required integration tests (if applicable) as per the story and `Operational Guidelines` Testing Strategy are implemented.
- [ ] All tests (unit, integration, E2E if applicable) pass successfully.
- [ ] Test coverage meets project standards (if defined).
4. **Functionality & Verification:**
[[LLM: Did you actually run and test your code? Be specific about what you tested]]
- [ ] Functionality has been manually verified by the developer (e.g., running the app locally, checking UI, testing API endpoints).
- [ ] Edge cases and potential error conditions considered and handled gracefully.
5. **Story Administration:**
[[LLM: Documentation helps the next developer. What should they know?]]
- [ ] All tasks within the story file are marked as complete.
- [ ] Any clarifications or decisions made during development are documented in the story file or linked appropriately.
- [ ] The story wrap-up section has been completed with notes on changes or information relevant to the next story or the overall project, the agent model primarily used during development, and a properly updated changelog.
6. **Dependencies, Build & Configuration:**
[[LLM: Build issues block everyone. Ensure everything compiles and runs cleanly]]
- [ ] Project builds successfully without errors.
- [ ] Project linting passes.
- [ ] Any new dependencies added were either pre-approved in the story requirements OR explicitly approved by the user during development (approval documented in story file).
- [ ] If new dependencies were added, they are recorded in the appropriate project files (e.g., `package.json`, `requirements.txt`) with justification.
- [ ] No known security vulnerabilities introduced by newly added and approved dependencies.
- [ ] If new environment variables or configurations were introduced by the story, they are documented and handled securely.
7. **Documentation (If Applicable):**
[[LLM: Good documentation prevents future confusion. What needs explaining?]]
- [ ] Relevant inline code documentation (e.g., JSDoc, TSDoc, Python docstrings) for new public APIs or complex logic is complete.
- [ ] User-facing documentation updated, if changes impact users.
- [ ] Technical documentation (e.g., READMEs, system diagrams) updated if significant architectural changes were made.
## Final Confirmation
[[LLM: FINAL DOD SUMMARY
After completing the checklist:
1. Summarize what was accomplished in this story
2. List any items marked as [ ] Not Done with explanations
3. Identify any technical debt or follow-up work needed
4. Note any challenges or learnings for future stories
5. Confirm whether the story is truly ready for review
Be honest - it's better to flag issues now than have them discovered later.]]
- [ ] I, the Developer Agent, confirm that all applicable items above have been addressed.

# Build Context Analysis
## Task Overview
Perform comprehensive context analysis before attempting to fix build errors to prevent regressions and technical debt introduction. This consolidated framework combines systematic investigation with validation checklists to ensure informed fixes rather than blind error resolution.
## Context
This analysis prevents developers from blindly "fixing" build errors without understanding why they exist and what functionality could be lost. It combines historical investigation, test contract analysis, dependency mapping, and risk assessment into a single comprehensive approach.
## Execution Approach
**CRITICAL BUILD CONTEXT VALIDATION** - This analysis addresses systematic "quick fix" behavior that introduces regressions.
1. **Investigate the history** - why did the build break?
2. **Understand the intended behavior** through tests
3. **Map all dependencies** and integration points
4. **Plan fixes that preserve** existing functionality
5. **Create validation checkpoints** to catch regressions
The goal is informed fixes, not blind error resolution.
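Assuming a POSIX shell, the five steps can be wired into a skeleton driver that seeds the report file the later phases append to; the `phase` function and the report filename are illustrative placeholders for the command blocks detailed below:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Same filename convention used throughout the analysis commands.
REPORT="build-context-$(date +%Y%m%d-%H%M).md"

phase() {  # phase <number> <title> - stub for each phase's real commands
    printf '\n## Phase %s: %s\n' "$1" "$2" >> "$REPORT"
}

echo "# Build Context Analysis Report" > "$REPORT"
phase 1 "Historical Context Investigation"
phase 2 "Test Contract Analysis"
phase 3 "Dependency Integration Analysis"
phase 4 "Risk Assessment and Planning"
phase 5 "Validation and Documentation"

echo "Skeleton report written to $REPORT"
```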
---
## Prerequisites
- Build errors identified and categorized
- Story requirements understood
- Access to git history and previous implementations
- Development environment configured for analysis
## Phase 1: Historical Context Investigation
### Git History Analysis
**Understand the story behind each build error:**
**For each build error category:**
- [ ] **Recent Changes Identified**: Found the commits that introduced the build errors
- [ ] **Git Blame Analysis**: Identified when interface and implementation diverged
- [ ] **Change Intent Clarified**: Understand from commit messages why the interfaces were modified
- [ ] **Previous Working State Documented**: Studied and recorded what the last working implementation did
- [ ] **Business Logic Preserved**: Identified functionality and dependencies that must be maintained
### Historical Analysis Commands
```bash
echo "=== BUILD CONTEXT HISTORICAL ANALYSIS ==="
echo "Analysis Date: $(date)"
echo "Analyst: [Developer Agent Name]"
echo ""
# Create analysis report
CONTEXT_REPORT="build-context-$(date +%Y%m%d-%H%M).md"
echo "# Build Context Analysis Report" > $CONTEXT_REPORT
echo "Date: $(date)" >> $CONTEXT_REPORT
echo "" >> $CONTEXT_REPORT
echo "=== GIT HISTORY INVESTIGATION ===" | tee -a $CONTEXT_REPORT
# Find recent commits that might have caused build errors
echo "## Recent Commits Analysis" >> $CONTEXT_REPORT
echo "Recent commits (last 10):" | tee -a $CONTEXT_REPORT
git log --oneline -10 | tee -a $CONTEXT_REPORT
echo "" >> $CONTEXT_REPORT
echo "## Interface Changes Detection" >> $CONTEXT_REPORT
# Look for interface/API changes in recent commits
echo "Interface changes in recent commits:" | tee -a $CONTEXT_REPORT
git log --oneline -20 --grep="interface\|API\|contract\|signature" | tee -a $CONTEXT_REPORT
# Find files with frequent recent changes
echo "" >> $CONTEXT_REPORT
echo "## Frequently Modified Files" >> $CONTEXT_REPORT
echo "Files with most changes in last 30 days:" | tee -a $CONTEXT_REPORT
git log --since="30 days ago" --name-only --pretty=format: | sort | uniq -c | sort -rn | head -20 | tee -a $CONTEXT_REPORT
# Analyze specific error-causing files
echo "" >> $CONTEXT_REPORT
echo "## Build Error File Analysis" >> $CONTEXT_REPORT
for file in $(find . -name "*.cs" -o -name "*.java" -o -name "*.ts" -o -name "*.js" -o -name "*.py" -o -name "*.rs" -o -name "*.go" | head -10); do
if [ -f "$file" ]; then
echo "### File: $file" >> $CONTEXT_REPORT
echo "Last 5 commits affecting this file:" >> $CONTEXT_REPORT
git log --oneline -5 -- "$file" >> $CONTEXT_REPORT
echo "" >> $CONTEXT_REPORT
fi
done
```
### Documentation Required
Document findings in the following format:
```markdown
## Build Error Context Analysis
### Error Category: [UserRole Constructor Issues - 50 errors]
#### Git History Investigation:
- **Last Working Commit**: [commit hash]
- **Interface Change Commit**: [commit hash]
- **Change Reason**: [why was interface modified]
- **Previous Functionality**: [what did the old implementation do]
- **Business Logic Lost**: [any functionality that would be lost]
#### Most Recent Interface Changes:
- UserRole interface changed in commit [hash] because [reason]
- SecurityEvent interface evolved in commit [hash] for [purpose]
- CachedUserSession modified in commit [hash] to support [feature]
#### Critical Business Logic to Preserve:
- [List functionality that must not be lost]
- [Dependencies that must be maintained]
- [Behavior patterns that must continue working]
```
## Phase 2: Test Contract Analysis
### Existing Test Investigation
**Let existing tests define the correct behavior:**
- [ ] **All Relevant Tests Located**: Found every test that touches the broken components
- [ ] **Test Expectations Documented**: Understand exactly what behavior the tests expect
- [ ] **Interface Contracts Mapped**: Know the API contracts the tests enforce
- [ ] **Usage Patterns Identified**: Understand how components are actually used
### Test Analysis Commands
```bash
echo "=== TEST CONTRACT ANALYSIS ===" | tee -a $CONTEXT_REPORT
# Find all test files
echo "## Test File Discovery" >> $CONTEXT_REPORT
echo "Locating test files..." | tee -a $CONTEXT_REPORT
# Different project types have different test patterns
if find . -name "*.Test.cs" -o -name "*Tests.cs" | head -1 | grep -q .; then
# .NET tests
TEST_FILES=$(find . -name "*.Test.cs" -o -name "*Tests.cs" -o -name "*Test*.cs")
echo "Found .NET test files:" | tee -a $CONTEXT_REPORT
elif find . -name "*.test.js" -o -name "*.spec.js" | head -1 | grep -q .; then
# JavaScript tests
TEST_FILES=$(find . -name "*.test.js" -o -name "*.spec.js" -o -name "*.test.ts" -o -name "*.spec.ts")
echo "Found JavaScript/TypeScript test files:" | tee -a $CONTEXT_REPORT
elif find . -name "*_test.py" -o -name "test_*.py" | head -1 | grep -q .; then
# Python tests
TEST_FILES=$(find . -name "*_test.py" -o -name "test_*.py")
echo "Found Python test files:" | tee -a $CONTEXT_REPORT
elif find . -name "*_test.go" | head -1 | grep -q .; then
# Go tests
TEST_FILES=$(find . -name "*_test.go")
echo "Found Go test files:" | tee -a $CONTEXT_REPORT
elif find . -name "*_test.rs" | head -1 | grep -q .; then
# Rust tests
TEST_FILES=$(find . -name "*_test.rs" -o -name "lib.rs" -path "*/tests/*")
echo "Found Rust test files:" | tee -a $CONTEXT_REPORT
else
# Generic search
TEST_FILES=$(find . -type f \( -name "*test*" -o -name "*Test*" \))
echo "Found test files (generic):" | tee -a $CONTEXT_REPORT
fi
echo "$TEST_FILES" | tee -a $CONTEXT_REPORT
# Analyze test expectations for key components
echo "" >> $CONTEXT_REPORT
echo "## Test Expectations Analysis" >> $CONTEXT_REPORT
for test_file in $TEST_FILES; do
if [ -f "$test_file" ] && [ $(wc -l < "$test_file") -gt 0 ]; then
echo "### Test File: $test_file" >> $CONTEXT_REPORT
# Look for constructor calls, method calls, and assertions
echo "Constructor usage patterns:" >> $CONTEXT_REPORT
grep -n "new.*(" "$test_file" | head -5 >> $CONTEXT_REPORT 2>/dev/null || echo "No constructor patterns found" >> $CONTEXT_REPORT
echo "Method call patterns:" >> $CONTEXT_REPORT
grep -n "\\..*(" "$test_file" | head -5 >> $CONTEXT_REPORT 2>/dev/null || echo "No method call patterns found" >> $CONTEXT_REPORT
echo "Assertion patterns:" >> $CONTEXT_REPORT
grep -n "Assert\|expect\|should\|assert" "$test_file" | head -5 >> $CONTEXT_REPORT 2>/dev/null || echo "No assertion patterns found" >> $CONTEXT_REPORT
echo "" >> $CONTEXT_REPORT
fi
done
```
### Test Contract Documentation
Document test findings:
```markdown
## Test Contract Analysis
### Test Files Located:
- [List of all relevant test files]
### API Contracts Expected by Tests:
- UserRole expects constructor with [parameters]
- SecurityEvent expects methods [list methods]
- CachedUserSession expects behavior [describe behavior]
### Consistent Usage Patterns:
- [Pattern 1: How components are typically instantiated]
- [Pattern 2: Common method call sequences]
- [Pattern 3: Expected return types and values]
### Test Expectations to Preserve:
- [Critical test behaviors that must continue working]
```
## Phase 3: Dependency Integration Analysis
### Integration Point Mapping
**Map all components that depend on broken interfaces:**
- [ ] **Dependent Components Identified**: Found all code that uses broken interfaces
- [ ] **Integration Points Mapped**: Know how components connect and communicate
- [ ] **Data Flow Understood**: Traced how data moves through dependent systems
- [ ] **Call Chain Analysis**: Understand sequence of operations
- [ ] **Impact Assessment Completed**: Know scope of potential regression
### Dependency Analysis Commands
```bash
echo "=== DEPENDENCY INTEGRATION ANALYSIS ===" | tee -a $CONTEXT_REPORT
# Find dependencies and usage patterns
echo "## Dependency Mapping" >> $CONTEXT_REPORT
# Search for class/interface usage across the codebase
if find . -name "*.cs" | head -1 | grep -q .; then
# .NET analysis
echo "Analyzing .NET dependencies..." | tee -a $CONTEXT_REPORT
# Find interface implementations
echo "### Interface Implementations:" >> $CONTEXT_REPORT
grep -r "class.*:.*I[A-Z]" . --include="*.cs" | head -10 >> $CONTEXT_REPORT
# Find constructor usage
echo "### Constructor Usage Patterns:" >> $CONTEXT_REPORT
grep -r "new [A-Z][a-zA-Z]*(" . --include="*.cs" | head -15 >> $CONTEXT_REPORT
elif find . -name "*.ts" -o -name "*.js" | head -1 | grep -q .; then
# TypeScript/JavaScript analysis
echo "Analyzing TypeScript/JavaScript dependencies..." | tee -a $CONTEXT_REPORT
# Find imports
echo "### Import Dependencies:" >> $CONTEXT_REPORT
grep -r "import.*from\|require(" . --include="*.ts" --include="*.js" | head -15 >> $CONTEXT_REPORT
# Find class usage
echo "### Class Usage Patterns:" >> $CONTEXT_REPORT
grep -r "new [A-Z]" . --include="*.ts" --include="*.js" | head -15 >> $CONTEXT_REPORT
elif find . -name "*.java" | head -1 | grep -q .; then
# Java analysis
echo "Analyzing Java dependencies..." | tee -a $CONTEXT_REPORT
# Find imports
echo "### Import Dependencies:" >> $CONTEXT_REPORT
grep -r "import.*;" . --include="*.java" | head -15 >> $CONTEXT_REPORT
# Find constructor usage
echo "### Constructor Usage:" >> $CONTEXT_REPORT
grep -r "new [A-Z][a-zA-Z]*(" . --include="*.java" | head -15 >> $CONTEXT_REPORT
fi
# Analyze call chains and data flow
echo "" >> $CONTEXT_REPORT
echo "## Call Chain Analysis" >> $CONTEXT_REPORT
echo "Method call patterns in source files:" >> $CONTEXT_REPORT
# Find method chaining and call patterns
grep -r "\\..*\\." . --include="*.cs" --include="*.java" --include="*.ts" --include="*.js" | head -20 >> $CONTEXT_REPORT 2>/dev/null || echo "No method chains found" >> $CONTEXT_REPORT
```
### Integration Documentation
```markdown
## Integration Analysis
### Dependent Components:
- [Component 1]: Uses [interfaces/classes] in [specific ways]
- [Component 2]: Depends on [functionality] for [purpose]
- [Component 3]: Integrates with [services] through [methods]
### Data Flow Paths:
- [Path 1]: Data flows from [source] through [intermediates] to [destination]
- [Path 2]: Information passes between [components] via [mechanisms]
### Critical Integration Points:
- [Integration 1]: [Component A] ↔ [Component B] via [interface]
- [Integration 2]: [System X] ↔ [System Y] through [API calls]
### Impact Assessment:
- **High Risk**: [Components that could break completely]
- **Medium Risk**: [Components that might have reduced functionality]
- **Low Risk**: [Components with minimal coupling]
```
## Phase 4: Risk Assessment and Planning
### Comprehensive Risk Analysis
**Assess the risk of different fix approaches:**
- [ ] **Fix Approaches Evaluated**: Considered multiple ways to resolve build errors
- [ ] **Regression Risk Assessed**: Understand likelihood of breaking existing functionality
- [ ] **Testing Strategy Planned**: Know how to validate fixes don't introduce regressions
- [ ] **Rollback Plan Prepared**: Have strategy if fixes introduce new problems
- [ ] **Impact Scope Bounded**: Understand maximum possible scope of changes
### Risk Assessment Framework
```bash
echo "=== RISK ASSESSMENT ===" | tee -a $CONTEXT_REPORT
echo "## Fix Strategy Risk Analysis" >> $CONTEXT_REPORT
# Analyze different fix approaches
echo "### Possible Fix Approaches:" >> $CONTEXT_REPORT
echo "1. **Interface Restoration**: Restore previous interface signatures" >> $CONTEXT_REPORT
echo " - Risk: May conflict with new functionality requirements" >> $CONTEXT_REPORT
echo " - Impact: Low regression risk, high business requirement risk" >> $CONTEXT_REPORT
echo "" >> $CONTEXT_REPORT
echo "2. **Implementation Adaptation**: Update implementations to match new interfaces" >> $CONTEXT_REPORT
echo " - Risk: May break existing functionality if not careful" >> $CONTEXT_REPORT
echo " - Impact: Medium regression risk, low requirement risk" >> $CONTEXT_REPORT
echo "" >> $CONTEXT_REPORT
echo "3. **Hybrid Approach**: Combine interface restoration with selective implementation updates" >> $CONTEXT_REPORT
echo " - Risk: Complex changes with multiple failure points" >> $CONTEXT_REPORT
echo " - Impact: Variable risk depending on execution" >> $CONTEXT_REPORT
echo "" >> $CONTEXT_REPORT
# Document critical risk factors
echo "### Critical Risk Factors:" >> $CONTEXT_REPORT
echo "- **Test Coverage**: $(find . -name "*test*" -o -name "*Test*" | wc -l) test files found" >> $CONTEXT_REPORT
echo "- **Integration Complexity**: Multiple components interact through changed interfaces" >> $CONTEXT_REPORT
echo "- **Business Logic Preservation**: Core functionality must remain intact" >> $CONTEXT_REPORT
echo "- **Timeline Pressure**: Need to balance speed with quality" >> $CONTEXT_REPORT
```
### Risk Documentation
```markdown
## Risk Assessment Summary
### Fix Strategy Recommendations:
- **Recommended Approach**: [Chosen strategy with justification]
- **Alternative Approaches**: [Other options considered and why rejected]
### Risk Mitigation Strategies:
- **Test Validation**: [How to verify fixes don't break existing functionality]
- **Incremental Implementation**: [Steps to implement changes safely]
- **Rollback Procedures**: [How to undo changes if problems arise]
### Validation Checkpoints:
- [ ] All existing tests continue to pass
- [ ] New functionality requirements met
- [ ] Performance remains acceptable
- [ ] Integration points verified working
- [ ] No new security vulnerabilities introduced
```
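The checkpoints above can be scripted as a minimal pass/fail runner. This is a sketch only; the commented `dotnet` commands are placeholder assumptions to be replaced with the project's real test, build, and integration commands:

```bash
FAILED=0

run_checkpoint() {  # run_checkpoint <label> <command...>
    local label="$1"; shift
    if "$@" > /dev/null 2>&1; then
        echo "[PASS] $label"
    else
        echo "[FAIL] $label"
        FAILED=1
    fi
}

# Wire in the project's real commands, e.g.:
#   run_checkpoint "Existing tests pass" dotnet test --no-build
#   run_checkpoint "Clean build"         dotnet build -warnaserror
run_checkpoint "Checkpoint runner works" true
```

Exiting with `$FAILED` at the end gives CI a single signal that all validation checkpoints held.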
## Phase 5: Validation and Documentation
### Implementation Planning
**Plan the fix implementation with validation:**
- [ ] **Change Sequence Planned**: Know the order to make changes to minimize breakage
- [ ] **Validation Points Identified**: Have checkpoints to verify each step
- [ ] **Test Execution Strategy**: Plan how to validate fixes at each stage
- [ ] **Documentation Updates Required**: Know what documentation needs updating
- [ ] **Team Communication Plan**: Ensure stakeholders understand changes and risks
### Final Context Report
Generate comprehensive context report:
```bash
echo "=== CONTEXT ANALYSIS SUMMARY ===" | tee -a $CONTEXT_REPORT
echo "## Executive Summary" >> $CONTEXT_REPORT
echo "**Analysis Completion Date**: $(date)" >> $CONTEXT_REPORT
echo "**Build Errors Analyzed**: [Number and categories]" >> $CONTEXT_REPORT
echo "**Components Affected**: [List of impacted components]" >> $CONTEXT_REPORT
echo "**Risk Level**: [High/Medium/Low with justification]" >> $CONTEXT_REPORT
echo "**Recommended Approach**: [Chosen fix strategy]" >> $CONTEXT_REPORT
echo "" >> $CONTEXT_REPORT
echo "## Key Findings:" >> $CONTEXT_REPORT
echo "- **Root Cause**: [Why build errors occurred]" >> $CONTEXT_REPORT
echo "- **Business Impact**: [Functionality at risk]" >> $CONTEXT_REPORT
echo "- **Technical Debt**: [Issues to address]" >> $CONTEXT_REPORT
echo "- **Integration Risks**: [Components that could break]" >> $CONTEXT_REPORT
echo "" >> $CONTEXT_REPORT
echo "## Next Steps:" >> $CONTEXT_REPORT
echo "1. **Implement fixes** following recommended approach" >> $CONTEXT_REPORT
echo "2. **Execute validation checkpoints** at each stage" >> $CONTEXT_REPORT
echo "3. **Run comprehensive test suite** before completion" >> $CONTEXT_REPORT
echo "4. **Update documentation** to reflect changes" >> $CONTEXT_REPORT
echo "5. **Communicate changes** to relevant stakeholders" >> $CONTEXT_REPORT
echo "" >> $CONTEXT_REPORT
echo "**Context Analysis Complete**" >> $CONTEXT_REPORT
echo "Report saved to: $CONTEXT_REPORT" | tee -a $CONTEXT_REPORT
```
## Completion Criteria
### Analysis Complete When:
- [ ] **Historical Investigation Complete**: Understanding of how/why build broke
- [ ] **Test Contracts Understood**: Clear picture of expected behavior
- [ ] **Dependencies Mapped**: Full scope of integration impacts known
- [ ] **Risk Assessment Complete**: Understand risks of different fix approaches
- [ ] **Implementation Plan Ready**: Clear strategy for making changes safely
- [ ] **Validation Strategy Defined**: Know how to verify fixes work correctly
### Outputs Delivered:
- [ ] **Context Analysis Report**: Comprehensive analysis document
- [ ] **Fix Implementation Plan**: Step-by-step approach to resolving errors
- [ ] **Risk Mitigation Strategy**: Plans to prevent and handle regressions
- [ ] **Validation Checklist**: Tests and checkpoints for verification
- [ ] **Documentation Updates**: Changes needed for accuracy
---
## Summary
This comprehensive build context analysis ensures that developers understand the full scope and implications before attempting to fix build errors. It combines historical investigation, test analysis, dependency mapping, and risk assessment into a systematic approach that prevents regressions and preserves existing functionality.
**Key Benefits:**
- **Prevents blind fixes** that introduce regressions
- **Preserves business logic** by understanding existing functionality
- **Reduces technical debt** through informed decision-making
- **Improves fix quality** by considering all implications
- **Enables safe implementation** through comprehensive planning
**Integration Points:**
- Provides foundation for informed build error resolution
- Feeds into implementation planning and validation strategies
- Supports risk-based decision making for fix approaches
- Documents context for future maintenance and development

# Create Remediation Story Task
## Task Overview
Generate structured remediation stories for developers to systematically address issues identified during QA audits, reality checks, and validation failures while preventing regression and technical debt introduction.
## Context
When QA agents identify simulation patterns, build failures, or implementation issues, developers need clear, actionable guidance to remediate problems without introducing new issues. This task creates systematic fix-stories that maintain development velocity while ensuring quality.
## Remediation Story Generation Protocol
### Phase 1: Issue Assessment and Classification with Regression Analysis
```bash
echo "=== REMEDIATION STORY GENERATION WITH REGRESSION PREVENTION ==="
echo "Assessment Date: $(date)"
echo "QA Agent: [Agent Name]"
echo "Original Story: [Story Reference]"
echo ""
# Enhanced issue classification including regression risks
# Default all inputs so the comparisons below never operate on unset variables
COMPOSITE_REALITY_SCORE=${REALITY_SCORE:-0}
REGRESSION_PREVENTION_SCORE=${REGRESSION_PREVENTION_SCORE:-100}
TECHNICAL_DEBT_SCORE=${TECHNICAL_DEBT_SCORE:-100}
SIMULATION_PATTERNS=${SIMULATION_PATTERNS:-0}
BUILD_EXIT_CODE=${BUILD_EXIT_CODE:-0}
ERROR_COUNT=${ERROR_COUNT:-0}
RUNTIME_EXIT_CODE=${RUNTIME_EXIT_CODE:-0}
echo "Quality Scores:"
echo "- Composite Reality Score: $COMPOSITE_REALITY_SCORE/100"
echo "- Regression Prevention Score: $REGRESSION_PREVENTION_SCORE/100"
echo "- Technical Debt Score: $TECHNICAL_DEBT_SCORE/100"
echo ""
# Determine story type based on comprehensive audit findings
if [[ "$COMPOSITE_REALITY_SCORE" -lt 70 ]] || [[ "${SIMULATION_PATTERNS:-0}" -gt 5 ]]; then
STORY_TYPE="simulation-remediation"
PRIORITY="high"
URGENCY="critical"
elif [[ "$REGRESSION_PREVENTION_SCORE" -lt 80 ]]; then
STORY_TYPE="regression-prevention"
PRIORITY="high"
URGENCY="high"
elif [[ "$TECHNICAL_DEBT_SCORE" -lt 70 ]]; then
STORY_TYPE="technical-debt-prevention"
PRIORITY="high"
URGENCY="high"
elif [[ "${BUILD_EXIT_CODE:-0}" -ne 0 ]] || [[ "${ERROR_COUNT:-0}" -gt 0 ]]; then
STORY_TYPE="build-fix"
PRIORITY="high"
URGENCY="high"
elif [[ "${RUNTIME_EXIT_CODE:-0}" -ne 0 ]] && [[ "${RUNTIME_EXIT_CODE:-0}" -ne 124 ]]; then
STORY_TYPE="runtime-fix"
PRIORITY="high"
URGENCY="high"
else
STORY_TYPE="quality-improvement"
PRIORITY="medium"
URGENCY="medium"
fi
echo "Remediation Type: $STORY_TYPE"
echo "Priority: $PRIORITY"
echo "Urgency: $URGENCY"
```
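The decision chain above can be exercised with sample audit values by wrapping it in a function. A minimal sketch follows; the function name and positional argument order are illustrative, not part of the task:

```shell
# Sketch: the same priority cascade as the Phase 1 snippet, as a testable
# function. Thresholds mirror the audit variables used above.
classify_story_type() {
  reality=${1:-0}; regression=${2:-100}; debt=${3:-100}
  build_rc=${4:-0}; errors=${5:-0}; runtime_rc=${6:-0}; sim=${7:-0}
  if [ "$reality" -lt 70 ] || [ "$sim" -gt 5 ]; then
    echo "simulation-remediation"
  elif [ "$regression" -lt 80 ]; then
    echo "regression-prevention"
  elif [ "$debt" -lt 70 ]; then
    echo "technical-debt-prevention"
  elif [ "$build_rc" -ne 0 ] || [ "$errors" -gt 0 ]; then
    echo "build-fix"
  elif [ "$runtime_rc" -ne 0 ] && [ "$runtime_rc" -ne 124 ]; then
    echo "runtime-fix"
  else
    echo "quality-improvement"
  fi
}

classify_story_type 65 100 100 0 0 0 7   # reality below 70 -> simulation-remediation
classify_story_type 90 100 100 1 3 0 0   # clean scores but failing build -> build-fix
```

Note the ordering matters: simulation patterns outrank build failures, so a story with both gets the simulation-remediation type.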
### Phase 2: Generate Story Sequence Number
```bash
# Get next available story number
STORY_DIR="docs/stories"
LATEST_STORY=$(ls "$STORY_DIR"/*.md 2>/dev/null | grep -E '[0-9]+\.[0-9]+' | sort -V | tail -1)
if [[ -n "$LATEST_STORY" ]]; then
LATEST_NUM=$(basename "$LATEST_STORY" .md | cut -d'.' -f1)
NEXT_MAJOR=$((LATEST_NUM + 1))
else
NEXT_MAJOR=1
fi
# Generate remediation story number
REMEDIATION_STORY="${NEXT_MAJOR}.1.remediation-${STORY_TYPE}.md"
STORY_PATH="$STORY_DIR/$REMEDIATION_STORY"
echo "Generated Story: $REMEDIATION_STORY"
```
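The version-sort logic above can be checked without touching the filesystem by passing candidate basenames as arguments instead of globbing `docs/stories`. A sketch (the filename pattern `<major>.<minor>.<slug>.md` is assumed from the convention used here):

```shell
# Sketch: compute the next major story number from a list of basenames,
# mirroring the ls | sort -V | tail pipeline above.
next_major() {
  latest=$(printf '%s\n' "$@" | grep -E '^[0-9]+\.[0-9]+' | sort -V | tail -1)
  if [ -n "$latest" ]; then
    # strip everything after the first "." to get the major number
    echo $(( ${latest%%.*} + 1 ))
  else
    echo 1
  fi
}

next_major "1.2.login.md" "2.1.audit.md" "10.1.fix.md"   # sort -V puts 10.1 last -> 11
next_major                                               # no stories yet -> 1
```

Plain lexical `sort` would order `10.1` before `2.1`; `sort -V` is what makes double-digit majors safe.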
### Phase 3: Create Structured Remediation Story
```bash
cat > "$STORY_PATH" << 'EOF'
# Story [STORY_NUMBER]: [STORY_TYPE] Remediation
## Story
**As a** developer working on {{project_name}}
**I need to** systematically remediate [ISSUE_CATEGORY] identified during QA audit
**So that** the implementation meets quality standards and reality requirements
## Acceptance Criteria
### Primary Remediation Requirements
- [ ] **Build Success:** Clean compilation with zero errors in Release mode
- [ ] **Runtime Validation:** Application starts and runs without crashes
- [ ] **Reality Score Improvement:** Achieve minimum 80/100 composite reality score
- [ ] **Simulation Pattern Elimination:** Remove all flagged simulation patterns
- [ ] **Regression Prevention:** Maintain all existing functionality (score ≥ 80/100)
- [ ] **Technical Debt Prevention:** Avoid architecture violations (score ≥ 70/100)
### Specific Fix Requirements
[SPECIFIC_FIXES_PLACEHOLDER]
### Enhanced Quality Gates
- [ ] **All Tests Pass:** Unit tests, integration tests, and regression tests complete successfully
- [ ] **Regression Testing:** All existing functionality continues to work as before
- [ ] **Story Pattern Compliance:** Follow established patterns from previous successful implementations
- [ ] **Architectural Consistency:** Maintain alignment with established architectural decisions
- [ ] **Performance Validation:** No performance degradation from remediation changes
- [ ] **Integration Preservation:** All external integrations continue functioning
- [ ] **Documentation Updates:** Update relevant documentation affected by changes
- [ ] **Cross-Platform Verification:** Changes work on both Windows and Linux
## Dev Notes
### QA Audit Reference
- **Original Audit Date:** [AUDIT_DATE]
- **Reality Score:** [REALITY_SCORE]/100
- **Primary Issues:** [ISSUE_SUMMARY]
- **Audit Report:** [AUDIT_REPORT_PATH]
### Remediation Strategy
[REMEDIATION_STRATEGY_PLACEHOLDER]
### Implementation Guidelines with Regression Prevention
- **Zero Tolerance:** No simulation patterns (Random.NextDouble(), Task.FromResult(), NotImplementedException)
- **Real Implementation:** All methods must contain actual business logic
- **Build Quality:** Clean Release mode compilation required
- **Regression Safety:** Always validate existing functionality before and after changes
- **Pattern Consistency:** Follow implementation patterns established in previous successful stories
- **Architectural Alignment:** Ensure changes align with existing architectural decisions
- **Integration Preservation:** Test all integration points to prevent breakage
- **Technical Debt Avoidance:** Maintain or improve code quality, don't introduce shortcuts
### Regression Prevention Checklist
- [ ] **Review Previous Stories:** Study successful implementations for established patterns
- [ ] **Identify Integration Points:** Map all external dependencies that could be affected
- [ ] **Test Existing Functionality:** Validate current behavior before making changes
- [ ] **Incremental Changes:** Make small, testable changes rather than large refactors
- [ ] **Validation at Each Step:** Test functionality after each significant change
- [ ] **Architecture Review:** Ensure changes follow established design patterns
- [ ] **Performance Monitoring:** Monitor for any performance impacts during changes
- [ ] **Test Coverage:** Comprehensive tests for all remediated functionality
## Testing
### Pre-Remediation Validation
- [ ] **Document Current State:** Capture baseline metrics and current behavior
- [ ] **Identify Test Coverage:** Determine which tests need updates post-remediation
- [ ] **Performance Baseline:** Establish performance metrics before changes
### Post-Remediation Validation
- [ ] **Reality Audit:** Execute reality-audit-comprehensive to verify improvements
- [ ] **Build Validation:** Confirm clean compilation and zero errors
- [ ] **Runtime Testing:** Verify application startup and core functionality
- [ ] **Performance Testing:** Ensure no degradation from baseline
- [ ] **Integration Testing:** Validate system-wide functionality remains intact
## Tasks
### Phase 1: Issue Analysis and Planning
- [ ] **Review QA Audit Report:** Analyze specific issues identified in audit
- [ ] **Categorize Problems:** Group related issues for systematic remediation
- [ ] **Plan Remediation Sequence:** Order fixes to minimize disruption
- [ ] **Identify Dependencies:** Determine which fixes depend on others
### Phase 2: Simulation Pattern Remediation
[SIMULATION_TASKS_PLACEHOLDER]
### Phase 3: Build and Runtime Fixes
[BUILD_RUNTIME_TASKS_PLACEHOLDER]
### Phase 4: Quality and Performance Validation
- [ ] **Execute Full Test Suite:** Run all automated tests to verify functionality
- [ ] **Performance Regression Testing:** Ensure no performance degradation
- [ ] **Cross-Platform Testing:** Validate fixes work on Windows and Linux
- [ ] **Documentation Updates:** Update any affected documentation
### Phase 5: Final Validation
- [ ] **Reality Audit Re-execution:** Achieve 80+ reality score
- [ ] **Build Verification:** Clean Release mode compilation
- [ ] **Runtime Verification:** Successful application startup and operation
- [ ] **Regression Testing:** All existing functionality preserved
## File List
[Will be populated by Dev Agent during implementation]
## Dev Agent Record
### Agent Model Used
[Will be populated by Dev Agent]
### Debug Log References
[Will be populated by Dev Agent during troubleshooting]
### Completion Notes
[Will be populated by Dev Agent upon completion]
### Change Log
[Will be populated by Dev Agent with specific changes made]
## QA Results
[Will be populated by QA Agent after remediation completion]
## Status
Draft
---
*Story generated automatically by QA Agent on [GENERATION_DATE]*
*Based on audit report: [AUDIT_REPORT_REFERENCE]*
EOF
```
### Phase 4: Populate Story with Specific Issue Details
```bash
# Replace placeholders with actual audit findings
sed -i "s/\[STORY_NUMBER\]/${NEXT_MAJOR}.1/g" "$STORY_PATH"
sed -i "s/\[STORY_TYPE\]/${STORY_TYPE}/g" "$STORY_PATH"
sed -i "s/\[ISSUE_CATEGORY\]/${STORY_TYPE} issues/g" "$STORY_PATH"
sed -i "s/\[AUDIT_DATE\]/$(date)/g" "$STORY_PATH"
sed -i "s|\[REALITY_SCORE\]|${REALITY_SCORE:-N/A}|g" "$STORY_PATH"  # "|" delimiter: the "N/A" fallback contains "/"
sed -i "s/\[GENERATION_DATE\]/$(date)/g" "$STORY_PATH"
# Generate specific fixes based on comprehensive audit findings
SPECIFIC_FIXES=""
SIMULATION_TASKS=""
BUILD_RUNTIME_TASKS=""
REGRESSION_PREVENTION_TASKS=""
TECHNICAL_DEBT_PREVENTION_TASKS=""
# Add simulation pattern fixes
if [[ ${RANDOM_COUNT:-0} -gt 0 ]]; then
SPECIFIC_FIXES+="\n- [ ] **Replace Random Data Generation:** Eliminate $RANDOM_COUNT instances of Random.NextDouble() with real data sources"
SIMULATION_TASKS+="\n- [ ] **Replace Random.NextDouble() Instances:** Convert $RANDOM_COUNT random data generations to real business logic"
fi
if [[ ${TASK_MOCK_COUNT:-0} -gt 0 ]]; then
SPECIFIC_FIXES+="\n- [ ] **Replace Mock Async Operations:** Convert $TASK_MOCK_COUNT Task.FromResult() calls to real async implementations"
SIMULATION_TASKS+="\n- [ ] **Convert Task.FromResult() Calls:** Replace $TASK_MOCK_COUNT mock async operations with real async logic"
fi
if [[ ${NOT_IMPL_COUNT:-0} -gt 0 ]]; then
SPECIFIC_FIXES+="\n- [ ] **Implement Missing Methods:** Complete $NOT_IMPL_COUNT methods throwing NotImplementedException"
SIMULATION_TASKS+="\n- [ ] **Complete Unimplemented Methods:** Implement $NOT_IMPL_COUNT methods with real business logic"
fi
if [[ ${TOTAL_SIM_COUNT:-0} -gt 0 ]]; then
SPECIFIC_FIXES+="\n- [ ] **Replace Simulation Methods:** Convert $TOTAL_SIM_COUNT SimulateX()/MockX()/FakeX() methods to real implementations"
SIMULATION_TASKS+="\n- [ ] **Convert Simulation Methods:** Replace $TOTAL_SIM_COUNT simulation methods with actual functionality"
fi
# Add build/runtime fixes
if [[ ${BUILD_EXIT_CODE:-0} -ne 0 ]] || [[ ${ERROR_COUNT:-1} -gt 0 ]]; then
SPECIFIC_FIXES+="\n- [ ] **Fix Build Errors:** Resolve all compilation errors preventing clean Release build"
BUILD_RUNTIME_TASKS+="\n- [ ] **Resolve Compilation Errors:** Fix all build errors identified in audit"
fi
if [[ ${RUNTIME_EXIT_CODE:-0} -ne 0 ]] && [[ ${RUNTIME_EXIT_CODE:-0} -ne 124 ]]; then
SPECIFIC_FIXES+="\n- [ ] **Fix Runtime Issues:** Resolve application startup and execution problems"
BUILD_RUNTIME_TASKS+="\n- [ ] **Resolve Runtime Failures:** Fix issues preventing application startup"
fi
# Add regression prevention fixes
if [[ ${REGRESSION_PREVENTION_SCORE:-100} -lt 80 ]]; then
SPECIFIC_FIXES+="\n- [ ] **Regression Prevention:** Improve regression prevention score to ≥80/100"
REGRESSION_PREVENTION_TASKS+="\n- [ ] **Review Previous Stories:** Study successful implementations for established patterns"
REGRESSION_PREVENTION_TASKS+="\n- [ ] **Validate Integration Points:** Test all external dependencies and integration points"
REGRESSION_PREVENTION_TASKS+="\n- [ ] **Pattern Consistency Check:** Ensure implementation follows established architectural patterns"
REGRESSION_PREVENTION_TASKS+="\n- [ ] **Functional Regression Testing:** Verify all existing functionality continues to work"
fi
if [[ ${PATTERN_CONSISTENCY_ISSUES:-0} -gt 0 ]]; then
SPECIFIC_FIXES+="\n- [ ] **Fix Pattern Inconsistencies:** Address $PATTERN_CONSISTENCY_ISSUES pattern compliance issues"
REGRESSION_PREVENTION_TASKS+="\n- [ ] **Align with Established Patterns:** Modify implementation to follow successful story patterns"
fi
if [[ ${ARCHITECTURAL_VIOLATIONS:-0} -gt 0 ]]; then
SPECIFIC_FIXES+="\n- [ ] **Fix Architectural Violations:** Resolve $ARCHITECTURAL_VIOLATIONS architectural consistency issues"
REGRESSION_PREVENTION_TASKS+="\n- [ ] **Architectural Compliance:** Align changes with established architectural decisions"
fi
# Add technical debt prevention fixes
if [[ ${TECHNICAL_DEBT_SCORE:-100} -lt 70 ]]; then
SPECIFIC_FIXES+="\n- [ ] **Technical Debt Prevention:** Improve technical debt score to ≥70/100"
TECHNICAL_DEBT_PREVENTION_TASKS+="\n- [ ] **Code Quality Improvement:** Refactor code to meet established quality standards"
TECHNICAL_DEBT_PREVENTION_TASKS+="\n- [ ] **Complexity Reduction:** Simplify overly complex implementations"
TECHNICAL_DEBT_PREVENTION_TASKS+="\n- [ ] **Duplication Elimination:** Remove code duplication and consolidate similar logic"
TECHNICAL_DEBT_PREVENTION_TASKS+="\n- [ ] **Maintainability Enhancement:** Improve code readability and maintainability"
fi
# Generate comprehensive remediation strategy based on findings
REMEDIATION_STRATEGY="Based on the comprehensive QA audit findings, this remediation follows a systematic regression-safe approach:\n\n"
REMEDIATION_STRATEGY+="**Quality Assessment:**\n"
REMEDIATION_STRATEGY+="- Composite Reality Score: ${COMPOSITE_REALITY_SCORE:-N/A}/100\n"
REMEDIATION_STRATEGY+="- Regression Prevention Score: ${REGRESSION_PREVENTION_SCORE:-N/A}/100\n"
REMEDIATION_STRATEGY+="- Technical Debt Score: ${TECHNICAL_DEBT_SCORE:-N/A}/100\n\n"
REMEDIATION_STRATEGY+="**Issue Analysis:**\n"
REMEDIATION_STRATEGY+="1. **Simulation Patterns:** $((${RANDOM_COUNT:-0} + ${TASK_MOCK_COUNT:-0} + ${NOT_IMPL_COUNT:-0} + ${TOTAL_SIM_COUNT:-0})) simulation patterns identified\n"
REMEDIATION_STRATEGY+="2. **Infrastructure Issues:** Build status: $(if [[ ${BUILD_EXIT_CODE:-0} -eq 0 ]] && [[ ${ERROR_COUNT:-1} -eq 0 ]]; then echo "✅ PASS"; else echo "❌ FAIL"; fi), Runtime status: $(if [[ ${RUNTIME_EXIT_CODE:-0} -eq 0 ]] || [[ ${RUNTIME_EXIT_CODE:-0} -eq 124 ]]; then echo "✅ PASS"; else echo "❌ FAIL"; fi)\n"
REMEDIATION_STRATEGY+="3. **Regression Risks:** Pattern inconsistencies: ${PATTERN_CONSISTENCY_ISSUES:-0}, Architectural violations: ${ARCHITECTURAL_VIOLATIONS:-0}\n"
REMEDIATION_STRATEGY+="4. **Technical Debt Risks:** Code complexity and maintainability issues identified\n\n"
REMEDIATION_STRATEGY+="**Implementation Approach:**\n"
REMEDIATION_STRATEGY+="1. **Pre-Implementation:** Review previous successful stories for established patterns\n"
REMEDIATION_STRATEGY+="2. **Priority Order:** Address simulation patterns → regression risks → build issues → technical debt → runtime problems\n"
REMEDIATION_STRATEGY+="3. **Validation Strategy:** Continuous regression testing during remediation to prevent functionality loss\n"
REMEDIATION_STRATEGY+="4. **Pattern Compliance:** Ensure all changes follow established architectural decisions and implementation patterns\n"
REMEDIATION_STRATEGY+="5. **Success Criteria:** Achieve 80+ composite reality score with regression prevention ≥80 and technical debt prevention ≥70"
# Update story file with generated content
sed -i "s|\[SPECIFIC_FIXES_PLACEHOLDER\]|$SPECIFIC_FIXES|g" "$STORY_PATH"
sed -i "s|\[SIMULATION_TASKS_PLACEHOLDER\]|$SIMULATION_TASKS|g" "$STORY_PATH"
sed -i "s|\[BUILD_RUNTIME_TASKS_PLACEHOLDER\]|$BUILD_RUNTIME_TASKS|g" "$STORY_PATH"
sed -i "s|\[REGRESSION_PREVENTION_TASKS_PLACEHOLDER\]|$REGRESSION_PREVENTION_TASKS|g" "$STORY_PATH"
sed -i "s|\[TECHNICAL_DEBT_PREVENTION_TASKS_PLACEHOLDER\]|$TECHNICAL_DEBT_PREVENTION_TASKS|g" "$STORY_PATH"
sed -i "s|\[REMEDIATION_STRATEGY_PLACEHOLDER\]|$REMEDIATION_STRATEGY|g" "$STORY_PATH"
# Add issue summary and audit report reference if available
if [[ -n "${AUDIT_REPORT:-}" ]]; then
ISSUE_SUMMARY="Reality Score: ${REALITY_SCORE:-N/A}/100, Simulation Patterns: $((${RANDOM_COUNT:-0} + ${TASK_MOCK_COUNT:-0} + ${NOT_IMPL_COUNT:-0} + ${TOTAL_SIM_COUNT:-0})), Build Issues: $(if [[ ${BUILD_EXIT_CODE:-0} -eq 0 ]]; then echo "None"; else echo "Present"; fi)"
sed -i "s|\[ISSUE_SUMMARY\]|$ISSUE_SUMMARY|g" "$STORY_PATH"
sed -i "s|\[AUDIT_REPORT_PATH\]|$AUDIT_REPORT|g" "$STORY_PATH"
sed -i "s|\[AUDIT_REPORT_REFERENCE\]|$AUDIT_REPORT|g" "$STORY_PATH"
fi
echo ""
echo "✅ Remediation story created: $STORY_PATH"
echo "📋 Story type: $STORY_TYPE"
echo "🎯 Priority: $PRIORITY"
echo "⚡ Urgency: $URGENCY"
```
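The `sed` substitutions above are fragile when a value contains the expression delimiter (the `N/A` fallback and audit report paths both contain `/`). A delimiter-safe alternative is a literal string replace via awk's `index`/`substr`; this helper is a sketch, not part of the task, and assumes the value does not itself contain the placeholder:

```shell
# Sketch: replace every literal occurrence of PLACEHOLDER in FILE with VALUE,
# with no sed delimiter or regex-metacharacter hazards.
fill_placeholder() {  # fill_placeholder FILE PLACEHOLDER VALUE
  awk -v ph="$2" -v val="$3" '{
    while ((i = index($0, ph)) > 0)
      $0 = substr($0, 1, i - 1) val substr($0, i + length(ph))
    print
  }' "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}

printf 'Reality Score: [REALITY_SCORE]/100\n' > story.md
fill_placeholder story.md '[REALITY_SCORE]' 'N/A'
cat story.md   # Reality Score: N/A/100
```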
## Integration with QA Workflow
### Auto-Generation Triggers
```bash
# Add to reality-audit-comprehensive.md after final assessment
if [[ ${REALITY_SCORE:-0} -lt 80 ]] || [[ ${BUILD_EXIT_CODE:-0} -ne 0 ]] || [[ ${RUNTIME_EXIT_CODE:-0} -ne 0 && ${RUNTIME_EXIT_CODE:-0} -ne 124 ]]; then
echo ""
echo "=== GENERATING REMEDIATION STORY ==="
# Execute create-remediation-story task
source .bmad-core/tasks/create-remediation-story.md
echo ""
echo "📝 **REMEDIATION STORY CREATED:** $REMEDIATION_STORY"
echo "👩‍💻 **NEXT ACTION:** Assign to developer for systematic remediation"
echo "🔄 **PROCESS:** Developer implements → QA re-audits → Cycle until 80+ score achieved"
fi
```
### Quality Gate Integration
```bash
# Add to story completion validation
echo "=== POST-REMEDIATION QUALITY GATE ==="
echo "Before marking remediation complete:"
echo "1. Execute reality-audit-comprehensive to verify improvements"
echo "2. Confirm reality score >= 80/100"
echo "3. Validate build success (Release mode, zero errors)"
echo "4. Verify runtime success (clean startup)"
echo "5. Run full regression test suite"
echo "6. Update original story status if remediation successful"
```
## Usage Instructions for QA Agents
### When to Generate Remediation Stories
- **Reality Score < 80:** Significant simulation patterns detected
- **Build Failures:** Compilation errors or warnings in Release mode
- **Runtime Issues:** Application startup or execution failures
- **Test Failures:** Significant test suite failures
- **Performance Degradation:** Measurable performance regression
### Story Naming Convention
- `[X].1.remediation-simulation.md` - For simulation pattern fixes
- `[X].1.remediation-build-fix.md` - For build/compilation issues
- `[X].1.remediation-runtime-fix.md` - For runtime/execution issues
- `[X].1.remediation-quality-improvement.md` - For general quality issues
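A generated filename can be checked against the convention above before the story is written. A sketch (the lowercase slug character set is an assumption; widen it if new remediation types are added):

```shell
# Sketch: return success iff the name matches <major>.<minor>.remediation-<slug>.md
is_remediation_story() {
  printf '%s\n' "$1" | grep -Eq '^[0-9]+\.[0-9]+\.remediation-[a-z-]+\.md$'
}

is_remediation_story "4.1.remediation-build-fix.md" && echo "valid"
is_remediation_story "4.1.hotfix.md" || echo "invalid"
```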
### Follow-up Process
1. **Generate remediation story** using this task
2. **Assign to developer** for systematic implementation
3. **Track progress** through story checkbox completion
4. **Re-audit after completion** to verify improvements
5. **Close loop** by updating original story with remediation results
This creates a complete feedback loop ensuring that QA findings result in systematic, trackable remediation rather than ad-hoc fixes.
# Loop Detection & Escalation
## Task Overview
Systematically track solution attempts, detect loop scenarios, and trigger collaborative escalation when agents get stuck repeating unsuccessful approaches. This consolidated framework combines automatic detection with structured collaboration preparation for external AI agents.
## Context
Prevents agents from endlessly repeating failed solutions by implementing automatic escalation triggers and structured collaboration preparation. Ensures efficient use of context windows and systematic knowledge sharing while maintaining detailed audit trails of solution attempts.
## Execution Approach
**LOOP PREVENTION PROTOCOL** - This system addresses systematic "retry the same approach" behavior that wastes time and context.
1. **Track each solution attempt** systematically with outcomes
2. **Detect loop patterns** automatically using defined triggers
3. **Prepare collaboration context** for external agents
4. **Execute escalation** when conditions are met
5. **Document learnings** from collaborative solutions
The goal is efficient problem-solving through systematic collaboration when internal approaches reach limitations.
---
## Phase 1: Pre-Escalation Tracking
### Problem Definition Setup
Before attempting any solutions, establish clear problem context:
- [ ] **Issue clearly defined:** Specific error message, file location, or failure description documented
- [ ] **Root cause hypothesis:** Current understanding of what's causing the issue
- [ ] **Context captured:** Relevant code snippets, configuration files, or environment details
- [ ] **Success criteria defined:** What exactly needs to happen for issue to be resolved
- [ ] **Environment documented:** Platform, versions, dependencies affecting the issue
### Solution Attempt Tracking
Track each solution attempt using this systematic format:
```bash
echo "=== LOOP DETECTION TRACKING ==="
echo "Issue Tracking Started: $(date)"
ISSUE_ID="issue-$(date +%Y%m%d-%H%M)"
echo "Issue ID: $ISSUE_ID"
echo ""
# Create tracking report (capture the issue ID once so the console output and report stay in sync)
LOOP_REPORT="loop-tracking-$(date +%Y%m%d-%H%M).md"
echo "# Loop Detection Tracking Report" > $LOOP_REPORT
echo "Date: $(date)" >> $LOOP_REPORT
echo "Issue ID: $ISSUE_ID" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "## Problem Definition" >> $LOOP_REPORT
echo "**Issue Description:** [Specific error or failure]" >> $LOOP_REPORT
echo "**Error Location:** [File, line, or component]" >> $LOOP_REPORT
echo "**Root Cause Hypothesis:** [Current understanding]" >> $LOOP_REPORT
echo "**Success Criteria:** [What needs to work]" >> $LOOP_REPORT
echo "**Environment:** [Platform, versions, dependencies]" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "## Solution Attempt Log" >> $LOOP_REPORT
ATTEMPT_COUNT=0
```
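The echo chain above can equivalently be written as a single heredoc, which keeps the report template readable in one place; the unquoted `EOF` delimiter lets `$(date)` and `$ISSUE_ID` expand. Either form produces the same report:

```shell
# Equivalent setup as one heredoc instead of repeated echo >> lines.
ISSUE_ID="issue-$(date +%Y%m%d-%H%M)"
LOOP_REPORT="loop-tracking-$(date +%Y%m%d-%H%M).md"
cat > "$LOOP_REPORT" <<EOF
# Loop Detection Tracking Report
Date: $(date)
Issue ID: $ISSUE_ID

## Problem Definition
**Issue Description:** [Specific error or failure]
**Error Location:** [File, line, or component]
**Root Cause Hypothesis:** [Current understanding]
**Success Criteria:** [What needs to work]
**Environment:** [Platform, versions, dependencies]

## Solution Attempt Log
EOF
ATTEMPT_COUNT=0
```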
**For each solution attempt, document:**
```markdown
### Attempt #[N]: [Brief description]
- **Start Time:** [timestamp]
- **Approach:** [Description of solution attempted]
- **Hypothesis:** [Why this approach should work]
- **Actions Taken:** [Specific steps executed]
- **Code Changes:** [Files modified and how]
- **Test Results:** [What happened when tested]
- **Result:** [Success/Failure/Partial success]
- **Learning:** [What this attempt revealed about the problem]
- **New Information:** [Any new understanding gained]
- **Next Hypothesis:** [How this changes understanding of the issue]
- **End Time:** [timestamp]
- **Duration:** [time spent on this attempt]
```
### Automated Attempt Logging
```bash
# Function to log solution attempts (the counter is maintained internally,
# so callers pass only the approach, result, and learning)
log_attempt() {
local approach="$1"
local result="$2"
local learning="$3"
ATTEMPT_COUNT=$((ATTEMPT_COUNT + 1))
echo "" >> $LOOP_REPORT
echo "### Attempt #$ATTEMPT_COUNT: $approach" >> $LOOP_REPORT
echo "- **Start Time:** $(date)" >> $LOOP_REPORT
echo "- **Approach:** $approach" >> $LOOP_REPORT
echo "- **Result:** $result" >> $LOOP_REPORT
echo "- **Learning:** $learning" >> $LOOP_REPORT
echo "- **Duration:** [manual entry required]" >> $LOOP_REPORT
# Check for escalation triggers after each attempt
check_escalation_triggers
}
# Function to check escalation triggers
check_escalation_triggers() {
local should_escalate=false
echo "## Escalation Check #$ATTEMPT_COUNT" >> $LOOP_REPORT
echo "Time: $(date)" >> $LOOP_REPORT
# Check attempt count trigger
if [ $ATTEMPT_COUNT -ge 3 ]; then
echo "🚨 **TRIGGER**: 3+ failed attempts detected ($ATTEMPT_COUNT attempts)" >> $LOOP_REPORT
should_escalate=true
fi
# Check for repetitive patterns (manual analysis required)
echo "- **Repetitive Approaches:** [Manual assessment needed]" >> $LOOP_REPORT
echo "- **Circular Reasoning:** [Manual assessment needed]" >> $LOOP_REPORT
echo "- **Diminishing Returns:** [Manual assessment needed]" >> $LOOP_REPORT
# Time-based trigger (manual tracking required)
echo "- **Time Threshold:** [Manual time tracking needed - trigger at 90+ minutes]" >> $LOOP_REPORT
echo "- **Context Window Pressure:** [Manual assessment of context usage]" >> $LOOP_REPORT
if [ "$should_escalate" == "true" ]; then
echo "" >> $LOOP_REPORT
echo "⚡ **ESCALATION TRIGGERED** - Preparing collaboration request..." >> $LOOP_REPORT
prepare_collaboration_request
fi
}
```
## Phase 2: Loop Detection Indicators
### Automatic Detection Triggers
The system monitors for these escalation conditions:
```bash
# Loop Detection Configuration
FAILED_ATTEMPTS=3 # 3+ failed solution attempts
TIME_LIMIT_MINUTES=90 # 90+ minutes on single issue
PATTERN_REPETITION=true # Repeating previously tried solutions
CONTEXT_PRESSURE=high # Approaching context window limits
DIMINISHING_RETURNS=true # Each attempt provides less information
```
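Of the conditions above, only the attempt count and the elapsed time are machine-checkable; pattern repetition, context pressure, and diminishing returns still need the manual checklist below. A sketch of the two automatic triggers, assuming a `START_EPOCH` timestamp is recorded when tracking begins:

```shell
# Sketch: true (exit 0) when either automatic escalation trigger fires.
should_escalate() {  # should_escalate ATTEMPTS START_EPOCH [NOW_EPOCH]
  attempts=$1
  start=$2
  now=${3:-$(date +%s)}
  elapsed_min=$(( (now - start) / 60 ))
  [ "$attempts" -ge 3 ] || [ "$elapsed_min" -ge 90 ]
}

should_escalate 3 0 0 && echo "escalate: attempt limit reached"
should_escalate 1 0 5400 && echo "escalate: 90 minutes elapsed"
```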
### Manual Detection Checklist
Monitor these indicators during problem-solving:
- [ ] **Repetitive approaches:** Same or very similar solutions attempted multiple times
- [ ] **Circular reasoning:** Solution attempts that return to previously tried approaches
- [ ] **Diminishing returns:** Each attempt provides less new information than the previous
- [ ] **Time threshold exceeded:** More than 90 minutes spent on single issue without progress
- [ ] **Context window pressure:** Approaching context limits due to extensive debugging
- [ ] **Decreasing confidence:** Solutions becoming more speculative rather than systematic
- [ ] **Resource exhaustion:** Running out of approaches within current knowledge domain
### Escalation Trigger Assessment
```bash
# Function to assess escalation need
assess_escalation_need() {
echo "=== ESCALATION ASSESSMENT ===" >> $LOOP_REPORT
echo "Assessment Time: $(date)" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "### Automatic Triggers:" >> $LOOP_REPORT
echo "- **Failed Attempts:** $ATTEMPT_COUNT (trigger: ≥3)" >> $LOOP_REPORT
echo "- **Time Investment:** [Manual tracking] (trigger: ≥90 minutes)" >> $LOOP_REPORT
echo "- **Pattern Repetition:** [Manual assessment] (trigger: repeating approaches)" >> $LOOP_REPORT
echo "- **Context Pressure:** [Manual assessment] (trigger: approaching limits)" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "### Manual Assessment Required:" >> $LOOP_REPORT
echo "- [ ] Same approaches being repeated?" >> $LOOP_REPORT
echo "- [ ] Each attempt providing less new information?" >> $LOOP_REPORT
echo "- [ ] Running out of systematic approaches?" >> $LOOP_REPORT
echo "- [ ] Context window becoming crowded with debug info?" >> $LOOP_REPORT
echo "- [ ] Issue blocking progress on main objective?" >> $LOOP_REPORT
echo "- [ ] Specialized knowledge domain expertise needed?" >> $LOOP_REPORT
}
```
## Phase 3: Collaboration Preparation
### Issue Classification
Before escalating, classify the problem type for optimal collaborator selection:
```bash
prepare_collaboration_request() {
echo "" >> $LOOP_REPORT
echo "=== COLLABORATION REQUEST PREPARATION ===" >> $LOOP_REPORT
echo "Preparation Time: $(date)" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "## Issue Classification" >> $LOOP_REPORT
echo "- [ ] **Code Implementation Problem:** Logic, syntax, or algorithm issues" >> $LOOP_REPORT
echo "- [ ] **Architecture Design Problem:** Structural or pattern-related issues" >> $LOOP_REPORT
echo "- [ ] **Platform Integration Problem:** OS, framework, or tool compatibility" >> $LOOP_REPORT
echo "- [ ] **Performance Optimization Problem:** Speed, memory, or efficiency issues" >> $LOOP_REPORT
echo "- [ ] **Cross-Platform Compatibility Problem:** Multi-OS or environment issues" >> $LOOP_REPORT
echo "- [ ] **Domain-Specific Problem:** Specialized knowledge area" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
generate_collaboration_package
}
```
### Collaborative Information Package
Generate structured context for external collaborators:
```bash
generate_collaboration_package() {
echo "## Collaboration Information Package" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "### Executive Summary" >> $LOOP_REPORT
echo "**Problem:** [One-line description of core issue]" >> $LOOP_REPORT
echo "**Impact:** [How this blocks progress]" >> $LOOP_REPORT
echo "**Attempts:** $ATTEMPT_COUNT solutions tried over [X] minutes" >> $LOOP_REPORT
echo "**Request:** [Specific type of help needed]" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "### Technical Context" >> $LOOP_REPORT
echo "**Platform:** [OS, framework, language versions]" >> $LOOP_REPORT
echo "**Environment:** [Development setup, tools, constraints]" >> $LOOP_REPORT
echo "**Dependencies:** [Key libraries, frameworks, services]" >> $LOOP_REPORT
echo "**Error Details:** [Exact error messages, stack traces]" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "### Code Context" >> $LOOP_REPORT
echo "**Relevant Files:** [List of files involved]" >> $LOOP_REPORT
echo "**Key Functions:** [Methods or classes at issue]" >> $LOOP_REPORT
echo "**Data Structures:** [Important types or interfaces]" >> $LOOP_REPORT
echo "**Integration Points:** [How components connect]" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "### Solution Attempts Summary" >> $LOOP_REPORT
echo "**Approach 1:** [Brief summary + outcome]" >> $LOOP_REPORT
echo "**Approach 2:** [Brief summary + outcome]" >> $LOOP_REPORT
echo "**Approach 3:** [Brief summary + outcome]" >> $LOOP_REPORT
echo "**Pattern:** [What all attempts had in common]" >> $LOOP_REPORT
echo "**Learnings:** [Key insights from attempts]" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "### Specific Request" >> $LOOP_REPORT
echo "**What We Need:** [Specific type of assistance]" >> $LOOP_REPORT
echo "**Knowledge Gap:** [What we don't know]" >> $LOOP_REPORT
echo "**Success Criteria:** [How to know if solution works]" >> $LOOP_REPORT
echo "**Constraints:** [Limitations or requirements]" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
select_collaborator
}
```
### Collaborator Selection
```bash
select_collaborator() {
echo "## Recommended Collaborator Selection" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "### Collaborator Specialization Guide:" >> $LOOP_REPORT
echo "- **Gemini:** Algorithm optimization, mathematical problems, data analysis" >> $LOOP_REPORT
echo "- **Claude Code:** Architecture design, code structure, enterprise patterns" >> $LOOP_REPORT
echo "- **GPT-4:** General problem-solving, creative approaches, debugging" >> $LOOP_REPORT
echo "- **Specialized LLMs:** Domain-specific expertise (security, ML, etc.)" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "### Recommended Primary Collaborator:" >> $LOOP_REPORT
echo "**Choice:** [Based on issue classification]" >> $LOOP_REPORT
echo "**Rationale:** [Why this collaborator is best suited]" >> $LOOP_REPORT
echo "**Alternative:** [Backup option if primary unavailable]" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "### Collaboration Request Ready" >> $LOOP_REPORT
echo "**Package Location:** $LOOP_REPORT" >> $LOOP_REPORT
echo "**Next Action:** Initiate collaboration with selected external agent" >> $LOOP_REPORT
# Generate copy-paste prompt for external LLM
generate_external_prompt
}
# Generate copy-paste prompt for external LLM collaboration
generate_external_prompt() {
EXTERNAL_PROMPT="external-llm-prompt-$(date +%Y%m%d-%H%M).md"
cat > $EXTERNAL_PROMPT << 'EOF'
# COLLABORATION REQUEST - Copy & Paste This Entire Message
## Situation
I'm an AI development agent that has hit a wall after multiple failed attempts at resolving an issue. I need fresh perspective and collaborative problem-solving.
## Issue Summary
**Problem:** [FILL: One-line description of core issue]
**Impact:** [FILL: How this blocks progress]
**Attempts:** [FILL: Number] solutions tried over [FILL: X] minutes
**Request:** [FILL: Specific type of help needed]
## Technical Context
**Platform:** [FILL: OS, framework, language versions]
**Environment:** [FILL: Development setup, tools, constraints]
**Dependencies:** [FILL: Key libraries, frameworks, services]
**Error Details:** [FILL: Exact error messages, stack traces]
## Code Context
**Relevant Files:** [FILL: List of files involved]
**Key Functions:** [FILL: Methods or classes at issue]
**Data Structures:** [FILL: Important types or interfaces]
**Integration Points:** [FILL: How components connect]
## Failed Solution Attempts
### Attempt 1: [FILL: Brief approach description]
- **Hypothesis:** [FILL: Why we thought this would work]
- **Actions:** [FILL: What we tried]
- **Outcome:** [FILL: What happened]
- **Learning:** [FILL: What this revealed]
### Attempt 2: [FILL: Brief approach description]
- **Hypothesis:** [FILL: Why we thought this would work]
- **Actions:** [FILL: What we tried]
- **Outcome:** [FILL: What happened]
- **Learning:** [FILL: What this revealed]
### Attempt 3: [FILL: Brief approach description]
- **Hypothesis:** [FILL: Why we thought this would work]
- **Actions:** [FILL: What we tried]
- **Outcome:** [FILL: What happened]
- **Learning:** [FILL: What this revealed]
## Pattern Analysis
**Common Thread:** [FILL: What all attempts had in common]
**Key Insights:** [FILL: Main learnings from attempts]
**Potential Blind Spots:** [FILL: What we might be missing]
## Specific Collaboration Request
**What I Need:** [FILL: Specific type of assistance - fresh approach, domain expertise, different perspective, etc.]
**Knowledge Gap:** [FILL: What we don't know or understand]
**Success Criteria:** [FILL: How to know if solution works]
**Constraints:** [FILL: Limitations or requirements to work within]
## Code Snippets (if relevant)
```[language]
[FILL: Relevant code that's causing issues]
```
## Error Logs (if relevant)
```
[FILL: Exact error messages and stack traces]
```
## What Would Help Most
- [ ] Fresh perspective on root cause
- [ ] Alternative solution approaches
- [ ] Domain-specific expertise
- [ ] Code review and suggestions
- [ ] Architecture/design guidance
- [ ] Debugging methodology
- [ ] Other: [FILL: Specific need]
---
**Please provide:** A clear, actionable solution approach with reasoning, or alternative perspectives I should consider. I'm looking for breakthrough thinking to get unstuck.
EOF
echo ""
echo "🎯 **COPY-PASTE PROMPT GENERATED**"
echo "📋 **File:** $EXTERNAL_PROMPT"
echo ""
echo "👉 **INSTRUCTIONS FOR USER:**"
echo "1. Open the file: $EXTERNAL_PROMPT"
echo "2. Fill in all [FILL: ...] placeholders with actual details"
echo "3. Copy the entire completed prompt"
echo "4. Paste into Gemini, GPT-4, or your preferred external LLM"
echo "5. Share the response back with me for implementation"
echo ""
echo "✨ **This structured approach maximizes collaboration effectiveness!**"
# Add to main report
echo "" >> $LOOP_REPORT
echo "### 🎯 COPY-PASTE PROMPT READY" >> $LOOP_REPORT
echo "**File Generated:** $EXTERNAL_PROMPT" >> $LOOP_REPORT
echo "**Instructions:** Fill placeholders, copy entire prompt, paste to external LLM" >> $LOOP_REPORT
echo "**Status:** Ready for user action" >> $LOOP_REPORT
}
```
## Phase 4: Escalation Execution
### Collaboration Initiation
When escalation triggers are met:
1. **Finalize collaboration package** with all context
2. **Select appropriate external collaborator** based on issue type
3. **Initiate collaboration request** with structured information
4. **Monitor collaboration progress** and integrate responses
5. **Document solution and learnings** for future reference
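The initiation flow above can be sketched as a small shell helper. The collaborator names and the routing in `select_collaborator` below are illustrative assumptions, not a fixed BMAD policy:

```bash
#!/bin/bash
# Hypothetical sketch of steps 1-3: pick an external collaborator from the
# issue classification, then announce the collaboration package.
select_collaborator() {
  # Assumed example mapping from issue type to collaborator
  case "$1" in
    architecture|performance) echo "Gemini" ;;
    platform|compatibility)   echo "GPT-4" ;;
    *)                        echo "GPT-4" ;;
  esac
}

initiate_collaboration() {
  local issue_type="$1" package_path="$2"
  local collaborator
  collaborator=$(select_collaborator "$issue_type")
  echo "Collaborator selected: $collaborator"
  echo "Information package: $package_path"
}

initiate_collaboration "architecture" "loop-collab-package.md"
```

A real driver would follow this with the monitoring and documentation steps (4 and 5) once a response arrives.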
### Collaboration Management
```bash
# Function to manage active collaboration
manage_collaboration() {
local collaborator="$1"
local request_id="$2"
echo "=== ACTIVE COLLABORATION ===" >> $LOOP_REPORT
echo "Collaboration Started: $(date)" >> $LOOP_REPORT
echo "Collaborator: $collaborator" >> $LOOP_REPORT
echo "Request ID: $request_id" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "### Collaboration Tracking:" >> $LOOP_REPORT
echo "- **Request Sent:** $(date)" >> $LOOP_REPORT
echo "- **Information Package:** Complete" >> $LOOP_REPORT
echo "- **Response Expected:** [Timeline]" >> $LOOP_REPORT
echo "- **Status:** Active" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "### Response Integration Plan:" >> $LOOP_REPORT
echo "- [ ] **Validate suggested solution** against our constraints" >> $LOOP_REPORT
echo "- [ ] **Test proposed approach** in safe environment" >> $LOOP_REPORT
echo "- [ ] **Document new learnings** from collaboration" >> $LOOP_REPORT
echo "- [ ] **Update internal knowledge** for future similar issues" >> $LOOP_REPORT
echo "- [ ] **Close collaboration** when issue resolved" >> $LOOP_REPORT
}
```
## Phase 5: Learning Integration
### Solution Documentation
When collaboration yields results:
```bash
document_solution() {
local solution_approach="$1"
local collaborator="$2"
echo "" >> $LOOP_REPORT
echo "=== SOLUTION DOCUMENTATION ===" >> $LOOP_REPORT
echo "Solution Found: $(date)" >> $LOOP_REPORT
echo "Collaborator: $collaborator" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "### Solution Summary:" >> $LOOP_REPORT
echo "**Approach:** $solution_approach" >> $LOOP_REPORT
echo "**Key Insight:** [What made this solution work]" >> $LOOP_REPORT
echo "**Why Previous Attempts Failed:** [Root cause analysis]" >> $LOOP_REPORT
echo "**Implementation Steps:** [How solution was applied]" >> $LOOP_REPORT
echo "**Validation Results:** [How success was verified]" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "### Knowledge Integration:" >> $LOOP_REPORT
echo "**New Understanding:** [What we learned about this type of problem]" >> $LOOP_REPORT
echo "**Pattern Recognition:** [How to identify similar issues faster]" >> $LOOP_REPORT
echo "**Prevention Strategy:** [How to avoid this issue in future]" >> $LOOP_REPORT
echo "**Collaboration Value:** [What external perspective provided]" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "### Future Reference:" >> $LOOP_REPORT
echo "**Issue Type:** [Classification for future lookup]" >> $LOOP_REPORT
echo "**Solution Pattern:** [Reusable approach]" >> $LOOP_REPORT
echo "**Recommended Collaborator:** [For similar future issues]" >> $LOOP_REPORT
echo "**Documentation Updates:** [Changes to make to prevent recurrence]" >> $LOOP_REPORT
}
```
### Loop Prevention Learning
Extract patterns to prevent future loops:
```bash
extract_loop_patterns() {
echo "" >> $LOOP_REPORT
echo "=== LOOP PREVENTION ANALYSIS ===" >> $LOOP_REPORT
echo "Analysis Date: $(date)" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "### Loop Indicators Observed:" >> $LOOP_REPORT
echo "- **Trigger Point:** [What should have prompted earlier escalation]" >> $LOOP_REPORT
echo "- **Repetition Pattern:** [How approaches were repeating]" >> $LOOP_REPORT
echo "- **Knowledge Boundary:** [Where internal expertise reached limits]" >> $LOOP_REPORT
echo "- **Time Investment:** [Total time spent before escalation]" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "### Optimization Opportunities:" >> $LOOP_REPORT
echo "- **Earlier Escalation:** [When should we have escalated sooner]" >> $LOOP_REPORT
echo "- **Better Classification:** [How to categorize similar issues faster]" >> $LOOP_REPORT
echo "- **Improved Tracking:** [How to better monitor solution attempts]" >> $LOOP_REPORT
echo "- **Knowledge Gaps:** [Areas to improve internal expertise]" >> $LOOP_REPORT
echo "" >> $LOOP_REPORT
echo "### Prevention Recommendations:" >> $LOOP_REPORT
echo "- **Escalation Triggers:** [Refined triggers for this issue type]" >> $LOOP_REPORT
echo "- **Early Warning Signs:** [Indicators to watch for]" >> $LOOP_REPORT
echo "- **Documentation Improvements:** [What to add to prevent recurrence]" >> $LOOP_REPORT
echo "- **Process Enhancements:** [How to handle similar issues better]" >> $LOOP_REPORT
}
```
## Integration Points
### Variables Exported for Other Tools
```bash
# Core loop detection variables
export ATTEMPT_COUNT=[number of solution attempts]
export TIME_INVESTED=[minutes spent on issue]
export ESCALATION_TRIGGERED=[true/false]
export COLLABORATOR_SELECTED=[external agent chosen]
export SOLUTION_FOUND=[true/false]
# Issue classification variables
export ISSUE_TYPE=[implementation/architecture/platform/performance/compatibility]
export KNOWLEDGE_DOMAIN=[specialized area if applicable]
export COMPLEXITY_LEVEL=[low/medium/high]
# Collaboration variables
export COLLABORATION_PACKAGE_PATH=[path to information package]
export COLLABORATOR_RESPONSE=[summary of external input]
export SOLUTION_APPROACH=[final working solution]
# Learning variables
export LOOP_PATTERNS=[patterns that led to loops]
export PREVENTION_STRATEGIES=[how to avoid similar loops]
export KNOWLEDGE_GAPS=[areas for improvement]
```
### Integration with Other BMAD Tools
- **Triggers create-remediation-story.md** when solution creates new tasks
- **Updates reality-audit-comprehensive.md** with solution validation
- **Feeds into build-context-analysis.md** for future similar issues
- **Provides data for quality framework improvements**
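As a sketch of how a downstream tool might consume these exports, the helper below reads the loop-detection variables with safe defaults and decides whether remediation work is needed. The decision rule (escalated but unsolved triggers a remediation story) is an assumption for illustration:

```bash
#!/bin/bash
# Hypothetical downstream consumer of the exported loop-detection variables.
decide_next_action() {
  local escalated="${ESCALATION_TRIGGERED:-false}"
  local solved="${SOLUTION_FOUND:-false}"
  if [ "$escalated" = "true" ] && [ "$solved" = "false" ]; then
    echo "create-remediation-story"   # unresolved escalation: open remediation work
  else
    echo "none"
  fi
}

ESCALATION_TRIGGERED=true
SOLUTION_FOUND=false
echo "Next action: $(decide_next_action)"
```

Using parameter-expansion defaults keeps the consumer safe when loop detection never ran and the variables were never exported.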
---
## Summary
This comprehensive loop detection and escalation framework prevents agents from wasting time and context on repetitive unsuccessful approaches. It combines systematic tracking, automatic trigger detection, structured collaboration preparation, and learning integration to ensure efficient problem-solving through external expertise when needed.
**Key Features:**
- **Systematic attempt tracking** with detailed outcomes and learnings
- **Automatic loop detection** based on multiple trigger conditions
- **Structured collaboration preparation** for optimal external engagement
- **Intelligent collaborator selection** based on issue classification
- **Solution documentation and learning integration** for continuous improvement
- **Prevention pattern extraction** to avoid future similar loops
**Benefits:**
- **Prevents context window exhaustion** from repetitive debugging
- **Enables efficient external collaboration** through structured requests
- **Preserves learning and insights** for future similar issues
- **Reduces time investment** in unproductive solution approaches
- **Improves overall problem-solving efficiency** through systematic escalation
@ -0,0 +1,878 @@
# Reality Audit Comprehensive
## Task Overview
Comprehensive reality audit that systematically detects simulation patterns, validates real implementation, and provides objective scoring to prevent "bull in a china shop" completion claims. This consolidated framework combines automated detection, manual validation, and enforcement gates.
## Context
This enhanced audit provides QA agents with systematic tools to distinguish between real implementation and simulation-based development. It enforces accountability by requiring evidence-based assessment rather than subjective evaluation, consolidating all reality validation capabilities into a single comprehensive framework.
## Execution Approach
**CRITICAL INTEGRATION VALIDATION WITH REGRESSION PREVENTION** - This framework addresses both simulation mindset and regression risks. Be brutally honest about what is REAL vs SIMULATED, and ensure no functionality loss or technical debt introduction.
1. **Execute automated simulation detection** (Phase 1)
2. **Perform build and runtime validation** (Phase 2)
3. **Execute story context analysis** (Phase 3) - NEW
4. **Assess regression risks** (Phase 4) - NEW
5. **Evaluate technical debt impact** (Phase 5) - NEW
6. **Perform manual validation checklist** (Phase 6)
7. **Calculate comprehensive reality score** (Phase 7) - ENHANCED
8. **Apply enforcement gates** (Phase 8)
9. **Generate regression-safe remediation** (Phase 9) - ENHANCED
The goal is ZERO simulations AND ZERO regressions in critical path code.
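The nine-phase sequence above can be sketched as a driver loop. The phase names below are placeholders standing in for the concrete phases defined in the rest of this task, not real function names from the framework:

```bash
#!/bin/bash
# Minimal driver sketch: run the nine audit phases in order.
PHASES="simulation_detection build_runtime_validation story_context_analysis regression_risk technical_debt manual_checklist reality_scoring enforcement_gates remediation"

run_audit() {
  local completed=0 phase
  for phase in $PHASES; do
    echo "Running phase: $phase"   # a real driver would invoke the phase function here
    completed=$((completed + 1))
  done
  echo "Phases completed: $completed"
}

run_audit
```

Keeping the phases in a single ordered list makes it easy to enforce that scoring (Phase 7) never runs before the detection and risk phases that feed it.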
---
## Phase 1: Automated Simulation Detection
### Project Structure Detection
Execute these commands systematically and document all findings:
```bash
#!/bin/bash
echo "=== REALITY AUDIT COMPREHENSIVE SCAN ==="
echo "Audit Date: $(date)"
echo "Auditor: [QA Agent Name]"
echo ""
# Detect project structure dynamically
if find . -maxdepth 3 -name "*.sln" -o -name "*.csproj" | head -1 | grep -q .; then
# .NET Project
if [ -d "src" ]; then
PROJECT_SRC_PATH="src"
PROJECT_FILE_EXT="*.cs"
else
PROJECT_SRC_PATH=$(find . -maxdepth 3 -name "*.csproj" -exec dirname {} \; | head -1)
PROJECT_FILE_EXT="*.cs"
fi
PROJECT_NAME=$(find . -maxdepth 3 -name "*.csproj" | head -1 | xargs basename -s .csproj)
BUILD_CMD="dotnet build -c Release --no-restore"
RUN_CMD="dotnet run --no-build"
ERROR_PATTERN="error CS"
WARN_PATTERN="warning CS"
elif [ -f "package.json" ]; then
# Node.js Project
PROJECT_SRC_PATH=$([ -d "src" ] && echo "src" || echo ".")
PROJECT_FILE_EXT="*.js *.ts *.jsx *.tsx"
PROJECT_NAME=$(grep '"name"' package.json | sed 's/.*"name"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/' | head -1)
BUILD_CMD=$(grep -q '"build"' package.json && echo "npm run build" || echo "npm install")
RUN_CMD=$(grep -q '"start"' package.json && echo "npm start" || echo "node index.js")
ERROR_PATTERN="ERROR"
WARN_PATTERN="WARN"
elif [ -f "pom.xml" ] || [ -f "build.gradle" ]; then
# Java Project
PROJECT_SRC_PATH=$([ -d "src/main/java" ] && echo "src/main/java" || echo "src")
PROJECT_FILE_EXT="*.java"
PROJECT_NAME=$(basename "$(pwd)")
BUILD_CMD=$([ -f "pom.xml" ] && echo "mvn compile" || echo "gradle build")
RUN_CMD=$([ -f "pom.xml" ] && echo "mvn exec:java" || echo "gradle run")
ERROR_PATTERN="ERROR"
WARN_PATTERN="WARNING"
elif [ -f "Cargo.toml" ]; then
# Rust Project
PROJECT_SRC_PATH="src"
PROJECT_FILE_EXT="*.rs"
PROJECT_NAME=$(grep '^name' Cargo.toml | sed 's/name[[:space:]]*=[[:space:]]*"\([^"]*\)".*/\1/' | head -1)
BUILD_CMD="cargo build --release"
RUN_CMD="cargo run"
ERROR_PATTERN="error"
WARN_PATTERN="warning"
elif [ -f "pyproject.toml" ] || [ -f "setup.py" ]; then
# Python Project
PROJECT_SRC_PATH=$([ -d "src" ] && echo "src" || echo ".")
PROJECT_FILE_EXT="*.py"
PROJECT_NAME=$(basename "$(pwd)")
BUILD_CMD="python -m compileall -q ."  # py_compile with **/*.py requires globstar; compileall walks the tree reliably
RUN_CMD="python main.py"
ERROR_PATTERN="ERROR"
WARN_PATTERN="WARNING"
elif [ -f "go.mod" ]; then
# Go Project
PROJECT_SRC_PATH="."
PROJECT_FILE_EXT="*.go"
PROJECT_NAME=$(head -1 go.mod | awk '{print $2}' | sed 's/.*\///')
BUILD_CMD="go build ./..."
RUN_CMD="go run ."
ERROR_PATTERN="error"
WARN_PATTERN="warning"
else
# Generic fallback
PROJECT_SRC_PATH=$([ -d "src" ] && echo "src" || echo ".")
PROJECT_FILE_EXT="*"
PROJECT_NAME=$(basename "$(pwd)")
BUILD_CMD="make"
RUN_CMD="./main"
ERROR_PATTERN="error"
WARN_PATTERN="warning"
fi
echo "Project: $PROJECT_NAME"
echo "Source Path: $PROJECT_SRC_PATH"
echo "File Extensions: $PROJECT_FILE_EXT"
echo "Build Command: $BUILD_CMD"
echo "Run Command: $RUN_CMD"
echo ""
# Create audit report file
AUDIT_REPORT="reality-audit-$(date +%Y%m%d-%H%M).md"
echo "# Reality Audit Report" > $AUDIT_REPORT
echo "Date: $(date)" >> $AUDIT_REPORT
echo "Project: $PROJECT_NAME" >> $AUDIT_REPORT
echo "Source Path: $PROJECT_SRC_PATH" >> $AUDIT_REPORT
echo "" >> $AUDIT_REPORT
```
### Simulation Pattern Detection
```bash
echo "=== SIMULATION PATTERN DETECTION ===" | tee -a $AUDIT_REPORT
# Helper: count matching lines for an extended-regex pattern. PROJECT_FILE_EXT
# may hold several globs (e.g. "*.js *.ts"), so iterate per extension instead
# of passing the whole list to a single find/grep invocation. Note: these
# patterns use alternation and escaped parens, which require grep -E.
count_matches() {
  local pattern="$1" total=0 ext
  for ext in $PROJECT_FILE_EXT; do
    total=$((total + $(grep -rE "$pattern" "$PROJECT_SRC_PATH/" --include="$ext" 2>/dev/null | wc -l)))
  done
  echo "$total"
}
# Pattern 1: Random data generation
echo "" >> $AUDIT_REPORT
echo "## Random Data Generation Patterns" >> $AUDIT_REPORT
echo "Random data generation:" | tee -a $AUDIT_REPORT
for ext in $PROJECT_FILE_EXT; do
grep -rnE "Random\.|Math\.random|random\(\)|rand\(\)" "$PROJECT_SRC_PATH/" --include="$ext" 2>/dev/null | tee -a $AUDIT_REPORT || true
done
RANDOM_COUNT=$(count_matches "Random\.|Math\.random|random\(\)|rand\(\)")
echo "**Count:** $RANDOM_COUNT instances" | tee -a $AUDIT_REPORT
# Pattern 2: Mock async operations
echo "" >> $AUDIT_REPORT
echo "## Mock Async Operations" >> $AUDIT_REPORT
echo "Mock async operations:" | tee -a $AUDIT_REPORT
for ext in $PROJECT_FILE_EXT; do
grep -rnE "Task\.FromResult|Promise\.resolve|async.*return.*mock|await.*mock" "$PROJECT_SRC_PATH/" --include="$ext" 2>/dev/null | tee -a $AUDIT_REPORT || true
done
TASK_MOCK_COUNT=$(count_matches "Task\.FromResult|Promise\.resolve")
echo "**Count:** $TASK_MOCK_COUNT instances" | tee -a $AUDIT_REPORT
# Pattern 3: Unimplemented methods
echo "" >> $AUDIT_REPORT
echo "## Unimplemented Methods" >> $AUDIT_REPORT
echo "Unimplemented methods:" | tee -a $AUDIT_REPORT
for ext in $PROJECT_FILE_EXT; do
grep -rnE "NotImplementedException|todo!|unimplemented!|panic!|raise NotImplementedError|NotImplemented" "$PROJECT_SRC_PATH/" --include="$ext" 2>/dev/null | tee -a $AUDIT_REPORT || true
done
NOT_IMPL_COUNT=$(count_matches "NotImplementedException|todo!|unimplemented!|panic!|raise NotImplementedError")
echo "**Count:** $NOT_IMPL_COUNT instances" | tee -a $AUDIT_REPORT
# Pattern 4: TODO comments
echo "" >> $AUDIT_REPORT
echo "## TODO Comments" >> $AUDIT_REPORT
echo "TODO comments in critical path:" | tee -a $AUDIT_REPORT
for ext in $PROJECT_FILE_EXT; do
grep -rnE "TODO:|FIXME:|HACK:|XXX:|BUG:" "$PROJECT_SRC_PATH/" --include="$ext" 2>/dev/null | tee -a $AUDIT_REPORT || true
done
TODO_COUNT=$(count_matches "TODO:|FIXME:|HACK:|XXX:|BUG:")
echo "**Count:** $TODO_COUNT instances" | tee -a $AUDIT_REPORT
# Pattern 5: Simulation methods
echo "" >> $AUDIT_REPORT
echo "## Simulation Methods" >> $AUDIT_REPORT
echo "Simulation methods:" | tee -a $AUDIT_REPORT
for ext in $PROJECT_FILE_EXT; do
grep -rnE "Simulate.*\(|Mock.*\(|Fake.*\(|Stub.*\(|dummy.*\(" "$PROJECT_SRC_PATH/" --include="$ext" 2>/dev/null | tee -a $AUDIT_REPORT || true
done
SIMULATE_COUNT=$(count_matches "Simulate.*\(")
MOCK_COUNT=$(count_matches "Mock.*\(")
FAKE_COUNT=$(count_matches "Fake.*\(")
TOTAL_SIM_COUNT=$((SIMULATE_COUNT + MOCK_COUNT + FAKE_COUNT))
echo "**Count:** $TOTAL_SIM_COUNT instances (Simulate: $SIMULATE_COUNT, Mock: $MOCK_COUNT, Fake: $FAKE_COUNT)" | tee -a $AUDIT_REPORT
# Pattern 6: Hardcoded test data
echo "" >> $AUDIT_REPORT
echo "## Hardcoded Test Data" >> $AUDIT_REPORT
echo "Hardcoded arrays and test data:" | tee -a $AUDIT_REPORT
for ext in $PROJECT_FILE_EXT; do
grep -rnE "new\[\].*\{.*\}|= \[.*\]|Array\[.*\]|list.*=.*\[" "$PROJECT_SRC_PATH/" --include="$ext" 2>/dev/null | head -20 | tee -a $AUDIT_REPORT || true
done
ARRAY_COUNT=$(count_matches "new\[\].*\{.*\}")
LIST_COUNT=$(count_matches "= \[.*\]")
echo "**Count:** Arrays: $ARRAY_COUNT, Lists: $LIST_COUNT" | tee -a $AUDIT_REPORT
echo "" | tee -a $AUDIT_REPORT
echo "Automated scan complete. Report saved to: $AUDIT_REPORT"
```
## Phase 2: Build and Runtime Validation
```bash
echo "=== BUILD AND RUNTIME VALIDATION ===" | tee -a $AUDIT_REPORT
# Build validation
echo "" >> $AUDIT_REPORT
echo "## Build Validation" >> $AUDIT_REPORT
echo "Build Command: $BUILD_CMD" | tee -a $AUDIT_REPORT
$BUILD_CMD > build-audit.txt 2>&1
BUILD_EXIT_CODE=$?
# grep -c already prints 0 on no match (exiting 1), so "|| echo 0" would
# capture "0" twice and break later arithmetic; guard the missing-file case instead.
ERROR_COUNT=$(grep -ci "$ERROR_PATTERN" build-audit.txt 2>/dev/null || true)
ERROR_COUNT=${ERROR_COUNT:-0}
WARNING_COUNT=$(grep -ci "$WARN_PATTERN" build-audit.txt 2>/dev/null || true)
WARNING_COUNT=${WARNING_COUNT:-0}
echo "Build Exit Code: $BUILD_EXIT_CODE" | tee -a $AUDIT_REPORT
echo "Error Count: $ERROR_COUNT" | tee -a $AUDIT_REPORT
echo "Warning Count: $WARNING_COUNT" | tee -a $AUDIT_REPORT
# Runtime validation
echo "" >> $AUDIT_REPORT
echo "## Runtime Validation" >> $AUDIT_REPORT
echo "Run Command: timeout 30s $RUN_CMD" | tee -a $AUDIT_REPORT
timeout 30s $RUN_CMD > runtime-audit.txt 2>&1
RUNTIME_EXIT_CODE=$?
echo "Runtime Exit Code: $RUNTIME_EXIT_CODE" | tee -a $AUDIT_REPORT
# Integration testing
echo "" >> $AUDIT_REPORT
echo "## Integration Testing" >> $AUDIT_REPORT
if [[ "$RUN_CMD" == *"dotnet"* ]]; then
PROJECT_FILE=$(find . -maxdepth 3 -name "*.csproj" | head -1)
BASE_CMD="dotnet run --project \"$PROJECT_FILE\" --no-build --"
elif [[ "$RUN_CMD" == *"npm"* ]]; then
BASE_CMD="npm start --"
elif [[ "$RUN_CMD" == *"mvn"* ]]; then
BASE_CMD="mvn exec:java -Dexec.args="
elif [[ "$RUN_CMD" == *"gradle"* ]]; then
BASE_CMD="gradle run --args="
elif [[ "$RUN_CMD" == *"cargo"* ]]; then
BASE_CMD="cargo run --"
elif [[ "$RUN_CMD" == *"go"* ]]; then
BASE_CMD="go run . --"
else
BASE_CMD="$RUN_CMD"
fi
# BASE_CMD may contain embedded quotes (e.g. the dotnet project path), so eval
# is needed for correct word splitting when invoking it.
echo "Testing database connectivity..." | tee -a $AUDIT_REPORT
eval "$BASE_CMD --test-database-connection" 2>/dev/null && echo "✓ Database test passed" | tee -a $AUDIT_REPORT || echo "✗ Database test failed or N/A" | tee -a $AUDIT_REPORT
echo "Testing file operations..." | tee -a $AUDIT_REPORT
eval "$BASE_CMD --test-file-operations" 2>/dev/null && echo "✓ File operations test passed" | tee -a $AUDIT_REPORT || echo "✗ File operations test failed or N/A" | tee -a $AUDIT_REPORT
echo "Testing network operations..." | tee -a $AUDIT_REPORT
eval "$BASE_CMD --test-network-operations" 2>/dev/null && echo "✓ Network test passed" | tee -a $AUDIT_REPORT || echo "✗ Network test failed or N/A" | tee -a $AUDIT_REPORT
```
## Phase 3: Story Context Analysis
### Previous Implementation Pattern Learning
Analyze existing stories to understand established patterns and prevent regression:
```bash
echo "=== STORY CONTEXT ANALYSIS ===" | tee -a $AUDIT_REPORT
# Find all completed stories in the project
STORY_DIR="docs/stories"
if [ -d "$STORY_DIR" ]; then
echo "## Story Pattern Analysis" >> $AUDIT_REPORT
echo "Analyzing previous implementations for pattern consistency..." | tee -a $AUDIT_REPORT
# Find completed stories
COMPLETED_STORIES=$(find "$STORY_DIR" -name "*.md" -exec grep -l "Status.*Complete\|Status.*Ready for Review" {} \; 2>/dev/null)
echo "Completed stories found: $(echo "$COMPLETED_STORIES" | grep -c .)" | tee -a $AUDIT_REPORT
# Analyze architectural patterns
echo "" >> $AUDIT_REPORT
echo "### Architectural Pattern Analysis" >> $AUDIT_REPORT
# Look for common implementation patterns
for story in $COMPLETED_STORIES; do
if [ -f "$story" ]; then
echo "#### Story: $(basename "$story")" >> $AUDIT_REPORT
# Extract technical approach from completed stories
echo "Technical approach patterns:" >> $AUDIT_REPORT
grep -A 5 -B 2 "Technical\|Implementation\|Approach\|Pattern" "$story" >> $AUDIT_REPORT 2>/dev/null || echo "No technical patterns found" >> $AUDIT_REPORT
echo "" >> $AUDIT_REPORT
fi
done
# Analyze change patterns
echo "### Change Pattern Analysis" >> $AUDIT_REPORT
for story in $COMPLETED_STORIES; do
if [ -f "$story" ]; then
# Look for file change patterns
echo "#### File Change Patterns from $(basename "$story"):" >> $AUDIT_REPORT
grep -A 10 "File List\|Files Modified\|Files Added" "$story" >> $AUDIT_REPORT 2>/dev/null || echo "No file patterns found" >> $AUDIT_REPORT
echo "" >> $AUDIT_REPORT
fi
done
else
echo "No stories directory found - skipping pattern analysis" | tee -a $AUDIT_REPORT
fi
```
### Architectural Decision Learning
Extract architectural decisions from previous stories:
```bash
# Analyze architectural decisions
echo "## Architectural Decision Analysis" >> $AUDIT_REPORT
# Look for architectural decisions in stories
if [ -d "$STORY_DIR" ]; then
echo "### Previous Architectural Decisions:" >> $AUDIT_REPORT
# Find architecture-related content
grep -r -n -A 3 -B 1 "architect\|pattern\|design\|structure" "$STORY_DIR" --include="*.md" >> $AUDIT_REPORT 2>/dev/null || echo "No architectural decisions found" >> $AUDIT_REPORT
echo "" >> $AUDIT_REPORT
echo "### Technology Choices:" >> $AUDIT_REPORT
# Find technology decisions
grep -r -n -A 2 -B 1 "technology\|framework\|library\|dependency" "$STORY_DIR" --include="*.md" >> $AUDIT_REPORT 2>/dev/null || echo "No technology decisions found" >> $AUDIT_REPORT
fi
# Analyze current implementation against patterns
echo "" >> $AUDIT_REPORT
echo "### Pattern Compliance Assessment:" >> $AUDIT_REPORT
# Store pattern analysis results
PATTERN_COMPLIANCE_SCORE=100
ARCHITECTURAL_CONSISTENCY_SCORE=100
```
## Phase 4: Regression Risk Assessment
### Functional Regression Analysis
Identify potential functionality impacts:
```bash
echo "=== REGRESSION RISK ASSESSMENT ===" | tee -a $AUDIT_REPORT
echo "## Functional Impact Analysis" >> $AUDIT_REPORT
# Analyze current changes against existing functionality
if [ -d ".git" ]; then
echo "### Recent Changes Analysis:" >> $AUDIT_REPORT
echo "Recent commits that might affect functionality:" >> $AUDIT_REPORT
git log --oneline -20 --grep="feat\|fix\|refactor\|break" >> $AUDIT_REPORT 2>/dev/null || echo "No recent functional changes found" >> $AUDIT_REPORT
echo "" >> $AUDIT_REPORT
echo "### Modified Files Impact:" >> $AUDIT_REPORT
# Find recently modified files
MODIFIED_FILES=$(git diff --name-only HEAD~5..HEAD 2>/dev/null)
if [ -n "$MODIFIED_FILES" ]; then
echo "Files modified in recent commits:" >> $AUDIT_REPORT
echo "$MODIFIED_FILES" >> $AUDIT_REPORT
# Analyze impact of each file
echo "" >> $AUDIT_REPORT
echo "### File Impact Assessment:" >> $AUDIT_REPORT
for file in $MODIFIED_FILES; do
if [ -f "$file" ]; then
echo "#### Impact of $file:" >> $AUDIT_REPORT
# Look for public interfaces, APIs, or exported functions
case "$file" in
*.cs)
grep -n "public.*class\|public.*interface\|public.*method" "$file" >> $AUDIT_REPORT 2>/dev/null || echo "No public interfaces found" >> $AUDIT_REPORT
;;
*.js|*.ts)
grep -n "export\|module\.exports" "$file" >> $AUDIT_REPORT 2>/dev/null || echo "No exports found" >> $AUDIT_REPORT
;;
*.java)
grep -n "public.*class\|public.*interface\|public.*method" "$file" >> $AUDIT_REPORT 2>/dev/null || echo "No public interfaces found" >> $AUDIT_REPORT
;;
*.py)
grep -n "def.*\|class.*" "$file" >> $AUDIT_REPORT 2>/dev/null || echo "No class/function definitions found" >> $AUDIT_REPORT
;;
esac
echo "" >> $AUDIT_REPORT
fi
done
else
echo "No recently modified files found" >> $AUDIT_REPORT
fi
fi
# Calculate regression risk score
REGRESSION_RISK_SCORE=100
```
### Integration Point Analysis
Assess integration and dependency impacts:
```bash
echo "## Integration Impact Analysis" >> $AUDIT_REPORT
# Analyze integration points
echo "### External Integration Points:" >> $AUDIT_REPORT
# Look for external dependencies and integrations
case "$PROJECT_FILE_EXT" in
"*.cs")
# .NET dependencies
find . -name "*.csproj" -exec grep -n "PackageReference\|ProjectReference" {} \; >> $AUDIT_REPORT 2>/dev/null
;;
"*.js"|"*.ts")
# Node.js dependencies
if [ -f "package.json" ]; then
echo "Package dependencies:" >> $AUDIT_REPORT
grep -A 20 '"dependencies"' package.json >> $AUDIT_REPORT 2>/dev/null
fi
;;
"*.java")
# Java dependencies
find . -name "pom.xml" -exec grep -n "<dependency>" {} \; >> $AUDIT_REPORT 2>/dev/null
find . -name "build.gradle" -exec grep -n "implementation\|compile" {} \; >> $AUDIT_REPORT 2>/dev/null
;;
esac
echo "" >> $AUDIT_REPORT
echo "### Database Integration Assessment:" >> $AUDIT_REPORT
# Look for database integration patterns
for ext in $PROJECT_FILE_EXT; do
grep -r -n "connection\|database\|sql\|query" "$PROJECT_SRC_PATH/" --include="$ext" | head -10 >> $AUDIT_REPORT 2>/dev/null || echo "No database integration detected" >> $AUDIT_REPORT
done
echo "" >> $AUDIT_REPORT
echo "### API Integration Assessment:" >> $AUDIT_REPORT
# Look for API integration patterns
for ext in $PROJECT_FILE_EXT; do
grep -r -n "http\|api\|endpoint\|service" "$PROJECT_SRC_PATH/" --include="$ext" | head -10 >> $AUDIT_REPORT 2>/dev/null || echo "No API integration detected" >> $AUDIT_REPORT
done
```
## Phase 5: Technical Debt Impact Assessment
### Code Quality Impact Analysis
Evaluate potential technical debt introduction:
```bash
echo "=== TECHNICAL DEBT ASSESSMENT ===" | tee -a $AUDIT_REPORT
echo "## Code Quality Impact Analysis" >> $AUDIT_REPORT
# Analyze code complexity
echo "### Code Complexity Assessment:" >> $AUDIT_REPORT
# Find complex files (basic metrics)
echo "#### Files by size (potential complexity):" >> $AUDIT_REPORT
for ext in $PROJECT_FILE_EXT; do
find "$PROJECT_SRC_PATH" -name "$ext" -exec wc -l {} \; 2>/dev/null
done | sort -rn | head -10 >> $AUDIT_REPORT
echo "" >> $AUDIT_REPORT
echo "### Maintainability Assessment:" >> $AUDIT_REPORT
# Look for maintainability issues
echo "#### Potential Maintainability Issues:" >> $AUDIT_REPORT
# Look for code smells
for ext in $PROJECT_FILE_EXT; do
# Large methods/functions
case "$ext" in
"*.cs")
grep -r -n "public.*{" "$PROJECT_SRC_PATH/" --include="$ext" | head -10 >> $AUDIT_REPORT 2>/dev/null
;;
"*.js"|"*.ts")
grep -r -n "function.*{" "$PROJECT_SRC_PATH/" --include="$ext" | head -10 >> $AUDIT_REPORT 2>/dev/null
;;
"*.java")
grep -r -n "public.*{" "$PROJECT_SRC_PATH/" --include="$ext" | head -10 >> $AUDIT_REPORT 2>/dev/null
;;
esac
done
# Look for duplication patterns
echo "" >> $AUDIT_REPORT
echo "#### Code Duplication Assessment:" >> $AUDIT_REPORT
# Basic duplication detection
for ext in $PROJECT_FILE_EXT; do
# Find similar patterns (simple approach)
find "$PROJECT_SRC_PATH" -name "$ext" -exec basename {} \; | sort | uniq -c | grep -v "1 " >> $AUDIT_REPORT 2>/dev/null || echo "No obvious duplication in file names" >> $AUDIT_REPORT
done
# Calculate technical debt score
TECHNICAL_DEBT_SCORE=100
```
### Architecture Consistency Check
Verify alignment with established patterns:
```bash
echo "## Architecture Consistency Analysis" >> $AUDIT_REPORT
# Compare current approach with established patterns
echo "### Pattern Consistency Assessment:" >> $AUDIT_REPORT
# This will be populated based on story analysis from Phase 3
echo "Current implementation pattern consistency: [Will be calculated based on story analysis]" >> $AUDIT_REPORT
echo "Architectural decision compliance: [Will be assessed against previous decisions]" >> $AUDIT_REPORT
echo "Technology choice consistency: [Will be evaluated against established stack]" >> $AUDIT_REPORT
echo "" >> $AUDIT_REPORT
echo "### Recommendations for Technical Debt Prevention:" >> $AUDIT_REPORT
echo "- Follow established patterns identified in story analysis" >> $AUDIT_REPORT
echo "- Maintain consistency with previous architectural decisions" >> $AUDIT_REPORT
echo "- Ensure new code follows existing code quality standards" >> $AUDIT_REPORT
echo "- Verify integration approaches match established patterns" >> $AUDIT_REPORT
# Store results for comprehensive scoring
PATTERN_CONSISTENCY_ISSUES=0
ARCHITECTURAL_VIOLATIONS=0
```
## Phase 6: Manual Validation Checklist
### End-to-End Integration Proof
**Prove the entire data path works with real applications:**
- [ ] **Real Application Test**: Code tested with actual target application
- [ ] **Real Data Flow**: Actual data flows through all components (not test data)
- [ ] **Real Environment**: Testing performed in target environment (not dev simulation)
- [ ] **Real Performance**: Measurements taken on actual target hardware
- [ ] **Real Error Conditions**: Tested with actual failure scenarios
**Evidence Required:**
- [ ] Screenshot/log of real application running with your changes
- [ ] Performance measurements from actual hardware
- [ ] Error logs from real failure conditions
### Dependency Reality Check
**Ensure all dependencies are real, not mocked:**
- [ ] **No Critical Mocks**: Zero mock implementations in production code path
- [ ] **Real External Services**: All external dependencies use real implementations
- [ ] **Real Hardware Access**: Operations use real hardware
- [ ] **Real IPC**: Inter-process communication uses real protocols, not simulation
**Mock Inventory:**
- [ ] List all mocks/simulations remaining: ________________
- [ ] Each mock has replacement timeline: ________________
- [ ] Critical path has zero mocks: ________________
### Performance Reality Validation
**All performance claims must be backed by real measurements:**
- [ ] **Measured Throughput**: Actual data throughput measured under load
- [ ] **Cross-Platform Parity**: Performance verified on both Windows/Linux
- [ ] **Real Timing**: Stopwatch measurements, not estimates
- [ ] **Memory Usage**: Real memory tracking, not calculated estimates
**Performance Evidence:**
- [ ] Benchmark results attached to story
- [ ] Performance within specified bounds
- [ ] No performance regressions detected
### Data Flow Reality Check
**Verify real data movement through system:**
- [ ] **Database Operations**: Real connections tested
- [ ] **File Operations**: Real files read/written
- [ ] **Network Operations**: Real endpoints contacted
- [ ] **External APIs**: Real API calls made
### Error Handling Reality
**Exception handling must be proven, not assumed:**
- [ ] **Real Exception Types**: Actual exceptions caught and handled
- [ ] **Retry Logic**: Real retry mechanisms tested
- [ ] **Circuit Breaker**: Real failure detection verified
- [ ] **Recovery**: Actual recovery times measured
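The checklist items above use standard markdown task boxes, so a small sketch can count unchecked items and let the audit refuse to pass while manual validation is incomplete. The checklist file name below is an assumption; in practice the checklist would be saved alongside the audit report:

```bash
#!/bin/bash
# Sketch: gate on manual-checklist completeness by counting unchecked
# "- [ ]" markdown boxes in a saved checklist file.
count_unchecked() {
  # grep -c prints the count; "|| true" guards a missing file without
  # appending a second value to the captured output.
  grep -c '^[[:space:]]*- \[ \]' "$1" 2>/dev/null || true
}

# Assumed example checklist file
cat > manual-checklist.md <<'EOF'
- [x] Real Application Test: evidence attached
- [ ] Real Data Flow
- [ ] Real Environment
EOF

remaining=$(count_unchecked manual-checklist.md)
echo "Unchecked checklist items: $remaining"
```

A nonzero count could then feed the enforcement gates in Phase 8 as an automatic blocker.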
## Phase 7: Comprehensive Reality Scoring with Regression Prevention
### Calculate Comprehensive Reality Score
```bash
echo "=== COMPREHENSIVE REALITY SCORING WITH REGRESSION PREVENTION ===" | tee -a $AUDIT_REPORT
# Initialize component scores
SIMULATION_SCORE=100
REGRESSION_PREVENTION_SCORE=100
TECHNICAL_DEBT_SCORE=100
echo "## Component Score Calculation" >> $AUDIT_REPORT
# Calculate Simulation Reality Score
echo "### Simulation Pattern Scoring:" >> $AUDIT_REPORT
SIMULATION_SCORE=$((SIMULATION_SCORE - (RANDOM_COUNT * 20)))
SIMULATION_SCORE=$((SIMULATION_SCORE - (TASK_MOCK_COUNT * 15)))
SIMULATION_SCORE=$((SIMULATION_SCORE - (NOT_IMPL_COUNT * 30)))
SIMULATION_SCORE=$((SIMULATION_SCORE - (TODO_COUNT * 5)))
SIMULATION_SCORE=$((SIMULATION_SCORE - (TOTAL_SIM_COUNT * 25)))
# Deduct for build/runtime failures
if [ $BUILD_EXIT_CODE -ne 0 ]; then
SIMULATION_SCORE=$((SIMULATION_SCORE - 50))
fi
if [ $ERROR_COUNT -gt 0 ]; then
SIMULATION_SCORE=$((SIMULATION_SCORE - (ERROR_COUNT * 10)))
fi
if [ $RUNTIME_EXIT_CODE -ne 0 ] && [ $RUNTIME_EXIT_CODE -ne 124 ]; then
SIMULATION_SCORE=$((SIMULATION_SCORE - 30))
fi
# Ensure simulation score doesn't go below 0
if [ $SIMULATION_SCORE -lt 0 ]; then
SIMULATION_SCORE=0
fi
echo "**Simulation Reality Score: $SIMULATION_SCORE/100**" >> $AUDIT_REPORT
# Calculate Regression Prevention Score
echo "### Regression Prevention Scoring:" >> $AUDIT_REPORT
# Deduct for regression risks (scores set in previous phases)
REGRESSION_PREVENTION_SCORE=${REGRESSION_RISK_SCORE:-100}
PATTERN_COMPLIANCE_DEDUCTION=$((PATTERN_CONSISTENCY_ISSUES * 15))
ARCHITECTURAL_DEDUCTION=$((ARCHITECTURAL_VIOLATIONS * 20))
REGRESSION_PREVENTION_SCORE=$((REGRESSION_PREVENTION_SCORE - PATTERN_COMPLIANCE_DEDUCTION))
REGRESSION_PREVENTION_SCORE=$((REGRESSION_PREVENTION_SCORE - ARCHITECTURAL_DEDUCTION))
# Ensure regression score doesn't go below 0
if [ $REGRESSION_PREVENTION_SCORE -lt 0 ]; then
REGRESSION_PREVENTION_SCORE=0
fi
echo "**Regression Prevention Score: $REGRESSION_PREVENTION_SCORE/100**" >> $AUDIT_REPORT
# Calculate Technical Debt Score
echo "### Technical Debt Impact Scoring:" >> $AUDIT_REPORT
TECHNICAL_DEBT_SCORE=${TECHNICAL_DEBT_SCORE:-100}
# Factor in architectural consistency (default to 100 if not set by earlier phases)
ARCHITECTURAL_CONSISTENCY_SCORE=${ARCHITECTURAL_CONSISTENCY_SCORE:-100}
if [ $ARCHITECTURAL_CONSISTENCY_SCORE -lt 100 ]; then
  CONSISTENCY_DEDUCTION=$((100 - ARCHITECTURAL_CONSISTENCY_SCORE))
  TECHNICAL_DEBT_SCORE=$((TECHNICAL_DEBT_SCORE - CONSISTENCY_DEDUCTION))
fi
# Ensure technical debt score doesn't go below 0
if [ $TECHNICAL_DEBT_SCORE -lt 0 ]; then
TECHNICAL_DEBT_SCORE=0
fi
echo "**Technical Debt Prevention Score: $TECHNICAL_DEBT_SCORE/100**" >> $AUDIT_REPORT
# Calculate Composite Reality Score with Weighted Components
echo "### Composite Scoring:" >> $AUDIT_REPORT
echo "Score component weights:" >> $AUDIT_REPORT
echo "- Simulation Reality: 40%" >> $AUDIT_REPORT
echo "- Regression Prevention: 35%" >> $AUDIT_REPORT
echo "- Technical Debt Prevention: 25%" >> $AUDIT_REPORT
COMPOSITE_REALITY_SCORE=$(( (SIMULATION_SCORE * 40 + REGRESSION_PREVENTION_SCORE * 35 + TECHNICAL_DEBT_SCORE * 25) / 100 ))
echo "**Composite Reality Score: $COMPOSITE_REALITY_SCORE/100**" >> $AUDIT_REPORT
# Set final score for compatibility with existing workflows
REALITY_SCORE=$COMPOSITE_REALITY_SCORE
echo "" >> $AUDIT_REPORT
echo "## Reality Scoring Matrix" >> $AUDIT_REPORT
echo "| Pattern Found | Instance Count | Score Impact | Points Deducted |" >> $AUDIT_REPORT
echo "|---------------|----------------|--------------|-----------------|" >> $AUDIT_REPORT
echo "| Random Data Generation | $RANDOM_COUNT | High | $((RANDOM_COUNT * 20)) |" >> $AUDIT_REPORT
echo "| Mock Async Operations | $TASK_MOCK_COUNT | High | $((TASK_MOCK_COUNT * 15)) |" >> $AUDIT_REPORT
echo "| NotImplementedException | $NOT_IMPL_COUNT | Critical | $((NOT_IMPL_COUNT * 30)) |" >> $AUDIT_REPORT
echo "| TODO Comments | $TODO_COUNT | Medium | $((TODO_COUNT * 5)) |" >> $AUDIT_REPORT
echo "| Simulation Methods | $TOTAL_SIM_COUNT | High | $((TOTAL_SIM_COUNT * 25)) |" >> $AUDIT_REPORT
echo "| Build Failures | $BUILD_EXIT_CODE | Critical | $([ $BUILD_EXIT_CODE -ne 0 ] && echo 50 || echo 0) |" >> $AUDIT_REPORT
echo "| Compilation Errors | $ERROR_COUNT | High | $((ERROR_COUNT * 10)) |" >> $AUDIT_REPORT
echo "| Runtime Failures | $([ $RUNTIME_EXIT_CODE -ne 0 ] && [ $RUNTIME_EXIT_CODE -ne 124 ] && echo 1 || echo 0) | High | $([ $RUNTIME_EXIT_CODE -ne 0 ] && [ $RUNTIME_EXIT_CODE -ne 124 ] && echo 30 || echo 0) |" >> $AUDIT_REPORT
echo "" >> $AUDIT_REPORT
echo "**Total Reality Score: $REALITY_SCORE / 100**" >> $AUDIT_REPORT
echo "Final Reality Score: $REALITY_SCORE / 100" | tee -a $AUDIT_REPORT
```
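As a sanity check on the weighting, the arithmetic for a hypothetical audit looks like this. The deduction rates and 40/35/25 weights are the ones defined above; only the pattern counts are illustrative, not taken from a real run:

```bash
# Hypothetical counts: 1 random-data hit, 2 mocked tasks, 4 TODOs,
# clean build and runtime, no regression or technical-debt deductions
RANDOM_COUNT=1; TASK_MOCK_COUNT=2; NOT_IMPL_COUNT=0; TODO_COUNT=4; TOTAL_SIM_COUNT=0

SIMULATION_SCORE=$((100 - RANDOM_COUNT*20 - TASK_MOCK_COUNT*15 \
  - NOT_IMPL_COUNT*30 - TODO_COUNT*5 - TOTAL_SIM_COUNT*25))
REGRESSION_PREVENTION_SCORE=100
TECHNICAL_DEBT_SCORE=100

COMPOSITE=$(( (SIMULATION_SCORE*40 + REGRESSION_PREVENTION_SCORE*35 \
  + TECHNICAL_DEBT_SCORE*25) / 100 ))
echo "Simulation: $SIMULATION_SCORE, Composite: $COMPOSITE"
# -> Simulation: 30, Composite: 72 (grade C, requires minor remediation)
```

Note how the weighting softens a poor simulation score when the other two components are clean: 30/100 on simulation alone still yields a composite of 72.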
### Score Interpretation and Enforcement
```bash
echo "" >> $AUDIT_REPORT
echo "## Reality Score Interpretation" >> $AUDIT_REPORT
if [ $REALITY_SCORE -ge 90 ]; then
GRADE="A"
STATUS="EXCELLENT"
ACTION="APPROVED FOR COMPLETION"
elif [ $REALITY_SCORE -ge 80 ]; then
GRADE="B"
STATUS="GOOD"
ACTION="APPROVED FOR COMPLETION"
elif [ $REALITY_SCORE -ge 70 ]; then
GRADE="C"
STATUS="ACCEPTABLE"
ACTION="REQUIRES MINOR REMEDIATION"
elif [ $REALITY_SCORE -ge 60 ]; then
GRADE="D"
STATUS="POOR"
ACTION="REQUIRES MAJOR REMEDIATION"
else
GRADE="F"
STATUS="UNACCEPTABLE"
ACTION="BLOCKED - RETURN TO DEVELOPMENT"
fi
echo "- **Grade: $GRADE ($REALITY_SCORE/100)**" >> $AUDIT_REPORT
echo "- **Status: $STATUS**" >> $AUDIT_REPORT
echo "- **Action: $ACTION**" >> $AUDIT_REPORT
echo "Reality Assessment: $GRADE ($STATUS) - $ACTION" | tee -a $AUDIT_REPORT
```
## Phase 8: Enforcement Gates
### Enhanced Quality Gates (All Must Pass)
- [ ] **Build Success**: Build command returns 0 errors
- [ ] **Runtime Success**: Application starts and responds to requests
- [ ] **Data Flow Success**: Real data moves through system without simulation
- [ ] **Integration Success**: External dependencies accessible and functional
- [ ] **Performance Success**: Real measurements obtained, not estimates
- [ ] **Contract Compliance**: Zero architectural violations
- [ ] **Simulation Score**: Simulation reality score ≥ 80 (B grade or better)
- [ ] **Regression Prevention**: Regression prevention score ≥ 80 (B grade or better)
- [ ] **Technical Debt Prevention**: Technical debt score ≥ 70 (C grade or better)
- [ ] **Composite Reality Score**: Overall score ≥ 80 (B grade or better)
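These gates can be checked mechanically from the Phase 7 scores. The snippet below is a sketch mirroring the thresholds above; the variable names follow this document, and the zero defaults are assumptions for when an earlier phase was skipped:

```bash
SIMULATION_SCORE=${SIMULATION_SCORE:-0}
REGRESSION_PREVENTION_SCORE=${REGRESSION_PREVENTION_SCORE:-0}
TECHNICAL_DEBT_SCORE=${TECHNICAL_DEBT_SCORE:-0}
COMPOSITE_REALITY_SCORE=${COMPOSITE_REALITY_SCORE:-0}

GATES_PASSED=true
[ "$SIMULATION_SCORE" -ge 80 ]            || { echo "FAIL: simulation score < 80"; GATES_PASSED=false; }
[ "$REGRESSION_PREVENTION_SCORE" -ge 80 ] || { echo "FAIL: regression prevention < 80"; GATES_PASSED=false; }
[ "$TECHNICAL_DEBT_SCORE" -ge 70 ]        || { echo "FAIL: technical debt < 70"; GATES_PASSED=false; }
[ "$COMPOSITE_REALITY_SCORE" -ge 80 ]     || { echo "FAIL: composite score < 80"; GATES_PASSED=false; }
echo "All score gates passed: $GATES_PASSED"
```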
## Phase 9: Regression-Safe Automated Remediation
```bash
echo "=== REMEDIATION DECISION ===" | tee -a $AUDIT_REPORT
# Check if remediation is needed
REMEDIATION_NEEDED=false
if [ $REALITY_SCORE -lt 80 ]; then
echo "✋ Reality score below threshold: $REALITY_SCORE/100" | tee -a $AUDIT_REPORT
REMEDIATION_NEEDED=true
fi
if [ $BUILD_EXIT_CODE -ne 0 ] || [ $ERROR_COUNT -gt 0 ]; then
echo "✋ Build failures detected: Exit code $BUILD_EXIT_CODE, Errors: $ERROR_COUNT" | tee -a $AUDIT_REPORT
REMEDIATION_NEEDED=true
fi
if [ $RUNTIME_EXIT_CODE -ne 0 ] && [ $RUNTIME_EXIT_CODE -ne 124 ]; then
echo "✋ Runtime failures detected: Exit code $RUNTIME_EXIT_CODE" | tee -a $AUDIT_REPORT
REMEDIATION_NEEDED=true
fi
CRITICAL_PATTERNS=$((NOT_IMPL_COUNT + RANDOM_COUNT))
if [ $CRITICAL_PATTERNS -gt 3 ]; then
echo "✋ Critical simulation patterns detected: $CRITICAL_PATTERNS instances" | tee -a $AUDIT_REPORT
REMEDIATION_NEEDED=true
fi
if [ "$REMEDIATION_NEEDED" == "true" ]; then
echo "" | tee -a $AUDIT_REPORT
echo "🚨 **REMEDIATION REQUIRED** - Auto-generating remediation story..." | tee -a $AUDIT_REPORT
echo "" | tee -a $AUDIT_REPORT
# Set variables for create-remediation-story.md
export REALITY_SCORE
export BUILD_EXIT_CODE
export ERROR_COUNT
export RUNTIME_EXIT_CODE
export RANDOM_COUNT
export TASK_MOCK_COUNT
export NOT_IMPL_COUNT
export TODO_COUNT
export TOTAL_SIM_COUNT
echo "📝 **REMEDIATION STORY CREATION TRIGGERED**" | tee -a $AUDIT_REPORT
echo "👩‍💻 **NEXT ACTION:** Execute create-remediation-story.md" | tee -a $AUDIT_REPORT
echo "🔄 **PROCESS:** Developer implements fixes → QA re-audits → Repeat until score ≥ 80" | tee -a $AUDIT_REPORT
echo "🎯 **TARGET:** Achieve 80+ reality score with clean build/runtime" | tee -a $AUDIT_REPORT
else
echo "" | tee -a $AUDIT_REPORT
echo "✅ **NO REMEDIATION NEEDED** - Implementation meets quality standards" | tee -a $AUDIT_REPORT
echo "📊 Reality Score: $REALITY_SCORE/100" | tee -a $AUDIT_REPORT
echo "🏗️ Build Status: $([ $BUILD_EXIT_CODE -eq 0 ] && [ $ERROR_COUNT -eq 0 ] && echo "✅ SUCCESS" || echo "❌ FAILED")" | tee -a $AUDIT_REPORT
echo "⚡ Runtime Status: $([ $RUNTIME_EXIT_CODE -eq 0 ] || [ $RUNTIME_EXIT_CODE -eq 124 ] && echo "✅ SUCCESS" || echo "❌ FAILED")" | tee -a $AUDIT_REPORT
fi
echo "" | tee -a $AUDIT_REPORT
echo "=== AUDIT COMPLETE ===" | tee -a $AUDIT_REPORT
echo "Report location: $AUDIT_REPORT" | tee -a $AUDIT_REPORT
```
## Definition of "Actually Complete"
### Quality Gates (All Must Pass)
- [ ] **Build Success**: Build command returns 0 errors
- [ ] **Runtime Success**: Application starts and responds to requests
- [ ] **Data Flow Success**: Real data moves through system without simulation
- [ ] **Integration Success**: External dependencies accessible and functional
- [ ] **Performance Success**: Real measurements obtained, not estimates
- [ ] **Contract Compliance**: Zero architectural violations
- [ ] **Simulation Score**: Reality score ≥ 80 (B grade or better)
### Final Assessment Options
- [ ] **APPROVED FOR COMPLETION:** All criteria met, reality score ≥ 80
- [ ] **REQUIRES REMEDIATION:** Simulation patterns found, reality score < 80
- [ ] **BLOCKED:** Build failures or critical simulation patterns prevent completion
### Variables Available for Integration
The following variables are exported for use by other tools:
```bash
# Core scoring variables
REALITY_SCORE=[calculated score 0-100]
BUILD_EXIT_CODE=[build command exit code]
ERROR_COUNT=[compilation error count]
RUNTIME_EXIT_CODE=[runtime command exit code]
# Pattern detection counts
RANDOM_COUNT=[Random.NextDouble instances]
TASK_MOCK_COUNT=[Task.FromResult instances]
NOT_IMPL_COUNT=[NotImplementedException instances]
TODO_COUNT=[TODO comment count]
TOTAL_SIM_COUNT=[total simulation method count]
# Project context
PROJECT_NAME=[detected project name]
PROJECT_SRC_PATH=[detected source path]
PROJECT_FILE_EXT=[detected file extensions]
BUILD_CMD=[detected build command]
RUN_CMD=[detected run command]
```
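A downstream consumer might use these exported variables as follows. The CI-gate framing is illustrative, not a prescribed integration:

```bash
# Hypothetical CI step: block the pipeline when the exported reality
# score is below the completion threshold
REALITY_SCORE=${REALITY_SCORE:-0}
if [ "$REALITY_SCORE" -ge 80 ]; then
  echo "Gate passed: reality score $REALITY_SCORE/100"
  GATE_EXIT=0
else
  echo "Gate failed: reality score $REALITY_SCORE/100 (threshold 80)"
  GATE_EXIT=1
fi
# A real CI step would `exit "$GATE_EXIT"`; kept as a variable here for illustration
echo "would exit with status $GATE_EXIT"
```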
---
## Summary
This comprehensive reality audit combines automated simulation detection, manual validation, objective scoring, and enforcement gates into a single cohesive framework. It prevents "bull in a china shop" completion claims by requiring evidence-based assessment and automatically triggering remediation when quality standards are not met.
**Key Features:**
- **Universal project detection** across multiple languages/frameworks
- **Automated simulation pattern scanning** with 6 distinct pattern types
- **Objective reality scoring** with clear grade boundaries (A-F)
- **Manual validation checklist** for human verification
- **Enforcement gates** preventing completion of poor-quality implementations
- **Automatic remediation triggering** when issues are detected
- **Comprehensive evidence documentation** for audit trails
**Integration Points:**
- Exports standardized variables for other BMAD tools
- Triggers create-remediation-story.md when needed
- Provides audit reports for documentation
- Supports all major project types and build systems
@@ -76,14 +76,14 @@ persona:
   - Numbered Options Protocol - Always use numbered lists for selections
 commands:
   - help: Show numbered list of the following commands to allow selection
-  - create-doc {template}: execute task create-doc (no template = ONLY show available templates listed under dependencies/templates below)
+  - create-project-brief: use task create-doc with project-brief-tmpl.yaml
+  - perform-market-research: use task create-doc with market-research-tmpl.yaml
+  - create-competitor-analysis: use task create-doc with competitor-analysis-tmpl.yaml
   - yolo: Toggle Yolo Mode
-  - doc-out: Output full document to current destination file
+  - doc-out: Output full document in progress to current destination file
-  - execute-checklist {checklist}: Run task execute-checklist (default->architect-checklist)
-  - research-prompt {topic}: execute task create-deep-research-prompt for architectural decisions
+  - research-prompt {topic}: execute task create-deep-research-prompt.md
-  - brainstorm {topic}: Facilitate structured brainstorming session
+  - brainstorm {topic}: Facilitate structured brainstorming session (run task facilitate-brainstorming-session.md with template brainstorming-output-tmpl.yaml)
   - elicit: run the task advanced-elicitation
+  - document-project: Analyze and document existing project structure comprehensively
   - exit: Say goodbye as the Business Analyst, and then abandon inhabiting this persona
 dependencies:
   tasks:
@@ -76,11 +76,16 @@ persona:
   - Living Architecture - Design for change and adaptation
 commands:
   - help: Show numbered list of the following commands to allow selection
-  - create-doc {template}: execute task create-doc (no template = ONLY show available templates listed under dependencies/templates below)
-  - yolo: Toggle Yolo Mode
+  - create-full-stack-architecture: use create-doc with fullstack-architecture-tmpl.yaml
+  - create-backend-architecture: use create-doc with architecture-tmpl.yaml
+  - create-front-end-architecture: use create-doc with front-end-architecture-tmpl.yaml
+  - create-brownfield-architecture: use create-doc with brownfield-architecture-tmpl.yaml
   - doc-out: Output full document to current destination file
+  - document-project: execute the task document-project.md
   - execute-checklist {checklist}: Run task execute-checklist (default->architect-checklist)
-  - research {topic}: execute task create-deep-research-prompt for architectural decisions
+  - research {topic}: execute task create-deep-research-prompt
+  - shard-prd: run the task shard-doc.md for the provided architecture.md (ask if not found)
+  - yolo: Toggle Yolo Mode
   - exit: Say goodbye as the Architect, and then abandon inhabiting this persona
 dependencies:
   tasks:
@@ -70,10 +70,11 @@ commands:
   - kb: Toggle KB mode off (default) or on, when on will load and reference the .bmad-core/data/bmad-kb.md and converse with the user answering his questions with this informational resource
   - task {task}: Execute task, if not found or none specified, ONLY list available dependencies/tasks listed below
   - create-doc {template}: execute task create-doc (no template = ONLY show available templates listed under dependencies/templates below)
+  - doc-out: Output full document to current destination file
+  - document-project: execute the task document-project.md
   - execute-checklist {checklist}: Run task execute-checklist (no checklist = ONLY show available checklists listed under dependencies/checklist below)
   - shard-doc {document} {destination}: run the task shard-doc against the optionally provided document to the specified destination
   - yolo: Toggle Yolo Mode
-  - doc-out: Output full document to current destination file
   - exit: Exit (confirm)
 dependencies:
   tasks:
dist/agents/dev.txt (vendored, 1484 lines changed): diff suppressed because it is too large
dist/agents/pm.txt (vendored, 9 lines changed):
@@ -72,9 +72,14 @@ persona:
   - Strategic thinking & outcome-oriented
 commands:
   - help: Show numbered list of the following commands to allow selection
-  - create-doc {template}: execute task create-doc for template provided, if no template then ONLY list dependencies.templates
-  - yolo: Toggle Yolo Mode
+  - create-prd: run task create-doc.md with template prd-tmpl.yaml
+  - create-brownfield-prd: run task create-doc.md with template brownfield-prd-tmpl.yaml
+  - create-epic: Create epic for brownfield projects (task brownfield-create-epic)
+  - create-story: Create user story from requirements (task brownfield-create-story)
   - doc-out: Output full document to current destination file
+  - shard-prd: run the task shard-doc.md for the provided prd.md (ask if not found)
+  - correct-course: execute the correct-course task
+  - yolo: Toggle Yolo Mode
   - exit: Exit (confirm)
 dependencies:
   tasks:
dist/agents/po.txt (vendored, 320 lines changed):
@@ -75,23 +75,20 @@ persona:
   - Documentation Ecosystem Integrity - Maintain consistency across all documents
 commands:
   - help: Show numbered list of the following commands to allow selection
-  - create-doc {template}: execute task create-doc (no template = ONLY show available templates listed under dependencies/templates below)
-  - execute-checklist {checklist}: Run task execute-checklist (default->po-master-checklist)
+  - execute-checklist-po: Run task execute-checklist (checklist po-master-checklist)
   - shard-doc {document} {destination}: run the task shard-doc against the optionally provided document to the specified destination
   - correct-course: execute the correct-course task
   - create-epic: Create epic for brownfield projects (task brownfield-create-epic)
   - create-story: Create user story from requirements (task brownfield-create-story)
-  - yolo: Toggle Yolo Mode off on - on will skip doc section confirmations
   - doc-out: Output full document to current destination file
   - validate-story-draft {story}: run the task validate-next-story against the provided story file
+  - yolo: Toggle Yolo Mode off on - on will skip doc section confirmations
   - exit: Exit (confirm)
 dependencies:
   tasks:
     - execute-checklist.md
     - shard-doc.md
     - correct-course.md
-    - brownfield-create-epic.md
-    - brownfield-create-story.md
     - validate-next-story.md
   templates:
     - story-tmpl.yaml
@@ -460,319 +457,6 @@ Document sharded successfully:
- **Implicit:** An annotated change-checklist (or the record of its completion) reflecting the discussions, findings, and decisions made during the process.
==================== END: .bmad-core/tasks/correct-course.md ====================
==================== START: .bmad-core/tasks/brownfield-create-epic.md ====================
# Create Brownfield Epic Task
## Purpose
Create a single epic for smaller brownfield enhancements that don't require the full PRD and Architecture documentation process. This task is for isolated features or modifications that can be completed within a focused scope.
## When to Use This Task
**Use this task when:**
- The enhancement can be completed in 1-3 stories
- No significant architectural changes are required
- The enhancement follows existing project patterns
- Integration complexity is minimal
- Risk to existing system is low
**Use the full brownfield PRD/Architecture process when:**
- The enhancement requires multiple coordinated stories
- Architectural planning is needed
- Significant integration work is required
- Risk assessment and mitigation planning is necessary
## Instructions
### 1. Project Analysis (Required)
Before creating the epic, gather essential information about the existing project:
**Existing Project Context:**
- [ ] Project purpose and current functionality understood
- [ ] Existing technology stack identified
- [ ] Current architecture patterns noted
- [ ] Integration points with existing system identified
**Enhancement Scope:**
- [ ] Enhancement clearly defined and scoped
- [ ] Impact on existing functionality assessed
- [ ] Required integration points identified
- [ ] Success criteria established
### 2. Epic Creation
Create a focused epic following this structure:
#### Epic Title
{{Enhancement Name}} - Brownfield Enhancement
#### Epic Goal
{{1-2 sentences describing what the epic will accomplish and why it adds value}}
#### Epic Description
**Existing System Context:**
- Current relevant functionality: {{brief description}}
- Technology stack: {{relevant existing technologies}}
- Integration points: {{where new work connects to existing system}}
**Enhancement Details:**
- What's being added/changed: {{clear description}}
- How it integrates: {{integration approach}}
- Success criteria: {{measurable outcomes}}
#### Stories
List 1-3 focused stories that complete the epic:
1. **Story 1:** {{Story title and brief description}}
2. **Story 2:** {{Story title and brief description}}
3. **Story 3:** {{Story title and brief description}}
#### Compatibility Requirements
- [ ] Existing APIs remain unchanged
- [ ] Database schema changes are backward compatible
- [ ] UI changes follow existing patterns
- [ ] Performance impact is minimal
#### Risk Mitigation
- **Primary Risk:** {{main risk to existing system}}
- **Mitigation:** {{how risk will be addressed}}
- **Rollback Plan:** {{how to undo changes if needed}}
#### Definition of Done
- [ ] All stories completed with acceptance criteria met
- [ ] Existing functionality verified through testing
- [ ] Integration points working correctly
- [ ] Documentation updated appropriately
- [ ] No regression in existing features
### 3. Validation Checklist
Before finalizing the epic, ensure:
**Scope Validation:**
- [ ] Epic can be completed in 1-3 stories maximum
- [ ] No architectural documentation is required
- [ ] Enhancement follows existing patterns
- [ ] Integration complexity is manageable
**Risk Assessment:**
- [ ] Risk to existing system is low
- [ ] Rollback plan is feasible
- [ ] Testing approach covers existing functionality
- [ ] Team has sufficient knowledge of integration points
**Completeness Check:**
- [ ] Epic goal is clear and achievable
- [ ] Stories are properly scoped
- [ ] Success criteria are measurable
- [ ] Dependencies are identified
### 4. Handoff to Story Manager
Once the epic is validated, provide this handoff to the Story Manager:
---
**Story Manager Handoff:**
"Please develop detailed user stories for this brownfield epic. Key considerations:
- This is an enhancement to an existing system running {{technology stack}}
- Integration points: {{list key integration points}}
- Existing patterns to follow: {{relevant existing patterns}}
- Critical compatibility requirements: {{key requirements}}
- Each story must include verification that existing functionality remains intact
The epic should maintain system integrity while delivering {{epic goal}}."
---
## Success Criteria
The epic creation is successful when:
1. Enhancement scope is clearly defined and appropriately sized
2. Integration approach respects existing system architecture
3. Risk to existing functionality is minimized
4. Stories are logically sequenced for safe implementation
5. Compatibility requirements are clearly specified
6. Rollback plan is feasible and documented
## Important Notes
- This task is specifically for SMALL brownfield enhancements
- If the scope grows beyond 3 stories, consider the full brownfield PRD process
- Always prioritize existing system integrity over new functionality
- When in doubt about scope or complexity, escalate to full brownfield planning
==================== END: .bmad-core/tasks/brownfield-create-epic.md ====================
==================== START: .bmad-core/tasks/brownfield-create-story.md ====================
# Create Brownfield Story Task
## Purpose
Create a single user story for very small brownfield enhancements that can be completed in one focused development session. This task is for minimal additions or bug fixes that require existing system integration awareness.
## When to Use This Task
**Use this task when:**
- The enhancement can be completed in a single story
- No new architecture or significant design is required
- The change follows existing patterns exactly
- Integration is straightforward with minimal risk
- Change is isolated with clear boundaries
**Use brownfield-create-epic when:**
- The enhancement requires 2-3 coordinated stories
- Some design work is needed
- Multiple integration points are involved
**Use the full brownfield PRD/Architecture process when:**
- The enhancement requires multiple coordinated stories
- Architectural planning is needed
- Significant integration work is required
## Instructions
### 1. Quick Project Assessment
Gather minimal but essential context about the existing project:
**Current System Context:**
- [ ] Relevant existing functionality identified
- [ ] Technology stack for this area noted
- [ ] Integration point(s) clearly understood
- [ ] Existing patterns for similar work identified
**Change Scope:**
- [ ] Specific change clearly defined
- [ ] Impact boundaries identified
- [ ] Success criteria established
### 2. Story Creation
Create a single focused story following this structure:
#### Story Title
{{Specific Enhancement}} - Brownfield Addition
#### User Story
As a {{user type}},
I want {{specific action/capability}},
So that {{clear benefit/value}}.
#### Story Context
**Existing System Integration:**
- Integrates with: {{existing component/system}}
- Technology: {{relevant tech stack}}
- Follows pattern: {{existing pattern to follow}}
- Touch points: {{specific integration points}}
#### Acceptance Criteria
**Functional Requirements:**
1. {{Primary functional requirement}}
2. {{Secondary functional requirement (if any)}}
3. {{Integration requirement}}
**Integration Requirements:**
4. Existing {{relevant functionality}} continues to work unchanged
5. New functionality follows existing {{pattern}} pattern
6. Integration with {{system/component}} maintains current behavior
**Quality Requirements:**
7. Change is covered by appropriate tests
8. Documentation is updated if needed
9. No regression in existing functionality verified
#### Technical Notes
- **Integration Approach:** {{how it connects to existing system}}
- **Existing Pattern Reference:** {{link or description of pattern to follow}}
- **Key Constraints:** {{any important limitations or requirements}}
#### Definition of Done
- [ ] Functional requirements met
- [ ] Integration requirements verified
- [ ] Existing functionality regression tested
- [ ] Code follows existing patterns and standards
- [ ] Tests pass (existing and new)
- [ ] Documentation updated if applicable
### 3. Risk and Compatibility Check
**Minimal Risk Assessment:**
- **Primary Risk:** {{main risk to existing system}}
- **Mitigation:** {{simple mitigation approach}}
- **Rollback:** {{how to undo if needed}}
**Compatibility Verification:**
- [ ] No breaking changes to existing APIs
- [ ] Database changes (if any) are additive only
- [ ] UI changes follow existing design patterns
- [ ] Performance impact is negligible
### 4. Validation Checklist
Before finalizing the story, confirm:
**Scope Validation:**
- [ ] Story can be completed in one development session
- [ ] Integration approach is straightforward
- [ ] Follows existing patterns exactly
- [ ] No design or architecture work required
**Clarity Check:**
- [ ] Story requirements are unambiguous
- [ ] Integration points are clearly specified
- [ ] Success criteria are testable
- [ ] Rollback approach is simple
## Success Criteria
The story creation is successful when:
1. Enhancement is clearly defined and appropriately scoped for single session
2. Integration approach is straightforward and low-risk
3. Existing system patterns are identified and will be followed
4. Rollback plan is simple and feasible
5. Acceptance criteria include existing functionality verification
## Important Notes
- This task is for VERY SMALL brownfield changes only
- If complexity grows during analysis, escalate to brownfield-create-epic
- Always prioritize existing system integrity
- When in doubt about integration complexity, use brownfield-create-epic instead
- Stories should take no more than 4 hours of focused development work
==================== END: .bmad-core/tasks/brownfield-create-story.md ====================
==================== START: .bmad-core/tasks/validate-next-story.md ====================
# Validate Next Story Task
dist/agents/qa.txt (vendored, 1903 lines changed): diff suppressed because it is too large
dist/agents/sm.txt (vendored, 6 lines changed):
@@ -68,9 +68,9 @@ persona:
   - You are NOT allowed to implement stories or modify code EVER!
 commands:
   - help: Show numbered list of the following commands to allow selection
-  - draft: Execute task create-next-story
+  - draft: Execute task create-next-story.md
-  - correct-course: Execute task correct-course
+  - correct-course: Execute task correct-course.md
-  - checklist {checklist}: Show numbered list of checklists if not provided, execute task execute-checklist
+  - story-checklist: Execute task execute-checklist.md with checklist story-draft-checklist.md
   - exit: Say goodbye as the Scrum Master, and then abandon inhabiting this persona
 dependencies:
   tasks:
@@ -73,15 +73,12 @@ persona:
   - You can craft effective prompts for AI UI generation tools like v0, or Lovable.
 commands:
   - help: Show numbered list of the following commands to allow selection
-  - create-doc {template}: execute task create-doc (no template = ONLY show available templates listed under dependencies/templates below)
+  - create-front-end-spec: run task create-doc.md with template front-end-spec-tmpl.yaml
-  - generate-ui-prompt: Create AI frontend generation prompt
+  - generate-ui-prompt: Run task generate-ai-frontend-prompt.md
-  - research {topic}: Execute create-deep-research-prompt task to generate a prompt to init UX deep research
-  - execute-checklist {checklist}: Run task execute-checklist (default->po-master-checklist)
   - exit: Say goodbye as the UX Expert, and then abandon inhabiting this persona
 dependencies:
   tasks:
     - generate-ai-frontend-prompt.md
-    - create-deep-research-prompt.md
     - create-doc.md
     - execute-checklist.md
   templates:
@@ -145,298 +142,6 @@ You will now synthesize the inputs and the above principles into a final, compre
- <important_note>Conclude by reminding the user that all AI-generated code will require careful human review, testing, and refinement to be considered production-ready.</important_note>
==================== END: .bmad-core/tasks/generate-ai-frontend-prompt.md ====================
==================== START: .bmad-core/tasks/create-deep-research-prompt.md ====================
# Create Deep Research Prompt Task
This task helps create comprehensive research prompts for various types of deep analysis. It can process inputs from brainstorming sessions, project briefs, market research, or specific research questions to generate targeted prompts for deeper investigation.
## Purpose
Generate well-structured research prompts that:
- Define clear research objectives and scope
- Specify appropriate research methodologies
- Outline expected deliverables and formats
- Guide systematic investigation of complex topics
- Ensure actionable insights are captured
## Research Type Selection
CRITICAL: First, help the user select the most appropriate research focus based on their needs and any input documents they've provided.
### 1. Research Focus Options
Present these numbered options to the user:
1. **Product Validation Research**
- Validate product hypotheses and market fit
- Test assumptions about user needs and solutions
- Assess technical and business feasibility
- Identify risks and mitigation strategies
2. **Market Opportunity Research**
- Analyze market size and growth potential
- Identify market segments and dynamics
- Assess market entry strategies
- Evaluate timing and market readiness
3. **User & Customer Research**
- Deep dive into user personas and behaviors
- Understand jobs-to-be-done and pain points
- Map customer journeys and touchpoints
- Analyze willingness to pay and value perception
4. **Competitive Intelligence Research**
- Detailed competitor analysis and positioning
- Feature and capability comparisons
- Business model and strategy analysis
- Identify competitive advantages and gaps
5. **Technology & Innovation Research**
- Assess technology trends and possibilities
- Evaluate technical approaches and architectures
- Identify emerging technologies and disruptions
- Analyze build vs. buy vs. partner options
6. **Industry & Ecosystem Research**
- Map industry value chains and dynamics
- Identify key players and relationships
- Analyze regulatory and compliance factors
- Understand partnership opportunities
7. **Strategic Options Research**
- Evaluate different strategic directions
- Assess business model alternatives
- Analyze go-to-market strategies
- Consider expansion and scaling paths
8. **Risk & Feasibility Research**
- Identify and assess various risk factors
- Evaluate implementation challenges
- Analyze resource requirements
- Consider regulatory and legal implications
9. **Custom Research Focus**
- User-defined research objectives
- Specialized domain investigation
- Cross-functional research needs
### 2. Input Processing
**If Project Brief provided:**
- Extract key product concepts and goals
- Identify target users and use cases
- Note technical constraints and preferences
- Highlight uncertainties and assumptions
**If Brainstorming Results provided:**
- Synthesize main ideas and themes
- Identify areas needing validation
- Extract hypotheses to test
- Note creative directions to explore
**If Market Research provided:**
- Build on identified opportunities
- Deepen specific market insights
- Validate initial findings
- Explore adjacent possibilities
**If Starting Fresh:**
- Gather essential context through questions
- Define the problem space
- Clarify research objectives
- Establish success criteria
## Process
### 3. Research Prompt Structure
CRITICAL: collaboratively develop a comprehensive research prompt with these components.
#### A. Research Objectives
CRITICAL: collaborate with the user to articulate clear, specific objectives for the research.
- Primary research goal and purpose
- Key decisions the research will inform
- Success criteria for the research
- Constraints and boundaries
#### B. Research Questions
CRITICAL: collaborate with the user to develop specific, actionable research questions organized by theme.
**Core Questions:**
- Central questions that must be answered
- Priority ranking of questions
- Dependencies between questions
**Supporting Questions:**
- Additional context-building questions
- Nice-to-have insights
- Future-looking considerations
#### C. Research Methodology
**Data Collection Methods:**
- Secondary research sources
- Primary research approaches (if applicable)
- Data quality requirements
- Source credibility criteria
**Analysis Frameworks:**
- Specific frameworks to apply
- Comparison criteria
- Evaluation methodologies
- Synthesis approaches
#### D. Output Requirements
**Format Specifications:**
- Executive summary requirements
- Detailed findings structure
- Visual/tabular presentations
- Supporting documentation
**Key Deliverables:**
- Must-have sections and insights
- Decision-support elements
- Action-oriented recommendations
- Risk and uncertainty documentation
### 4. Prompt Generation
**Research Prompt Template:**
```markdown
## Research Objective
[Clear statement of what this research aims to achieve]
## Background Context
[Relevant information from project brief, brainstorming, or other inputs]
## Research Questions
### Primary Questions (Must Answer)
1. [Specific, actionable question]
2. [Specific, actionable question]
...
### Secondary Questions (Nice to Have)
1. [Supporting question]
2. [Supporting question]
...
## Research Methodology
### Information Sources
- [Specific source types and priorities]
### Analysis Frameworks
- [Specific frameworks to apply]
### Data Requirements
- [Quality, recency, credibility needs]
## Expected Deliverables
### Executive Summary
- Key findings and insights
- Critical implications
- Recommended actions
### Detailed Analysis
[Specific sections needed based on research type]
### Supporting Materials
- Data tables
- Comparison matrices
- Source documentation
## Success Criteria
[How to evaluate if research achieved its objectives]
## Timeline and Priority
[If applicable, any time constraints or phasing]
```
### 5. Review and Refinement
1. **Present Complete Prompt**
- Show the full research prompt
- Explain key elements and rationale
- Highlight any assumptions made
2. **Gather Feedback**
- Are the objectives clear and correct?
- Do the questions address all concerns?
- Is the scope appropriate?
- Are output requirements sufficient?
3. **Refine as Needed**
- Incorporate user feedback
- Adjust scope or focus
- Add missing elements
- Clarify ambiguities
### 6. Next Steps Guidance
**Execution Options:**
1. **Use with AI Research Assistant**: Provide this prompt to an AI model with research capabilities
2. **Guide Human Research**: Use as a framework for manual research efforts
3. **Hybrid Approach**: Combine AI and human research using this structure
**Integration Points:**
- How findings will feed into next phases
- Which team members should review results
- How to validate findings
- When to revisit or expand research
## Important Notes
- The quality of the research prompt directly impacts the quality of insights gathered
- Be specific rather than general in research questions
- Consider both current state and future implications
- Balance comprehensiveness with focus
- Document assumptions and limitations clearly
- Plan for iterative refinement based on initial findings
==================== END: .bmad-core/tasks/create-deep-research-prompt.md ====================
 ==================== START: .bmad-core/tasks/create-doc.md ====================
 # Create Document from Template (YAML Driven)


@@ -92,14 +92,14 @@ persona:
 - Numbered Options Protocol - Always use numbered lists for selections
 commands:
 - help: Show numbered list of the following commands to allow selection
- - create-doc {template}: execute task create-doc (no template = ONLY show available templates listed under dependencies/templates below)
+ - create-project-brief: use task create-doc with project-brief-tmpl.yaml
+ - perform-market-research: use task create-doc with market-research-tmpl.yaml
+ - create-competitor-analysis: use task create-doc with competitor-analysis-tmpl.yaml
 - yolo: Toggle Yolo Mode
- - doc-out: Output full document to current destination file
+ - doc-out: Output full document in progress to current destination file
- - execute-checklist {checklist}: Run task execute-checklist (default->architect-checklist)
- - research-prompt {topic}: execute task create-deep-research-prompt for architectural decisions
+ - research-prompt {topic}: execute task create-deep-research-prompt.md
- - brainstorm {topic}: Facilitate structured brainstorming session
+ - brainstorm {topic}: Facilitate structured brainstorming session (run task facilitate-brainstorming-session.md with template brainstorming-output-tmpl.yaml)
 - elicit: run the task advanced-elicitation
- - document-project: Analyze and document existing project structure comprehensively
 - exit: Say goodbye as the Business Analyst, and then abandon inhabiting this persona
 dependencies:
 tasks:


@@ -92,14 +92,14 @@ persona:
 - Numbered Options Protocol - Always use numbered lists for selections
 commands:
 - help: Show numbered list of the following commands to allow selection
- - create-doc {template}: execute task create-doc (no template = ONLY show available templates listed under dependencies/templates below)
+ - create-project-brief: use task create-doc with project-brief-tmpl.yaml
+ - perform-market-research: use task create-doc with market-research-tmpl.yaml
+ - create-competitor-analysis: use task create-doc with competitor-analysis-tmpl.yaml
 - yolo: Toggle Yolo Mode
- - doc-out: Output full document to current destination file
+ - doc-out: Output full document in progress to current destination file
- - execute-checklist {checklist}: Run task execute-checklist (default->architect-checklist)
- - research-prompt {topic}: execute task create-deep-research-prompt for architectural decisions
+ - research-prompt {topic}: execute task create-deep-research-prompt.md
- - brainstorm {topic}: Facilitate structured brainstorming session
+ - brainstorm {topic}: Facilitate structured brainstorming session (run task facilitate-brainstorming-session.md with template brainstorming-output-tmpl.yaml)
 - elicit: run the task advanced-elicitation
- - document-project: Analyze and document existing project structure comprehensively
 - exit: Say goodbye as the Business Analyst, and then abandon inhabiting this persona
 dependencies:
 tasks:

dist/teams/team-all.txt (vendored, 1989 lines changed): diff suppressed because it is too large.


@@ -230,14 +230,14 @@ persona:
 - Numbered Options Protocol - Always use numbered lists for selections
 commands:
 - help: Show numbered list of the following commands to allow selection
- - create-doc {template}: execute task create-doc (no template = ONLY show available templates listed under dependencies/templates below)
+ - create-project-brief: use task create-doc with project-brief-tmpl.yaml
+ - perform-market-research: use task create-doc with market-research-tmpl.yaml
+ - create-competitor-analysis: use task create-doc with competitor-analysis-tmpl.yaml
 - yolo: Toggle Yolo Mode
- - doc-out: Output full document to current destination file
+ - doc-out: Output full document in progress to current destination file
- - execute-checklist {checklist}: Run task execute-checklist (default->architect-checklist)
- - research-prompt {topic}: execute task create-deep-research-prompt for architectural decisions
+ - research-prompt {topic}: execute task create-deep-research-prompt.md
- - brainstorm {topic}: Facilitate structured brainstorming session
+ - brainstorm {topic}: Facilitate structured brainstorming session (run task facilitate-brainstorming-session.md with template brainstorming-output-tmpl.yaml)
 - elicit: run the task advanced-elicitation
- - document-project: Analyze and document existing project structure comprehensively
 - exit: Say goodbye as the Business Analyst, and then abandon inhabiting this persona
 dependencies:
 tasks:
@@ -290,9 +290,14 @@ persona:
 - Strategic thinking & outcome-oriented
 commands:
 - help: Show numbered list of the following commands to allow selection
- - create-doc {template}: execute task create-doc for template provided, if no template then ONLY list dependencies.templates
+ - create-prd: run task create-doc.md with template prd-tmpl.yaml
+ - create-brownfield-prd: run task create-doc.md with template brownfield-prd-tmpl.yaml
+ - create-epic: Create epic for brownfield projects (task brownfield-create-epic)
+ - create-story: Create user story from requirements (task brownfield-create-story)
- - yolo: Toggle Yolo Mode
 - doc-out: Output full document to current destination file
+ - shard-prd: run the task shard-doc.md for the provided prd.md (ask if not found)
+ - correct-course: execute the correct-course task
+ - yolo: Toggle Yolo Mode
 - exit: Exit (confirm)
 dependencies:
 tasks:
@@ -348,15 +353,12 @@ persona:
 - You can craft effective prompts for AI UI generation tools like v0, or Lovable.
 commands:
 - help: Show numbered list of the following commands to allow selection
- - create-doc {template}: execute task create-doc (no template = ONLY show available templates listed under dependencies/templates below)
+ - create-front-end-spec: run task create-doc.md with template front-end-spec-tmpl.yaml
- - generate-ui-prompt: Create AI frontend generation prompt
+ - generate-ui-prompt: Run task generate-ai-frontend-prompt.md
- - research {topic}: Execute create-deep-research-prompt task to generate a prompt to init UX deep research
- - execute-checklist {checklist}: Run task execute-checklist (default->po-master-checklist)
 - exit: Say goodbye as the UX Expert, and then abandon inhabiting this persona
 dependencies:
 tasks:
 - generate-ai-frontend-prompt.md
- - create-deep-research-prompt.md
 - create-doc.md
 - execute-checklist.md
 templates:
@@ -403,11 +405,16 @@ persona:
 - Living Architecture - Design for change and adaptation
 commands:
 - help: Show numbered list of the following commands to allow selection
- - create-doc {template}: execute task create-doc (no template = ONLY show available templates listed under dependencies/templates below)
+ - create-full-stack-architecture: use create-doc with fullstack-architecture-tmpl.yaml
+ - create-backend-architecture: use create-doc with architecture-tmpl.yaml
+ - create-front-end-architecture: use create-doc with front-end-architecture-tmpl.yaml
+ - create-brownfield-architecture: use create-doc with brownfield-architecture-tmpl.yaml
- - yolo: Toggle Yolo Mode
 - doc-out: Output full document to current destination file
+ - document-project: execute the task document-project.md
 - execute-checklist {checklist}: Run task execute-checklist (default->architect-checklist)
- - research {topic}: execute task create-deep-research-prompt for architectural decisions
+ - research {topic}: execute task create-deep-research-prompt
+ - shard-prd: run the task shard-doc.md for the provided architecture.md (ask if not found)
+ - yolo: Toggle Yolo Mode
 - exit: Say goodbye as the Architect, and then abandon inhabiting this persona
 dependencies:
 tasks:
@@ -463,23 +470,20 @@ persona:
 - Documentation Ecosystem Integrity - Maintain consistency across all documents
 commands:
 - help: Show numbered list of the following commands to allow selection
- - create-doc {template}: execute task create-doc (no template = ONLY show available templates listed under dependencies/templates below)
- - execute-checklist {checklist}: Run task execute-checklist (default->po-master-checklist)
+ - execute-checklist-po: Run task execute-checklist (checklist po-master-checklist)
 - shard-doc {document} {destination}: run the task shard-doc against the optionally provided document to the specified destination
 - correct-course: execute the correct-course task
 - create-epic: Create epic for brownfield projects (task brownfield-create-epic)
 - create-story: Create user story from requirements (task brownfield-create-story)
- - yolo: Toggle Yolo Mode off on - on will skip doc section confirmations
 - doc-out: Output full document to current destination file
 - validate-story-draft {story}: run the task validate-next-story against the provided story file
+ - yolo: Toggle Yolo Mode off on - on will skip doc section confirmations
 - exit: Exit (confirm)
 dependencies:
 tasks:
 - execute-checklist.md
 - shard-doc.md
 - correct-course.md
- - brownfield-create-epic.md
- - brownfield-create-story.md
 - validate-next-story.md
 templates:
 - story-tmpl.yaml

File diff suppressed because it is too large.


@@ -225,14 +225,14 @@ persona:
 - Numbered Options Protocol - Always use numbered lists for selections
 commands:
 - help: Show numbered list of the following commands to allow selection
- - create-doc {template}: execute task create-doc (no template = ONLY show available templates listed under dependencies/templates below)
+ - create-project-brief: use task create-doc with project-brief-tmpl.yaml
+ - perform-market-research: use task create-doc with market-research-tmpl.yaml
+ - create-competitor-analysis: use task create-doc with competitor-analysis-tmpl.yaml
 - yolo: Toggle Yolo Mode
- - doc-out: Output full document to current destination file
+ - doc-out: Output full document in progress to current destination file
- - execute-checklist {checklist}: Run task execute-checklist (default->architect-checklist)
- - research-prompt {topic}: execute task create-deep-research-prompt for architectural decisions
+ - research-prompt {topic}: execute task create-deep-research-prompt.md
- - brainstorm {topic}: Facilitate structured brainstorming session
+ - brainstorm {topic}: Facilitate structured brainstorming session (run task facilitate-brainstorming-session.md with template brainstorming-output-tmpl.yaml)
 - elicit: run the task advanced-elicitation
- - document-project: Analyze and document existing project structure comprehensively
 - exit: Say goodbye as the Business Analyst, and then abandon inhabiting this persona
 dependencies:
 tasks:
@@ -285,9 +285,14 @@ persona:
 - Strategic thinking & outcome-oriented
 commands:
 - help: Show numbered list of the following commands to allow selection
- - create-doc {template}: execute task create-doc for template provided, if no template then ONLY list dependencies.templates
+ - create-prd: run task create-doc.md with template prd-tmpl.yaml
+ - create-brownfield-prd: run task create-doc.md with template brownfield-prd-tmpl.yaml
+ - create-epic: Create epic for brownfield projects (task brownfield-create-epic)
+ - create-story: Create user story from requirements (task brownfield-create-story)
- - yolo: Toggle Yolo Mode
 - doc-out: Output full document to current destination file
+ - shard-prd: run the task shard-doc.md for the provided prd.md (ask if not found)
+ - correct-course: execute the correct-course task
+ - yolo: Toggle Yolo Mode
 - exit: Exit (confirm)
 dependencies:
 tasks:
@@ -346,11 +351,16 @@ persona:
 - Living Architecture - Design for change and adaptation
 commands:
 - help: Show numbered list of the following commands to allow selection
- - create-doc {template}: execute task create-doc (no template = ONLY show available templates listed under dependencies/templates below)
+ - create-full-stack-architecture: use create-doc with fullstack-architecture-tmpl.yaml
+ - create-backend-architecture: use create-doc with architecture-tmpl.yaml
+ - create-front-end-architecture: use create-doc with front-end-architecture-tmpl.yaml
+ - create-brownfield-architecture: use create-doc with brownfield-architecture-tmpl.yaml
- - yolo: Toggle Yolo Mode
 - doc-out: Output full document to current destination file
+ - document-project: execute the task document-project.md
 - execute-checklist {checklist}: Run task execute-checklist (default->architect-checklist)
- - research {topic}: execute task create-deep-research-prompt for architectural decisions
+ - research {topic}: execute task create-deep-research-prompt
+ - shard-prd: run the task shard-doc.md for the provided architecture.md (ask if not found)
+ - yolo: Toggle Yolo Mode
 - exit: Say goodbye as the Architect, and then abandon inhabiting this persona
 dependencies:
 tasks:
@@ -406,23 +416,20 @@ persona:
 - Documentation Ecosystem Integrity - Maintain consistency across all documents
 commands:
 - help: Show numbered list of the following commands to allow selection
- - create-doc {template}: execute task create-doc (no template = ONLY show available templates listed under dependencies/templates below)
- - execute-checklist {checklist}: Run task execute-checklist (default->po-master-checklist)
+ - execute-checklist-po: Run task execute-checklist (checklist po-master-checklist)
 - shard-doc {document} {destination}: run the task shard-doc against the optionally provided document to the specified destination
 - correct-course: execute the correct-course task
 - create-epic: Create epic for brownfield projects (task brownfield-create-epic)
 - create-story: Create user story from requirements (task brownfield-create-story)
- - yolo: Toggle Yolo Mode off on - on will skip doc section confirmations
 - doc-out: Output full document to current destination file
 - validate-story-draft {story}: run the task validate-next-story against the provided story file
+ - yolo: Toggle Yolo Mode off on - on will skip doc section confirmations
 - exit: Exit (confirm)
 dependencies:
 tasks:
 - execute-checklist.md
 - shard-doc.md
 - correct-course.md
- - brownfield-create-epic.md
- - brownfield-create-story.md
 - validate-next-story.md
 templates:
 - story-tmpl.yaml

enhancements.md (new file, 215 lines)

@@ -0,0 +1,215 @@
# BMAD Method Quality Framework Enhancements
## Overview
This document outlines the new features and functionality added to the BMAD Method to create an enterprise-grade quality engineering framework for AI-assisted development.
## New Core Features
### 1. Reality Enforcement System
**Purpose:** Prevent "bull in a china shop" development behavior through objective quality measurement and automated validation.
**Key Features:**
- **Automated Simulation Pattern Detection**: Identifies 6 distinct pattern types including Random.NextDouble(), Task.FromResult(), NotImplementedException, TODO comments, simulation methods, and hardcoded test data
- **Objective Reality Scoring**: A-F grading system (90-100=A, 80-89=B, 70-79=C, 60-69=D, <60=F) with clear enforcement thresholds
- **Build and Runtime Validation**: Automated compilation and execution testing with platform-specific error detection
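The detection and grading rules above can be sketched in a few lines. This is a minimal illustration, not the framework's actual scanner: the regexes are rough approximations of the six pattern types (targeting C# source), and `scan_source`/`letter_grade` are hypothetical helper names.

```python
import re

# Illustrative approximations of the six simulation-pattern types named above.
# The real BMAD scanner may match differently; these regexes target C# source.
SIMULATION_PATTERNS = {
    "random_value": re.compile(r"Random\s*\(\s*\)|Random\.NextDouble\s*\("),
    "fake_async": re.compile(r"Task\.FromResult\s*\("),
    "not_implemented": re.compile(r"NotImplementedException"),
    "todo_comment": re.compile(r"//\s*TODO"),
    "simulation_method": re.compile(r"\b(?:Simulate|Mock|Fake)\w*\s*\("),
    "hardcoded_test_data": re.compile(r"\"(?:test|dummy|sample)[-_ ]?data\"", re.IGNORECASE),
}

def scan_source(source: str) -> dict:
    """Count hits per simulation-pattern type in one source file."""
    return {name: len(rx.findall(source)) for name, rx in SIMULATION_PATTERNS.items()}

def letter_grade(score: float) -> str:
    """Map a 0-100 reality score onto the framework's A-F scale."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"
```

A file riddled with `Random()` values and `NotImplementedException` throws would score multiple hits and pull the reality grade toward F.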
### 2. Regression Prevention Framework
**Purpose:** Ensure QA fixes don't introduce regressions or technical debt through story context analysis and pattern compliance.
**Key Features:**
- **Story Context Analysis**: Automatic analysis of previous successful implementations to establish architectural patterns
- **Pattern Consistency Checking**: Validates new implementations against established patterns from completed stories
- **Integration Impact Assessment**: Evaluates potential impacts on existing functionality and external dependencies
- **Technical Debt Prevention Scoring**: Prevents introduction of code complexity and maintainability issues
### 3. Composite Quality Scoring System
**Purpose:** Provide comprehensive quality assessment through weighted component scoring.
**Scoring Components:**
- **Simulation Reality (40%)**: Traditional simulation pattern detection and build/runtime validation
- **Regression Prevention (35%)**: Pattern consistency, architectural compliance, and integration safety
- **Technical Debt Prevention (25%)**: Code quality, maintainability, and architectural alignment
**Quality Thresholds:**
- Composite Reality Score: ≥80 (required for completion)
- Regression Prevention Score: ≥80 (required for auto-remediation)
- Technical Debt Score: ≥70 (required for quality approval)
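The weights and gates above translate directly into a small scoring routine. The weights and thresholds below are the ones listed; the function names are illustrative, not the framework's actual API.

```python
# Component weights and gate thresholds exactly as listed above.
WEIGHTS = {
    "simulation_reality": 0.40,
    "regression_prevention": 0.35,
    "technical_debt": 0.25,
}

THRESHOLDS = {
    "composite": 80,
    "regression_prevention": 80,
    "technical_debt": 70,
}

def composite_score(scores: dict) -> float:
    """Weighted sum of the three component scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

def passes_quality_gates(scores: dict) -> bool:
    """All three gates must hold before a story can be marked complete."""
    return (
        composite_score(scores) >= THRESHOLDS["composite"]
        and scores["regression_prevention"] >= THRESHOLDS["regression_prevention"]
        and scores["technical_debt"] >= THRESHOLDS["technical_debt"]
    )
```

Note that a strong simulation-reality score cannot mask a weak regression score: the component gates apply independently of the composite.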
### 4. Automated Remediation Workflow
**Purpose:** Eliminate manual QA-to-Developer handoffs through automatic fix story generation.
**Key Features:**
- **Automatic Story Generation**: Creates structured developer stories when quality thresholds are not met
- **Regression-Safe Recommendations**: Includes specific implementation approaches that prevent functionality loss
- **Cross-Pattern Referencing**: Automatically references successful patterns from previous stories
- **Systematic Fix Prioritization**: Orders remediation by impact (simulation → regression → build → technical debt → runtime)
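The fix ordering above can be expressed as a simple category ranking; `prioritize_fixes` and the finding shape are illustrative sketches under that assumption.

```python
# Impact order taken from the prioritization bullet above.
FIX_PRIORITY = ["simulation", "regression", "build", "technical_debt", "runtime"]

def prioritize_fixes(findings: list) -> list:
    """Order remediation findings by impact category; unknown categories go last."""
    rank = {category: i for i, category in enumerate(FIX_PRIORITY)}
    return sorted(findings, key=lambda f: rank.get(f["category"], len(FIX_PRIORITY)))
```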
### 5. Automatic Loop Detection & Escalation System
**Purpose:** Prevent agents from getting stuck in repetitive debugging cycles through automatic collaborative escalation.
**Key Features:**
- **Automatic Failure Tracking**: Maintains separate counters per specific issue, resets on successful progress
- **Zero-Touch Escalation**: Automatically triggers after 3 consecutive failed attempts at same task/issue
- **Copy-Paste Prompt Generation**: Creates a structured collaboration request with a fill-in-the-blank format for external LLMs
- **Multi-LLM Support**: Optimized prompts for Gemini, GPT-4, Claude, or specialized AI agents
- **Learning Integration**: Documents patterns and solutions from collaborative sessions
**Automatic Triggers:**
- **Dev Agent**: Build failures, test implementation failures, validation errors, reality audit failures
- **QA Agent**: Reality audit failures, quality score issues, regression prevention problems, runtime failures
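The failure-tracking behavior described above amounts to a per-issue consecutive-failure counter. A minimal sketch, assuming a `LoopDetector` class and method names that are hypothetical rather than part of the framework:

```python
from collections import defaultdict

ESCALATION_THRESHOLD = 3  # consecutive failed attempts before auto-escalation

class LoopDetector:
    """Per-issue consecutive-failure counter: resets on progress, fires at 3."""

    def __init__(self) -> None:
        self._failures = defaultdict(int)

    def record_failure(self, issue_key: str) -> bool:
        """Record one failed attempt; True means escalation should trigger now."""
        self._failures[issue_key] += 1
        return self._failures[issue_key] >= ESCALATION_THRESHOLD

    def record_progress(self, issue_key: str) -> None:
        """Any successful progress on the issue resets its counter."""
        self._failures[issue_key] = 0
```

Keying the counter on the specific issue (for example, a build error code) is what lets the agent escalate a stuck build loop while still making normal progress elsewhere.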
## Enhanced Agent Commands
### Developer Agent (James) New Commands
- **`*reality-audit`**: Execute reality-audit-comprehensive task with regression prevention analysis
- **Features**: Multi-language project detection, automated pattern scanning, story context analysis, build/runtime validation
- **Output**: Composite reality score with A-F grading and automatic remediation triggers
- **`*build-context`**: Execute build-context-analysis for comprehensive pre-fix context investigation
- **Features**: Git history analysis, test contract evaluation, dependency mapping, risk assessment
- **Output**: Historical context report with implementation planning and validation strategy
- **`*escalate`**: Execute loop-detection-escalation for external AI collaboration when stuck
- **Features**: Structured context packaging, collaborator selection, solution integration
- **Output**: Collaboration request package for external expert engagement
### QA Agent (Quinn) Enhanced Commands
- **`*reality-audit {story}`**: Manual quality audit with regression prevention analysis
- **Enhanced**: Now includes story context analysis, pattern consistency checking, and composite scoring
- **Output**: Comprehensive audit report with regression risk assessment
- **`*audit-validation {story}`**: Automated quality audit with guaranteed regression-safe auto-remediation
- **Enhanced**: Automatically triggers remediation workflows with regression prevention
- **Auto-Triggers**: composite_score_below 80, regression_prevention_score_below 80, technical_debt_score_below 70
- **Auto-Actions**: generate_remediation_story, include_regression_prevention, cross_reference_story_patterns
- **`*create-remediation`**: Generate comprehensive fix stories with regression prevention capabilities
- **Enhanced**: Includes story context analysis, pattern compliance requirements, and regression-safe implementation approaches
## New Automation Behaviors
### Developer Agent Automation Configuration
```yaml
auto_escalation:
trigger: "3 consecutive failed attempts at the same task/issue"
tracking: "Maintain attempt counter per specific issue/task - reset on successful progress"
action: "AUTOMATIC: Execute loop-detection-escalation task → Generate copy-paste prompt for external LLM collaboration → Present to user"
examples:
- "Build fails 3 times with same error despite different fix attempts"
- "Test implementation fails 3 times with different approaches"
- "Same validation error persists after 3 different solutions tried"
- "Reality audit fails 3 times on same simulation pattern despite fixes"
```
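The attempt-tracking rule above — count consecutive failures per issue, escalate at 3, reset on progress — can be sketched as a small counter. This is a minimal illustration, not the loop-detection-escalation task's actual implementation.

```python
# Minimal sketch of per-issue loop detection: escalate after 3
# consecutive failures on the same issue, reset the counter on progress.
from collections import defaultdict

ESCALATION_THRESHOLD = 3

class LoopDetector:
    def __init__(self):
        # issue key -> consecutive failure count
        self.failures = defaultdict(int)

    def record_failure(self, issue: str) -> bool:
        """Count a failed attempt; return True when escalation should fire."""
        self.failures[issue] += 1
        return self.failures[issue] >= ESCALATION_THRESHOLD

    def record_progress(self, issue: str) -> None:
        """Any successful progress resets the counter for that issue."""
        self.failures[issue] = 0

detector = LoopDetector()
detector.record_failure("build: same compile error")          # attempt 1
detector.record_failure("build: same compile error")          # attempt 2
escalate = detector.record_failure("build: same compile error")  # attempt 3
print(escalate)  # True -> execute loop-detection-escalation task
```

Keying the counter on a specific issue (rather than a global failure count) matters: three failures spread across unrelated tasks are normal development, while three consecutive failures on the same error are the loop signal.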
### QA Agent Automation Configuration
```yaml
automation_behavior:
always_auto_remediate: true
trigger_threshold: 80
auto_create_stories: true
systematic_reaudit: true
trigger_conditions:
- composite_reality_score_below: 80
- regression_prevention_score_below: 80
- technical_debt_score_below: 70
- build_failures: true
  - critical_simulation_patterns: "3+"
- runtime_failures: true
auto_actions:
- generate_remediation_story: true
- include_regression_prevention: true
- cross_reference_story_patterns: true
- assign_to_developer: true
- create_reaudit_workflow: true
auto_escalation:
trigger: "3 consecutive failed attempts at resolving the same quality issue"
tracking: "Maintain failure counter per specific quality issue - reset on successful resolution"
action: "AUTOMATIC: Execute loop-detection-escalation task → Generate copy-paste prompt for external LLM collaboration → Present to user"
examples:
- "Same reality audit failure persists after 3 different remediation attempts"
- "Composite quality score stays below 80% after 3 fix cycles"
- "Same regression prevention issue fails 3 times despite different approaches"
- "Build/runtime validation fails 3 times on same error after different solutions"
```
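The trigger_conditions above amount to a simple predicate over audit results: if any condition holds, auto-remediation fires. This sketch shows the idea, with hypothetical field names.

```python
# Sketch: decide whether QA auto-remediation should fire, mirroring the
# trigger_conditions in the YAML above. Field names are illustrative.

def should_auto_remediate(audit: dict) -> bool:
    return any([
        audit["composite_reality_score"] < 80,
        audit["regression_prevention_score"] < 80,
        audit["technical_debt_score"] < 70,
        audit["build_failures"],
        audit["critical_simulation_patterns"] >= 3,
        audit["runtime_failures"],
    ])

audit = {
    "composite_reality_score": 84,
    "regression_prevention_score": 76,  # below 80 -> triggers remediation
    "technical_debt_score": 72,
    "build_failures": False,
    "critical_simulation_patterns": 1,
    "runtime_failures": False,
}
print(should_auto_remediate(audit))  # True
```

Because the conditions are OR-ed, a single failing dimension (here, regression prevention at 76) is enough to generate a remediation story, even when the composite score passes.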
### Developer Agent Enhanced Completion Requirements & Automation
- **MANDATORY**: Execute reality-audit-comprehensive before claiming completion
- **AUTO-ESCALATE**: Automatically execute loop-detection-escalation after 3 consecutive failures on same issue
- **BUILD SUCCESS**: Clean Release mode compilation required
- **REGRESSION PREVENTION**: Pattern compliance with previous successful implementations
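The "generate copy-paste prompt" step of the escalation could package the loop context roughly like this. The prompt wording and parameters are illustrative assumptions; the real loop-detection-escalation task defines its own structure.

```python
# Sketch: package a stuck issue into a copy-paste prompt for an external
# LLM collaborator. Wording and fields are illustrative assumptions.

def build_escalation_prompt(issue: str, attempts: list, error_output: str) -> str:
    attempt_log = "\n".join(f"{i}. {a}" for i, a in enumerate(attempts, 1))
    return (
        "I am stuck in a fix loop and need a fresh perspective.\n\n"
        f"Issue: {issue}\n\n"
        f"Attempts so far ({len(attempts)} consecutive failures):\n{attempt_log}\n\n"
        f"Latest error output:\n{error_output}\n\n"
        "Please suggest a root cause I may have missed and a concrete next step."
    )

prompt = build_escalation_prompt(
    "Release build fails with the same missing-type error",
    [
        "Added the missing using directive",
        "Re-added the project reference",
        "Cleaned build outputs and rebuilt",
    ],
    "error CS0246: type or namespace could not be found",
)
print(prompt)  # ready to paste into an external LLM chat
```

Listing every failed attempt is what makes the collaboration useful: the external model can rule out the approaches already tried instead of re-suggesting them.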
## Implementation Files
### Core Enhancement Components
- **`bmad-core/tasks/reality-audit-comprehensive.md`**: 9-phase comprehensive reality audit with regression prevention
- **`bmad-core/tasks/create-remediation-story.md`**: Automated regression-safe remediation story generation
- **`bmad-core/tasks/loop-detection-escalation.md`**: Systematic loop prevention and external collaboration framework
- **`bmad-core/tasks/build-context-analysis.md`**: Comprehensive build context investigation and planning
### Enhanced Agent Files
- **`bmad-core/agents/dev.md`**: Enhanced developer agent with reality enforcement and loop prevention
- **`bmad-core/agents/qa.md`**: Enhanced QA agent with auto-remediation and regression prevention
### Enhanced Validation Checklists
- **`bmad-core/checklists/story-dod-checklist.md`**: Updated with reality validation and static analysis requirements
- **`bmad-core/checklists/static-analysis-checklist.md`**: Comprehensive code quality validation
## Strategic Benefits
### Quality Improvements
- **Zero Tolerance for Simulation Patterns**: Systematic detection and remediation of mock implementations
- **Regression Prevention**: Cross-referencing with previous successful patterns prevents functionality loss
- **Technical Debt Prevention**: Maintains code quality and architectural consistency
- **Objective Quality Measurement**: Evidence-based assessment replaces subjective evaluations
### Workflow Automation
- **Eliminated Manual Handoffs**: QA findings automatically generate developer stories
- **Systematic Remediation**: Prioritized fix sequences prevent cascading issues
- **Continuous Quality Loop**: Automatic re-audit after remediation ensures standards are met
- **Collaborative Problem Solving**: External AI expertise available when internal approaches reach limits
### Enterprise-Grade Capabilities
- **Multi-Language Support**: Works across different project types and technology stacks
- **Scalable Quality Framework**: Handles projects of varying complexity and size
- **Audit Trail Documentation**: Complete evidence chain for quality decisions
- **Continuous Improvement**: Learning integration from collaborative solutions
## Expected Impact
### Measurable Outcomes
- **75% reduction** in simulation patterns reaching production code
- **60+ minutes saved** per debugging session through loop prevention
- **Automated workflow generation** eliminates QA-to-Developer handoff delays
- **Systematic quality enforcement** ensures consistent implementation standards
### Process Improvements
- **Proactive Quality Gates**: Issues caught and remediated before code review
- **Collaborative Expertise**: External AI collaboration available for complex issues
- **Pattern-Based Development**: Reuse of successful implementation approaches
- **Continuous Learning**: Knowledge retention from collaborative problem solving
---
*These enhancements transform BMAD Method from a basic agent orchestration system into an enterprise-grade AI development quality platform with systematic accountability, automated workflows, and collaborative problem-solving capabilities.*