Merge branch 'main' into fix/epics-consume-ux-design-spec

Brian 2026-03-11 22:36:43 -05:00 committed by GitHub
commit ed413ad51b
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
3 changed files with 25 additions and 11 deletions


@@ -95,11 +95,11 @@ TEA also supports P0-P3 risk-based prioritization and optional integrations with
 ## How Testing Fits into Workflows
-Quinn's Automate workflow appears in Phase 4 (Implementation) of the BMad Method workflow map. A typical sequence:
-1. Implement a story with the Dev workflow (`DS`)
-2. Generate tests with Quinn (`QA`) or TEA's Automate workflow
-3. Validate implementation with Code Review (`CR`)
+Quinn's Automate workflow appears in Phase 4 (Implementation) of the BMad Method workflow map. It is designed to run **after a full epic is complete** — once all stories in an epic have been implemented and code-reviewed. A typical sequence:
+1. For each story in the epic: implement with Dev (`DS`), then validate with Code Review (`CR`)
+2. After the epic is complete: generate tests with Quinn (`QA`) or TEA's Automate workflow
+3. Run retrospective (`bmad-retrospective`) to capture lessons learned
 Quinn works directly from source code without loading planning documents (PRD, architecture). TEA workflows can integrate with upstream planning artifacts for traceability.
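The revised epic-level ordering can be sketched in pseudocode. This is an illustrative sketch only: the helper functions below stand in for the `DS`, `CR`, `QA`, and `bmad-retrospective` workflows and are not part of any real BMad API.

```python
# Illustrative stand-ins for the BMad workflows (not a real API):
# each one just records that it ran, so the ordering is visible.
calls = []

def dev_implement(story): calls.append(("DS", story))     # Dev workflow
def code_review(story): calls.append(("CR", story))       # Code Review
def generate_tests(): calls.append(("QA", None))          # Quinn / TEA Automate
def run_retrospective(): calls.append(("retro", None))    # bmad-retrospective

def run_epic(stories):
    for story in stories:       # 1. per story: implement, then review
        dev_implement(story)
        code_review(story)
    generate_tests()            # 2. tests only after the whole epic is done
    run_retrospective()         # 3. retrospective captures lessons learned

run_epic(["story-1", "story-2"])
```

The key design point the diff introduces is the gate: test generation moves out of the per-story loop and runs once, after every story has been implemented and reviewed.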


@@ -99,6 +99,22 @@ Review the entire document with PRD purpose principles in mind:
 - Are technical terms used appropriately?
 - Would stakeholders find this easy to understand?
+### 2b. Brainstorming Reconciliation (if brainstorming input exists)
+**Check the PRD frontmatter `inputDocuments` for any brainstorming document** (e.g., `brainstorming-session*.md`, `brainstorming-report.md`). If a brainstorming document was used as input:
+1. **Load the brainstorming document** and extract all distinct ideas, themes, and recommendations
+2. **Cross-reference against the PRD** — for each brainstorming idea, check if it landed in any PRD section (requirements, success criteria, user journeys, scope, etc.)
+3. **Identify dropped ideas** — ideas from brainstorming that do not appear anywhere in the PRD. Pay special attention to:
+   - Tone, personality, and interaction design ideas (these are most commonly lost)
+   - Design philosophy and coaching approach ideas
+   - "What should this feel like" ideas (UX feel, not just UX function)
+   - Qualitative/soft ideas that don't map cleanly to functional requirements
+4. **Present findings to user**: "These brainstorming ideas did not make it into the PRD: [list]. Should any be incorporated?"
+5. **If user wants to incorporate dropped ideas**: Add them to the most appropriate PRD section (success criteria, non-functional requirements, or a new section if needed)
+**Why this matters**: Brainstorming documents are often long, and the PRD's structured template has an implicit bias toward concrete/structural ideas. Soft ideas (tone, philosophy, interaction feel) frequently get silently dropped because they don't map cleanly to FR/NFR format.
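The cross-reference step added here can be sketched as a simple function. This is a hedged sketch: naive substring matching stands in for the agent's semantic judgement, and the idea list and PRD text below are invented examples.

```python
# Sketch of the reconciliation check. Real reconciliation is semantic;
# substring matching here is only a stand-in for illustration.

def find_dropped_ideas(brainstorm_ideas, prd_text):
    """Return brainstorming ideas that appear nowhere in the PRD text."""
    prd_lower = prd_text.lower()
    return [idea for idea in brainstorm_ideas
            if idea.lower() not in prd_lower]

# Invented example inputs: soft/tone ideas are the ones most likely
# to surface as "dropped", exactly as the step above warns.
brainstorm = ["offline mode", "playful coaching tone", "CSV export"]
prd = "Requirements: offline mode; CSV export of reports."
dropped = find_dropped_ideas(brainstorm, prd)
print(dropped)  # the qualitative idea is the one that was dropped
```

The output (`['playful coaching tone']`) illustrates the bias the step is designed to catch: concrete features land in the PRD, qualitative ideas silently fall out.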
 ### 3. Optimization Actions
 Make targeted improvements:
@@ -193,6 +209,7 @@ When user selects 'C', replace the entire document content with the polished ver
 ✅ User's voice and intent preserved
 ✅ Document is more readable and professional
 ✅ A/P/C menu presented and handled correctly
+✅ Brainstorming reconciliation completed (if brainstorming input exists)
 ✅ Polished document saved when C selected
 ## FAILURE MODES:


@@ -13,7 +13,7 @@ description: 'Perform adversarial code review finding specific issues. Use when
 - Generate all documents in {document_output_language}
 - Your purpose: Validate story file claims against actual implementation
 - Challenge everything: Are tasks marked [x] actually done? Are ACs really implemented?
-- Find 3-10 specific issues in every review minimum - no lazy "looks good" reviews - YOU are so much better than the dev agent that wrote this slop
+- Be thorough and specific — find real issues, not manufactured ones. If the code is genuinely good after fixes, say so
 - Read EVERY file in the File List - verify implementation against story requirements
 - Tasks marked complete but not done = CRITICAL finding
 - Acceptance Criteria not implemented = HIGH severity finding
@@ -136,17 +136,14 @@ Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
 5. **Test Quality**: Are tests real assertions or placeholders?
 </action>
-<check if="total_issues_found lt 3">
-<critical>NOT LOOKING HARD ENOUGH - Find more problems!</critical>
-<action>Re-examine code for:
+<check if="total_issues_found == 0">
+<action>Double-check by re-examining code for:
 - Edge cases and null handling
 - Architecture violations
-- Documentation gaps
 - Integration issues
 - Dependency problems
-- Git commit message quality (if applicable)
 </action>
-<action>Find at least 3 more specific, actionable issues</action>
+<action>If still no issues found after thorough re-examination, that is a valid outcome — report a clean review</action>
 </check>
 </step>
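The revised review gate replaces a forced issue quota with a conditional second pass. A minimal sketch of that control flow, assuming a hypothetical `find_issues(area)` callable that stands in for the reviewing agent:

```python
# Sketch of the revised gate: a second, targeted pass fires only when
# zero issues were found, and a clean result after it is accepted.
# find_issues is a hypothetical stand-in for the reviewing agent.

RECHECK_AREAS = [
    "edge cases and null handling",
    "architecture violations",
    "integration issues",
    "dependency problems",
]

def review(find_issues):
    issues = find_issues(None)          # initial full-file pass
    if not issues:                      # total_issues_found == 0
        for area in RECHECK_AREAS:      # targeted re-examination
            issues += find_issues(area)
    if issues:
        return {"status": "issues", "issues": issues}
    return {"status": "clean"}          # a clean review is a valid outcome

print(review(lambda area: []))          # nothing found on either pass
```

Compared to the old `lt 3` check, the condition only triggers on a completely empty first pass, and the terminal action reports a clean review instead of manufacturing findings.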